Nick Milo makes it look easy

I build Rube Goldberg machines. Show me a simple workflow and I will find three ways to bolt extra components onto it. Custom scripts. Half-built automations. Plugins solving problems I didn’t have. Nick Milo does the opposite. He just posted a 60-second video showing Claude and Obsidian working together through Cowork. No elaborate setup. No custom scripts. Point Cowork at your vault and start a conversation. That’s it. It is the most distilled demonstration of agent-assisted thinking I’ve seen in months.

Continue reading →


The joys of AI enhanced systems

My personal bookmarking manager Felix fell over.

Why?

Root cause found: claude-3-haiku-20240307 has been retired (returns 404). Need to update to claude-haiku-4-5-20251001.

Thanks, Anthropic. Progress is good, but breaking working software is not. I’m sure this is happening all over as they rapidly ramp up new models in a highly competitive environment.
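One way to soften this class of failure (a minimal sketch, not Felix’s actual fix; the `pick_model` helper and the retired-model set are hypothetical) is to configure a fallback chain of model IDs, so a retired ID degrades to the next choice instead of crashing:

```python
# Illustrative only: the model IDs and the retired set are assumptions,
# not Felix's real configuration.
RETIRED = {"claude-3-haiku-20240307"}  # this ID now returns 404

def pick_model(preferred: list[str], retired: set[str]) -> str:
    """Return the first model ID in `preferred` not known to be retired."""
    for model in preferred:
        if model not in retired:
            return model
    raise RuntimeError("every model in the preference list is retired")

# Old ID first; the helper skips it and falls through to the new one.
model = pick_model(
    ["claude-3-haiku-20240307", "claude-haiku-4-5-20251001"],
    RETIRED,
)
print(model)  # claude-haiku-4-5-20251001
```

In a real client you would catch the API’s 404/not-found error at call time and retry with the next ID, rather than relying on a hard-coded retired set.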

Continue reading →


Cowork demonstrated Cowork

A friend of mine, let’s call her Jane, runs a marketing consultancy. Fifteen years in, small team, strong client list, and until now she had used Claude for chat only. I offered to spend an hour showing her what’s possible now that we’re well beyond that. On Wednesday night I realised that winging a 75-minute demo the next morning would burn too much time on navigation and too little on value. I opened Claude Cowork, uploaded a brief I’d quickly hacked together from emails and went on with my evening.

Continue reading →


Aaron Haspel just gave the 1911 Britannica the digital home it deserved

Aaron Haspel published britannica11.org last week. It is a complete, searchable, cross-referenced digital edition of the Encyclopædia Britannica’s celebrated 11th edition (1910 to 1911). One person built it. Project Gutenberg and Wikisource have been chipping away at the same artefact for over twenty years and are still not done.

The credits page is the giveaway. “Thanks most of all to Anthropic and Claude Code Opus, which did nearly all the heavy lifting, and to OpenAI and GPT Codex, which drafted the specification.” Haspel is a New York-based aphorist and essayist who also writes code. That blend is the new shape of solo scholarship.

Continue reading →


Your wandering attention helped humans survive

We’ve spent decades framing distractibility as a deficit. A disorder. Something to medicate and manage. Anne-Laure Le Cunff, a neuroscientist at King’s College London, thinks we have it backwards.

In a compelling new essay for Aeon, Le Cunff makes the case that what we now label ADHD was once an evolutionary advantage. The hypercurious mind, the one that can’t stop scanning the horizon, that gets bored with routine, wasn’t broken. It was built for exploration.

“Human attention did not evolve in an environment saturated with infinite information and algorithmically optimised distraction. For most of our history, novelty was relatively rare and often meaningful; today, exposure to novelty is constant and difficult to escape.”

Continue reading →


Local models are not quite there yet

Daniel Vaughan ran Gemma 4 as a local model inside OpenAI’s Codex CLI on both a MacBook Pro and a Dell GB10 with NVIDIA Blackwell. The results are worth your time. The headline number is striking. Google’s Gemma 4 hit 86.4% tool-calling accuracy versus Gemma 3’s 6.6%. That’s not incremental improvement. That’s a generational leap in what a local model can do inside an agentic coding workflow. But the details tell a more familiar story.

Continue reading →


When you swap AI models, you don’t change tools. You change staff.

tl;dr: Arthur Soares ran a 14-agent household AI fleet on Claude, got forced off it overnight by an Anthropic policy change, spent a full day migrating to GPT-5.4, and wrote up everything he learned. The short version: Claude and GPT-5.4 require fundamentally different prompting styles. A rigorous 8-model benchmark shows Claude Sonnet 4.6 still leads at 92% overall, but three Chinese open-weight models (GLM 5.1, Qwen 3.5, Kimi K2.5) are right behind GPT-5.

Continue reading →


And breathe

They’re home.

At 8:07 p.m. EDT on Friday, the Orion capsule carrying four astronauts splashed down in the Pacific Ocean off San Diego, completing a ten-day journey of nearly 1.1 million kilometres. Artemis II is the first crewed flight beyond low Earth orbit since Apollo 17 in 1972. More than half a century between drinks.

Continue reading →


The compiler that doesn’t care about your feelings

Caleb Fenton spent last month writing 20,000 lines of code in a language he doesn’t know. He used AI agents to write it, and his verdict: it was easier than using the language he’d relied on for a decade.

That’s a sentence worth pondering.

Many developers have gravitated toward Python because it is forgiving. You can be loose, approximate and casual. Python mostly lets things slide and figures it out. That philosophy made sense when humans were doing the coding. Humans get frustrated. Humans lose patience. Humans quit and find a different approach when a tool keeps telling them they’re wrong.

Continue reading →