<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0">
  <channel>
    <description>Rambling Rows</description>
    <image>
      <url>https://rrows.net/uploads/2026/rrows-icon-sq-144px.png</url>
      <title>Rambling Rows</title>
      <link>https://rrows.net/</link>
    </image>
    <title>llm on Rambling Rows</title>
    <link>https://rrows.net/categories/llm/</link>
    
    <language>en</language>
    
    <lastBuildDate>Fri, 15 May 2026 08:43:16 +1000</lastBuildDate>
    <item>
      <title>Your AI is working. Your brain is paying for it.</title>
      <link>https://rrows.net/2026/05/15/your-ai-is-working-your.html?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=rrows</link>
      <pubDate>Fri, 15 May 2026 08:43:16 +1000</pubDate>
      
      <guid isPermaLink="false">http://rrows.micro.blog/2026/05/15/your-ai-is-working-your.html</guid>
      <description>&lt;img src=&#34;https://cdn.uploads.micro.blog/202171/2026/ai-induced-brain-fog-600px.jpg&#34; width=&#34;600&#34; height=&#34;338&#34; alt=&#34;&#34;&gt;
&lt;div class=&#34;epigraph&#34;&gt;
	&lt;blockquote&gt;
		
		&lt;p&gt;&amp;ldquo;I end each day exhausted — not from the work itself, but from the &lt;em&gt;managing&lt;/em&gt; of the work. Six worktrees open, four half-written features, two &amp;lsquo;quick fixes&amp;rsquo; that spawned rabbit holes, and a growing sense that I&amp;rsquo;m losing the plot entirely.&amp;rdquo;
— Francesco Bonacci, founder of Cua AI&lt;/p&gt;
		
	&lt;/blockquote&gt;
&lt;/div&gt;
&lt;p&gt;The pitch for AI in the workplace has always been about output: write faster, analyse more, respond at scale. And the tools deliver on that. They deliver relentlessly. The problem is that more output doesn&amp;rsquo;t automatically mean more throughput. Sometimes it means more to review, more to cross-check, more decisions to make before anything ships.&lt;/p&gt;
&lt;p&gt;A 2026 BCG study of 1,488 U.S. workers measured exactly this. Workers who had to actively oversee AI output reported 19% more information overload, 14% more mental effort, and 33% more decision fatigue than their peers with lighter AI supervision requirements. High AI oversight was the most mentally taxing form of AI engagement across the entire study.&lt;/p&gt;
&lt;p&gt;That third number is the one that deserves more than a passing read. Decision fatigue isn&amp;rsquo;t just feeling tired. It leads to slower decisions, mental fog and more mistakes - exactly the outcomes AI was supposed to reduce. The researchers gave this phenomenon a name: &lt;strong&gt;AI brain fry&lt;/strong&gt; - defined as mental fatigue from excessive use or oversight of AI tools beyond one&amp;rsquo;s cognitive capacity.&lt;/p&gt;
&lt;p&gt;This isn&amp;rsquo;t really a story about LLMs being bad at their jobs. They&amp;rsquo;re excellent at generating content. The problem is that generation and consumption operate at very different speeds, and the gap between them falls entirely on the human in the loop.&lt;/p&gt;
&lt;p&gt;An LLM can draft ten variations of a document in the time a person takes to read one. It can surface fifty relevant data points while a person is still forming their question. At small scales, that&amp;rsquo;s useful. At full workflow deployment - where multiple agents are producing output, routing it, flagging it for review - the human becomes the bottleneck, and not in a flattering way: a cognitive garbage collector, processing everything the system produces to find the small percentage that actually matters.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;Working harder to manage the tools than to actually solve the problem.&amp;rdquo;
— Senior engineering manager, quoted in the HBR article&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The framing that tends to get missed here: &lt;strong&gt;output design is not a UX problem&lt;/strong&gt;. It is a productivity variable.&lt;/p&gt;
&lt;p&gt;How an agent formats its output - whether it summarises or dumps, whether it routes or floods, whether it signals priority or leaves that work to the reader - directly determines how much of that output a human will actually consume, trust and act on. An agent that produces 2,000 words when 200 were needed hasn&amp;rsquo;t saved anyone time. It has created an oversight burden that eats the time savings and then some.&lt;/p&gt;
&lt;p&gt;The best implementations I&amp;rsquo;ve seen do a few things consistently. They constrain output length by default. They surface decisions rather than information. They distinguish between what the human must see and what they can trust the system handled. That last one is the hardest, and the most important - because it requires the system to have earned that trust, which takes time and iteration to build.&lt;/p&gt;
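&lt;p&gt;As a rough illustration of what that looks like in practice, here is a minimal sketch in Python. The schema and names are illustrative assumptions on my part, not something from the study or from any particular framework; the point is simply that the agent commits to a decision, a hard length cap and an explicit &amp;ldquo;does a human need to see this&amp;rdquo; flag before anything reaches the person reviewing it.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from dataclasses import dataclass, field

MAX_SUMMARY_WORDS = 200  # hard cap: the reviewer never receives a 2,000-word dump

@dataclass
class AgentResult:
    # One unit of agent output, shaped for human review rather than for volume.
    decision: str      # the recommendation, stated up front
    summary: str       # supporting context, trimmed to the cap below
    needs_human: bool  # True only if a person genuinely must look at it
    handled_notes: list[str] = field(default_factory=list)  # what the system already resolved

    def __post_init__(self):
        # Enforce the length cap at construction time, not at review time.
        words = self.summary.split()
        if len(words) &gt; MAX_SUMMARY_WORDS:
            self.summary = ' '.join(words[:MAX_SUMMARY_WORDS]) + ' [truncated]'

def review_queue(results):
    # Surface only the items that require attention; everything else is logged as handled.
    must_see = [r for r in results if r.needs_human]
    handled = [r for r in results if not r.needs_human]
    return must_see, handled&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Nothing about that split is clever; the value is in where the filtering happens. The system, not the reader, decides what is worth attention, and the cap means even the items that do surface stay readable.&lt;/p&gt;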
&lt;p&gt;The BCG researchers put it plainly:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;AI oversight cannot simply be layered on top of human oversight; nor can AI agents be stacked on one user ad infinitum.&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The productivity gains from AI are real, but they are not automatic. They are contingent on the quality of the human-AI interface at each handoff point. Design that interface poorly and you don&amp;rsquo;t get 2x output. You get 1x output and a depleted person at the end of the day.&lt;/p&gt;
&lt;p&gt;Output is cheap. Attention is not. Design accordingly.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Sources:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://hbr.org/2026/03/when-using-ai-leads-to-brain-fry&#34;&gt;&lt;strong&gt;When Using AI Leads to Brain Fry&lt;/strong&gt; - Harvard Business Review&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description>
    </item>
    
  </channel>
</rss>
