A project stalls. Someone calls a meeting with twelve people. The meeting produces six action items, three misunderstandings, and two subgroups that will need their own meetings. The project is still stuck — but now with more surface area for things to go wrong.

A recent paper from Stanford controlled for something nobody had controlled before: computational budget. The authors gave a single AI agent exactly the same reasoning-token budget as a coordinated multi-agent system. The single agent matched or beat the multi-agent system in nearly every condition. The reported advantages of multi-agent systems were artifacts of extra resources, not better architecture.

In information theory, there's a theorem called the Data Processing Inequality. It says: every time information passes through an intermediate processing step, it can only be preserved or lost. Never created. Every agent that summarises, reformulates, or translates for the next one is a node where signal can only degrade, never improve.
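
Formally, the theorem is usually stated for a Markov chain of processing steps; the agent-pipeline reading in the comments below is my gloss, not part of the standard statement:

```latex
% Data Processing Inequality: if X -> Y -> Z is a Markov chain
% (Z depends on X only through Y), the middle step cannot add information.
% I(.;.) denotes mutual information.
\[
  X \to Y \to Z
  \quad\Longrightarrow\quad
  I(X; Z) \,\le\, I(X; Y).
\]
% Equality holds only when Y keeps everything about X that matters for Z.
% Read each arrow as one agent handing its summary to the next:
% every hop preserves I(X; .) at best, and usually shrinks it.
```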

Brooks said the same thing about programmers in 1975. Coase said it about firms in 1937. Ohno said it about production lines. Parnas said it about software modules. The instinct to decompose a problem across multiple actors introduces coordination overhead that often destroys more value than the parallelism creates.

But decomposition sometimes works — when interfaces are narrow and subsystems are genuinely independent. The Unix pipe is a multi-agent architecture, but each tool hides its internals and communicates through the narrowest possible interface. Minsky proposed that intelligence itself is a society of simple agents. The key word is simple.
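
A minimal sketch of that narrow-interface idea, in Python rather than shell; the stage names are illustrative, not from the essay. Each "tool" sees only a stream of lines and knows nothing about its neighbours' internals:

```python
from typing import Iterable, Iterator

# Every stage is a function from a stream of lines to a stream of lines.
# That single narrow type is the whole interface, like stdin/stdout in a pipe.

def strip_blank(lines: Iterable[str]) -> Iterator[str]:
    """Drop empty lines."""
    return (line for line in lines if line.strip())

def grep(pattern: str, lines: Iterable[str]) -> Iterator[str]:
    """Keep only lines containing the pattern."""
    return (line for line in lines if pattern in line)

def count(lines: Iterable[str]) -> int:
    """Terminal stage: reduce the stream to a number."""
    return sum(1 for _ in lines)

log = ["ERROR disk full", "", "ok", "ERROR timeout"]
# Composition is just nesting; each stage hides its internals entirely.
print(count(grep("ERROR", strip_blank(log))))  # -> 2
```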

The question is never one agent or many. It's when coordination justifies its cost. That frontier isn't fixed. It moves as individual agents grow more capable.