Using a sledgehammer to crack a nut

07/02/2026 << back to Debugging Myself

I am a highly creative person, and as such, I have collided with reality often enough to also become a radical realist. An individual's capacity to handle complex tasks has a limit. A multi-instrumentalist with the genius of Prince can compose and even record albums of extraordinary quality, but he can hardly stage them live alone. Likewise, when the music stems from a single composer, the individual brilliance each performer brings to their instrument is lost.

The reality is that there is a bottleneck. Beyond the record or—speaking of software—the MVP, there is an entire complex lifecycle that is difficult for a single person to cover. Agents are the wet dream of any creative engineer who feels limited by their own capacity: "if only I had more hands to bring my ideas to life." Agents are those extra hands, and the promise is that they can help you create that all-in-one planning software you’ve envisioned.

The hyperbole of figures like Sam Altman suggests that we will soon see $1B companies with a single employee. The prediction sounds like the pitch of a charlatan selling miracle tonics, though it undoubtedly has some truth at its core. Can you give full credibility to a CEO trying to raise capital at all costs? As a radical realist, I question the idea. I cannot believe that one architect with agents (as much as I see myself in that person's shoes) can replace 20 engineers.

However, I can believe that 20 AI-augmented architects can do the work of 20 engineers. Pivoting the role so that a software engineer, as we understand them today, adds agents as tools to power their work is a much more feasible reality. Perhaps AI is even the key to finally breaking Brooks' Law: agents that eliminate the massive "communication tax," allowing those 20 profiles to operate asynchronously and modularly without the structure collapsing under the weight of excessive meetings.

I don't want to be a killjoy, nor do I want to seem like a Luddite raising a fist against the machines. I wouldn't even mind delegating the work to a handful of robots and reclining on a triclinium, dedicating my life to contemplation, philosophy, art and, why not, hedonism. I simply understand that reality is stubborn, for several reasons.
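
Brooks' Law is, at bottom, arithmetic: pairwise communication channels grow as n(n-1)/2, so coordination cost grows quadratically while hands only grow linearly. A minimal sketch of that growth (illustrative only, not tied to any particular team or tool):

```python
def channels(n: int) -> int:
    """Pairwise communication channels in a team of n people (Brooks' Law)."""
    return n * (n - 1) // 2

# Doubling the team quadruples (roughly) the communication tax.
for n in (2, 5, 10, 20):
    print(f"{n:>2} people -> {channels(n):>3} channels")
# 20 people already means 190 channels to keep in sync.
```

If agents really do absorb part of that tax, the curve flattens and 20 architects can scale where 20 meeting-bound engineers could not.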

1. The Collapse of Audit Capacity (The Cognitive Bottleneck)

Even if an agent generates 10,000 lines of code in seconds, the mental effort to audit, understand dependencies, and ensure the cohesion of that code remains human.

  • The Cognitive Ceiling: A single architect cannot maintain the technical complexity of an entire SaaS in their "working memory" if it grows at the speed of AI.

  • Necessary Fragmentation: Dividing a system into modules isn't just for technical order; it’s to allow human minds to process each part. 20 architects allow for a deep audit; one architect can only perform a superficial one.

And herein lies the invisible danger: Context Debt. AI-generated code is not code you "know"; it is code you have "seen." When you write a function, you mentally map the "why" behind every decision; when you validate it at a glance, you are merely performing pattern recognition. In the long run, delegating 100% to a single brain creates a system that no one truly understands.

2. Structural Risk (SPOF and Pareto)

  • Single Point of Failure (SPOF): Centralizing the strategic knowledge of a system in a single person is an unacceptable business risk. If the "Solo Architect" disappears, the agents are left without the context that gives them meaning.

  • Pareto in Leadership: Even in a team of 20 agent-powered architects, the 80/20 rule will apply. But having that base of 20 allows the "high-performance" percentage to drive the system forward while the rest ensure robustness and continuity.

3. Software as a Living Organism (Maintenance vs. Genesis)

The industry often confuses creation (the MVP) with software survival.

  • The Illusion of the Magic MVP: It’s easy to ask an agent to build a structure from scratch. The hard part is managing the "growing pains": technical debt, security patches, migrations, and the natural entropy of code.

  • The Lifecycle: Software "gets sick" and evolves. 20 architects can "diagnose" and treat different areas of the system simultaneously. A single architect would collapse under the weight of the corrective maintenance generated by their own agents.

Instead of seeing the engineer as someone who writes code, we should see them as the guardian of context. Code is cheap (the agent does it), but context is expensive (only the human has it).

20 architects are not there to write more code, but to manage 20 times more context, ensuring the system doesn't turn into a "black box" that no one knows how to repair when the agents fail.


Using a sledgehammer to crack a nut

I’ve opened with this title because it’s one of the hurdles that amuse me most when thinking about working with agents. When you only need to change a single line of code—something that happens often—does it make sense to use agents? It’s like firing up an industrial bakery's assembly line just to bake a single tray of cookies.

Using an agent for a surgical correction is not just unnecessary; it’s counterproductive. It is the failure of the natural language interface for micro-precision: the friction between the machine’s Inference (guessing based on probabilities) and your Intuition (which knows the business intent). The agent, like a factory, is designed for mass production. It requires setting up the machinery (context), pre-heating the ovens (prompting/warming), and supervising the conveyor belt (auditing the output). Meanwhile, the architect can use their hands as a faster, more precise instrument for a small fix.

Invoking an agent for a trivial task carries a series of hidden costs. First, cognitive friction: you must explain in detail something you already know, and the translation time exceeds the execution time of your fingers on the keyboard. Then there’s verification latency: the agent often rewrites the surrounding block or introduces subtle changes that force you to read many lines just to validate the change of one.
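
Those hidden costs can be put into a back-of-envelope model. The sketch below is purely illustrative: every number (setup time, review time per line, the "blast radius" of a diff that touches more lines than asked) is a hypothetical parameter, not a measurement; only the shape of the trade-off matters.

```python
# Hypothetical break-even model: an agent call pays off only once its
# fixed overhead is amortized across a large enough change.

def manual_cost(lines: int, secs_per_line: float = 30.0) -> float:
    """Seconds to hand-edit `lines` lines yourself."""
    return lines * secs_per_line

def agent_cost(lines: int,
               context_setup: float = 180.0,   # explaining intent and context
               review_per_line: float = 10.0,  # auditing the resulting diff
               blast_radius: float = 2.0) -> float:
    """Seconds to delegate: fixed setup, plus reviewing an often-larger diff."""
    return context_setup + lines * blast_radius * review_per_line

for n in (1, 5, 20, 100):
    winner = "agent" if agent_cost(n) < manual_cost(n) else "manual"
    print(f"{n:>3}-line change -> {winner}")
```

Under these made-up parameters the one-line fix always loses to your own hands; the agent only wins once the change is big enough to amortize the setup and audit overhead, which is exactly the sledgehammer-and-nut intuition.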


Ultimately, I am neither a skeptic nor a killjoy. AI and agents are here to stay, but interpreting their impact on the industry requires a necessary nuance to readjust expectations—and, of course, to help those of us with skin in the game understand that they are not our enemies. They aren't here to replace us, but to power our work.

The value of the engineer in 2026 is not measured in lines of code per minute, but in how much complexity they can orchestrate without losing their sanity.

exit(0);
