Hello World. Are You Ready?
Milan, 2023. I Was Wrong. Sort Of.
In 2023, I told a room full of business leaders in Milan that AI replacing human work wasn't happening yet. That same year, GPT-4 launched and the landscape shifted fast.
What followed wasn’t gradual. It was a cascade. Model after model. Capability after capability. What took decades in previous tech cycles happened in months. By the time most organizations had finished debating whether to adopt AI, the conversation had already moved to how autonomous it should be.
Now we’re talking about fully autonomous AI agents — systems that don’t just assist, but plan, decide, execute, and report back. No hand-holding. No supervision. No human in the loop.
And I’ll be honest, my views are still swinging. Somewhere between uncomfortable and cautious. But how I feel about it is no longer the point.
I’ve Been Recalibrating Ever Since
I was an old-school programmer. Now I work with a coding agent daily. On a scale of 0 to 10, it’s a solid 10. Like having a sharp junior coder who never sleeps, never complains, and executes fast.
But here’s the footnote that changes everything: without my supervision, the output is technically plausible and practically wrong.
The agent executes. The expert makes it matter.
That’s not a limitation of the technology. That’s how autonomous workflows work.
A New Hierarchy Is Forming
Autonomous agents can run workflows. The workflows contain all the task-level work: steps chained in sequence and governed by rules. None of this is new. But let’s call it what it is: sophisticated automation, not intelligence.
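Stripped to its mechanics, that kind of automation is just tasks chained in sequence, each gated by an explicit rule. A minimal sketch, with all task names and rules invented for illustration:

```python
# Hypothetical sketch of a rule-governed workflow: tasks run in a fixed
# order, each gated by an explicit rule. No reasoning, no adaptation --
# if a rule fails, the chain simply stops.

def validate_order(o): return {**o, "validated": True}
def reserve_stock(o):  return {**o, "reserved": True}
def issue_invoice(o):  return {**o, "invoiced": True}

def run_workflow(order):
    steps = [
        # (task, rule that must hold before the task may run)
        (validate_order, lambda o: "items" in o),
        (reserve_stock,  lambda o: o.get("validated")),
        (issue_invoice,  lambda o: o.get("reserved")),
    ]
    for task, rule in steps:
        if not rule(order):
            return order            # rule failed: halt, don't improvise
        order = task(order)
    return order
```

Everything the system does here was decided in advance by whoever wrote the rules, which is exactly the distinction drawn above between automation and intelligence.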
True agentic work — the kind that reasons, adapts, and decides — still needs something more. It needs a domain expert as its north star. Not managing every step. Governing the outcome.
What’s emerging isn’t AI replacing the org chart. It’s a new supervision hierarchy sitting above it. Someone has to own what the agent knows. Someone must convert policies and procedures into repositories that define what it’s allowed to do — and when it’s wrong.
The Repository Problem
Here’s the unglamorous truth no vendor is talking about.
For agents to work intelligently inside your organization, they need to know how your organization actually works. Not the official version. The real one. The decisions, the exceptions, the tribal knowledge baked into your people over years.
That means building an internal knowledge repository: documented workflows, procedural logic, institutional memory. The raw material an agent needs to operate in your context, not just in theory.
The problem is that hundreds of years of human work culture didn’t develop with documentation in mind. We built chains of command, assembly lines, approval hierarchies, all designed around human creativity, decision-making, supervision, trust, presence, and accumulated experience. The idea of all that running unattended is not just a technical challenge. It’s a cultural one.
And then the harder question: who builds the repository, who maintains it, and who owns it when the business changes?
A repository is not a company handbook. Not policy papers filed in binders. Not an intranet. Think of it as building your company’s own MCP (Model Context Protocol) — a programmatic translation of how your organization actually thinks, decides, and operates. It converts human workflow into a structured library that an agent can navigate: the chains of command, the decision logic, the inputs and outputs that reflect real operations on the ground.
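What that structured library might look like is an open question. As one hypothetical sketch, with every name and field invented for illustration, a repository entry could pair a procedure with the ownership and decision logic an agent would need to navigate it:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a knowledge repository entry: a procedure
# described not as prose, but as data an agent can query -- who owns
# it, what it may do, and when it must escalate to a human.

@dataclass
class Procedure:
    name: str
    owner: str                    # the human accountable for this knowledge
    allowed_actions: list[str]
    escalation_rule: str          # plain-language trigger, encoded later

@dataclass
class Repository:
    procedures: dict = field(default_factory=dict)

    def register(self, proc: Procedure) -> None:
        self.procedures[proc.name] = proc

    def may(self, proc_name: str, action: str) -> bool:
        """Is this action permitted under this procedure at all?"""
        proc = self.procedures.get(proc_name)
        return proc is not None and action in proc.allowed_actions

repo = Repository()
repo.register(Procedure(
    name="refund",
    owner="head_of_support",
    allowed_actions=["approve_under_100", "escalate"],
    escalation_rule="any refund over 100 goes to the owner",
))
```

The point of the sketch is the shape, not the fields: the agent consults the repository before acting, and the repository, not the agent, defines what is allowed.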
I don’t have a clean answer on how to build it. I haven’t seen anyone who has one either.
What I Do Know
Task-level automation has long been achievable. AI agents go further: through machine reasoning, they can autonomously chain tasks into a mission. But intelligence requires context. Context requires humans to encode it. And encoding it requires a discipline most organizations have never had to develop.

The blend of expertise an autonomous operation demands — strategic, procedural, technical — has no precedent in how organizations have traditionally been built. The business leader who thinks in ideas and the engineer who thinks in systems are being asked to speak the same language. That’s a challenge pulling in two directions at once.
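The step from automation to agentic chaining can be sketched as the difference between a fixed sequence and a loop that picks whatever applicable step moves the mission forward. A hypothetical sketch, with the tasks, preconditions, and goal all invented for illustration:

```python
# Hypothetical sketch of agentic chaining: instead of a fixed sequence,
# the loop repeatedly picks any task whose precondition holds, until the
# mission's goal is met. The "context" lives in the preconditions --
# which is exactly the part humans must encode.

def agent_loop(state, tasks, goal, max_steps=10):
    for _ in range(max_steps):
        if goal(state):
            return state
        # pick the first task that is applicable and not yet done
        for name, (precondition, effect) in tasks.items():
            if name not in state["done"] and precondition(state):
                state = effect(state)
                state["done"].add(name)
                break
        else:
            break                   # nothing applicable: stop, don't guess
    return state

tasks = {
    "draft_report": (lambda s: True,              lambda s: {**s, "draft": True}),
    "get_approval": (lambda s: s.get("draft"),    lambda s: {**s, "approved": True}),
    "publish":      (lambda s: s.get("approved"), lambda s: {**s, "published": True}),
}
goal = lambda s: s.get("published", False)
```

Note that the loop halts rather than improvising when no precondition holds: the agent only reaches its goal if the encoded context is complete, which is the discipline argued for above.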
Agentic work is coming whether you’re ready or not. Being ready means your people have done the work of encoding how your organization actually thinks, decides, and operates. No AI agent can do that part for you.

