The Folder Full of Job Titles
A tweet went viral last week. Someone shared a screenshot of a folder. Inside the folder was a series of meticulously crafted AI prompts, each file named after a position.
financialmanager.md, graphicdesigner.md, contentwriter.md, projectmanager.md, etc.
The implication was clear. The future of work. An autonomous organisation. Every role reduced to instructions. Every function handled by a dedicated AI agent.
It looked elegant. It looked inevitable. It looked like it was built by someone who had never worked inside an organisation.
The Author Problem
Every prompt in that folder represents someone’s understanding of how a job should be done. Their assumptions. Their mental model of what good looks like.
That understanding came from somewhere. Years of doing the work. Watching what fails. Learning what customers actually want versus what they say they want.
The prompts are not the knowledge. The prompts are a photograph of knowledge at a single moment in time.
And like any photograph, they begin aging the instant they’re taken.
The Pipeline Illusion
Chain AI agents together. Output feeds input feeds output. It looks like an autonomous pipeline. A processes, then B processes, then C processes.
This works for assembly lines. Raw material enters, finished product exits.
But knowledge work isn’t an assembly line.
In a real meeting, the CFO’s expression shifts mid-presentation. The product lead notices. She adjusts her next point before he speaks. The CEO catches this exchange and decides to let it play out.
None of this was predefined. Three people processed the same moment simultaneously, responding to each other’s responses in real time.
An AI agent receives input, processes, produces output. It cannot notice the CFO’s expression while processing the slide. It cannot adjust mid-generation based on signals it wasn’t told to watch.
It is, fundamentally, linear. One thing at a time. One direction. No peripheral vision.
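To make that linearity concrete, here is a minimal sketch of the folder-as-pipeline idea. Everything in it is an assumption for illustration: call_model stands in for whatever LLM API you’d actually use, and the filenames simply mirror the screenshot.

```python
from pathlib import Path

def call_model(system_prompt: str, user_input: str) -> str:
    """Placeholder for a real LLM call; swap in your provider's API."""
    raise NotImplementedError

def run_agent(prompt_file: str, task: str) -> str:
    # Each "role" is just a system prompt loaded from the folder.
    return call_model(Path(prompt_file).read_text(), task)

def run_pipeline(task: str) -> str:
    # A processes, then B processes, then C processes.
    # Each step sees exactly one thing: the previous step's output.
    output = run_agent("projectmanager.md", task)
    output = run_agent("contentwriter.md", output)
    output = run_agent("graphicdesigner.md", output)
    return output
```

Notice what the structure forbids. There is no channel through which the content writer can watch the project manager work, and no step can react to a signal outside its single input argument. The peripheral vision isn’t missing from the prompts. It’s missing from the shape of the program.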
The Drift
If you’ve coded with AI, you’ve seen this: AI generates code. You feed it to another AI for review. The review suggests changes. You feed those back.
Each handoff introduces drift. Small misunderstandings compound. Context gets lost. By the third cycle, you’re debugging a problem you never had.
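As code, the cycle looks harmless. A hedged sketch, using the same hypothetical call_model placeholder and an arbitrary three rounds:

```python
def call_model(system_prompt: str, user_input: str) -> str:
    """Placeholder for a real LLM call; swap in your provider's API."""
    raise NotImplementedError

def review_loop(task: str, rounds: int = 3) -> str:
    code = call_model("You are a programmer.", task)
    for _ in range(rounds):
        review = call_model("You are a code reviewer.", code)
        # The reviewer's reading of the code, right or wrong, becomes the
        # ground truth for the next revision. This is where drift compounds.
        code = call_model(
            "You are a programmer. Apply this code review.",
            f"Review:\n{review}\n\nCode:\n{code}",
        )
    return code
```

Look at the exit condition. It’s a loop counter. Nothing in the structure can decide that the whole approach is wrong and stop.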
Humans interrupt this constantly. “Wait, that’s not what I meant.” “Actually, forget that approach entirely.”
This spontaneous course-correction isn’t a feature you can add to a pipeline. It’s the ability to break the sequence when the sequence heads somewhere wrong.
You cannot predefine when to break. If you could, you wouldn’t need to.
The Judgment Gap
Even deterministic tasks, with clear inputs, clear outputs, and clear success criteria, will take years for AI to handle reliably. Anyone working with AI daily knows this.
But judgment is a different category entirely.
Judgment is: “The process says do X, but something feels wrong.”
Judgment is: “The data supports decision A, but I know this client.”
Judgment is: “Everyone agrees, which is exactly why I’m pushing back.”
You cannot predefine judgment because judgment exists for situations that weren’t predefined. It’s recognising when the rules don’t apply.
The folder captures the rules. It cannot capture the wisdom to break them.
What The Folder Actually Shows
That screenshot isn’t the future. It’s a documentation project.
Someone wrote down how they think work should be done. That’s valuable. Most organisations run on tribal knowledge that evaporates when people leave.
The prompts aren’t replacements for roles. They’re the beginning of understanding what roles actually do.
What’s Actually Coming
The autonomous organisation isn’t arriving. Not in the next twenty years, at least. Not because the technology fails, but because organisations exist to handle what can’t be specified in advance.
What is coming: humans managing multiple AI agents. The folder of prompts as starting point, not workforce. Value shifting from doing work to knowing which work matters.
Here’s what the screenshot got wrong. AI at work isn’t about automating what humans already do well. It’s about enabling what humans couldn’t do before. Not replacement. Augmentation.
The person who shared that folder was right about one thing. The org chart is changing. And something is missing. Not another agent. A position responsible for unlocking value that didn’t exist before the agents arrived.

