The Backwards Brain
John Nosta, founder of the NostaLab think tank, recently observed that AI trains humans to think backwards. It provides answers before we understand the question. It favours fluency over comprehension. It flips human reasoning by delivering polished outputs before we’ve had time to think.
I see this becoming the norm for most AI users. The pleasure is almost instant. Ask a question. Get a response. Feel empowered. Move on.
Do this enough times and something shifts. You stop forming the question properly because the answer comes regardless. The expectation of instant empowerment stops you from challenging the response, because another one is only seconds away. You stop thinking because the output already exists. You start to feel as though you already know everything.
The conclusion arrives before the reasoning. The destination before the journey. The answer before you understood the question well enough to ask it.
This is not how we develop cognitive capability. It runs counter to everything we know about learning.
Learning requires friction. The wrong turns. The dead ends. The moments where you sit with not-knowing long enough to actually figure something out. The struggle that builds judgment.
Now we have traded all of it for speed.
Smooth outputs. Hollow understanding. And we are calling this progress?
The Reskilling Misdirection
The conversation about AI focuses on reskilling workers to use new tools. Well intentioned, but it misses the point entirely.
The tools are too easy. Anyone can prompt and get an answer; using them is almost a no-brainer. Watch someone use ChatGPT for the first time. Within minutes they’re generating content, asking follow-ups, iterating on outputs. The interface is deliberately frictionless. The barrier to use is nearly zero.
So the gap isn’t in operating AI. Everyone can operate AI.
The real gap is in preparing for it.
Knowing what you actually need before you ask. Defining quality before you see output. Understanding your own standards well enough to recognise when they’ve been met, or missed.
Most people skip this entirely. They prompt first and evaluate later. They let the AI’s output shape their expectations rather than the other way around.
This is the backwards brain in action.
What Responsible AI Development Actually Looks Like
Anthropic recently added a capability called Skills to Claude Code. The concept is deceptively simple: before the AI agent does anything, humans write down their best practices. Their standards. Their decision frameworks. The way they want work done.
The agent then executes within those boundaries.
This inverts the common pattern. Instead of AI producing output that humans react to, humans define the constraints that AI operates within.
As a developer, I love seeing this: deterministic preparation for probabilistic execution. I genuinely believe Anthropic is making meaningful progress here, progress that benefits humans.
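To make the inversion concrete, here is a minimal sketch in Python of what “standards first, generation second” can look like. It is only an illustration of the pattern, not Anthropic’s Skills format or API; `Standard`, `generate_draft`, and the example checks are assumptions invented for the sketch.

```python
# A minimal sketch of "deterministic preparation for probabilistic execution".
# The standards are written down first; the model's output is accepted only
# if it satisfies them. `generate_draft` is a hypothetical stand-in for any
# LLM call, not a real API.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Standard:
    name: str
    check: Callable[[str], bool]  # deterministic test applied to the output


# 1. Preparation: codify what "good" means before anything is generated.
STANDARDS = [
    Standard("has_summary", lambda text: text.lower().startswith("summary:")),
    Standard("under_200_words", lambda text: len(text.split()) <= 200),
    Standard("no_hedging", lambda text: "it depends" not in text.lower()),
]


def generate_draft(prompt: str) -> str:
    """Hypothetical placeholder for a probabilistic model call."""
    return "Summary: a short draft produced by the model."


def run_within_standards(prompt: str) -> str:
    # 2. Execution: the agent works inside the boundaries defined above.
    draft = generate_draft(prompt)
    failures = [s.name for s in STANDARDS if not s.check(draft)]
    if failures:
        raise ValueError(f"Output rejected, failed standards: {failures}")
    return draft


if __name__ == "__main__":
    print(run_within_standards("Summarise the quarterly report."))
```

In Claude Code, roughly the same idea takes the form of skill files the agent reads before it acts; the sketch above only captures the shape of the pattern, which is that human judgment is written down before the probabilistic step runs.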
Yes, it’s slower to start. It requires thinking before prompting. It demands that you know what good looks like before you ask for it.
Which is precisely why it works.
The skill architecture isn’t about making AI more capable. It’s about making human judgment explicit. Codifying expertise rather than hoping AI will replicate it. Transferring knowledge into structure that agents can follow.
This is hard work. It requires examining how you actually do things—not how you think you do them, or how you’d like to. It forces clarity.
Most organisations avoid this. They want the output without the preparation. They deploy AI into workflows they’ve never examined. They automate processes they don’t understand. They expect tools to provide clarity they never had.
Then they wonder why the results feel generic.
The Realistic Truth
The more capable AI becomes, the more preparation it demands. Not less. More.
This is the part nobody wants to hear. We adopted AI to reduce effort. But the real value requires more effort—just different effort. Earlier effort. Thinking effort.
The organisations getting results aren’t the ones with the best prompts. They’re the ones who did the homework before the first prompt was written.
They mapped their workflows. They defined their standards. They built the scaffolding that makes AI output useful rather than merely plausible.
This is why Anthropic’s Skills architecture matters beyond code. It’s a proof of concept for something larger: AI doesn’t advance human work. Human work advances AI.
The skills we bring—cognitive skills, planning, applying knowledge, defining what good looks like—these aren’t inputs to AI. They’re the foundation AI runs on.
Without them, you get speed without direction. Output without outcome. Motion without progress.
Prompting is not a skill. Preparation is.
Right now, AI doesn’t have a thinking deficiency.
We do.

