Can everyone be an expert?
The iconic venture capitalist Marc Andreessen frames expertise as knowledge you can absorb: from books, from tutoring, and now, of course, from AI. I disagree.
Real expertise is learned from consequences. Most of the time, these consequences are failures, redos, and the hard judgment calls about when to push forward and when to let go. AI has read everything but experienced nothing. It has never felt the weight of a decision gone wrong.
The “superpowered individual” who skips the consequences is just an intern who learns by mimicking. That’s what AI is today. A very good mimic. And mimics fool people who've never seen the real thing.
There is a dark side to AI democratization that Andreessen completely misses. When a project manager with no design background uses AI to “become a designer,” they lack the trained eye to see what’s wrong. The AI output looks polished. The gradients are smooth. The spacing seems fine. But they have no frame of reference to separate “great” from a pre-trained “competent template.”
The expert designer sees: broken visual hierarchy, a derivative aesthetic, accessibility failures, brand inconsistency. Layers of judgment built on real experience.
The AI-empowered non-expert sees: “Wow, this is nice, and it would have taken me hours.”
The Expertise Inversion
Andreessen argues AI makes everyone an expert. The opposite is true. AI makes everyone dependent.
When you outsource thinking, you stop learning how to think. The person who once struggled through a problem, making mistakes, hitting walls, developing judgment, now skips straight to the answer. They get the output without building the circuitry to evaluate it.
Here’s the trap: an AI user without domain expertise prompts from common sense, intuition, or, worse, misunderstanding. The AI responds confidently. It always responds confidently. The user has no basis to verify whether the answer is correct, hallucinated, outdated, or subtly wrong in ways that only matter when the work ships.
This is the Evaluation Paradox. You need expertise to assess whether AI gave you expertise. Without it, you’re not collaborating with intelligence. You’re trusting a stranger’s homework.
The danger is real. Wrong answers delivered with confidence look identical to right answers delivered with confidence. Only the expert can tell them apart.
We think our problem is thinking: too slow, too hard, too much effort. So we outsource it. But the problem was never thinking. The problem is that we stop.
Can you be an expert?
Yes. Anyone can be an expert. This is how humans have always worked. We learn, we practice, we fail, we adjust. That process hasn’t changed.
Andreessen sees AI as a teacher delivering knowledge. But most AI users today aren’t learning. They’re chatting to get fast answers, then moving on. Hit and run. This builds nothing.
There is a better way. Just as AI foundation models require training on structured data, humans need foundational training too. Before we can use AI properly at work, we need to understand our own data, recognize the glitches in our workflows, and know what good output actually looks like.
This is where real domain experts matter, not as gatekeepers but as guides. Learning happens through interaction with people who have lived the consequences, not in quick chats with machines that haven’t.
AI can accelerate expertise. But it cannot replace the foundation. Skip the foundation, and you’re not building expertise. You’re collecting answers you can’t verify.