Why AI Literacy Needs an Empirical Stage
The OECD AI Literacy framework (Engage, Create, Design, Manage) is a sensible approach originally developed for AI education. We’ve found this AILit framework also highly suitable for learning organizations seeking to reskill and upskill their workforce for AI adoption in the workplace. Its four knowledge domains provide clear direction without overwhelming learners with technical complexity.
But after reviewing this framework, we’ve identified a critical gap: there’s no foundation stage for establishing empirical truth.
This isn’t a theoretical concern. It’s a practical necessity.
Why the AILit Framework Works for Workplace Learning
The OECD framework succeeds in organizational reskilling because it maps to how professionals actually work:
ENGAGE → Using AI tools and resources to perform tasks
CREATE → Producing outputs using AI resources
DESIGN → Building repeatable AI-enhanced workflows
MANAGE → Governing AI use and measuring impact
These four domains are intuitive. They don’t require technical background. They translate directly to business outcomes—making them ideal for workforce upskilling programs where practitioners need applicable knowledge, not academic theory.
But the framework assumes something critical: that learners can distinguish between AI-generated truth and AI-generated plausibility.
The Hallucination Problem
Here’s the uncomfortable reality: AI systems generate confident, coherent, and completely fabricated information. Regularly.
Consider this scenario: A marketing professional using AI for competitive research might receive:
Pricing data that sounds accurate but is outdated
Feature comparisons that list capabilities competitors don’t have
Market statistics that are mathematically plausible but empirically false
The problem: Without empirical grounding, how would they know?
The AILit framework teaches how to use AI (Engage), what to create (Create), how to systematize it (Design), and how to govern it (Manage).
It doesn’t teach: How do you know if what AI tells you is true?
The Empirical Stage: Pre-Framework Foundation
We propose adding a foundational stage—call it Stage 0: Empirical Preparation—before engaging with the four domains.
This stage establishes the evidence-based discipline that makes everything else reliable.
What “Empirical Preparation” Means
Empirical preparation is not a philosophical discussion about truth and accuracy. It is the concrete practice of establishing factual baselines.
Before using AI for any task, learners establish:
Ground truth dataset → Facts they can verify independently
Validation checkpoints → Known correct answers to test against
Source documentation → Where information came from and when
Gap awareness → What they don’t know (where hallucination risk is highest)
Empirical preparation transforms AI from a “black box oracle” into a “research assistant whose output requires fact-checking.”
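To make this concrete, here is a minimal sketch of what these four artifacts could look like if a team chose to keep them in code. Everything in it is an illustrative assumption rather than part of the AILit framework: the GroundTruthFact structure, the sample competitor facts, and the validation_checkpoint helper are hypothetical names used only for this sketch.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class GroundTruthFact:
    """One independently verified fact, with its source documented."""
    claim: str          # the statement we verified ourselves
    value: str          # the verified value
    source: str         # where the information came from
    verified_on: date   # when we last checked it

# Ground truth dataset: facts the team verified before using AI (hypothetical examples).
ground_truth = [
    GroundTruthFact("Competitor A starter plan price", "$49/month",
                    "competitor-a.example.com/pricing", date(2025, 6, 1)),
    GroundTruthFact("Competitor B offers SSO", "yes, on enterprise tier",
                    "competitor-b.example.com/features", date(2025, 6, 1)),
]

# Gap awareness: topics with no verified baseline, where hallucination risk is highest.
known_gaps = {"Competitor C roadmap", "APAC market share"}

def validation_checkpoint(ai_claims: dict[str, str]) -> None:
    """Compare AI-generated claims against the ground truth dataset."""
    verified = {fact.claim: fact.value for fact in ground_truth}
    for claim, ai_value in ai_claims.items():
        if claim in verified:
            status = "MATCH" if ai_value == verified[claim] else "CONFLICT: verify manually"
        elif claim in known_gaps:
            status = "NO BASELINE: high hallucination risk"
        else:
            status = "UNVERIFIED: establish ground truth before relying on it"
        print(f"{claim} -> {status}")
```

A single call such as validation_checkpoint({"Competitor A starter plan price": "$39/month"}) would flag the price as a conflict to check manually before it reaches any deliverable.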
Why This Matters for Each Domain
Here is how the empirical stage strengthens each domain of the AILit framework:
ENGAGE (Domain 1)
Without empirical foundation: Learners accept AI outputs at face value
With empirical foundation: Learners engage critically, knowing what to verify
Example: Using NotebookLM to analyze industry reports
Empirical prep: Read 2-3 reports manually first to understand actual content
Engagement: When AI summarizes 20 reports, learner can assess: “Does this match what I know?”
CREATE (Domain 2)
Without empirical foundation: Quality is subjective (“this sounds good”)
With empirical foundation: Quality is measurable against known standards
Example: Creating a case study with AI assistance
Empirical prep: Gather actual customer data—implementation timeline, cost savings, quotes
Creation: AI drafts narrative, but learner validates every claim against source data
Result: Factually accurate case study, not plausible fiction
DESIGN (Domain 3)
Without empirical foundation: Workflows propagate errors systematically
With empirical foundation: Workflows include validation checkpoints
Example: Weekly competitive intelligence routine
Empirical prep: Establish baseline of current competitor positions
Design: Workflow includes “spot-check 3 competitors manually monthly”
Result: Automated monitoring with built-in reality checks
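A sketch of that routine in code, under illustrative assumptions, might look like the following: ai_summarize stands in for whatever AI tool the team actually uses, and the competitor list and spot-check cadence are placeholders, not a recommendation.

```python
import random
from datetime import date

COMPETITORS = ["Competitor A", "Competitor B", "Competitor C",
               "Competitor D", "Competitor E"]

def weekly_intelligence_run(ai_summarize, today: date) -> dict:
    """One cycle of the AI-enhanced workflow, with a built-in reality check."""
    # AI handles the volume: summarize every competitor's current position.
    report = {name: ai_summarize(name) for name in COMPETITORS}

    # Validation checkpoint: in the first week of each month, pick three
    # competitors at random for a manual check against primary sources.
    if today.day <= 7:
        to_verify = random.sample(COMPETITORS, k=3)
        print("Manual spot-check due this week:", ", ".join(to_verify))

    return report
```

The design choice matters more than the code: the manual check is written into the workflow definition itself, so the automation can never quietly drift away from reality.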
MANAGE (Domain 4)
This is where empirical foundation becomes critical.
Managing AI requires measuring impact, assessing accuracy, and making informed adjustments.
Without empirical foundation, you can’t manage effectively because you lack:
Baseline to measure improvement against
Validation data to assess AI accuracy
Evidence to justify continuing or changing approaches
The empirical foundation enables evidence-based management, not assumption-based management.
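One way to make that measurable, sketched below under simplifying assumptions, is to score AI outputs against the validation checkpoints established during empirical preparation; the exact-match scoring and the sample data are illustrative, not a prescribed metric.

```python
def ai_accuracy(validation_set: dict[str, str], ai_answers: dict[str, str]) -> float:
    """Fraction of validation questions the AI answered correctly.

    validation_set maps each question to the known correct answer established
    during empirical preparation; ai_answers maps questions to AI outputs.
    """
    checked = [q for q in validation_set if q in ai_answers]
    if not checked:
        return 0.0
    correct = sum(ai_answers[q].strip().lower() == validation_set[q].strip().lower()
                  for q in checked)
    return correct / len(checked)

# Hypothetical baseline and AI output, for illustration only.
baseline = {"Competitor A starter plan price": "$49/month",
            "Competitor B offers SSO": "yes, on enterprise tier"}
ai_output = {"Competitor A starter plan price": "$49/month",
             "Competitor B offers SSO": "no"}
print(f"AI accuracy on validation set: {ai_accuracy(baseline, ai_output):.0%}")  # prints 50%
```

With a number like this in hand, “continue, adjust, or stop” becomes an evidence-based decision rather than an assumption.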
Why Manual Work Matters: Literacy vs. Dependency
At this point, you might ask: Why do all this manual work? Isn’t AI supposed to handle everything for us?
After all, the empirical preparation requires learners to read reports manually, verify data independently, and establish baselines before using AI. This sounds tedious. It seems to contradict the promise of AI automation where a single prompt achieves the goal.
But this is precisely the point.
AI Literacy Is Not About Maximizing Automation
The goal of AI literacy training isn’t to maximize automation efficiency. It’s to develop cognitive capability.
There’s a fundamental difference between literacy and dependency:
Dependency means outsourcing thinking to AI:
Accepting outputs without verification
Trusting patterns without understanding
Scaling processes without validation
Measuring activity without assessing quality
Literacy means partnering with AI while retaining critical judgment:
Using AI to process volume, while verifying accuracy
Leveraging AI for pattern detection, while interpreting meaning
Automating repetitive tasks, while maintaining quality standards
Scaling intelligently, not blindly
Dependency deteriorates cognitive ability. Literacy enhances it.
The Purpose of Organizational Reskilling
Any learning organization implementing AI upskilling should recognize this distinction clearly. The goal is not to make employees dependent on AI. The goal is to develop human-AI collaboration that:
Improves productivity → AI handles volume and speed
Enhances human capability → People develop new ways of organizing knowledge, problem-solving, and critical thinking
Builds sustainable advantage → The organization doesn’t just move faster, it thinks better
Why Empirical Preparation Builds Literacy, Not Dependency
Consider what happens when a learner manually reads 2-3 industry reports before using AI to summarize 20:
What they develop:
Pattern recognition → They can spot when AI summaries miss nuanced arguments
Domain knowledge → They understand context that AI cannot infer
Critical judgment → They know which claims require verification
Quality standards → They can distinguish between thorough and superficial analysis
This is cognitive development, not busywork.
Now contrast this with someone who never reads reports manually and only consumes AI summaries:
What they lose:
Ability to assess summary quality (no reference point)
Understanding of source material depth (AI abstracts away detail)
Capacity to detect bias or omission (AI makes editorial choices invisibly)
Confidence to challenge AI outputs (no independent knowledge base)
Over time, this person becomes entirely dependent on AI’s interpretation of reality. They’ve outsourced not just the work, but the thinking.
The False Promise of “Single Prompt Solutions”
Much of AI marketing promotes a fantasy: “one prompt, complete solution.” Vendors sell tools, but tools alone don’t build capability.
The empirical stage acknowledges an uncomfortable truth: AI is powerful but unreliable. Human judgment is slow but essential.
The combination—AI for scale, humans for validation—creates something neither can achieve alone:
Speed with accuracy
Volume with quality
Automation with accountability
Reskilling for the AI Era Requires Both Skills
Effective AI literacy programs must teach two complementary skill sets:
Technical Skills (how to use AI tools)
Prompt engineering
Tool selection
Workflow integration
Output optimization
Critical Skills (how to verify AI outputs)
Establishing ground truth
Source validation
Bias detection
Quality assessment
The empirical stage develops the second set—the skills that prevent AI dependency.
Without these critical skills, technical proficiency becomes dangerous. You’ve trained people to execute AI workflows efficiently, but not to recognize when those workflows produce flawed outputs.
Organizations investing in AI literacy face a choice:
Short-term thinking: Train people to use AI tools quickly, measure productivity gains, declare success.
Long-term thinking: Train people to collaborate with AI critically, measure quality of outcomes, build sustainable capability.
The empirical stage represents long-term thinking. Yes, it requires upfront effort. Yes, it seems slower initially. But the investment compounds over time.
The Discipline of Thinking
AI is the most powerful cognitive tool ever created. But like any tool, its value depends entirely on the skill of those wielding it. The OECD AILit framework provides excellent structure for workplace AI adoption. Our proposed empirical stage doesn’t complicate that framework. It completes it.
Because the point of AI literacy isn’t to stop thinking. It’s to think better. When organizations understand this distinction—between literacy and dependency, between automation and capability—they approach AI training differently:
Not as a race to deploy tools faster, but as an investment in developing critical judgment. Not as replacement of human work, but as amplification of human intelligence. Not as outsourcing decisions to AI, but as enhancing our capacity to make better decisions.
The manual work of empirical preparation isn’t a bug in the framework. It’s the feature that ensures AI enhances human capability rather than replacing it. In an era where AI can generate anything instantly, the most valuable skill isn’t knowing how to prompt AI.
It’s knowing when AI is wrong.
That requires literacy. And literacy requires discipline. And discipline begins with empirical foundation.
The question for any learning organization isn’t: “How quickly can we adopt AI?”
The question is: “Are we building capability that lasts, or dependency that deteriorates?”
The empirical stage gives you the answer.
PS: You can download the OECD AILit Framework here.