So Many Vain, So Little Time
While consulting for a top-100 global technology company, one that actively markets AI-powered products to the world, I encountered an ironic reality: their marketing IT workflow protocols remain virtually unchanged from two decades ago. The revision guidelines I reviewed focused exclusively on traditional concerns: data privacy controls, data transfer procedures, helpdesk ticketing systems, and conventional security measures. AI was conspicuously absent.
This isn’t a story about a small business struggling to keep up with technology. This is a leading tech company selling AI solutions while operating internally on workflows designed for a pre-cloud, pre-mobile, certainly pre-AI world.
Most organizations aren’t ready to adopt AI. Not because they lack the technology, but because they’re too busy perfecting the wrong skills.
Cisco’s AI Readiness Index 2025, released weeks ago, found that just 13 percent of organizations surveyed are fully prepared for the AI age. As someone working directly with these organizations, I can tell you: this number isn’t pessimistic. If anything, it’s generous. I witness this reality firsthand—companies proudly showcasing their AI initiatives while their fundamental workflows remain trapped in the past.
The Vanity Metrics of AI Readiness
Walk into any corporate training session on AI, and you’ll see:
Employees proudly showing off their prompt engineering skills
Leaders tracking “number of AI tools deployed”
Teams competing over ChatGPT use cases
Consultants selling courses on “101 AI productivity hacks”
These are vanity metrics. They look impressive on slide decks. They generate LinkedIn engagement. They make everyone feel like they’re “doing AI.”
But they’re not solving the actual problem.
When Fortune 500 companies deploy AI tools, teams report they “did not work” or “were too generic,” not because the AI lacks capability, but because the tools don’t understand how work actually gets done in that specific organization.
92% of companies plan to invest more in AI over the next three years. Only 1% believe they’ve reached maturity.
So many vain efforts. So little time before competitors who focus on substance leave you behind.
Technology’s Vanity Cycle
Having worked through multiple waves of digital transformation over the last 25 years, I’ve watched this same pattern repeat with depressing predictability. Every new technology—the web, mobile, cloud, blockchain, and now AI—arrives with its own inflated vanity value.
The vanity inflates in direct proportion to our fear of being left out. Fear of not being in the “wanted group.” Fear of being labeled a dinosaur. Fear of missing the next big thing.
And governments and industry bodies feed this cycle, raising innovation flags so high that no one can see where they’re actually pointing. We get vague mandates to “embrace digital transformation” or “become AI-ready” without any meaningful directive on what outcomes we’re actually trying to achieve.
What is a meaningful directive? Start with the outcome.
When Singapore announced its Smart Nation initiative, the early phases were heavy on rhetoric about “being a leader in innovation.” The meaningful directives only emerged when specific outcomes were defined: reduce healthcare wait times by X%, increase elderly care efficiency by Y%, improve urban mobility by Z%.
Now, with AI, we’re seeing two dominant approaches emerge, and neither solves the fundamental problem.
The Two Failure Modes of AI Adoption
Approach 1: The Endless Sandbox
In October 2024, Minister for Digital Development and Information Josephine Teo announced that Singapore would take a “proactive, practical and collaborative approach” to govern agentic AI—AI systems capable of autonomous actions. The Government Technology Agency’s sandbox initiative with Google Cloud allows public agencies to test agentic AI capabilities and develop mitigation measures through actual deployment.
Mrs. Teo emphasized: “By observing how these systems behave – and sometimes fail – we learn what guard rails are truly needed.”
On the surface, this sounds thoughtful. Learn by doing. Understand failure modes. Build appropriate guardrails.
But here’s what actually happens: Organizations observe. And observe. And observe some more.
The sandbox becomes a comfortable holding pattern: perpetual experimentation without commitment. Teams test AI capabilities endlessly, documenting edge cases and failure modes, building ever-more-sophisticated governance frameworks. Meanwhile, competitors who understand their workflows are already deploying AI that creates actual business value.
The sandbox approach assumes the problem is understanding AI behavior. It’s not. The problem is understanding your own organization.
Approach 2: Business as Usual
The alternative? Treat AI like any other IT deployment.
This is what I witnessed at that top-100 tech company: IT protocols unchanged from 20 years ago. AI gets bolted onto existing systems without rethinking workflows. Data governance policies designed for structured databases get applied to generative AI. Security teams apply legacy access controls to tools that require fundamentally different paradigms.
Organizations procure AI tools the same way they procured CRM systems: evaluate features, negotiate contracts, roll out to users, measure adoption rates.
The business-as-usual approach assumes AI is just another enterprise software category. It’s not. AI requires understanding how work actually flows through your organization, something most companies have never systematically documented.
The Uncomfortable Truth
Organizations want to skip the hard work. They want to jump straight to AI deployment, whether in controlled sandboxes or production environments, without first understanding their own operations.
They want governance frameworks without documented workflows.
They want Chief AI Officers without organizational literacy.
They want to demonstrate AI adoption without foundational knowledge.
This is understandable. The hard work isn’t scalable, isn’t fast, isn’t visible, and isn’t comfortable. It exposes gaps in organizational knowledge.
But it’s the only work that matters.
Why Both Approaches Fail
Both the sandbox and business-as-usual approaches share the same fatal flaw: they focus on the technology instead of organizational readiness.
The sandbox camp asks: “How does AI behave?”
The business-as-usual camp asks: “How do we deploy AI?”
Neither asks: “Do we understand how work actually happens here?”
But there’s an even deeper problem: Both approaches assume AI’s value comes from replacement, not augmentation.
Organizations envision AI in their future workflows and immediately think: “How many full-time equivalents can this replace?” They calculate linear returns, turning manpower costs into economic savings. AI becomes a headcount reduction tool, a way to do the same work with fewer people.
This is the most expensive form of thinking small.
The true return isn’t in replacement. It’s in augmentation. It’s in using AI to discover opportunities you’ve never pursued because you lacked capacity. It’s developing ideas you’ve never had time to explore. It’s evolving what work means rather than shrinking the workforce.
That contracts team that reduced manual effort by 50%? They didn’t cut headcount. They redirected that capacity to negotiate better terms, identify new suppliers, and strengthen relationships. The ROI wasn’t cost savings—it was revenue growth and risk reduction.
This is why organizations fail at AI adoption: they’re optimizing for subtraction instead of multiplication.
You can’t assign accountability for AI decisions without understanding your workflows, decision patterns, and organizational knowledge structure.
Without this foundation, both approaches become expensive ways to avoid confronting uncomfortable truths.
So Many Vain. So Little Time.
Walk back into that corporate training session. Look at the prompt engineering courses, the tool adoption metrics, the pilot programs, the sandbox experiments.
All of it is vanity if it doesn’t start with one fundamental question:
Do we understand how work actually happens here?
Not according to org charts or process docs or 20-year-old IT protocols.
Actually.

The companies winning at AI aren’t the ones with the most sophisticated models or governance frameworks. They’re the ones who did the unglamorous work first: mapping real workflows, documenting tribal knowledge, understanding decision patterns, structuring organizational context.
That contracts team that reduced manual effort by 50%? They started with workflow archaeology, not prompt engineering.
That 13% of organizations actually ready for AI? They built literacy before deploying technology.
The 87% still unprepared? They’re perfecting vanity metrics while their operations remain unmapped and misunderstood.
So many vain efforts. So little time.
The clock is ticking. Choose wisely.

