How an AI at Work Consultant Actually Works
AI fluency doesn’t come from a certificate. It comes from a process.
I am an AI at Work consultant based in Singapore. And I will be honest, when the government announced its goal to train 100,000 AI-fluent workers by 2029, my first reaction was not skepticism. It was relief.
Someone is finally treating this seriously.
But ambition needs method. Training headcount is a metric. Fluency is a capability. And the distance between those two things is exactly where most corporate AI initiatives collapse.
Having spent 25 years implementing digital and marketing transformation programs for global organizations, I have watched this pattern repeat across every major technology wave. The rollout is loud. The adoption is shallow. The certificates get issued. The work doesn’t change.

I don’t intend to repeat that cycle with AI. So let me tell you how I actually work.
Start With What Exists
When I engage a new client, I don’t begin with tools. I don’t begin with training. I begin with an audit.
Specifically, an asset audit is a systematic review of everything the organization has already produced. Marketing materials. Data. Report templates. Briefing documents. Collateral. The full inventory of outputs that exist because work was done.
This matters because an asset is not just a file. An asset is evidence. Every output is the end of a process. When you examine what an organization has produced, you are reading how work actually gets done, not the version in the process manual, but the version that happens every day.
A template tells you what decisions get made repeatedly. A report tells you what information someone needed and how they chose to present it. A data set tells you what the organization believes is worth measuring. Trace how each asset was produced: who initiated it, what inputs it required, how it was reviewed, how it was shared. This is how you let the actual workflow reveal itself.
That workflow is the foundation. You cannot responsibly integrate AI into a process you have not mapped. Consultants who skip this step are not implementing AI. They are installing software and hoping.
Then Read the Team
The asset audit tells you how work gets done. The second question is equally important: how AI-capable is the team doing that work?
I use the OECD AI Literacy Framework as the diagnostic lens, organized across four domains: Engage, Create, Design, and Manage. Engage and Manage sit at the level of attitude and practice — how a team relates to AI and how responsibly they govern it. Create and Design are about tooling — whether people can produce AI-assisted work and structure it with intention.
The practical value of this mapping is precision. “This team has low AI literacy” is not a useful finding. “This team is willing to engage but lacks the design skills to move beyond reactive prompting” is actionable.
And here is the efficiency gain: the audit evidence does double duty. You are not running a separate literacy test. The assets you have already examined become the test material. A report showing signs of AI-generated content but no visible editing discipline tells you Create is present but Manage is absent. A dataset that has never been used as an AI input — despite being clearly structured for it — tells you something about Engage. The assets don’t lie about the team any more than they lie about the workflow.
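To make the "audit evidence does double duty" idea concrete, here is a minimal sketch of how audit observations could be tallied against the four OECD domains named above. The domain names come from the framework; the evidence signals, field names, and scoring rules are illustrative assumptions, not the actual diagnostic instrument.

```python
from dataclasses import dataclass

DOMAINS = ("Engage", "Create", "Design", "Manage")

@dataclass
class AssetEvidence:
    # Hypothetical evidence flags a consultant might record per asset.
    name: str
    ai_generated: bool        # shows signs of AI-assisted production
    edited_after_ai: bool     # visible human editing discipline
    structured_for_ai: bool   # data shaped for machine consumption
    used_as_ai_input: bool    # has actually been fed to an AI system

def score_domains(assets):
    """Tally simple evidence signals per domain (higher = more evidence)."""
    scores = {d: 0 for d in DOMAINS}
    for a in assets:
        if a.ai_generated:
            scores["Create"] += 1   # team can produce AI-assisted work
        if a.ai_generated and a.edited_after_ai:
            scores["Manage"] += 1   # output is governed, not just generated
        if a.structured_for_ai:
            scores["Design"] += 1   # work is structured with intention
        if a.structured_for_ai and a.used_as_ai_input:
            scores["Engage"] += 1   # team actually reaches for AI
    return scores

# The two examples from the text: an AI-assisted report with no editing
# discipline, and a well-structured dataset never used as an AI input.
assets = [
    AssetEvidence("Q3 report", ai_generated=True, edited_after_ai=False,
                  structured_for_ai=False, used_as_ai_input=False),
    AssetEvidence("CRM export", ai_generated=False, edited_after_ai=False,
                  structured_for_ai=True, used_as_ai_input=False),
]
print(score_domains(assets))
# Create and Design register evidence; Manage and Engage do not --
# exactly the "Create present but Manage absent" finding described above.
```

The point of the sketch is the shape of the reasoning, not the scoring scheme: each existing asset is read twice, once for workflow and once for literacy.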
Assess the Fluency Components
Literacy tells you where the gaps are. Fluency tells you how deeply the capability needs to be built.
I assess four AI Fluency Components within each literacy domain.
Delegation — does the team know what to hand to AI and what to keep human? The failure modes are both directions: handing AI work it cannot do reliably, and refusing to hand AI work it does better. Both signal the same underlying problem — no clarity on where human judgment adds value.
Description — can they articulate intent clearly enough to get useful output? The gap between a mediocre AI result and a useful one is almost always a description problem, not a model problem.
Discernment — can they evaluate what AI produces? Not with suspicion, but with the professional judgment to know when output is accurate, when it is plausible but wrong, and when it is confidently fabricated. This is the skill that erodes fastest when teams become over-reliant.
Diligence — do they treat AI output as a draft requiring ownership, or a finished product requiring a signature? Without diligence, you don’t have human-AI collaboration. You have abdication.
These four components form a progression. A team that cannot Delegate will never invest in Description. Strong Description without Discernment produces confident mistakes. And without Diligence, the entire chain becomes a liability.
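The progression above implies an ordering: there is no point coaching Description into a team that has not yet learned Delegation. A small sketch of that gating logic, assuming a simple present/absent reading of each component (the component names are from the text; the gating function is my illustration):

```python
# The four fluency components, in the dependency order described above.
COMPONENTS = ["Delegation", "Description", "Discernment", "Diligence"]

def fluency_frontier(present):
    """Return the first missing component in the progression, or None
    if all four are in place. The earlier gap always dominates."""
    for c in COMPONENTS:
        if not present.get(c, False):
            return c
    return None

# A team with strong Description but no Delegation still has to start
# with Delegation -- polished prompts aimed at the wrong tasks.
team = {"Delegation": False, "Description": True,
        "Discernment": False, "Diligence": False}
print(fluency_frontier(team))  # Delegation
```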
Know Where AI Actually Breaks
There is something most AI consultants will not admit: if you have never worked with AI at a technical level, you are advising on a system you do not fully understand.
I am not a data scientist. But I code with AI. I have built with it, broken it, and diagnosed why it broke. That experience gives me something a purely business-side consultant cannot offer — a technical lens on how AI actually behaves inside a workflow, not just how it appears to behave in a demonstration.
Let me be specific about why this matters.
The most common failure in enterprise AI implementations is not the model. It is the data. Specifically, how data is prepared, structured, and fed into the system. A language model does not read a document the way a human reads a document. It processes tokens. It weights relationships. It draws inferences from patterns in the data it was trained on, and from the data you provide it at the point of use. Feed it poorly structured input, inconsistent formatting, or context that contradicts itself, and the output will be confidently wrong. Not obviously broken. Confidently wrong.
This is a data ingestion problem. And it is invisible to a consultant who has never had to think about it.
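What does "thinking about data ingestion" look like in practice? Here is a hedged sketch of a pre-ingestion check for the two failure modes just described: inconsistent structure and self-contradicting context. The record shape, field names, and checks are illustrative assumptions; the point is that these problems are detectable before any record reaches a model.

```python
def ingestion_issues(records):
    """Flag structural problems in a batch of records before they are
    fed to a model. Returns a list of human-readable issue strings."""
    issues = []
    if not records:
        return ["no records to ingest"]
    # Check 1: inconsistent schema -- every record should share the
    # same fields, or the model receives silently uneven context.
    expected = set(records[0])
    for i, rec in enumerate(records[1:], start=1):
        if set(rec) != expected:
            issues.append(f"record {i}: fields differ from record 0")
    # Check 2: contradictory context -- the same entity asserting
    # conflicting values is exactly what produces confidently wrong output.
    seen = {}
    for rec in records:
        key = rec.get("id")
        for field, value in rec.items():
            if (key, field) in seen and seen[(key, field)] != value:
                issues.append(f"id {key}: conflicting values for '{field}'")
            seen[(key, field)] = value
    return issues

records = [
    {"id": 1, "region": "APAC", "revenue": 10},
    {"id": 1, "region": "EMEA", "revenue": 10},   # contradicts record 0
    {"id": 2, "revenue": 7},                       # missing 'region'
]
print(ingestion_issues(records))
```

Neither problem would surface in a demo with clean sample data, which is why it stays invisible to a consultant who has never had to run a check like this.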
The technical understanding does not replace the business judgment that consultancy requires. It sharpens it. When I assess a client’s workflow and identify where AI should sit, I am not only asking what work AI can take over. I am asking what the data context looks like at that point in the process, whether it is structured well enough to produce reliable output, and what the failure mode looks like if it isn’t.
That is a different question from “which tool should we use.” And it produces a different quality of recommendation.
The best AI implementations I have seen share one characteristic: someone in the room understood both what the business needed and how the technology actually worked. Not at an engineering level. At a fluency level. Deep enough to know when the system is behaving as designed and when it is about to mislead you.
That is the technical lens a serious AI at work consultant needs to carry. Not to write the code. To ask the right questions before the code gets written.
Build Toward Augmentation — Not Automation
Task-level automation is a baseline expectation. If an AI implementation cannot handle repetitive, structured work, it has failed at the minimum. But automation is the floor, not the outcome.
The outcome worth pursuing is value augmentation — what the organization can now produce that it could not produce before, by any method, at any cost.
I have run this process before, in a different context. Several years ago I chaired a marketing excellence program for a global luxury FMCG organization. We began with an asset audit, reconstructed the actual workflow from evidence, and discovered not inefficiency but misalignment: teams operating against different definitions of success, with no shared performance language.
The intervention redesigned the workflows and produced something that had not previously existed: a performance matrix that aligned internal teams and external stakeholders to the same criteria. Not a faster version of the old operation. A structurally different one.
That is value augmentation. A capability the organization could not have developed without the process that preceded it.
I am now applying the same methodology to AI adoption. The audit still comes first. The workflow mapping still follows. The literacy and fluency assessment is the new layer. And the ceiling on augmentation is higher — because AI can compress weeks of research into hours, make personalization viable at scale, and enable generalists to perform analysis that previously required specialists.
But the methodology earns the right to that ceiling.
You cannot augment what you do not understand. And you cannot govern what you have not mapped.
This is what an AI at Work consultant actually does. Not install tools. Not run training sessions. Build the foundation that makes genuine augmentation possible — then push the organization to claim it.
But that work demands a specific kind of consultant. Not a generalist with an AI certification. Not a technologist who has never sat in a business strategy meeting. The right consultant brings domain expertise — a deep enough understanding of how a specific industry or function operates to know what good output actually looks like. They are sensitive to data — knowing how it is structured, where it breaks down, and what it means when the results don’t make sense. They carry genuine technical knowledge — not at an engineering level, but enough to understand how AI systems behave and where they fail. Hands-on system design experience matters too — having actually built something, tested it against real conditions, and understood what went wrong. And they need to be senior enough to read a business process end to end — not just task by task, but how decisions are made, how accountability works, and where change actually takes hold.
When Singapore’s 100,000 are trained, the question will not be whether they completed the program. It will be whether anything changed. That answer depends entirely on who was leading the work.
That is what you should expect from an AI at Work consultant. Anything less is a course, not a consultancy.