The Wrong People Are Leading Your AI
I was listening to an AI-curated music playlist the other day when it hit me. Not the music itself, though it was good, but what it represented.
The elephant in the room is “AI at Work.” A composer can now use AI to generate demo tapes, test different genres, swap voice profiles, and iterate on arrangements in hours instead of weeks. A filmmaker can use AI cinematographic models to pre-visualize plots and screenplay sequences before committing serious budget to production. A sales team can feed customer data into AI and surface purchasing patterns that would take an analyst months to uncover manually.
These are all real, working applications. Not demos. Not conference slides. They deliver measurable outcomes and genuine creative leverage. AI at work can absolutely improve performance and produce positive results.
So why haven’t we seen widespread, successful deployment yet? The noise about AI as an amazing co-worker is far louder than the evidence of it actually working in real life. AI-native companies are fully utilizing it, sure. But not yours. And probably not your neighbor’s either.
Here’s the pattern I keep coming back to: every one of these successful use cases is driven by someone who already understands the work. The composer knows what good music sounds like before AI touches a single note. The filmmaker understands narrative structure and visual storytelling. The sales analyst knows which customer signals actually matter.
The people making AI work are domain experts first. AI users second.
So why do most companies hand AI innovation to their IT department?
The Wrong Fit
This isn’t about IT being incompetent. IT teams keep organizations running. They manage infrastructure, protect data, maintain systems, and ensure everything stays connected. That work is critical and it always will be.
The problem isn’t IT. The problem is what we’re asking IT to do.
Think about how IT has always been structured inside organizations. It exists as a horizontal function — a support layer that serves every department equally. There is no “Marketing IT” as a discipline. No “Financial IT.” No “HR IT.” Classic Management Information Systems training approaches technology from the top down: information security, data processing, systems administration, network management. It was designed to keep the computers running, not to understand what each department actually does with them.
Computer science students are trained to think in systems, architectures, and code. They learn how technology works. What they’re not trained in — because it was never part of the curriculum — is how a marketing team develops a campaign, how a finance team evaluates risk, or how a design team iterates on a brand experience. The culture, the practice, the unwritten rules of each domain are invisible to someone whose education and career path never intersected with them.
This was never a problem before. IT didn’t need to understand your marketing workflow to set up your email server. They didn’t need to know your sales methodology to configure your CRM. The role was enablement: you tell us what you need, we build it.
Now, AI changes that equation entirely.
The Gate Before the Starting Line
Before AI can even prove its value, organizations hit a wall in a typical evaluation scenario, and IT is usually standing in front of it.
The first barrier is information security. Most AI applications worth building require data: customer records, sales history, internal documents. But feeding proprietary data into third-party platforms or vector databases sets off every alarm IT has. And rightly so. Data governance exists for real reasons.
In larger organizations, IT infrastructure is typically closed. Information lives behind firewalls, on company servers, under strict access controls. The idea of piping that data into an external AI system isn’t just uncomfortable for IT. It contradicts the policies they were hired to enforce.
In smaller companies, the problem flips. Security may be looser, but the data itself is a mess. Tribal knowledge sitting in someone’s inbox. Unaudited spreadsheets on personal drives. Customer records scattered across three platforms that don’t talk to each other. There’s no gate to open because there’s no structured path behind it.
Either way, IT becomes the bottleneck — not out of obstruction, but because the existing infrastructure was never designed for what AI demands.
We’ve spent years criticizing digital transformation failures caused by corporate silos. And the criticism was valid. But here’s the irony: silos exist because they work. When departments build their own solutions, they bypass the slow machinery of enterprise-wide transformation. They stop waiting for the big plan and start producing results.
The same pattern is emerging with AI. Teams that make progress aren’t waiting for IT to redesign the corporate platform. They’re finding contained, domain-specific ways to experiment, within their own workflows, with their own data, on their own terms. It’s messy. It’s not scalable. But at least it moves.
The silo approach we learned to criticize may be the most effective workaround organizations have for getting AI through the gate. But it only works when the people inside those silos have the knowledge to direct the technology. Most don’t, and that’s the gap no workaround can close on its own.
Tool or Practice?
McKinsey’s latest survey shows 88% of companies now use AI in at least one business function. The numbers look impressive until you read the fine print: two-thirds of those companies are still stuck in pilot or experimentation mode. Only 7% have fully scaled AI across their organizations.
The gap tells the real story. There is a difference between a tool and a practice. A tool gets picked up when it’s convenient and put down when something easier comes along. It has substitutes. A practice is integrated into how work actually gets done — embedded in workflows, shaped by context, refined through repetition. Most organizations are using AI as a tool. Very few have turned it into a practice.
Without that shift, work culture remains fragmented. The silo model becomes the default, not because anyone planned it that way, but because nothing else was built to replace it.
And here’s the distinction that matters most: does your company’s workflow assume you already have the sources and the data, or does it assume you have a question? IT works effectively when the job is to vault your sources and data, organize them, and retrieve them safely. That’s reporting. But AI’s real potential lies in discovery — surfacing answers, patterns, and insights that require more than what’s already in your servers. IT can’t build that platform alone, because discovery demands domain context that no horizontal support function was ever designed to carry.
None of this means IT should be sidelined. AI implementation still needs infrastructure, security, integration, and governance. IT remains essential to the foundation.
But foundation isn’t strategy.
IT builds the road. The people who do the work need to decide where it goes.