AI at Work: Expect Different, Not Faster
AI-powered browsers promise to handle web tasks on your behalf - unsubscribing from newsletters, filling forms, automating shopping carts. Tools like Perplexity’s Comet browser represent this new wave of “agentic” software: AI that doesn’t just respond but acts.
But the early adopters are discovering a paradox. Reviews consistently report that AI browsers show “performance lag, especially during demanding tasks like browser automation.” Others are more direct: “AI-driven actions like shopping cart automation often fail or are slower than manual browsing.” The AI takes longer than doing it yourself.
This isn’t a criticism of any specific product. It’s an observation about where we actually are with AI at work. And where we are is somewhere between “impressive demonstration” and “practical tool.”
When The Magical Moment Fades, The Latency Remains
The first time you use a generative AI tool, text streams across the screen and you’re watching a machine think. There’s genuine wonder in that moment.
This is a UX technique called perceived performance - streaming output keeps users in “active waiting,” making the same processing time feel shorter. It works the first few times.
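Perceived performance is easy to see in code. A minimal sketch - not any particular product’s implementation - of word-by-word streaming: the total delivery time is unchanged (or slightly worse), but the reader sees progress from the first word.

```python
import time

def stream(text: str, delay: float = 0.05) -> None:
    """Print a response word by word, simulating streamed output.

    Total delivery time is the same or slightly longer, but the user
    watches progress instead of staring at a blank screen.
    """
    for token in text.split():
        print(token, end=" ", flush=True)
        time.sleep(delay)
    print()

# Same content, different experience:
stream("The quarterly numbers look strong across all regions.")
```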
By the hundredth time, you’re checking your phone while the response generates. The trick still runs, but the wonder has become waiting. The intelligence is assumed. The latency is the experience.
Anyone who’s built software knows what bad user experience feels like. Watching a spinner - no matter how sophisticated the process behind it - is latency. We’ve just given it better marketing.
Traditional software operates on a simple contract: short input, instant processing, structured output. Click a dropdown, select “Q3 2024,” receive a filtered table. Milliseconds. Done. Move on.
Generative AI rewrites that contract entirely. Now you’re composing prose to describe what you want. You’re waiting seconds - sometimes minutes - for a response. And when that response arrives, it’s conversational text that you must parse, extract from, and often clarify with follow-up prompts.
This isn’t faster. This is a different kind of work.
The Agentic Paradox
The current wave of AI development is pushing toward “agents” - systems that don’t just respond but act. They plan, reason through steps, and execute multi-stage workflows on your behalf.
The promise is compelling: AI handles the tedious work while you focus on higher-value tasks.
The reality introduces a paradox: the planning overhead often exceeds the execution time.
What I mean is that when an AI browser spends two minutes reasoning through an unsubscribe action - identifying elements, considering edge cases, confirming the click - it isn’t being unintelligent. It’s being thorough. That’s exactly what we’d want from an autonomous agent.
But thoroughness takes time. And when the manual alternative takes twenty seconds, the math doesn’t work.
This compounds across multi-step workflows. Each reasoning stage adds latency. A five-step agentic process isn’t five times slower than a single response - it’s often worse, because each step waits for the previous one to complete.
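The compounding is simple arithmetic. A toy model with illustrative numbers - the per-step figures below are assumptions, not measurements: because the steps run sequentially, the reasoning overhead is paid in full at every stage.

```python
def agent_time(steps: int, reasoning_s: float, action_s: float) -> float:
    # Sequential pipeline: each step waits for the previous one to finish,
    # so the per-step reasoning overhead is paid at every stage.
    return steps * (reasoning_s + action_s)

def manual_time(steps: int, per_step_s: float) -> float:
    return steps * per_step_s

# Illustrative numbers only: 20 s of reasoning plus 2 s of action per
# agent step, versus 4 s per step done by hand.
print(agent_time(5, 20.0, 2.0))   # 110.0 seconds for the agent
print(manual_time(5, 4.0))        # 20.0 seconds manually
```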
Work, fundamentally, is not waiting. Work is: you input, hit enter, get the response, move on. Input, process, output, next task. The rhythm of productivity is momentum, and momentum doesn’t survive two-minute pauses.
The Conversation Problem
There’s a deeper mismatch beyond latency. Generative AI’s interface is conversational, and work is not.
Conversations are exploratory. They meander, clarify, build shared understanding through back-and-forth. That’s valuable for learning, brainstorming, open-ended exploration.
But work outputs need to be deterministic. A financial report requires specific numbers in specific formats. A project status update requires structured information that maps to existing systems. A customer response requires consistency with established policies.
When you ask generative AI to help with a work task, you’re asking a conversational system to produce deterministic output. The mismatch is fundamental:
What Work Needs vs. What Conversational AI Delivers
Structured data → Prose paragraphs
Predictable format → Variable presentation
Instant response → Generation latency
Consistent output → Probabilistic variation
This doesn’t mean AI is useless for work. It means the ChatGPT-style chat window - the interface we’ve all grown accustomed to - isn’t the right frame for most professional applications.
Your ChatGPT Experience Should Not Be Your AI at Work Experience
Here’s where expectations need managing.
Many professionals have formed their understanding of AI through personal use: asking ChatGPT questions, generating text, exploring ideas. That experience - while genuine - creates a misleading template for what AI at work looks like.
The consumer AI experience is front-end and conversational. You type, it responds, you read.
The work AI reality is largely back-end and invisible. AI processing data pipelines. AI scoring leads. AI flagging anomalies. AI running in batch processes overnight. You don’t watch it stream text. You see the outputs in your existing dashboards, reports, and systems.
And here’s what rarely gets discussed: AI at work doesn’t eliminate tasks. It creates new ones.
Someone needs to validate the AI’s output. Someone needs to establish guardrails and monitor for drift. Someone needs to reconcile AI-generated data against ground truth. Someone needs to verify accuracy before that output reaches a customer or a financial statement.
These aren’t optional overheads. They’re the new work that AI implementation demands. Validation workflows. Human-in-the-loop checkpoints. Reconciliation processes. Integrity audits.
If you’re expecting AI to make your workday shorter, recalibrate. AI makes your workday different. The tasks change. The total work may not.
The Addition Principle
This brings us to the honest reframing that organizations need to hear:
AI doesn’t replace the old. It adds to it. The legacy systems don’t disappear. The existing workflows don’t evaporate. The human judgment calls don’t get automated away. Instead, AI layers onto what exists. New capabilities, yes. But also new responsibilities. New validation steps. New failure modes to monitor. New skills to develop. The value isn’t efficiency through subtraction. It’s augmentation through addition.
That augmentation is real. AI can surface insights humans would miss. It can process volumes that would take teams weeks. It can operate continuously in ways humans cannot. The value proposition is genuine.
But it’s not the value proposition most people have been sold.
The narrative of “AI will do your job for you” is seductive but wrong. The reality of “AI will change what your job requires” is less exciting but true.
The Practical Position
If you’re implementing AI in your organization, here’s my advice:
Don’t expect AI to look like ChatGPT at your desk. Expect it to run in your data infrastructure, surface in your existing tools, and require new processes to govern.
Don’t expect AI to shorten workflows. Expect it to change workflows, adding validation and oversight tasks that didn’t exist before.
Don’t expect AI to deliver instant productivity. Expect it to deliver different productivity - new capabilities that take time to learn, integrate, and trust.
Don’t confuse impressive with practical. The demo that amazes you in a conference keynote may frustrate you in daily use. Evaluate for your actual work rhythm, not your sense of wonder.
This doesn’t mean generative AI can’t work reliably today. It can - with discipline.
I use generative AI to track my daily expenses. But I don’t converse with it.
My inputs are terse: add $45 lunch client meeting. No prose. No explanation. The AI knows what to do because I’ve established rules upfront: “new day” triggers a fresh subtotal with the current date. The dollar sign parses the amount from the description. “Add” is always the instruction verb.
These parameters were set once. They’re rules, not conversations. The output is consistent, predictable, and structured - exactly what a financial tracker requires.
This is generative AI working deterministically. No magic. Just discipline.
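The rules above can be expressed as plain parsing code. This is a hypothetical sketch of the same discipline, not the actual tool - the function names and ledger shape are my own - but the rules match the ones described: “new day” opens a fresh dated subtotal, the dollar sign marks the amount, and “add” is the only instruction verb.

```python
import re
from datetime import date

def process(command: str, ledger: list) -> list:
    """Apply one terse instruction using fixed, pre-agreed rules."""
    if command == "new day":
        # Rule: "new day" triggers a fresh subtotal under the current date.
        ledger.append({"date": date.today().isoformat(), "entries": []})
    elif (m := re.match(r"add \$(\d+(?:\.\d{1,2})?)\s+(.+)", command)):
        # Rule: "add" is the instruction verb; the dollar sign parses
        # the amount out of the description.
        ledger[-1]["entries"].append((m.group(2), float(m.group(1))))
    return ledger

def subtotal(ledger: list) -> float:
    """Sum the current day's entries - structured, predictable output."""
    return sum(amount for _, amount in ledger[-1]["entries"])

ledger = process("new day", [])
process("add $45 lunch client meeting", ledger)
print(subtotal(ledger))  # 45.0
```

The point is not the code itself but the shape of the interaction: fixed grammar in, structured data out, no conversation anywhere.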

