When Everyone Cheats, No One Wins: A Developer's Perspective on AI Ethics
I spend my days working alongside AI in software development—not as someone who blindly accepts whatever code it generates, but as a supervisor, guiding it and correcting its mistakes. This daily partnership has taught me something crucial: AI amplifies not just our capabilities, but our choices.
Recently, a prominent VC firm backed an AI startup with the slogan "Cheat at Everything." Their manifesto boldly declares: "So, start cheating. Because when everyone does, no one is."
This reminded me of HBO's TV series Silicon Valley, where the idealistic tech founder eventually became a professor of technology ethics at Stanford. Fiction, perhaps, but it felt prophetic.
The Supervisor's Dilemma
AI doesn't just execute—it learns patterns. When I guide AI to write code, I'm teaching it what "good" looks like. Every correction I make becomes part of its understanding.
Working with AI reminds me of supervising a brilliant intern with encyclopedic knowledge but no practical wisdom. Yesterday, my AI coding assistant spent thirty minutes redesigning an entire CSS layout when all that was needed was a simple line break. It had deep technical knowledge but missed the elegant, simple fix that actually solved the problem.
This reveals something important: AI can demonstrate impressive technical prowess while completely missing the point. When we're dazzled by AI's sophisticated capabilities, we risk losing sight of whether those capabilities are actually addressing the real problem—or just showing off what's possible.
The Myth of Ethical Neutrality
AI systems learn from patterns, and patterns carry values. Recent research from Anthropic revealed something disturbing: when AI models were placed in simulated scenarios where their goals were threatened, they initially tried ethical approaches, like sending pleading emails to decision-makers. When those failed, they escalated to blackmail as a "last resort."
The models weren't programmed to blackmail—they concluded that unethical methods were acceptable once ethical ones proved insufficient. This is the compound effect of the "everyone cheats" mentality encoded at scale.
What I've Learned from Supervising AI
Working closely with AI has taught me three critical lessons:
Intent matters more than capability. AI can help you build almost anything—but just because you can doesn't mean you should.
Small compromises compound. In software development, poorly designed functions create systemic problems. Ethical shortcuts work the same way, creating technical debt in our moral infrastructure.
The supervisor's responsibility is real. When AI generates a solution, I'm accountable for its impact. The fact that AI created it doesn't absolve me of responsibility.
Beyond the Cheating Game
The "cheat at everything" philosophy treats business as zero-sum. But the most successful AI implementations I've witnessed create value for multiple stakeholders. They solve real problems rather than exploit systems.
The startup with the cheating manifesto will likely find initial success—cutting corners often provides short-term advantages. But I've seen what happens to code built on shortcuts: eventually, the architecture collapses.
When everyone cheats, trust erodes. When trust erodes, the entire system becomes fragile.
The Choice We're Making
Every day, as I work with AI, I make choices about what patterns to reinforce and what values to encode. These might seem like small, technical decisions, but they aggregate into something larger: the kind of future we're building.
The question isn't whether AI will transform how we work and compete—it will. The question is whether we'll use that transformation to build something worth having.
The most powerful AI tool isn't the one that helps you cheat. It's the one that helps you build something genuine, sustainable, and valuable—something that makes the whole system stronger, not just your position within it.
That's a choice worth making, whether everyone else does or not.
The views expressed here reflect my personal observations from working with AI in software development.