Human Work Ethics for Machine Intelligence
I attended a vibe coders event recently. A relaxed social mixer. The kind where people show up with curiosity and leave with enthusiasm. The crowd was a healthy mix of tech and non-tech, hobbyists, early adopters, and the genuinely curious. That room probably represents a decent cross-section of where society is right now with AI.
The speakers were two vibe coders. Enthusiastic, capable, and clearly energized by what AI tools have unlocked for them. I’d describe them as early tech adopters — semi-computer-trained, but not carrying the weight of enterprise development or legacy MIS environments on their shoulders. Nothing wrong with that. In fact, there’s a certain freedom in it.
I, on the other hand, come from a different generation of this craft. Proper enterprise development. The discipline of late-night debugging. The paranoia of a production release. I’ve lived inside systems where a single bad line of code doesn’t just break a feature — it breaks a business.
So I asked a few questions.
“With all this vibe coding and agent-generated code, do you actually debug?”
Both speakers replied without hesitation: “I haven’t been reading code for a long time already.”
I followed up: “What about dead code? Obsolete residues left behind in the codebase? Code hygiene?”
One speaker shrugged it off with a kind of philosophical ease. Working with AI is exploratory, he said. The point is that agents generate code far faster than any human. Production time is compressed. Developers no longer need to spend months learning new languages. And if the code is bad? Scrap it. The agent will regenerate it. No harm, no foul.
He went further. With visible enthusiasm, he shared that he grants his coding agent full permissions — no interruptions, no approval prompts, no guardrails asking “are you sure?” The agent runs autonomously: building the idea, adding new features, making decisions end to end. Hand it the wheel and get out of the way. I’ll admit, I sat with that for a moment. There’s a name for that kind of builder ethic. I’m still figuring out if it’s brave or reckless.
The other speaker had recently used an AI agent to build a book review website. I asked whether he had any concerns about the accuracy of AI-curated book information.
“Not particularly,” he said. “Even if some of the information is wrong, it’s just a book site.”
Cool and adventurous. I’ll give them that.
The headline message of the evening was clear and genuinely exciting in its own way: AI enables exploration. Output is fast. Input can be anything, driven purely by ideas. We are in the era of 10x speed and 10x productivity. Everyone can now be a builder.
My post-mortem
I left the event with a quiet, unsettled feeling that I haven’t quite been able to shake.
Am I doing something wrong? I work with my coding agent daily. Why do I still end up debugging lousy code? Why do I take code hygiene so seriously that I can’t simply accept whatever the agent produces? Am I using the tools wrong, or is it possible that my practice is at the forefront but my mindset is caged in the past?
I’m not dismissing what these speakers represent. The democratization of building is real, and in many ways it’s remarkable. But I kept thinking about the layers of concern that didn’t make it into the conversation.
When we stop reading code, we stop understanding what we are building. When we normalize wrong information because “it’s just a book site”, we are making a quiet decision about what accuracy is worth. When code hygiene becomes irrelevant because regeneration is cheap, we are not raising the bar. We are quietly lowering it.
The argument for speed is seductive. Time cost is no longer a debt. Ideation is now unlimited. But here’s what worries me: we may be trading quality judgment for quantity of output. Worst of all, we call it advancement.
In enterprise development, we were ruthless about standards, not because we were slow or afraid of change, but because we understood consequence. Bad code in a live system doesn’t stay contained. It compounds. It creates failures that no agent can simply regenerate away, because the damage by then is already real.
And this isn’t purely a technical problem. It’s a values problem.
When we accept that inaccurate information is acceptable because the stakes feel low, we are training ourselves and the next generation of builders to tolerate a lower standard of truth.
A tradeoff between human work ethics and machine intelligence.
I genuinely don’t know where this leads. That’s the honest answer. If what we see is what we get, then I am seeing a tradeoff between human work ethics and machine intelligence.
Maybe the vibe coding generation will develop their own new instincts about quality, shaped by different tools but arriving at similar standards. Maybe the ecosystem will self-correct. Maybe I’m the old man in the room who doesn’t yet understand the new rules.
But I also wonder: we are outsourcing quality practice to enjoy an endless replenishment of AI-generated ideas, buying back time at the cost of rigor. What exactly are we building toward?
Intellectual advancement has always required friction. The struggle to understand something deeply is not a bug in the learning process. It is the process. When we remove that friction entirely, we may be producing more output than ever while understanding less and less of what we’re actually doing.
The AI era presents us with a genuine choice. We can use these tools to amplify human judgment, or we can use them to replace it and tell ourselves we’ve improved.
My mixed feelings
I left that room hopeful about the energy, and worried about the direction.
Both can be true. That’s what keeps me thinking.
The future is not predicted. The future is made. And that’s precisely what unsettles me. If we are the ones making it, then what we choose to carry forward matters as much as what we choose to leave behind.
When a great technology arrives, do the work ethics we’ve spent decades building suddenly become disposable? When I looked at those speakers, they were confident, energized, unbothered. What I heard underneath the enthusiasm was: “I don’t read code and syntax anymore.” Said like a liberation. But it landed on me like a quiet loss.
And I wonder — if humans stop carrying the right ethics, who will?