The Reason Behind Reasoning
I recently attended the Singapore Design Education Summit 2025, where creative practitioners shared their philosophies on learning and making. Beneath the diversity of their journeys, a singular truth emerged: meaningful work begins with sensitivity to surroundings—an awareness of context, needs, and the questions worth asking.
This principle, essential to design thinking, reveals something critical about our current struggle with AI: most AI failures don’t happen because the technology isn’t capable. They happen because we start with the wrong inquiry.
The Forgotten Art of Asking
At the summit, speakers emphasized what drives genuine learning: curiosity about our environment, the satisfaction of discovery, the fulfillment of completing meaningful tasks. These aren’t abstract concepts—they’re the foundation of how humans acquire and apply knowledge effectively.
Consider how a designer approaches a problem. They don’t immediately jump to solutions. They observe. They question. They develop sensitivity to what’s actually happening in the environment they’re trying to improve. Only then do they begin to shape responses.
Now compare this to how most organizations approach AI implementation.
A company acquires an AI tool. Leadership announces it will “transform operations.” Teams receive brief training on how to use the tool. Then comes the inevitable disappointment: the AI produces generic outputs, misses critical context, or simply doesn’t deliver the promised efficiency gains.
The problem? They never asked the right questions to begin with.
Writing a prompt to an AI system is fundamentally an act of inquiry, no different from a student asking “why?” or a designer probing “what if?” Yet we treat prompt engineering as a technical skill rather than what it truly is: the art of articulating what we need to know and why it matters.
Reasoning Requires a Reason
The AI industry has entered what we might call "the agentic moment." We're no longer just getting answers from AI; we're watching systems that can structure their own thinking, develop supporting arguments, build multi-step strategies, and package entire reasoning processes into what appears as a simple "answer."
This is genuinely remarkable. Modern AI doesn’t just respond; it reasons. It breaks down complex problems, evaluates approaches, considers trade-offs, and articulates its logic—all in seconds. What once required human deliberation across meetings, documents, and decision cycles can now happen instantly.
And yet, paradoxically, this increased sophistication makes the failure to ask the right questions even more dangerous.
Because even the most advanced AI can’t determine what matters in your specific organizational context. It can’t tell you which problems are worth solving, which workflows actually create value, or which decisions carry strategic weight. That sensitivity—the awareness of your surroundings, your constraints, your opportunities—remains distinctly human.
Agentic AI now has the reasoning power. It just doesn't have a reason.
Organizations are rushing to implement agentic AI because the promise is intoxicating: complex decisions automated, strategic plans generated instantly, sophisticated analysis delivered at scale. Why spend weeks on strategic planning when an AI agent can deliver a comprehensive strategy in seconds?
But this urgency reveals a fundamental misunderstanding of what creates value in organizational work.
The time humans spend on complex decisions isn’t waste. It’s the process through which we develop sensitivity to context. Those meetings, discussions, debates, and iterations aren’t inefficiency; they’re how we build shared understanding of what matters and why.
When we replace this human deliberation with instant agentic outputs, we’re not just gaining efficiency. We’re losing the very process that creates contextual awareness.
The more sophisticated our AI reasoning becomes, the more we’re tempted to skip the human work of understanding our environment. Ironically, this is the exact work that makes AI implementation valuable in the first place.
The Work Before the Prompt
Here’s what 25 years of software development has taught me: the power of any tool is proportional to the clarity of purpose you bring to it.
When I approach AI in my work, I don’t start with the prompt. I start with the traditional disciplines that have always separated effective engineering from mere code production: planning, organizing information into proper structures, applying frameworks to guide design decisions.
Sometimes I compute results using traditional methods first: SQL queries, spreadsheet formulas, basic scripts. This gives me a baseline: what the right answer actually is. Then I can use AI to scale or enhance the process, while validating its outputs against results I've already verified.
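A minimal sketch of this baseline-first habit, in Python. The order values, the AI-reported figure, and the tolerance are all hypothetical illustrations; the point is only the shape of the check: compute the answer yourself first, then compare.

```python
# Baseline-first validation: establish the right answer with a
# traditional method, then check an AI-produced figure against it.
# All data here (orders, ai_total) is a hypothetical illustration.

orders = [120.00, 75.50, 310.25, 48.00]  # e.g., rows pulled via a SQL query

# Step 1: compute the baseline the traditional way.
baseline_total = sum(orders)

# Step 2: suppose an AI-assisted pipeline reported this figure.
ai_total = 553.75

# Step 3: validate the AI output against the verified baseline.
def within_tolerance(ai_value, baseline, rel_tol=0.01):
    """Accept the AI figure only if it is within 1% of the baseline."""
    return abs(ai_value - baseline) <= rel_tol * abs(baseline)

assert within_tolerance(ai_total, baseline_total), "AI output disagrees with baseline"
```

The tolerance is a design choice: tight enough to catch a misunderstood problem, loose enough to allow rounding differences in how the AI pipeline aggregates.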
All of these practices share a common foundation: theorizing a proper reason before tapping into machine reasoning.
Let me be concrete. When I need AI to help with a complex problem:
First, I structure the context:
What framework applies to this situation? (SWOT? Value chain? A custom model?)
What are the fixed constraints? (Budget, timeline, technical limitations)
What are the variables that actually matter? (Not everything that’s measurable is meaningful)
What would “good” look like, specifically?
Then, I establish guardrails:
What results would immediately signal the AI has misunderstood the problem?
What ranges are realistic versus impossible?
What domain rules must never be violated?
Only then do I craft the prompt:
With clear context already structured
With explicit constraints articulated
With success criteria defined
With validation logic ready
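The three steps above can be sketched in code. Every field name, constraint, and threshold here is a hypothetical illustration, not a prescribed schema; what matters is the order of operations: the context and guardrails exist before the prompt does.

```python
# Structure the context, establish guardrails, then craft the prompt.
# All names and values are hypothetical illustrations.

# Step 1: structure the context before any prompt exists.
context = {
    "framework": "SWOT",
    "constraints": ["budget <= $50k", "launch within Q3"],
    "success_criteria": "three prioritized initiatives with owners",
}

# Step 2: establish guardrails, i.e. checks that would immediately
# signal the AI has misunderstood the problem.
def violates_guardrails(projected_cost_usd: float) -> bool:
    """Reject any output whose projected cost falls outside realistic bounds."""
    return not (0 < projected_cost_usd <= 50_000)

# Step 3: only now craft the prompt, with context and constraints explicit.
prompt = (
    f"Using a {context['framework']} analysis, and respecting these "
    f"constraints: {'; '.join(context['constraints'])}, produce "
    f"{context['success_criteria']}."
)

print(prompt)
```

Notice that the guardrail function never mentions the AI at all: it encodes domain knowledge you held before the prompt was written, which is exactly what makes it a trustworthy filter on whatever comes back.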
The AI’s reasoning power amplifies this preparation exponentially. But without the preparation, that same power amplifies confusion and produces impressively structured nonsense.
Build Your Reason First
Before you write your next prompt, spend time curating your reason.
The most effective AI practitioners I know treat AI as an amplifier of their own clarity, not a replacement for it. They spend time with frameworks. They organize information systematically. They validate outputs against established logic. They question results that seem too easy.
They understand that curating your reason is not the work you do before AI. It’s the work that makes AI worthwhile.
This discipline isn’t glamorous. It doesn’t produce viral demonstrations or generate headlines about transformation and disruption.
But it works. Consistently. Sustainably. Profitably.
Build your reason first. Structure your context. Define your constraints. Establish your guardrails.
The machine has reasoning. You provide the reason.

