A 2,000-Year-Old Skill for Leading in the AI Age

The greatest risk of AI is flawless logic applied to poorly framed questions, displacing human judgment, writes Rubén Montoya González.

The organizational arms race to adopt generative AI has promised a new age of unparalleled productivity, data-driven insight, and automated efficiency. We are told that the future belongs to those who can move the fastest, generate insights and data instantly, test strategies at scale, and eliminate human bottlenecks from decision-making. And in many ways, the promise is real: today’s AI models can generate a 30-page market analysis, a five-year strategic plan, and a fully formed ad campaign in seconds. The output comes off as fluent, confident, and data-rich.

This is where the risk begins. Because, in our rush to harness the power of generative AI, we have overlooked one of its greatest risks, one that has nothing to do with code and everything to do with human wisdom.

We are preoccupied with hallucinations and bias, concerned that the AI will be factually wrong. This is a distraction. Managing accuracy is a managerial task, albeit a new one, that consists of refining prompts, monitoring reliability, and mitigating errors – whether the model performs at 80 percent accuracy or 99.

But the leader’s challenge is far greater. It is the danger of receiving a 100 percent correct, logically flawless answer… to the wrong question.

This challenge for leadership teams, of framing the right question, has always existed. But the current, frantic productivity race has made it an existential threat. AI eliminates friction. Fueled by KPI demands and cost-cutting imperatives, organizations are in a high-stakes dash to adopt AI (and automate judgment). In doing so, they actively replace human doubt – the friction of debate, hesitation, and slow research – with the demand for speed.

The real threat, then, is not the plausible liar; it’s the ruthlessly logical optimizer. AI is an engine that will deliver a perfectly data-driven, internally consistent, and disastrous answer – because it has no wisdom, no context, and no human judgment. It can tell you how to do something, but it has no understanding of why you should.

The core error is subtle but profound: we demand certainty from a system that offers only statistical probability. AI is a prediction machine, not an oracle of truth. And this flawed search for certainty blinds us, leading us to mistake logical perfection for strategic truth.

This is hardly a new problem. In fact, it’s a 2,000-year-old problem. And our best guide to navigating it is a 2nd-century Syrian satirist named Lucian of Samosata. Lucian lived in his own “infodemic.” The Roman Empire was awash in “experts,” pretentious philosophers, hucksters selling miracle cures, and credible historians who spun self-aggrandizing, fantastical tales as fact. The thought-leaders of his day were prolific, persuasive, and often completely wrong.

Lucian’s response to the epistemic chaos of his day was a literary masterpiece of ridicule called A True Story. The title itself is the first lesson in critical thinking. In the preface, Lucian warns his readers that he is writing about things he has “neither seen nor experienced… In fact, not a single word of it is true.” What follows is the first known work of science fiction: a wild journey to the Moon, where his crew is conscripted into an interplanetary war over the colonization of Venus. They witness bizarre armies of “vulture-horsemen” and “flea-archers,” all described with the same deadpan, authoritative tone used by the historians of the day.

Lucian’s goal was not to invent science fiction. It was to teach his audience to distrust fluency. By taking the plausible lies of his day and pushing them to a cosmic extreme, he trained his readers to spot a con. Lucian showed that a story’s internal logic can be flawless even when its premise is completely detached from reality.

This is the exact lesson our business world must now heed. An AI model does not think or know; it executes logical instructions. It optimizes and streamlines. It is the perfect tool for generating our modern “trips to the Moon.”

For example, say you give your AI a clear and straightforward prompt: Analyze logistics data and consumer sentiment to find the highest-growth, lowest-cost market for our product. The AI produces a brilliant, completely correct answer. It tells you to launch in Southeast Asia and cites logistics data, positive consumer sentiment models, and a clear first-mover advantage. Every fact is verifiable. The leadership team agrees.

What your perfectly logical request failed to ask about were deep-seated cultural taboos or pending, obscure legislation that could stall approvals, trigger consumer backlash, and derail the launch within a year. The AI did exactly what it had been asked to do – and it delivered a data-driven ride to the Moon. That’s just not where you wanted to go.

The real danger of AI, therefore, is not its output but our credulity in the face of flawless logic. We are fundamentally wired to trust something that is well written, data-rich, and authoritative in tone. Compounding the problem, the dominant analytical model in our organizations is dangerously incomplete. For decades, data-driven culture has over-indexed on quantitative validation. In practice, critical thinking is often reduced to checking the bricks: Are these numbers correct? Did we verify this source?

But merely fact-checking an AI is a trap. Even when a system uses real, verified sources, it can still deliver a strategically blind answer. You cannot fact-check a 100 percent correct answer to a badly framed question. The problem is not that the bricks are fake; it’s that the blueprint is a fantasy.

This is where the humanities prove their hard, economic value. We should embrace AI, but we must understand what it is and what it is not. AI is an engine without a steering wheel. The humanities provide the guidance. In an economy where AI has commoditized the how, the humanities remain our only reliable way of mastering the why – and the why is now the only durable source of differentiation left. The humanities teach a different, more powerful method of critique: framing the question, as opposed to validating answers.

First, the humanities train narrative interrogation. What leaders often call intuition is neither instinct nor guesswork, but experience processed by the brain – high-speed pattern recognition built over time. The humanities are the ultimate training ground for developing this kind of disciplined intuition at scale. This is not about spotting a lie. It is about framing the right problem.

A leader trained in philosophy, for example, does not simply hunt for logical fallacies; they specialize in framing the why. An operator asks the AI, “How can we optimize our current supply chain?” A visionary trained in philosophical inquiry asks, “What is the fundamental human problem we are actually solving, and is a supply chain even the right answer?”

An AI can generate a flawless response to either prompt. The difference is that one question optimizes an assumption, while the other interrogates it. That is the distinction between a plan that is logically perfect and one that is strategically sane.

A leader trained in literature brings something different to the table: disciplined empathy for and accumulated insight into human motivation. This is not intuition in the casual sense but a true understanding of how people feel, decide, and change. A data-driven question is, “How do we reduce customer call times?” A literary question is, “Why are our customers really calling, and what must they feel to remain loyal?”

That human hypothesis allows the leader to ask the AI a far smarter prompt: Analyze 10,000 call transcripts for keywords and signals of anxiety and confusion. By focusing on more subtle cues, this leader gains genuine insight that leads to innovation, not just optimization.

A leader trained in history performs a different kind of pattern recognition – one that operates on a grand scale and across time. An AI can generate a logically perfect Q1 plan. A historian may identify that the same plan could trigger unintended consequences whose impact only becomes visible in Q3. They are often the ones in the room who can distinguish a true paradigm shift from a cyclical fad, an old pattern in a new, high-tech suit.

The humanities provide intellectual friction: an active, creative, and critical form of resistance modeled by Lucian himself. In an age obsessed with AI-driven speed, intellectual friction is not hesitation. It is method. It is what we might call the True Story Protocol.

When presented with a plausible, logically perfect AI strategy, reflective leaders run this protocol. They don’t just check the facts. They parody the plan. They apply Lucian’s reductio ad absurdum, pushing the AI’s logic to its extreme to see whether – or where – it collapses. This is a creative act with a critical purpose: If we follow this optimized path for five years, where do we actually end up? Is this a path to Venus – or a perfectly logical trip to the Moon?

So, what does this look like in a real meeting? It’s a creative stress test. When an AI-driven plan is on the table, the leader must be the one to ask the Lucian-esque questions:

  1. What is the parody of this plan? What is the most absurd, ridiculous version of its logic? (This exposes hidden assumptions.)
  2. If we follow this plan for five years, what unintended catastrophe emerges? (This tests second-order consequences.)
  3. What inconvenient human truth must be ignored for this plan to work? (This pierces the logical bubble.)

The return on investment of a humanities education in the AI age is therefore not just the avoidance of spectacular, logic-driven failure. It is the only reliable path to differentiation. Your competitors have the same AI. They run the same models on the same data. They ask the same obvious how questions. The only way to win is to arrive at an insight they cannot generate.

That insight will not come from how. It will come from asking a better why.

Lucian’s 2,000-year-old lesson is that, at the highest level, critical thinking is a creative act. It is not just spotting flaws in logic, but having the judgment to frame a more human question. When everyone has access to machines that generate plausible answers, power belongs to the leader who can use Lucian’s method to uncover the one inconvenient, market-breaking truth their competitors never think to face.

© IE Insights.