{"id":1467126,"date":"2026-01-19T11:51:42","date_gmt":"2026-01-19T10:51:42","guid":{"rendered":"https:\/\/www.ie.edu\/insights\/?post_type=articles&#038;p=1467126"},"modified":"2026-01-19T11:51:42","modified_gmt":"2026-01-19T10:51:42","slug":"a-2000-year-old-skill-for-leading-in-the-ai-age","status":"publish","type":"articles","link":"https:\/\/www.ie.edu\/insights\/articles\/a-2000-year-old-skill-for-leading-in-the-ai-age\/","title":{"rendered":"A 2,000-Year-Old Skill for Leading in the AI Age"},"featured_media":1467128,"template":"","meta":{"_has_post_settings":[]},"schools":[],"areas":[508,481],"subjects":[421],"class_list":["post-1467126","articles","type-articles","status-publish","has-post-thumbnail","hentry","areas-artificial-intelligence","areas-leadership","subjects-humanities"],"custom-fields":{"wpcf-article-leadin":["The greatest risk of AI is flawless logic applied to poorly framed questions that replace human judgment, writes Rub\u00e9n Montoya Gonz\u00e1lez."],"wpcf-article-body":["The organizational arms race to adopt generative AI has promised a new age of unparalleled productivity, data-driven insight, and automated efficiency. We are told that the future belongs to those who can move the fastest, generate insights and data instantly, test strategies at scale, and eliminate human bottlenecks from decision-making. And in many ways, the promise is real: today\u2019s AI models can generate a 30-page market analysis, a five-year strategic plan, and a fully formed ad campaign in seconds. The output comes off as fluent, confident, and data-rich.\r\n\r\nThis is where the risk begins. Because, in our rush to harness the power of generative AI, we have overlooked one of its greatest risks, one that has nothing to do with code and everything to do with human wisdom.\r\n\r\nWe are preoccupied by hallucinations and bias, concerned that the AI will be factually wrong. This is a distraction. 
Managing accuracy is a managerial task, albeit a new one, that consists of refining prompts, monitoring reliability, and mitigating errors \u2013 whether the model performs at 80 percent or 99.\r\n\r\nBut the leader's challenge is far greater. It is the danger of receiving a 100 percent correct, logically flawless answer... to the wrong question.\r\n\r\nThis challenge for leadership teams, of framing the right question, has always existed. But the current, frantic productivity race has made it an existential threat. AI eliminates friction. Fueled by KPI demands and cost-cutting imperatives, organizations are in a high-stakes dash to adopt AI (and automate judgment). In doing so, they actively replace human doubt \u2013 the friction of debate, hesitation, and slow research \u2013 with the demand for speed.\r\n\r\nThe real threat, then, is not the plausible liar; it's the ruthlessly logical optimizer. AI is an engine that will deliver a perfectly data-driven, internally consistent, and disastrous answer \u2013 because it has no wisdom, no context, and no human judgment. It can tell you how to do something, but it has no understanding of <em>why<\/em> you should.\r\n\r\nThe core error is subtle but profound: we demand certainty from a system that offers only statistical probability. AI is a prediction machine, not an oracle of truth. And this flawed search for certainty blinds us, leading us to mistake logical perfection for strategic truth.\r\n\r\nThis is hardly a new problem. In fact, it\u2019s a 2,000-year-old problem. And our best guide to navigating it is a 2nd-century Syrian satirist named Lucian of Samosata. Lucian lived in his own \"infodemic.\" The Roman Empire was awash in \"experts,\" pretentious philosophers, hucksters selling miracle cures, and credible historians who spun self-aggrandizing, fantastical tales as fact. 
The thought-leaders of his day were prolific, persuasive, and often completely wrong.\r\n\r\nLucian\u2019s response to the epistemic chaos of his day was a literary masterpiece of ridicule called <em>A True Story<\/em>. The title itself is the first lesson in critical thinking. In the preface, Lucian warns his readers that he is writing about things he has \"neither seen nor experienced... In fact, not a single word of it is true.\" What follows is the first known work of science fiction: a wild journey to the Moon, where his crew is conscripted into an interplanetary war over the colonization of Venus. They witness bizarre armies of \"vulture-horsemen\" and \"flea-archers,\" all described with the same deadpan, authoritative tone used by the historians of the day.\r\n\r\nLucian's goal was not to invent science fiction. It was to teach his audience to distrust fluency. By taking the plausible lies of his day and pushing them to a cosmic extreme, he trained his readers to spot a con. Lucian showed that a story's internal logic can be flawless even when its premise is completely detached from reality.\r\n\r\nThis is the exact lesson our business world must now heed. An AI model does not think or know; it executes logical instructions. It optimizes and streamlines. It is the perfect tool for generating our modern \"trips to the Moon.\"\r\n<blockquote>AI is an engine without a steering wheel.<\/blockquote>\r\nFor example, say you give your AI a clear and straightforward prompt: <em>Analyze logistics data and consumer sentiment to find the highest-growth, lowest-cost market for our product.<\/em> The AI produces a brilliant, completely correct answer. It tells you to launch in Southeast Asia and cites logistics data, positive consumer sentiment models, and a clear first-mover advantage. Every fact is verifiable. 
The leadership team agrees.\r\n\r\nWhat your perfectly logical request failed to ask about were deep-seated cultural taboos or pending, obscure legislation that could stall approvals, trigger consumer backlash, and derail the launch within a year. The AI did exactly what it had been asked to do \u2013 and it delivered a data-driven ride to the Moon. That\u2019s just not where you wanted to go.\r\n\r\nThe real danger of AI, therefore, is not its output but our credulity in the face of flawless logic. We are fundamentally wired to trust something that is well written, data-rich, and authoritative in tone. To make matters worse, the dominant analytical model in our organizations is dangerously incomplete. For decades, data-driven culture has over-indexed on quantitative validation. In practice, critical thinking is often reduced to checking the bricks: <em>Are these numbers correct? Did we verify this source?<\/em>\r\n\r\nBut just fact-checking an AI is a trap. Even when a system uses real, verified sources, it can still deliver a strategically blind answer. You cannot fact-check a 100 percent correct answer to a badly framed question. The problem is not that the bricks are fake; it's that the blueprint is a fantasy.\r\n\r\nThis is where the humanities prove their hard, economic value. We should embrace AI, but we must understand what it is and what it is not. AI is an engine without a steering wheel. The humanities provide the guidance. In an economy where AI has commoditized the <em>how<\/em>, the humanities remain our only reliable way of mastering the <em>why<\/em> \u2013 and the why is now the only durable source of differentiation left. The humanities teach a different, more powerful method of critique: framing the question, as opposed to validating answers.\r\n\r\nFirst, the humanities train narrative interrogation. 
What leaders often call intuition is neither instinct nor guesswork, but experience processed by the brain \u2013 high-speed pattern recognition built over time. The humanities are the ultimate training ground for developing this kind of disciplined intuition at scale. This is not about spotting a lie. It is about framing the right problem.\r\n\r\nA leader trained in philosophy, for example, does not simply hunt for logical fallacies; they specialize in framing the <em>why<\/em>. An operator asks the AI, \"How can we optimize our current supply chain?\" A visionary trained in philosophical inquiry asks, \"What is the fundamental human problem we are actually solving, and is a supply chain even the right answer?\"\r\n\r\nAn AI can generate a flawless response to either prompt. The difference is that one question optimizes an assumption, while the other interrogates it. That is the distinction between a plan that is logically perfect and one that is strategically sane.\r\n\r\nA leader trained in literature brings something different to the table: disciplined empathy for and accumulated insight into human motivation. This is not intuition in the casual sense but a true understanding of how people feel, decide, and change. A data-driven question is, \"How do we reduce customer call times?\" A literary question asks, \"Why are our customers really calling, and what must they feel to remain loyal?\"\r\n\r\nThat human hypothesis allows the leader to give the AI a far smarter prompt: <em>Analyze 10,000 call transcripts for keywords and signals of anxiety and confusion.<\/em> By focusing on more subtle cues, this leader gains genuine insight that leads to innovation, not just optimization.\r\n\r\nA leader trained in history performs a different kind of pattern recognition \u2013 one that operates on a grand scale and across time. An AI can generate a logically perfect Q1 plan. 
A historian may identify that the same plan could trigger unintended consequences that only surface in Q3. They are often the ones in the room who can distinguish a true paradigm shift from a cyclical fad, an old pattern in a new, high-tech suit.\r\n\r\nThe humanities provide intellectual friction: an active, creative, and critical form of resistance modeled by Lucian himself. In an age obsessed with AI-driven speed, intellectual friction is not hesitation. It is method. It is what we might call the <em>True Story Protocol<\/em>.\r\n\r\nWhen presented with a plausible, logically perfect AI strategy, reflective leaders run this protocol. They don't just check the facts. They parody the plan. They apply Lucian\u2019s <em>reductio ad absurdum<\/em>, pushing the AI's logic to its extreme to see whether \u2013 or where \u2013 it collapses. This is a creative act with a critical purpose: <em>If we follow this optimized path for five years, where do we actually end up? Is this a path to Venus \u2013 or a perfectly logical trip to the Moon?<\/em>\r\n\r\nSo, what does this look like in a real meeting? It's a creative stress test. When an AI-driven plan is on the table, the leader must be the one to ask the Lucian-esque questions:\r\n<ol>\r\n \t<li>What is the parody of this plan? What is the most absurd, ridiculous version of its logic? (This exposes hidden assumptions.)<\/li>\r\n \t<li>If we follow this plan for five years, what unintended catastrophe emerges? (This tests second-order consequences.)<\/li>\r\n \t<li>What inconvenient human truth must be ignored for this plan to work? (This pierces the logical bubble.)<\/li>\r\n<\/ol>\r\nThe return on investment of a humanities education in the AI age is therefore not just the avoidance of spectacular, logic-driven failure. It is the only reliable path to differentiation. Your competitors have the same AI. They run the same models on the same data. They ask the same obvious <em>how<\/em> questions. 
The only way to win is to arrive at an insight they cannot generate.\r\n\r\nThat insight will not come from <em>how<\/em>. It will come from asking a better <em>why<\/em>.\r\n\r\nLucian's 2,000-year-old lesson is that, at the highest level, critical thinking is a creative act. It is not just spotting flaws in logic, but having the judgment to frame a more human question. When everyone has access to machines that generate plausible answers, power belongs to the leader who can use Lucian's method to uncover the one inconvenient, market-breaking truth their competitors never think to face.\r\n\r\n\u00a9 IE Insights."],"wpcf-audio-article":[""],"wpcf-article-extract":["The greatest risk of AI is flawless logic applied to poorly framed questions that replace human judgment, writes Rub\u00e9n Montoya Gonz\u00e1lez."],"wpcf-article-extract-enable":["1"]},"_links":{"self":[{"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/articles\/1467126","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/articles"}],"about":[{"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/types\/articles"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/media\/1467128"}],"wp:attachment":[{"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/media?parent=1467126"}],"wp:term":[{"taxonomy":"schools","embeddable":true,"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/schools?post=1467126"},{"taxonomy":"areas","embeddable":true,"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/areas?post=1467126"},{"taxonomy":"subjects","embeddable":true,"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/subjects?post=1467126"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}