{"id":1403593,"date":"2025-06-09T17:17:44","date_gmt":"2025-06-09T15:17:44","guid":{"rendered":"https:\/\/www.ie.edu\/insights\/?post_type=articles&#038;p=1403593"},"modified":"2025-06-09T17:17:44","modified_gmt":"2025-06-09T15:17:44","slug":"is-ai-creating-incompetent-experts","status":"publish","type":"articles","link":"https:\/\/www.ie.edu\/insights\/articles\/is-ai-creating-incompetent-experts\/","title":{"rendered":"Is AI Creating Incompetent Experts?"},"featured_media":1403594,"template":"","meta":{"_has_post_settings":[]},"schools":[],"areas":[508],"subjects":[422],"class_list":["post-1403593","articles","type-articles","status-publish","has-post-thumbnail","hentry","areas-artificial-intelligence","subjects-innovation-and-technology"],"custom-fields":{"wpcf-article-leadin":["Generative AI is short-circuiting the learning process that builds real expertise, writes Kiron Ravindran."],"wpcf-article-body":["What if the greatest risk of artificial intelligence isn't job displacement, but something far more insidious: the creation of professionals who appear competent but lack the judgment to back it up?\r\n\r\nLast year, Norwegian marine biologists discovered their herring populations had forgotten their ancient migratory routes. After overfishing removed the experienced fish, young herring invented new paths leading them to colder, inhospitable waters. In just one generation, <a href=\"https:\/\/www.science.org\/content\/article\/herring-had-spawning-culture-overfishing-obliterated-it\" target=\"_blank\" rel=\"noopener\">centuries of accumulated wisdom vanished<\/a>.\r\n\r\nThe same dynamic is now playing out in knowledge work. As AI tools automate the tasks that once taught professionals how to think, there is now a generation that produces sophisticated outputs yet can't explain their own work.\r\n\r\nSteven Schwartz\u2019s ChatGPT-generated legal brief appeared flawless, until a judge discovered all six cited cases were fake. 
When questioned, the 30-year veteran <a href=\"https:\/\/www.abajournal.com\/web\/article\/lawyers-who-doubled-down-and-defended-chatgpts-fake-cases-must-pay-5k-judge-says\" target=\"_blank\" rel=\"noopener\">couldn't even identify basic legal citation formats that he'd supposedly used<\/a>. This case is far from isolated. Damien Charlotin, a research fellow and lecturer at HEC Paris, has documented close to <a href=\"https:\/\/www.damiencharlotin.com\/hallucinations\" target=\"_blank\" rel=\"noopener\">150 such cases<\/a> of AI hallucinations filed around the world since June 2023.\r\n\r\nOrganizations are discovering the same harsh lesson about AI's limitations. Klarna, the buy-now-pay-later unicorn, fired 700 employees last year claiming AI could do their tasks. However, within a year they were on a hiring spree and their CEO stated, \u201c<a href=\"https:\/\/fortune.com\/2025\/05\/09\/klarna-ai-humans-return-on-investment\/\" target=\"_blank\" rel=\"noopener\">what you end up having is lower quality<\/a>\u2026investing in the quality of the human support is the way of the future for us.\u201d\r\n\r\nWhen vulnerable people meet AI instead of human expertise, consequences can be fatal. NEDA's eating disorder chatbot <a href=\"https:\/\/www.vice.com\/en\/article\/eating-disorder-helpline-fires-staff-transitions-to-chatbot-after-unionization\/\" target=\"_blank\" rel=\"noopener\">recommended weight loss<\/a>. Character.AI's chatbots allegedly encouraged a <a href=\"https:\/\/www.nytimes.com\/2024\/10\/23\/technology\/characterai-lawsuit-teen-suicide.html\" target=\"_blank\" rel=\"noopener\">teen's suicide<\/a>. 
Yes, AI will progressively improve, but these tragedies reveal what happens when we prioritize efficiency over understanding in contexts that warrant human judgment.\r\n\r\nThe WEF identifies analytical thinking as the most important <a href=\"https:\/\/www.weforum.org\/publications\/the-future-of-jobs-report-2025\/infographics-94b6214b36\/\" target=\"_blank\" rel=\"noopener\">Core Skill in 2025<\/a>. Perhaps the promise of AI creating an expert workforce, as <a href=\"https:\/\/fortune.com\/2025\/06\/05\/anthropic-ai-automate-jobs-pretty-terrible-decade\/\" target=\"_blank\" rel=\"noopener\">Dario Amodei and Jensen Huang seem to think<\/a>, is a <em>little<\/em> overhyped.\r\n\r\nI've observed a worrisome phenomenon in my own classroom. Given the concern that has spread since 2023 over ChatGPT use in written exams, I added oral examinations to take-home assignments. The results were revealing. I received submissions that read like graduate-level work, but when I asked basic questions about methodology or reasoning, many students struggled to respond satisfactorily. They'd produced sophisticated analysis without developing the thinking skills to support their own assignments.\r\n\r\nWhat concerns me is not that students use AI \u2013 these tools are a reality of the modern classroom and workplace \u2013 but that they seemed genuinely surprised by my desire to probe further. After all, the written submission had made everything convincingly clear, had it not? The tool's linguistic fluency had convinced them of their competence even as it hollowed out their learning.\r\n\r\nNow, a confession: this essay took over twenty drafts alternating between ChatGPT and Claude. But I have read the research and am trained in critiquing it; I lived the anecdotes and formed my thesis before sitting down to write. Absent this training, this article would exemplify its own critique \u2013 polished but empty. 
Even attempting to write substantively about AI's dangers while using AI reveals how easily we can become dependent on such assistance. The lesson remains: competence must precede collaboration. Otherwise, we're just automating ignorance.\r\n\r\nAt consulting firms, analysts generate polished presentations using AI, but when a task falls beyond the AI's capability, they do <a href=\"https:\/\/www.oneusefulthing.org\/p\/centaurs-and-cyborgs-on-the-jagged\" target=\"_blank\" rel=\"noopener\">better when they <em>do not <\/em>use AI<\/a>. Software programmers are now more reviewer than coder \u2013 the code can come from AI, but the skills to review that code pull from years of being a <a href=\"https:\/\/sourcegraph.com\/blog\/the-death-of-the-junior-developer\" target=\"_blank\" rel=\"noopener\">junior coder <em>without <\/em>AI<\/a>.\r\n\r\nA quote that triggered this essay came from Ankur Gupta, who writes: \u201c<a href=\"https:\/\/ankurashokg.substack.com\/i\/162675863\/time-the-invisible-layer\" target=\"_blank\" rel=\"noopener\">AI does not understand time<\/a>. It can recall a prompt from 15 seconds ago, but it cannot appreciate what unfolds over five years of a messy product rollout, nor why a seasoned executive might wait rather than act.\u201d In short, AI can simulate knowledge, but it cannot (yet) embody wisdom. Wisdom emerges slowly through frustration, iteration, and adaptation. It is not a product of pattern recognition alone, but of temporal depth and context-sensitive judgment.\r\n<blockquote>You can't develop judgment about what you don't know.<\/blockquote>\r\nToday\u2019s AI-hybrid professionals are, for all intents and purposes, considered accomplished and successful. On paper. But without real understanding behind the work, organizations are trading short-term efficiency for, possibly, long-term fragility. Employees might be more likely to falter under pressure, overlook ethical dilemmas, or freeze when conditions change. 
When AI\u2019s answers fall short (and eventually they do), there needs to be someone in the room who knows how to respond. The risk of what Wharton\u2019s Peter Cappelli calls the <a href=\"https:\/\/www.hbs.edu\/managing-the-future-of-work\/podcast\/Pages\/podcast-details.aspx?episode=2101851700\" target=\"_blank\" rel=\"noopener\">experience gap is real<\/a>: \u201ceverybody wants to hire somebody with three years\u2019 experience, and nobody wants to give them three years\u2019 experience.\u201d The belief that the AI-equipped expert can fill this gap may be dangerous.\r\n\r\nTo understand why AI-assisted work creates such risks, consider two critical dimensions: one is the user\u2019s own knowledge and the other is the nature of the task to be accomplished, specifically whether it is factual or subjective.\r\n\r\n<strong>The AI Risk Matrix<\/strong>\r\n\r\nExpertise in any domain ranges from low to high \u2013 from complete novice to seasoned professional. Meanwhile, the tasks fall into two categories: codifiable\/verifiable (where knowledge can be explicitly stated and checked) or non-codifiable\/judgment-based (requiring tacit knowledge that comes only from experience).\r\n\r\nWhen a worker has high domain expertise:\r\n<ul>\r\n \t<li>Codifiable\/Verifiable tasks: Execution Zone. Fast, accurate, augmented work. Example: A senior accountant using AI to process tax returns.<\/li>\r\n \t<li>Non-Codifiable\/Judgment-Based tasks: Judgment Zone. AI assists, but tacit knowledge governs. Example: An experienced doctor using AI for diagnosis.<\/li>\r\n<\/ul>\r\nWhen a worker has low\/no domain expertise:\r\n<ul>\r\n \t<li>Codifiable\/Verifiable tasks: Checkable Zone. Fact-checkable, but with a risk of omission. Example: A student using AI to solve physics problems \u2013 errors are detectable, but conceptual understanding can still be missed.<\/li>\r\n \t<li>Non-Codifiable\/Judgment-Based tasks: Danger Zone. Convincing nonsense is undetectable. 
Example: A junior analyst using AI to assess market dynamics in an unfamiliar industry \u2013 no way to know what's missing or wrong.<\/li>\r\n<\/ul>\r\nIn the danger zone, AI generates authoritative-sounding analysis whose powerfully convincing tone is likely to seduce novice users, who cannot \u2013 or feel no need to \u2013 verify it. The output feels sophisticated, and the reasoning appears sound, but hidden beneath fluent prose may lie what Harry Frankfurt calls \u201cbullshit\u201d that these AI users never examined and can't detect. This is neither hyperbole nor hypothetical. The White House MAHA report on making children healthy again had <a href=\"https:\/\/www.washingtonpost.com\/health\/2025\/05\/29\/maha-rfk-jr-ai-garble\/\" target=\"_blank\" rel=\"noopener\">fictitious citations<\/a>. This is precisely the dynamic that creates incompetent experts.\r\n\r\nSome might argue that what we're witnessing is not incompetence but cognitive evolution. Andy Clark of the University of Sussex suggests that generative AI represents just <a href=\"https:\/\/www.nature.com\/articles\/s41467-025-59906-9\" target=\"_blank\" rel=\"noopener\">the latest chapter in humanity's long history<\/a> of \"extended minds\" \u2013 from writing to calculators to search engines, we've always augmented our thinking with tools. Why should AI be different?\r\n\r\nThe answer lies in the danger zone. When a GPS fails, you know you're lost. When a calculator malfunctions, the errors are likely quite obvious. But when ChatGPT fabricates plausible-sounding analysis in domains you personally don't understand, the failure goes undetected. Clark acknowledges we need new \"metacognitive skills\" to evaluate AI outputs \u2013 but that's precisely what novices in the danger zone lack. You can't develop judgment about what you don't know.\r\n\r\nThis isn't Plato worrying that <a href=\"https:\/\/www.amazon.com\/gp\/product\/0199554021\" target=\"_blank\" rel=\"noopener\">writing would corrupt memory<\/a>. 
It's about professionals wielding tools they can't validate in domains they don't understand, producing outputs that look expert but lack the underlying comprehension that defines genuine expertise.\r\n\r\nCurrent research on AI's benefits may be missing this deeper risk. When Ethan Mollick and colleagues cite the work of <a href=\"https:\/\/www.hbs.edu\/ris\/Publication%20Files\/24-013_d9b45b68-9e74-42d6-a1c6-c72fb70c7282.pdf\" target=\"_blank\" rel=\"noopener\">BCG<\/a> and <a href=\"https:\/\/www.hbs.edu\/faculty\/Pages\/item.aspx?num=67197\" target=\"_blank\" rel=\"noopener\">P&amp;G<\/a> to demonstrate impressive productivity gains from ChatGPT, they are looking at highly skilled professionals who developed their expertise before AI existed. These veterans can evaluate AI outputs effectively because they know <em>what they don't know<\/em>. Similarly, when Stanford University\u2019s Erik Brynjolfsson shows the <a href=\"https:\/\/academic.oup.com\/qje\/article\/140\/2\/889\/7990658\" target=\"_blank\" rel=\"noopener\">productivity gains of call center<\/a> operators, the tasks are largely routine and more factual than subjective. Neither of these studies addresses the danger zone that can generate incompetent experts.\r\n\r\nWhat happens, then, when the next generation of knowledge workers learns strategy, analysis, and problem-solving through AI from day one? They'll produce equally polished work but lack the foundational judgment to distinguish good insights from compelling-sounding nonsense. Rodney Brooks, former director of MIT's Computer Science and Artificial Intelligence Laboratory and founder of iRobot, warns us of how <a href=\"https:\/\/www.newsweek.com\/rodney-brooks-ai-impact-interview-futures-2034669\" target=\"_blank\" rel=\"noopener\">we are easily seduced by language<\/a>, since we have always associated language skills with intelligence. 
Today's productivity benefits and the instant generation of words may be borrowing against tomorrow's competence and expertise.\r\n\r\nThe herring case illustrates what happens when learning chains break. Senior fish didn't just know migration routes; they also understood why those routes worked, carrying accumulated wisdom across decades. When overfishing removed them, it eliminated the system's memory. Professional domains operate similarly. Senior practitioners don't just know answers; they understand the reasoning behind them, the historical context that shaped current practices, the subtle indicators that signal when standard approaches won't work. This knowledge lives in experience and transfers through mentorship, observation, and shared problem-solving.\r\n\r\nAI is overfishing in our professional waters, draining the early-career experiences that once developed judgment and skill. When organizations eliminate \"inefficient\" learning processes, they're dismantling the systems that create wisdom. Junior professionals miss opportunities to observe how experts navigate ambiguity. They skip the productive failures that build judgment.\r\n\r\nThe solution isn't abandoning AI but recognizing that what we call \"inefficiency\" is often the foundation of competence. The struggle <em>is<\/em> the learning. But how do we preserve this in practice? It requires deliberate choices at three levels:\r\n\r\n<strong>Individual Level<\/strong> - Preserve Developmental Struggles: The cognitive effort required to build competence isn't inefficiency \u2013 it's essential learning. Law students need to struggle through bad arguments to recognize good ones. Business students need to crack the case to identify critical information before asking ChatGPT for the solution.\r\n\r\n<strong>Organizational Level<\/strong> - Maintain Learning Ladders: Companies must resist the false economy of eliminating junior roles. 
TSMC expanded <a href=\"https:\/\/www.tsmc.com\/static\/abouttsmcaz\/apprenticeship.htm\" target=\"_blank\" rel=\"noopener\">its apprentice program<\/a> despite automation because they understand that today's junior employees are tomorrow's experts who will know when the AI is wrong.\r\n\r\n<strong>Systemic Level<\/strong> - Create Transparency and Accountability: Make AI assistance visible, not to shame but to enable appropriate scrutiny. The EU's AI Act suggests <a href=\"https:\/\/www.europarl.europa.eu\/thinktank\/en\/document\/EPRS_BRI(2023)757583\" target=\"_blank\" rel=\"noopener\">watermarking<\/a>, but we need to go further.\r\n\r\nThe stakes extend beyond individual careers. When entire cohorts skip the experiences that create judgment, the labor force and, in turn, society lose not just talent but institutional memory.\r\n\r\nThe future of the knowledge economy depends on balancing AI's power with the human learning it still can\u2019t replace. Like the Norwegian herring, we may still be moving. 
But if we're not deliberate about preserving the learning processes that build real competence, we may find ourselves swimming confidently in the wrong direction.\r\n\r\n&nbsp;\r\n\r\n\u00a9 IE Insights."],"wpcf-article-extract-enable":["1"],"wpcf-article-extract":["Generative AI is short-circuiting the learning process that builds real expertise, writes Kiron Ravindran."],"wpcf-audio-article":["https:\/\/www.ie.edu\/insights\/wp-content\/uploads\/2025\/06\/Is-AI-Creating-Incompetent-Experts_.mp3"]},"_links":{"self":[{"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/articles\/1403593","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/articles"}],"about":[{"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/types\/articles"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/media\/1403594"}],"wp:attachment":[{"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/media?parent=1403593"}],"wp:term":[{"taxonomy":"schools","embeddable":true,"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/schools?post=1403593"},{"taxonomy":"areas","embeddable":true,"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/areas?post=1403593"},{"taxonomy":"subjects","embeddable":true,"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/subjects?post=1403593"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}