Is AI Creating Incompetent Experts?

Generative AI is short-circuiting the learning process that builds real expertise, writes Kiron Ravindran.


What if the greatest risk of artificial intelligence isn’t job displacement, but something far more insidious: the creation of professionals who appear competent but lack the judgment to back it up?

Last year, Norwegian marine biologists discovered their herring populations had forgotten their ancient migratory routes. After overfishing removed the experienced fish, young herring invented new paths leading them to colder, inhospitable waters. In just one generation, centuries of accumulated wisdom vanished.

The same dynamic is now playing out in knowledge work. As AI tools automate the tasks that once taught professionals how to think, a generation is emerging that produces sophisticated outputs yet cannot explain its own work.

Steven Schwartz’s ChatGPT-generated legal brief appeared flawless, until a judge discovered that all six cited cases were fake. When questioned, the 30-year veteran could not even identify the basic legal citation formats he had supposedly used. This case is far from isolated. Damien Charlotin, a research fellow and lecturer at HEC Paris, has documented close to 150 such cases of AI hallucination in court filings around the world since June 2023.

Organizations are discovering the same harsh lesson about AI’s limitations. Klarna, the buy-now-pay-later unicorn, fired 700 employees last year, claiming AI could do their tasks. Within a year, however, the company was on a hiring spree, and its CEO stated, “what you end up having is lower quality…investing in the quality of the human support is the way of the future for us.”

When vulnerable people meet AI instead of human expertise, consequences can be fatal. NEDA’s eating disorder chatbot recommended weight loss. Character.AI’s chatbots allegedly encouraged a teen’s suicide. Yes, AI will progressively improve, but these tragedies reveal what happens when we prioritize efficiency over understanding in contexts that warrant human judgment.

The World Economic Forum ranks analytical thinking as the most important core skill for 2025. Perhaps the promise of AI creating an expert workforce, as Dario Amodei and Jensen Huang suggest, is a little overhyped.

I’ve observed a worrisome phenomenon in my own classroom. Given the concern that has spread since 2023 over ChatGPT use in written exams, I added oral examinations to take-home assignments. The results were revealing. I received submissions that read like graduate-level work, but when I asked basic questions about methodology or reasoning, many students struggled to respond satisfactorily. They had produced sophisticated analysis without developing the thinking needed to support their own work.

What concerns me is not that students use AI – these tools are a reality of the modern classroom and workplace. It is that they seemed genuinely surprised by my desire to probe further. After all, the written submission had made everything convincingly clear, had it not? The tool’s linguistic fluency had convinced them of their competence even as it hollowed out their learning.

Now, a confession: this essay took over twenty drafts alternating between ChatGPT and Claude. But I have read the research and am trained in critiquing it; I lived the anecdotes and formed my thesis before sitting down to write. Absent this training, the article would exemplify its own critique – polished but empty. Even attempting to write substantively about AI’s dangers while using AI reveals how easily we can become dependent on such assistance. The lesson remains: competence must precede collaboration. Otherwise, we’re just automating ignorance.

At consulting firms, analysts generate polished presentations using AI, yet when tasks fall beyond the AI’s capability, the humans who do not use it perform better. Software programmers are now more reviewer than coder – the code can come from AI, but the skill to review that code is drawn from years spent as a junior coder without AI.

A quote that triggered this essay came from Ankur Gupta, who states “AI does not understand time. It can recall a prompt from 15 seconds ago, but it cannot appreciate what unfolds over five years of a messy product rollout, nor why a seasoned executive might wait rather than act.” In short, AI can simulate knowledge, but it cannot (yet) embody wisdom. Wisdom emerges slowly through frustration, iteration, and adaptation. It is not a product of pattern recognition alone, but of temporal depth and context-sensitive judgment.


Today’s AI-hybrid professionals are, for all intents and purposes, considered accomplished and successful. On paper. But without real understanding behind the work, organizations may be trading short-term efficiency for long-term fragility. Employees might be more likely to falter under pressure, overlook ethical dilemmas, or freeze when conditions change. When AI’s answers fall short (and eventually they do), there needs to be someone in the room who knows how to respond. The risk of what Wharton’s Peter Cappelli calls the experience gap is real: “everybody wants to hire somebody with three years’ experience, and nobody wants to give them three years’ experience.” The belief that the AI-equipped expert can fill this gap may be dangerous.

To understand why AI-assisted work creates such risks, consider two critical dimensions: the user’s own domain knowledge, and the nature of the task to be accomplished, specifically whether it is factual or subjective.

The AI Risk Matrix

Expertise in any domain ranges from low to high – from complete novice to seasoned professional. Meanwhile, tasks fall into two categories: codifiable/verifiable (where knowledge can be explicitly stated and checked) or non-codifiable/judgment-based (requiring tacit knowledge that comes only from experience). The four combinations are laid out below, followed by a short sketch that restates them.

When a worker has high domain expertise:

  • Codifiable/Verifiable tasks: Execution Zone. Fast, accurate, augmented work. Example: A senior accountant using AI to process tax returns.
  • Non-Codifiable/Judgment-Based tasks: Judgment Zone. AI assists, but tacit knowledge governs. Example: An experienced doctor using AI for diagnosis.

When a worker has low/no domain expertise:

  • Codifiable/Verifiable tasks: Checkable Zone. Fact-checkable, but with a risk of omission. Example: A student using AI to solve physics problems – errors are detectable, but the student may never build conceptual understanding.
  • Non-Codifiable/Judgment-Based tasks: Danger Zone. Convincing nonsense is undetectable. Example: A junior analyst using AI to assess market dynamics in an unfamiliar industry – no way to know what’s missing or wrong.
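For readers who prefer to see the matrix stated explicitly, the short Python sketch below restates it as a simple lookup. It is purely illustrative: the function name, labels, and zone descriptions are my own shorthand for the four combinations above, not part of any study or tool.

def risk_zone(expertise: str, task: str) -> str:
    """Map a worker's domain expertise ('high' or 'low') and a task's nature
    ('codifiable' or 'judgment') to the corresponding zone of the matrix."""
    zones = {
        ("high", "codifiable"): "Execution Zone: fast, accurate, augmented work",
        ("high", "judgment"): "Judgment Zone: AI assists, but tacit knowledge governs",
        ("low", "codifiable"): "Checkable Zone: errors detectable, understanding optional",
        ("low", "judgment"): "Danger Zone: convincing nonsense goes undetected",
    }
    return zones[(expertise, task)]

# A junior analyst assessing an unfamiliar market sits squarely in the danger zone:
print(risk_zone("low", "judgment"))

The point of the sketch is simply that the riskiest cell is reached the moment both inputs are low: low expertise and low verifiability.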

In the danger zone, AI generates authoritative-sounding analysis in a tone so convincing that novice users are likely to be seduced by it and either cannot verify it or feel no need to. The output feels sophisticated and the reasoning appears sound, but hidden beneath the fluent prose may lie what Harry Frankfurt calls “bullshit” that these AI users never examined and cannot detect. This is neither hyperbole nor hypothetical: the White House’s MAHA report on making children healthy again contained fictitious citations. This zone is precisely what creates incompetent experts.

Some might argue that what we’re witnessing is not incompetence but cognitive evolution. Andy Clark of the University of Sussex suggests that generative AI represents just the latest chapter in humanity’s long history of “extended minds” – from writing to calculators to search engines, we’ve always augmented our thinking with tools. Why should AI be different?

The answer lies in the danger zone. When a GPS fails, you know you’re lost. When a calculator malfunctions, the errors are likely quite obvious. But when ChatGPT fabricates plausible-sounding analysis in domains you personally don’t understand, the failure goes undetected. Clark acknowledges we need new “metacognitive skills” to evaluate AI outputs – but that’s precisely what novices in the danger zone lack. You can’t develop judgment about what you don’t know.

This isn’t Plato worrying that writing would corrupt memory. It’s about professionals wielding tools they can’t validate in domains they don’t understand, producing outputs that look expert but lack the underlying comprehension that defines genuine expertise.

Current research on AI’s benefits may be missing this deeper risk. When Ethan Mollick and colleagues cite the work of BCG and P&G to demonstrate impressive productivity gains from ChatGPT, they are looking at highly skilled professionals who developed their expertise before AI existed. These veterans can evaluate AI outputs effectively because they know what they don’t know. And when Stanford University’s Erik Brynjolfsson shows productivity gains among call center operators, the tasks are largely routine and more factual than subjective. Neither of these studies addresses the danger zone that can generate incompetent experts.

What happens, then, when the next generation of knowledge workers learns strategy, analysis, and problem-solving through AI from day one? They’ll produce equally polished work but lack the foundational judgment to distinguish good insights from compelling-sounding nonsense. Rodney Brooks, former director of MIT’s Computer Science and Artificial Intelligence Laboratory and founder of iRobot, warns that we are easily seduced by language because we have always associated language skills with intelligence. Today’s productivity benefits and the instant generation of words may be borrowing against tomorrow’s competence and expertise.

The herring case illustrates what happens when learning chains break. Senior fish didn’t just know migration routes, they also understood why those routes worked, carrying accumulated wisdom across decades. When overfishing removed them, it eliminated the system’s memory. Professional domains operate similarly. Senior practitioners don’t just know answers, they understand the reasoning behind them, the historical context that shaped current practices, the subtle indicators that signal when standard approaches won’t work. This knowledge lives in experience and transfers through mentorship, observation, and shared problem-solving.

AI is overfishing in our professional waters, draining the early-career experiences that once developed judgment and skill. When organizations eliminate “inefficient” learning processes, they’re dismantling the systems that create wisdom. Junior professionals miss opportunities to observe how experts navigate ambiguity. They skip the productive failures that build judgment.

The solution isn’t abandoning AI but recognizing that what we call “inefficiency” is often the foundation of competence. The struggle is the learning. But how do we preserve this in practice? It requires deliberate choices at three levels:

Individual Level – Preserve Developmental Struggles: The cognitive effort required to build competence isn’t inefficiency – it’s essential learning. Law students need to struggle through bad arguments to recognize good ones. Business students need to crack the case to identify critical information before asking ChatGPT for the solution.

Organizational Level – Maintain Learning Ladders: Companies must resist the false economy of eliminating junior roles. TSMC expanded its apprentice program despite automation because it understands that today’s junior employees are tomorrow’s experts, the ones who will know when the AI is wrong.

Systemic Level – Create Transparency and Accountability: Make AI assistance visible, not to shame but to enable appropriate scrutiny. The EU’s AI Act suggests watermarking, but we need to go further.

The stakes extend beyond individual careers. When entire cohorts skip the experiences that create judgment, the labor force and, in turn, society lose not just talent but institutional memory.

The future of the knowledge economy depends on balancing AI’s power with the human learning it still can’t replace. Like the Norwegian herring, we may still be moving. But if we’re not deliberate about preserving the learning processes that build real competence, we may find ourselves swimming confidently in the wrong direction.

 

© IE Insights.
