The Student Brain in the Age of Generative AI
By Irene Yagüe Sancho and Koen Van den Biggelaar
“'Is AI helping me think, or just helping me finish?' This question made me stop for a second and read it again. My first instinct is often to get to the answer as fast as possible. But when I let go of that initial pull, there is some discomfort, and that is where I realize I am learning. Using Lumis as an AI teaching companion strengthened that experience.”
- Ari, IE student
Think back to the last time you used an AI tool to help with a decision. Maybe you asked for a recommendation, checked a fact, or let it draft something you then sent without changing much.
Did you pause to evaluate the answer? Or did you simply accept it?
This article focuses on generative AI in learning, and on what happens to cognition, agency, and well-being when students accept AI output without fully evaluating it.
Generative AI is transforming the learning process not only by offering support, but by making it easier to accept polished outputs without fully evaluating them. This matters because when students outsource judgment as well as effort, the stakes are not just academic performance but also agency. Used well, AI can scaffold reflection and deepen learning; used uncritically, it can bypass the thinking processes education is meant to cultivate.
A New System of Thought
For decades, cognitive psychology has described the mind through two systems. System 1 is fast, intuitive, and automatic. System 2 is slow, deliberate, and analytical (Kahneman, 2011).
But generative AI introduces something that does not fit neatly into either. In a recent preprint, Shaw and Nave (2026) propose that AI now functions as a third cognitive system: external, algorithmic, and operating entirely outside the brain. Students are no longer only using tools that retrieve or store information, but systems that can generate plausible responses and invite deference to their outputs. In their experiments, higher trust in AI and lower Need for Cognition were associated with a greater tendency to defer to AI output without fully evaluating it.
The Line Between Cognitive Offloading and Cognitive Surrender
Cognitive offloading is when we shift a specific mental task to the environment: lists, reminders, calculators, even finger counting, to lighten working memory load while keeping ownership of the goal and the final judgment (Risko & Gilbert, 2016).
By contrast, Shaw and Nave (2026) coined the term cognitive surrender to describe the tendency to defer effort, responsibility, and judgment to an AI’s output, especially when that output arrives fluent, confident, and frictionless.
The distinction matters because the two processes do not have the same consequences for learning. Cognitive offloading can free attention for deeper reasoning while preserving authorship of the task. Cognitive surrender, by contrast, can short-circuit reflection, weaken students’ sense of agency, and encourage acceptance without the understanding that learning is meant to build.
One useful lens here is Need for Cognition: the relatively stable tendency to enjoy and engage in effortful thinking (Cacioppo & Petty, 1982). Students high in Need for Cognition are more likely to question AI outputs and offload selectively rather than surrendering judgment. This also overlaps with what we might call cognitive patience: the willingness to stay with a question long enough for deeper learning to occur (van de Ven, Hakemulder, & Mangen, 2023).
Impact on Learning and Cognitive Engagement
Concerns about AI and learning are not purely speculative. Kosmyna et al. (2025), in a widely discussed MIT Media Lab preprint on essay writing, found that students who relied heavily on an LLM showed lower cognitive engagement, weaker memory for what they had just written, and a reduced sense of ownership over their work compared to students who wrote without AI support. However, the authors themselves caution against simplistic headlines like “AI makes you stop thinking.”
Stronger evidence comes from large-scale education studies. Bastani et al. (2025) show a clear distinction between performance support and learning support: unrestricted, answer-giving AI improved short-term performance but reduced later independent performance, whereas AI designed to guide thinking avoided these losses. In a complementary randomized trial, Kestin et al. (2025) found that a pedagogically structured AI tutor led to stronger learning gains than traditional active learning classrooms, while also increasing motivation and engagement.
Taken together, the evidence suggests a simple distinction: unstructured, answer-giving AI could invite surrender; pedagogically designed AI can support thinking.
Learning, Agency and Well-being
Research on desirable difficulties shows that effort, struggle, and even discomfort during learning are not obstacles to understanding: they are conditions for it. The friction that slows performance in the short term is what builds retention and transfer over time (Bjork & Bjork, 2011). When AI removes that friction, it may also remove the learning.
Difficulty alone is not enough, however. Productive struggle becomes relevant only when students feel safe enough to stay with it. In that sense, the educational challenge is not simply to reintroduce effort, but to design conditions in which effort remains meaningful and manageable (Edmondson, 2018).
This creates a specific tension for students. Tools that reduce effort are naturally attractive. But when students consistently follow that pull without noticing, they weaken what Ryan and Deci (2000) identify as a foundation of psychological well-being: autonomy, the experience that one’s actions and choices are genuinely one’s own. Darvishi et al. (2024) suggest that reliance on AI feedback without internalization can weaken self-monitoring and agency once the assistance is removed.
Recognizing the pull, and choosing to stay with the difficulty, is both a learning move and a claim to agency. And agency is closely tied to well-being.
The Pause as Intervention
When AI output arrives fluent, confident, and well structured, it may not create the kind of friction that prompts closer scrutiny. A polished AI response can feel immediately usable, reducing the likelihood that a student pauses to question it.
This is also why the idea of a “human in the loop” can be misleading. The assumption is that a human who reviews AI output will evaluate it. But the evidence suggests otherwise. When the output feels right, evaluation does not happen automatically.
What makes the loop functional is not presence but awareness. The trained capacity to notice when you are about to accept rather than evaluate. This is metacognition in its most practical form.
Practical Implications
Three practical lessons follow. First, cognitive surrender is a real risk. Second, guardrails matter. Third, pedagogical design makes the difference.
The key questions therefore become:
Is AI helping me think, or just helping me finish?
Does it prompt explanation, comparison, revision, and a genuine attempt?
Am I still the author of the conclusion?
For students, this means thinking first, asking for hints before answers, and comparing AI output with their own reasoning. For educators, it means designing tasks that require stance, explanation, and revision, and valuing learning processes over polished output.
Generative AI does not need to be avoided, but it does need to be used in ways that preserve thinking. As the tools become more fluent and persuasive, the challenge is not only what students can produce, but whether they remain cognitively present in the process. When students are supported to stay with difficulty, to evaluate rather than accept, and to recognize the pull toward effortless answers, they learn more deeply and retain a stronger sense of authorship over their own thinking.
Noticing the pull toward the answer and returning to the question is often the key move.
References
Bastani, H., Bastani, O., Sungu, A., Ge, H., Kabakcı, Ö., & Mariman, R. (2025). Generative AI without guardrails can harm learning: Evidence from high school mathematics. Proceedings of the National Academy of Sciences, 122(26), e2422633122.
Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. In M. A. Gernsbacher, R. W. Pew, L. M. Hough, & J. R. Pomerantz (Eds.), Psychology and the real world: Essays illustrating fundamental contributions to society (pp. 56–64). Worth Publishers.
Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42(1), 116–131.
Darvishi, A., Khosravi, H., Sadiq, S., Gašević, D., & Siemens, G. (2024). Impact of AI assistance on student agency. Computers & Education, 210, 104967.
Edmondson, A. C. (2018). The fearless organization: Creating psychological safety in the workplace for learning, innovation, and growth. Wiley.
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Kestin, G., Miller, K., Klales, A., Milbourne, T., & Ponti, G. (2025). AI tutoring outperforms in-class active learning: An RCT introducing a novel research-based design in an authentic educational setting. Scientific Reports, 15, 17458.
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task [Preprint]. arXiv. https://doi.org/10.48550/arXiv.2506.08872
Risko, E. F., & Gilbert, S. J. (2016). Cognitive offloading. Trends in Cognitive Sciences, 20(9), 676–688.
Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68–78.
Shaw, S. D., & Nave, G. (2026). Thinking—fast, slow, and artificial: How AI is reshaping human reasoning and the rise of cognitive surrender [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/yk25n_v1
van de Ven, I., Hakemulder, F., & Mangen, A. (2023). “TL;DR” (Too Long; Didn’t Read)? Cognitive patience as a mode of reading: Exploring concentration and perseverance. Scientific Study of Literature, 12(1), 68–86.