The False Promise of AI in Classrooms

AI is changing classrooms fast—but not always for the better. Professor Wayne Holmes of University College London explains why tools like ChatGPT won’t fix education and what we risk by misunderstanding AI.


Transcription

Since the arrival of Large Language Models, universities around the world have gone into a tailspin. What do we do? Real panic. And around the world, in universities including my own, I see decisions being made without us knowing the repercussions.

The first thing that people need to realize is that AI is not just ChatGPT. AI has been in development for 70 years, and AI for education for 50, yet ChatGPT has been around for barely three years. But a lot of people get confused and think that if they can understand ChatGPT, they can understand all of AI in education.

One of the things that people focus on is AI for saving teacher time. And it's such a myth, because AI really does not save time. The idea that technology will save teacher time is itself 60 years old. B.F. Skinner, back in the 1950s, claimed that his teaching machine would save teacher time. It didn't. And no technology since then has saved any time.

What happens instead is that the teacher's work is displaced from one activity to another. So, for example, today teachers are spending their time worrying about privacy, worrying about data, worrying about writing multiple prompts, worrying about how to change the output so that it actually meets the needs of their students and their curriculum.

One of the things that's interesting about generative AI is that it's hitting higher education from the bottom. Older technologies like MOOCs, learning management systems, interactive whiteboards and so on were all put into education from the top: we had technicians, we had experts, who brought in the technology. But Large Language Models like ChatGPT appeared overnight, and suddenly students around the world were using them. And we're not really sure yet what this all means.

What we do know is that a lot of time is being spent on these tools, and the quality of the output is very, very weak. One of the other problems we have across higher education is that people really don't understand this technology. They think they do. They get the highlights, the exciting bits, but they're not really literate.

And when we talk about AI literacy, people think it means literacy in the technology: how the technology works. I agree that it's useful for everybody to have a basic understanding of that, but I'm more interested in literacy in the human impact of these technologies. The evidence we have is that if you use a Large Language Model in your learning, compared to somebody who doesn't, you probably do learn more quickly.

But if you take the Large Language Model away, you actually learn less well than a student who never had access to that tool in the first place. These are huge problems that we're just not getting our heads around at the moment, and we need a lot more research. One of the ambitions for AI is to enable what they call personalized learning.

So every individual student gets their own specific way of learning. The reality is that these tools might give each student an individual pathway through the materials to be learned, but they're all designed to take everybody to the same end point, to learn the same stuff. That's not real personalization in my book. And it misinterprets what education is about.

For me, education is about helping each individual to become the best they can become, but also helping them to recognize their role in society and to make positive contributions to it. None of the tools do that. If anything, they go in the opposite direction. I am worried about the current state of affairs, but I think there are two things we can do.

One is top-down: we need regulation. The Council of Europe is developing a legal instrument to regulate the use of AI in educational settings. It's a difficult piece of work and it's going to take a while, but I think it's important. The bottom-up, though, is also important: helping all members of faculty and all our students to become critically AI literate.

That means not just literacy in the technology, but literacy in the human impact of these technologies: the ways in which they don't work, the ways in which they can undermine people's human rights, the ways in which they impact negatively on the climate, for example. If we can do these two things, then I think we can move forward in ways that benefit everybody.

My concern is that that's not happening yet.