Critical Inquiries into AI’s Impact on Learning

The rapid rise of transformative technologies in education should inspire both excitement and caution, writes Borja Santos Porras.


This past year has been incredibly exciting – and fun – in the world of education, thanks to the rapid emergence of new and relevant technologies. ChatGPT arguably caused a bit of angst within the academic community, but it has also given the sector a renewed vigor. It has forced educators, students, and institutions to look at learning in a new way.

The integration of AI tools in higher education, however – as in all industries – cannot be done with wild abandon, regardless of how tempting that might be. It requires significant forethought to ensure that these technologies and tools are used in an effective and, most importantly, ethical manner.

Indeed, the use of AI in education presents many opportunities alongside real challenges, so educators must take stock before implementing it within curricula, programs, and teaching. There are still many questions that educators and educational organizations must address in order to navigate and capitalize on an ever-evolving technological landscape. Here are ten of those questions:

  1. How can we avoid the implications of AI’s current monoculturalism?  Given that roughly half of the content on the Internet is in English, followed by Russian (less than 6%) and Spanish (4%), and that AI models are trained on this data, it is unsurprising that the cultural norms associated with those dominant languages bias the output. Furthermore, as Jill Walker Rettberg of the University of Bergen in Norway argues, while ChatGPT may be trained on a variety of multilingual data, it tends to reflect and promote American norms and values, potentially perpetuating that culture. There are several initiatives dedicated to exploring these questions in other languages, yet there remains considerable room for research into resolving these implications.
  2. How much of our resources are going to teaching machines how to learn vs teaching humans how to learn? The evolution of large language models (LLMs) has focused the debate on how to feed and teach these machines. Fair enough, but we should be careful not to neglect the fundamental aspect of education: teaching humans how to learn effectively and independently. In his book ¿Cómo Aprendemos? (How Do We Learn?), Héctor Ruiz Martín of the International Science Teaching Foundation notes that it is essential for teachers to understand the cognitive and emotional mechanisms that enable learning so as to teach students how to study, learn, and memorize. As LLMs continue to learn and evolve, enhancing educational practices, it’s important that the essential learning needs of our students are kept front and center.
  3. What skills are necessary for students to develop in order to lead and work in this era of AI? Most leaders today will need to work with AI in some way or another. Therefore, I see four basic and necessary competencies to do so: a) general knowledge of AI, understanding what it is and how it works – the hardest part of which is keeping up to date in a constantly changing technological environment; b) an understanding of data-driven decision-making, including the ability to interpret data and to identify and mitigate its biases; c) a steadfast commitment to the ethical and responsible use of AI, addressing issues such as privacy and fairness; and finally, d) critical thinking, to be able to ask the right questions.
  4. How broad is the scope and potential of AI in education? Ethan Mollick and Lilach Mollick of Wharton have proposed a wide variety of cases where it could be very useful. For example, AI can function as a personal tutor, catering to the individual levels and needs of each student, or generate personalized feedback for learners. However, there are other dilemmas ahead. Is it appropriate for an LLM to assign the final grades for academic assessments? How can we identify when this is unfair or wrong? Can it assess creativity? In what cases would students consider the automation of their results as legitimate?
  5. How do we address plagiarism and academic integrity with regard to AI? There are applications that can detect instances of plagiarism, but they still have a high error rate, producing both false positives and false negatives. Thus, would it not be more effective to focus on teaching ethical values for the responsible use of AI? How do we develop that awareness, and how can we recognize and act when integrity is not part of the equation?
  6. Is it possible to become overdependent on ChatGPT? Excessive use of LLMs could curtail the development of research, critical thinking, and problem-solving skills. The ease and speed with which ChatGPT provides information could also reduce student motivation and the perceived need to learn and retain information. Students may even feel insecure about their creativity and originality once they get used to relying on AI-generated responses. It’s essential that educators learn how to guide students in using AI responsibly. For instance, in my public speaking and speechwriting course, I demonstrate to students how they can use chatbots and AI for idea generation, while also highlighting their limitations and ethical implications. Additionally, I guide them in crafting more personalized and effective speeches by using their unique language, incorporating their original personal stories, and developing their own distinct style – elements that contribute significantly to their charisma. This approach emphasizes the value of individual creativity, which can be more challenging to achieve with AI.
  7. How do biases affect AI, especially in LLMs that identify patterns using large datasets? If the data used to train an LLM contains, for example, gender or racial stereotypes, or dominant cultural and geographic perspectives, these biases will be reflected in the model’s results. The question then becomes how to ensure that minority perspectives, which bring diversity and sometimes highlight less acknowledged and hidden truths, remain part of the education process.
  8. What should be done about algorithmic hallucinations? We would not have guessed it a year ago, but an AI hallucination is now a well-understood concept in public discourse. Not only do chatbots get things wrong, they can sometimes fabricate information altogether. Such poorly grounded content can lead to disinformation in searches, distorting reality and creating general confusion. How can we reduce AI-generated distortions to ensure they don’t impact the quality of learning materials or applied research content?
  9. What are the ethical boundaries of AI use, in terms of privacy and political impact? As personal data is input into systems such as LLMs, it can be stored and used in future training, often without attribution. Questions arise about how to ensure privacy (in the case of the education sector this is particularly important), what safeguards are in place to prevent AI from being used to disseminate inaccurate or discriminatory content, how these tools are trained, and what safety and security measures are implemented to protect users from erroneous information and harmful interactions. Additionally, there is the question of how AI handles copyright and content distribution.
  10. What does AI do to our environmental footprint? When combined with technologies such as the Internet of Things, AI has the potential to optimize resource consumption and promote sustainability, for example, in a university. However, the processing of large volumes of data demanded by AI increases energy consumption. By 2040, emissions from the Information and Communications Technology (ICT) industry as a whole are expected to reach 14% of global emissions. Most of these emissions will come from ICT infrastructure, in particular data centers and communication networks. Therefore, we face a major dilemma: how do we strike a balance between the sustainability benefits of AI and its environmental impact from greater energy consumption?

Without a doubt, AI has already had a transformative effect on education, and this trend will only continue. We therefore find ourselves at the intersection of unprecedented opportunities and daunting challenges. It’s important to ask the right questions and to move forward with careful consideration of purpose, ethics, and resources – always with the student in mind. It is the role of educators and academic institutions at large to lead the discourse around learning and AI so that we can move forward with integrity, inclusivity, and sustainability.


© IE Insights.