ChatGPT and the Decline of Critical Thinking

The use of ChatGPT as a search engine has made the need for critical thinking more pressing than ever, writes Enrique Dans.

OpenAI’s launch of ChatGPT in late November 2022 captured the world’s attention and opened our eyes to a technology called machine learning that is capable of a surprising number of things. Note the language here. It is not artificial intelligence, because machines are not intelligent; they are simply programmed to consult databases and use statistics.

Advanced automation has been applied to all kinds of uses for years, but what ChatGPT has done is apply one specific area of machine learning, Large Language Models (LLMs), to a conversational environment, and it has dazzled the world in the process. As a result, more and more people are using it not so much for conversation, as it was initially designed, but as a search engine, a development that could challenge Google’s two decades of hegemony.

It’s worth pointing out that Google already has very similar technology, but has so far decided to keep it under wraps until it can be sure of its quality and reliability. An LLM answers questions based on affinities and statistical matches, and the only way to get “correct” answers is to filter them, that is, to exclude the unsuitable ones. This is not easy, so some answers will be partially incorrect, and sometimes they will be very wide of the mark, even if the language used reads like that of an expert.
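To make that concrete, here is a toy sketch in Python. Everything in it is invented for illustration (the probability table, the blocklist, the prompt); no real LLM works from a lookup table. But it shows why sampling by statistical weight produces fluent, confident answers with no built-in notion of truth, and why filtering can only exclude what its authors anticipated.

```python
import random

# Toy "model", invented for illustration: it knows only which continuations
# statistically tend to follow a prompt, not which of them are true.
NEXT_PROBS = {
    "the capital of France is": [
        ("Paris.", 0.80),
        ("Lyon.", 0.15),   # plausible-sounding, but wrong
        ("Mars.", 0.05),   # obviously unsuitable
    ],
}

BLOCKLIST = {"Mars."}  # crude post-hoc filter: it removes only what we listed

def answer(prompt: str) -> str:
    candidates = NEXT_PROBS[prompt]
    # Filtering excludes the anticipated nonsense...
    allowed = [(text, p) for text, p in candidates if text not in BLOCKLIST]
    texts, weights = zip(*allowed)
    # ...and the rest is sampled by statistical weight alone: "Lyon." still
    # comes out, fluent and confident, roughly 16% of the time.
    return random.choices(texts, weights=weights, k=1)[0]

print("the capital of France is", answer("the capital of France is"))
```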

Google, of course, immediately realized the threat posed by ChatGPT. In response, the company brought its two founders out of retirement and decided to incorporate similar technologies into around 20 of its products. Microsoft, now an investor in OpenAI, is about to do the same. (If you haven’t yet experienced the pleasure of creating Excel documents with an algorithm like ChatGPT, I highly recommend it.)

So far, so good: we have innovation driving competition.

Yet the problem here is what happens to critical thinking. A Google results page, with its links, shows us the source, and perhaps a fragment of text, before we finally click and land on the page to (hopefully) find what we were looking for. With ChatGPT and the like, we ask a question and receive a couple of paragraphs with the answer, and that answer may or may not be correct, though it certainly looks as if it is.

Such is the absence of critical thinking in the world today that many people take the first result on a Google page as gospel. I say this from experience. I have been called by several people convinced that I was the customer service manager of a Spanish airline, simply because I once wrote, long ago, about Air Europa’s disastrous customer service and the article was indexed at the top of Google’s results. Just try convincing angry flyers that you are not the person to attend to their complaints! It didn’t matter what I told them: according to their reading of the results page, Google said I was the airline’s customer service manager, and so I must be lying.

So, if people will accept Google’s word uncritically, imagine the response to ChatGPT. The answer to your search may be 100% bullshit, but whatever: for many people, it’s hard, reliable truth.

There are tools like Perplexity.ai that try to alleviate this by providing sources, which at least allows anyone who wants to fact-check an answer to do so. I would expect Google, which has a great deal at stake here, to go in that direction: not simply a single piece of text, but something more elaborate that lets the user check whether the answer comes from a scientific article, a tabloid newspaper, or a fringe group of anti-vaxxers, climate change deniers, or conspiracy theorists. At the very least, search engines have a responsibility to make it possible for users to probe their answers with a few clicks. This is a necessity, because the steady erosion of critical thinking exposes us to reliance on a tool that can easily be used to manipulate us.
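As a sketch of what that might look like, here is a small Python illustration. The structure and names are hypothetical, invented for this example (it is not Perplexity’s or Google’s actual API); the point is simply that an answer that carries its sources lets the reader judge a claim by its provenance.

```python
from dataclasses import dataclass, field

# Hypothetical structure, invented for illustration: an answer that carries
# the sources it was grounded in, so the reader can see whether a claim
# comes from a journal, a tabloid, or a conspiracy forum.
@dataclass
class SourcedAnswer:
    text: str
    sources: list[str] = field(default_factory=list)  # URLs behind the text

    def render(self) -> str:
        if not self.sources:
            return f"{self.text}\n(No sources given: treat with suspicion.)"
        cited = "\n".join(f"  [{i + 1}] {url}" for i, url in enumerate(self.sources))
        return f"{self.text}\nSources:\n{cited}"

print(SourcedAnswer(
    text="Example claim, generated from the documents cited below.",
    sources=["https://example.org/peer-reviewed-study",
             "https://example.com/newspaper-report"],
).render())
```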

Nevertheless, at the end of the day, we shouldn’t expect Big Tech to facilitate critical thinking: it’s something we must develop for ourselves, as individuals and collectively as a society. The problem is that our schools don’t teach it, preferring the comfort of a textbook and often banning smartphones in the classroom rather than embracing the challenge of teaching students how to use them properly.

That said, the education system cannot bear all the responsibility: parents also have an obligation to teach their children about how the world works. I’m afraid this means thinking twice about giving children a phone or tablet to keep them quiet.

After years of “this must be true because I saw it on television,” we now have “this must be true because the algorithm says so.” So, either we get a grip and start to prioritize critical thinking, or we’re going to end up in a collective muddle – and more vulnerable than ever to misinformation.

 

© IE Insights.
