From Debut to Disruption: A Year with ChatGPT

ChatGPT’s inaugural year witnessed rapid success, triggering debates and challenges in AI ethics, innovation, and societal integration, writes Rafif Srour.


Image created by ChatGPT 4.0 (with additional generation by Adobe Photoshop) portraying a year of ChatGPT in an abstract style.


We have reached the end of our first year with ChatGPT, the advanced large language model launched by OpenAI that achieved instantaneous success – or at least sensation. Within just a week of its launch, ChatGPT had more than 1 million users and was generating a mix of enthusiasm and skepticism, not to mention heated regulatory and ethical debates. Assessing the exact number of articles written about ChatGPT (or related terms such as large language models and generative AI) over the past year is a challenging, if not impossible, task, if only because the number is substantial and continues to grow daily (according to a Google Scholar search, it is over 6,000). What is clear is that generative AI is catalyzing a paradigm shift, and that we must ask ourselves: are we ready for such significant change?

The development of AI over the years has been predominantly task-specific, with applications in areas such as autonomous vehicles and precision medicine. Generative AI and models like ChatGPT exhibit a broader range of capabilities, enabling them to create and innovate across multiple domains. Traditionally, certain human attributes were thought to be beyond AI’s reach, for example cognitive and emotional skills like creativity, critical thinking, problem-solving, empathy, intuition, and emotional intelligence. But now, generative AI is beginning to encroach on these domains, displaying capabilities in creating new music genres, writing imaginative essays, and even solving complex problems, such as optimizing logistics in supply chain management and developing innovative solutions in environmental conservation.

The technology underpinning ChatGPT’s evolution is truly groundbreaking.

This chart was created by the author based on the following reference.

This evolution started in the 1980s with the foundational Recurrent Neural Networks (RNNs), which predict what comes next in a sentence based on a sequence of letters and words of varying lengths – crucial for tasks like language modeling and text generation. In the 1990s, Long Short-Term Memory networks (LSTMs) enhanced memory retention – the network’s ability to “remember” and then use information from earlier parts of the sequence – over longer sequences, despite their still-limited language skills. The significant breakthrough came with the introduction of the Transformer neural network in 2017, which leveraged contextual relationships within text to great effect, setting the stage for the Generative Pre-trained Transformer models, GPT and GPT-2. These models could capture complex patterns and generate coherent text, though they were not without issues such as bias. The timeline progresses with GPT-3 in 2020, which produced responses nearly indistinguishable from human writing across various languages and contexts. Since then, several iterations have followed (InstructGPT; ChatGPT, which built upon GPT-3.5; and the latest, GPT-4.0, in March 2023), with each refining the model’s ability to interact more fluidly and responsibly.
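The “contextual relationships” that the Transformer leverages come from its self-attention mechanism, in which every word in a sequence weighs its relevance to every other word. The following NumPy sketch illustrates the core scaled dot-product attention computation in a toy setting; the function name, dimensions, and random embeddings are purely illustrative, not an excerpt from any GPT implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: rows become probability distributions.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Each output row is a weighted mix of the value rows, with weights
    # given by how strongly the query matches each key.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # pairwise query-key similarity
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V, weights

# Toy example: a "sentence" of 3 tokens with 4-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(X, X, X)  # self-attention: Q = K = V = X
```

Here `w[i, j]` says how much token `i` attends to token `j`, so each token’s output blends context from the whole sequence at once – the property that let Transformers overcome the sequential memory limits of RNNs and LSTMs.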

The ramifications of this technology on different industries, including healthcare, finance, entertainment, manufacturing, transport, retail, and education are still unfolding. The introduction of ChatGPT and similar technologies initially sent ripples through the higher education sector, provoking a wide range of reactions from outright resistance to cautious acceptance. Educators and institutions scrambled to preserve academic integrity while students, highly skilled at navigating digital landscapes, quickly leveraged these tools to circumvent traditional forms of assessment. The resulting arms race between proctoring technologies and innovative cheating methods served to highlight a fundamental mismatch between evolving tech capabilities and outdated educational paradigms.

As the year unfolded, it became clear to academic institutions that instead of resisting the tide of technology, it would be more beneficial to recalibrate and to integrate generative AI within educational platforms and, in doing so, foster more customized and interactive learning environments. Recently, the sophistication of GPT-4.0 has simplified the creation of virtual assistants tailored to educators’ specific needs. I personally have developed an AI-based digital counterpart, embedding my own teaching content to form a virtual educator, soon to become part of my classroom. In addition, more institutions are teaching digital responsibility and putting an emphasis on helping students understand how to ethically and effectively harness the power of AI.

Image generated by ChatGPT 4.0. Prompt: “If you are to paint ChatGPT, how would you do that Dalí style?”

In the pharmaceutical industry, for instance, AI’s ability to generate new drug formulations is not just about efficiency, it’s an expansion of human knowledge. A prime example is the discovery, in early 2020, of halicin, a novel antibiotic identified using AI algorithms at MIT. This breakthrough challenged and enhanced our understanding of pharmacology, in that halicin operates differently from most existing antibiotics and is effective against a range of resistant bacteria. Similarly, in scientific research, AI’s ability to process and analyze vast amounts of data can lead to new theories or insights, accelerating the pace of discovery and innovation and pushing the boundaries of what we know and can achieve in various scientific fields.

GPT models have revolutionized communication by enabling real-time, multilingual translation and content creation, thereby breaking language barriers and democratizing information access. In addition, these large language models’ ability to summarize complex documents and generate user-friendly content has transformed the way individuals and businesses process and understand large volumes of data. The focus is moving from the labor-intensive process of development and writing to the conception and curation of ideas.

Over the past year, many expressed concern, even fear, that AI development will lead to job displacement and loss, while others warned of the limitations and challenges posed by generative AI, namely bias, representativeness gaps, and the potential for spreading misinformation. Yes, certain jobs may become obsolete thanks to generative AI, but let’s not forget the not-so-distant Industrial Revolution and what it taught us: new roles and industries often emerge around technological advancements.

To address bias issues, it is important to understand that the quality of the outputs from these models is directly linked to their training data (garbage in, garbage out); as “better” data becomes available, more reliable outcomes are likely to be generated. What is more important at this stage is the urgent need for appropriate legislation and governance to mitigate misuse and uphold ethical standards. To highlight this point, imagine the consequences of a scenario where AI is utilized to propagate false narratives during an important election. In the absence of stringent governance, such AI-driven misinformation could significantly influence public opinion and disrupt democratic processes, exacerbating the digital divide by misleading those unable to distinguish between AI-generated fabrications and factual information.

In my opinion, it is still too early to tell whether we as a society are ready to accompany the rapid evolution of AI technologies. For not only must we go beyond acknowledging and addressing the challenges posed by generative AI but we must come to terms with AI in general. As industries evolve and new roles emerge, much like during the Industrial Revolution, we must look closely at our societal infrastructure, from education systems to legal frameworks, and determine whether it is evolving at a pace that matches the rapid development of AI. This is at the heart of our collective journey towards a future in which humans coexist with advanced AI technologies.

Note from the author: Writing this article was as much about the process as the content itself. Reflecting on my journey, I recall my first article on ChatGPT taking over a week to perfect. In contrast, this piece transitioned from concept to publication in just two days. Throughout the first day, ChatGPT 4.0 was an integral part of my workflow, assisting in structuring my thoughts, refining sections of the text, identifying weaknesses, and suggesting improvements. This collaboration led me to a pivotal question: what portions of this article were distinctly mine, and which were influenced by ChatGPT? The lines between my creative input and the AI’s assistance blurred. After completing the initial draft, I left it to rest on my desk. However, my mind lingered on it, continuously tweaking and reimagining parts of it, striving for a coherent and logical flow. The following day, I returned to my desk with a fresh perspective and re-crafted the entire article in what I hoped was my unique style – a notion I now ponder with some uncertainty.

This experience marks a paradigm shift in my approach to writing, suggesting a new era where our inherent storytelling abilities as humans are not just maintained but enhanced through collaboration with AI technology.


© IE Insights.