Scarlett Johansson Opens the Debate on AI Consent

Scarlett Johansson’s dispute with OpenAI over her voice highlights the legal and ethical challenges facing the AI industry, writes Robert Polding.

Chatbots are now popular with the general public thanks to the recent development of Large Language Models (LLMs) such as OpenAI’s ChatGPT. Of course, interacting with text-based chatbots is not exactly how we envisioned the future of AI. Years of goofy science fiction movies and TV series have conditioned us to imagine a future where we communicate with computer voices, robots, or AI avatars on screens. C-3PO from Star Wars springs easily to mind, as does Data from Star Trek, and even the satirical Holly from Red Dwarf. And let us not forget the unnerving and omnipresent HAL 9000 from Kubrick’s 2001: A Space Odyssey.

Indeed, the text-based chatbots of today are far from what we imagined AI would be – and it is unlikely they represent the future of this technology. AI is already transitioning from websites that require copy-and-paste actions to being seamlessly integrated into the tools and websites we use daily.

Even modern depictions of AI have not deviated from this vision. One of the most memorable AI movies of the last decade is the thought-provoking “Her,” starring Scarlett Johansson and Joaquin Phoenix. It tells the tale of a writer who buys an AI system to assist him; Johansson voices the AI, and Phoenix portrays the hapless romantic who falls in love with it, or her.

Voice cloning, also called deepfake audio, is not a new phenomenon. In 2022, OpenAI created Voice Engine, a tool that could clone anyone’s voice. However, the company deemed it too dangerous to release to the public, precisely because of how convincing it is. Then, in May 2024, OpenAI launched an improved voice assistant for ChatGPT with a selection of new voices, including a flirtatious female voice called Sky, whose tone and delivery were strikingly similar to Johansson’s AI character in Her.

Sam Altman, the CEO of OpenAI, contacted Johansson’s agent the day before launching the new feature to ask whether she would license her voice for a virtual assistant. It was not the first such request: the company had approached her earlier in the year with the same question, and Johansson declined on both occasions.

Then, after demonstrating the new voice assistant, Altman posted the single word “her” on the social media platform X. The post implied that Sky was a clone of the voice from the movie of the same name; it sparked a media frenzy, and Johansson immediately protested.

In reality, an actress had been hired to create the voice before OpenAI ever contacted Johansson, and there is a plethora of evidence showing that OpenAI made the voice independently, without sampling Johansson’s. The company held a casting call specifying that the person “should sound between 25 and 45 years old” and that their voice should be “warm, engaging [and] charismatic.” The hired actress stated that neither the movie nor Johansson was mentioned during the recording and creation of Sky.

Despite all this, the voice does sound very much like Johansson’s. In response to the controversy, OpenAI immediately disabled Sky as an option for its new voice assistant and issued a press statement acknowledging that it is unethical for AI companies to use people’s voices without consent, while maintaining that Sky’s voice is not based on Johansson’s. The feature has not been reintroduced. This is telling of OpenAI’s apprehension about the possible repercussions of using a voice that so clearly resembles Johansson’s, and it highlights the sensitivity around voice cloning.

Voice Cloning and Deep Fake Technologies

Even before the recent explosion in generative AI, voice manipulation was making headlines. In 2019, a scammer impersonated the voice of a German company’s CEO, convincing the head of its UK subsidiary to transfer over €200,000. The incident was widely seen as the birth of audio phishing, and the media and general public began to realize the implications of this new technology.

An even more shocking case followed in early 2020, when scammers cloned the voice of a company director and stole $35 million in the process. By the time generative AI came along, voice manipulation was an established technology already in active use for illegal activities. Now that AI can generate text, the potential to automate this technology and exploit people at scale is very real. Combined with video manipulation, it is clear that we will soon be unable to distinguish authentic video and audio footage from fake.

Voice cloning has massive implications for individual privacy, as it could be used to fabricate statements from anyone whose voice has been recorded. It also raises new questions about how we can protect our identities and ensure proper consent is obtained when this type of technology is used. The reaction from the creative industry has been unprecedented: Hollywood writers and actors went on strike, with the writers’ strike alone lasting 148 days. As a result, there has been progress on legislation that will protect actors against deepfake technologies, which obviously has a bearing on the Johansson case.

Despite the controversies surrounding voice cloning, the technology – when used with consent – holds significant potential. It could revolutionize the voiceover industry, create personalized experiences, and even resurrect the voices of beloved actors from the past. However, the current focus on controversies and their impact on actors’ careers and income has overshadowed these potential benefits.

Implications for the AI Industry

The media portray this as “just the start of legal wrangles” for the AI industry and, logically, the freedoms that AI companies have thus far enjoyed cannot last forever. Image generation is also under scrutiny as the prospect of selling AI-generated art hits the mainstream. All generative AI algorithms rely on human-generated data, and it is easy to replicate the style of any writer or artist. Authors are finding AI-generated books on Amazon. Music streaming services are allowing AI-generated music to be published, and this is set to impact every creative industry.

The creative industry is reacting, unsurprisingly, with lawsuits and hostility towards AI. Prominent authors, including George R.R. Martin, John Grisham, and Jodi Picoult, have filed a lawsuit against OpenAI for “systematic theft on a mass scale,” claiming that “Great books are generally written by those who spend their careers and, indeed, their lives, learning and perfecting their crafts. To preserve our literature, authors must have the ability to control if and how their works are used by generative AI.” Visual artists including Damien Hirst and Tracey Emin are also taking legal action against Midjourney, one of the leaders in image generation. Artists worldwide are contacting their lawyers to work together, as Hollywood has, to force AI technologies to conform to copyright law. The lawsuit against Midjourney named so many artists that it took 24 pages to list them all.

In the music industry, there is also a backlash, in this case against the Amazon-backed company Anthropic, which has developed a chatbot called Claude trained on, among other data, countless copyrighted lyrics. Some of the most prominent players in the music sector, including Universal Music, ABKCO, and Concord Publishing, are behind a lawsuit claiming that Claude illegally reproduces lyrics from its training data.

Technology like AI is hard to control because it is new, and legal systems and society still need time to adapt. This is a common trend in technology, with legal precedents set long after a technology’s launch. The internet and MP3s posed a similar threat in the early 2000s during the transition from physical to streaming media.

AI is a particularly complex technology to regulate because, once a model has been trained on data, removing that data from the system is difficult and expensive. If LLMs become highly restricted and cannot use copyrighted material, the capabilities of generative AI will be severely limited. For now, AI companies are ignoring the advice of their lawyers and pressing on with copyrighted works in their datasets. The most significant step would be government legislation to stop the use of copyrighted materials; in the UK, the House of Lords is already pressuring the government. Whether other countries follow suit will determine the future of many AI capabilities we currently take for granted.

Potential Solutions and Future Outlook

So, how can AI companies, and the organizations that deploy AI, mitigate the risk of lawsuits? It is uncharted territory from a legal standpoint and difficult to predict in the fast-moving world of generative AI. The obvious solution is to seek consent from individuals whose content or likeness may be used (one can argue either way about whether Altman did this effectively in Johansson’s case). AI companies must learn from these events and develop formal legal protections and guidelines for obtaining consent.

AI companies can also take a significant step by watermarking their generated content so that it is easy to identify as AI-made. The only way to get everyone on board with such protections and guidelines will be to make them industry-wide, which is a challenge in international industries like film and music. A battle is commencing between innovation and ethical considerations in AI, and no one can predict the outcome.

OpenAI has claimed that it is working on the “development of a broadly beneficial social contract for content in the AI age.” The entire industry is aware of, and braced for, a rocky journey ahead. The company is also developing tools such as Media Manager, which will enable authors and creators to opt out of generative AI training, but the tool is unlikely to see the light of day until 2025. Much of the industry sees this as too little, too late: a simple attempt by OpenAI to absolve itself of previous copyright violations.

The era of unrestricted AI use is coming to an end. Accountability is settling on the shoulders of the AI giants, who must now be concerned even about false accusations of deepfaking. There are many ethical problems surrounding the models and data being used, and the rise in legal actions against AI indicates that change is on the horizon.


© IE Insights.