THE BRIEF #001 The Battle for the Mind
IN BRIEF
Generative AI and other emerging technologies are enabling the development of cognitive warfare, in which the human mind becomes the main battlefield, and we are not prepared for it.
THE GIST
On May 30, OpenAI announced that it had terminated accounts linked to five covert influence operations by state actors and private companies in Russia, China, Iran and Israel that were using its technology “in support of deceptive activity across the internet”. The operations primarily used AI models to generate content for various online platforms in different languages, and to fake engagement to influence public opinion on issues such as Western support for Ukraine, the conflict in Gaza, or criticism of the Chinese government by dissidents and foreign governments.
While OpenAI claims that its models do not appear to have significantly increased the campaigns’ engagement or reach, this incident is representative of a broader issue: in today's geopolitical landscape, where controlling the narrative is increasingly crucial in conflicts and political contests, both state and non-state actors are leveraging the latest technological innovations to influence and deceive citizens.
This has prompted NATO to consider establishing a new category of warfare, ‘cognitive warfare’, where the human mind becomes the main battlefield. In a recent publication, NATO explains how, exploiting advances in digital technology and cognitive science, cognitive warfare “takes well-known methods of warfare to a new level by attempting to alter and shape the way humans think, react and make decisions”.
THE TAKEAWAY
Cognitive warfare represents a new challenge for international security. Generative AI is not only going to increase the volume and reach of influence operations by lowering the financial and technical barriers to creating content. The growing authenticity of AI-generated content will also make it harder for citizens to tell what’s real and what’s not. Furthermore, the capacity of AI systems to learn and adapt their messages to their interlocutors will enable a new level of microtargeting and personalized disinformation.
The risks of manipulation and distortion of reality will grow as AR/VR technologies such as Apple Vision Pro or Meta Quest Pro, and brain-computer interfaces such as Elon Musk’s Neuralink, become widespread. The latter could give malicious actors greater access to our neural data (meaning greater insight into how we feel, think, or react to particular stimuli) and allow them to hack the “reality” around us, including our moods and behaviors.
Governance frameworks to mitigate the risks of emerging-tech misuse are more necessary than ever. NATO’s exploratory work on responding to cognitive warfare and the EU AI Act’s prohibition on the use of AI for “cognitive behavioral manipulation” are significant steps forward, but much remains to be done. Policymakers now need to determine which actions in the cognitive domain constitute acts of aggression, and which mechanisms best provide attribution and accountability.
DELVE DEEPER
The Battle for the Mind: Understanding and Addressing Cognitive Warfare and its Enabling Technologies, by Irene Pujol Chica and Quynh Dinh Da Xuan — This piece summarizes the key takeaways and recommendations from our roundtable discussion on Cognitive Warfare during the 2024 Munich Security Conference with Margrethe Vestager, Arancha González Laya, Anne-Marie Slaughter, and other renowned policymakers and industry leaders.
AI in Elections: The Battle for Truth and Democracy, by Carlos Luca de Tena — Our Executive Director, Carlos Luca de Tena, wrote this article a few months ago in which he explains the impact of AI-driven deception in elections and why governments, companies, and civil society must collaborate to swiftly address the limitations of existing regulation.