THE BRIEF #002 Depolarizing Social Media With AI
IN BRIEF
We’ve all heard that AI algorithms exacerbate political segregation online. But AI systems can also help depolarize social media networks by being designed to be agnostic to political data.
THE GIST
Social media platforms have significantly amplified stark political divisions within society, particularly at critical moments such as elections. Social media algorithms tend to show users content that aligns with their existing beliefs, creating echo chambers in which individuals are exposed primarily to viewpoints they already hold.
In the 2016 U.S. election and Brexit referendum campaigns, for example, memes and viral content played a substantial role in shaping public discourse, often simplifying complex issues into easily digestible, and sometimes misleading, visual snippets. Hyperpartisan and misleading content, amplified by this echo chamber effect, spreads rapidly, reinforcing existing biases and deepening social divisions.
There is, however, an alternative path, one that receives less media attention but holds great promise: AI systems can actively combat political segregation and polarization on social media by leveraging political science models and machine learning techniques.
THE TAKEAWAY
AI algorithms, which filter and curate the content we see online, can reinforce political segregation by promoting selective exposure to information. This lack of diversity in the content users encounter can deepen political divides. Traditionally, efforts to mitigate this risk have involved prescribing diversity in content recommendations. However, this normative approach raises several questions: How much diversity is enough? Who decides what constitutes appropriate diversity? And how do we ensure that enforcing diversity doesn’t itself exacerbate polarization?
Pedro Ramaciotti from Sciences Po and the French National Centre for Scientific Research (CNRS), and researcher in our AI4Democracy project, proposes an innovative solution that bypasses these normative dilemmas. By leveraging advanced AI techniques and political science models, Ramaciotti suggests that algorithms can be designed to be agnostic to political information. This means creating AI systems that do not rely on political data when making recommendations, thereby reducing the risk of political segregation without the need to prescribe specific levels of diversity.
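As a loose illustration of what "agnostic to political information" can mean in practice (this is a generic sketch, not Ramaciotti's specific method), one approach is to project a learned "political" direction out of the item embeddings a recommender uses, so that similarity scores can no longer exploit that dimension. The `political_axis` vector here is hypothetical; in a real system it would have to be estimated, for example from political science models of the representation space:

```python
import numpy as np

def remove_political_direction(embeddings: np.ndarray,
                               political_axis: np.ndarray) -> np.ndarray:
    """Project item embeddings onto the subspace orthogonal to a
    (hypothetical) 'political' direction, so downstream ranking
    cannot use that dimension."""
    u = political_axis / np.linalg.norm(political_axis)
    # Subtract each embedding's component along the political axis.
    return embeddings - np.outer(embeddings @ u, u)

# Toy example: 3-dimensional item embeddings; assume the political
# axis happens to be the first coordinate.
items = np.array([[0.9, 0.2, 0.1],
                  [-0.8, 0.3, 0.5]])
axis = np.array([1.0, 0.0, 0.0])
debiased = remove_political_direction(items, axis)
```

After the projection, every item has a zero component along the assumed political axis, while the remaining (non-political) dimensions of the embeddings are untouched.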
According to Ramaciotti, AI providers and developers should:
- Develop markers to detect political information within AI-generated data representations, in order to help identify and measure political biases in AI systems.
- Regularly self-assess their AI systems to detect and address unintended political biases. This involves using the predefined markers mentioned above to evaluate how AI algorithms process political information.
- Continuously evaluate and adjust their AI learning procedures to help mitigate the risks of political segregation. By experimenting with different algorithms and techniques, providers can neutralize unwanted political biases while maintaining accuracy.
- Facilitate collaboration with regulators and researchers by designing systems that allow external scrutiny.
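A "marker" of the kind described above could be any statistic that reveals how much political information a representation space carries. As a deliberately simple sketch (an assumption for illustration, not a method from the paper), one crude marker is the strongest correlation between any embedding dimension and a binary political label over a sample of users or items:

```python
import numpy as np

def political_leakage_score(embeddings, labels):
    """Crude marker: the maximum absolute correlation between any
    embedding dimension and a binary political label.
    Near 0 => little linearly readable political information;
    near 1 => strong leakage into the representation space."""
    y = np.asarray(labels, dtype=float)
    y = (y - y.mean()) / (y.std() + 1e-12)          # standardize labels
    X = np.asarray(embeddings, dtype=float)
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)  # per-dimension
    corr = (X * y[:, None]).mean(axis=0)            # Pearson correlations
    return float(np.max(np.abs(corr)))
```

A provider could track such a score over time as part of the regular self-assessment step, and treat a rising value as a signal to adjust the learning procedure.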
DELVE DEEPER
“Depolarizing and moderating social media with AI: Tools and guidelines leveraging representation spaces” by Pedro Ramaciotti – This paper, part of our AI4Democracy initiative, explores how AI systems can actively combat political segregation and polarization on social media by leveraging political science models and machine learning techniques.
“How social media platforms can reduce polarization” by Christian Staal Bruun Overgaard and Samuel Woolley – This Brookings article summarizes the results of the authors’ review of the scientific literature on how to bridge societal divides, presenting specific ways in which social media platforms can reduce polarization.