AI of the People, by the People, and for the People: The Need to Democratize Control Over AI

A futuristic scene depicting a brain surrounded by digital icons in a grand amphitheater filled with an audience.

By Alex Roche, Associate Director, Center for the Governance of Change at IE University and Violeta Ruiz, Research Assistant, Center for the Governance of Change at IE University

AI has demonstrated its capacity to strengthen democracy. Across the globe, civic actors are using AI to increase transparency and enhance citizen participation. Talk to the City, for example, is an open-source AI tool designed to enhance collective decision-making that has been successfully used in places as diverse as Michigan and Taiwan. In France, Make.org’s AI-powered Panoramic platform was used during the Citizen Assembly on End of Life to make complex deliberations more accessible to the public. Another good example can be found in Massachusetts, where MAPLE has developed an open-source platform to improve public engagement with the state law-making process. At the Center for the Governance of Change at IE University, our AI4Democracy project has researched these and other democracy-affirming uses of AI.

While these relatively small-scale examples show how AI can enhance democracy when deliberately designed and used for this purpose, there is no escaping the fact that the most powerful AI models are owned by a small number of big tech companies seeking to maximize profit. These companies choose the data used to train their models and the values to embed in their systems, without public input or much transparency. We run the risk of AI becoming a tool of private power rather than serving the public interest.

AI is an extremely powerful technology with the potential to transform every aspect of our lives, influencing the way we communicate, learn, and work. Do we want to leave key decisions about its development and deployment in the hands of a small number of private individuals driven by their economic interests? For AI to serve public purposes, it must be shaped by public input in a democratic process that upholds universal human rights principles. This means that AI models should be democratic by design. Beyond how these tools are used, it is crucial to ensure that the way they are created and trained is deliberately focused on respecting and advancing democratic values.

Some promising techniques are already being used to democratize AI. For example, Reinforcement Learning from Human Feedback (RLHF) fine-tunes AI models by having them learn from human preferences. Value-Sensitive Design (VSD) integrates ethical and social values into the development of AI. These and similar methods are particularly powerful when they are inclusive of multiple stakeholders and participatory – ensuring that a broad range of people take part in setting the values and rules the AI should follow.
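To make the RLHF idea concrete, the core of the approach is a reward model fitted to human preference judgments, so that responses people preferred score higher than those they rejected. The sketch below is a toy illustration of that preference-learning step (a simple Bradley-Terry model with made-up feature vectors and data); production systems use large neural networks and far richer pipelines:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def reward(w, features):
    # Toy linear reward model: score = w . features
    return sum(wi * xi for wi, xi in zip(w, features))

def train_reward_model(preferences, dim, lr=0.5, epochs=200):
    """preferences: list of (preferred_features, rejected_features) pairs
    collected from human raters."""
    w = [0.0] * dim
    for _ in range(epochs):
        for chosen, rejected in preferences:
            # Probability the model assigns to the human's choice
            p = sigmoid(reward(w, chosen) - reward(w, rejected))
            # Gradient ascent on the log-likelihood of that choice
            for i in range(dim):
                w[i] += lr * (1.0 - p) * (chosen[i] - rejected[i])
    return w

# Illustrative data: feature 0 = "helpfulness", feature 1 = "verbosity".
# Raters consistently preferred the more helpful, less verbose response.
prefs = [([0.9, 0.2], [0.3, 0.8]),
         ([0.8, 0.1], [0.4, 0.9]),
         ([0.7, 0.3], [0.2, 0.7])]

w = train_reward_model(prefs, dim=2)
# The fitted reward model now ranks the preferred responses higher,
# and can be used to fine-tune a language model toward those values.
assert reward(w, [0.9, 0.2]) > reward(w, [0.3, 0.8])
```

The point of the sketch is that whoever supplies the preference pairs effectively sets the values the model learns, which is why broadening who participates in that step matters.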

These techniques aim to align AI with pluralistic values – not just what is profitable, but what societies genuinely want and need. At the opposite end of the spectrum, concentrated control of AI creates serious risks to our democracies. We have already seen the consequences. Take Elon Musk’s Grok AI chatbot, which amplified misinformation during the 2024 U.S. presidential campaign by falsely telling X users that Democratic Party candidate Kamala Harris was ineligible to appear on the ballot. This incident shows how easily AI tools can distort public discourse and be used to benefit the interests of specific parties.

In response to these risks, citizens and governments are turning to the notion of “sovereign AI”. David Shrier, Professor of AI and Innovation at Imperial College London, defines it as a model in which governments aim to develop their own AI systems rather than relying solely on Big Tech. As he points out, this concept entails public participation and oversight in the design and training of these AI systems to ensure they align with national interests.

However, not all sovereign AIs are the same – the nature of the state behind these systems matters. Authoritarian regimes like Saudi Arabia are creating state-led AI systems such as Humain. Developed with little democratic participation or transparency, they serve as an example of the risks ahead of us. If democratic governments fail to act, we risk a future in which the only AI options available are dominated by tech oligarchs or authoritarian states.

Democratic countries should pick up their pace and offer alternatives. Some have tangible projects in the works. Spain, for example, has launched the ALIA project, which has been defined as “the first European public, open, and multilingual infrastructure which reinforces the technological sovereignty of Spain and Europe in the development of a transparent, responsible AI at the service of people.” The Netherlands has also embarked on developing its own AI model, GPT-NL, specially developed for applications within the security domain, from police services to the Public Prosecution Service.

What does it mean to democratize AI control?

Calls to democratize AI have become increasingly common, but the term is rarely defined. As Elizabeth Seger, Director of Digital Policy at Demos and affiliate of the AI: Futures and Responsibility Project at the University of Cambridge, points out, democratization can mean different things: ensuring equitable access to AI tools, involving a wide range of stakeholders in their design and development, distributing AI’s benefits equitably, and, most importantly, democratizing the governance of AI itself. This last point is essential – access to AI is meaningless if the systems behind it perpetuate misinformation, discriminatory biases, or authoritarian values. Democratizing AI control means building an AI infrastructure that is publicly accountable: models developed transparently, funded by public institutions, and open to civic involvement.

Open-source models are a crucial element of this vision: their code, and often their training data, are publicly available for inspection and reuse, guaranteeing a level of transparency that proprietary models typically lack. Because the global research and developer communities can follow the entire process, anyone can spot biases, vulnerabilities, and limitations and help improve these systems. Open-source access also helps counter the leverage tech giants hold over AI. By universalizing access to the necessary knowledge, it enables multiple actors to build their own systems, reducing the outsized influence tech giants exert over society and leveling the playing field.

It is imperative that we act now, because the choices we make today about the ownership and control of AI will shape the future of our economies, our political systems, and possibly our very existence as human beings. If we allow AI to remain in the hands of a few – whether big tech corporations or authoritarian regimes – we risk undermining democratic values and weaponizing information and knowledge for private interest and profit. If AI is to become a force for good, it must be governed democratically by the people it is supposed to serve.