Who Decides AI’s Values? Inside OpenAI’s Ethics Debate
AI systems reflect the values of the people who build them – but who actually decides what those values should be? OpenAI ethicist Chloé Bakalar explains how AI companies think about governance, responsibility, scaling risks, and the long-term societal impact of generative AI.
© IE Insights.
Transcription
When generative AI was still a fairly new thing, the people who were at the forefront of it had no idea it was going to blow up the way that it did. When you're working on these new technologies, you have to keep in mind that it may turn out to be a really big deal, and if it does, how would you think about the challenges that come with scaling?
How would you think about what the long-term impacts on people are going to be? Everybody needs to be part of the conversation. It shouldn’t simply be one person who has a multi-billion dollar company and gets to dictate that this is what’s right for all people. Especially when we’re talking about global technologies that are operating all over the world, impacting billions of people.
The kinds of questions that AI ethics brings up are the same kinds of questions, in fact, that people have been asking for centuries. What does it mean to be a human? What does it mean to have responsibilities to others? What kind of responsibilities do we have for the things that we create? Especially when, you know, several steps down the line, they’re no longer in our field of vision.
Also, a number of the paradigm-shifting technologies that we've lived through raise really interesting and challenging ethics questions that we haven't quite solved, but that we've learned a lot from. One example is what we might have learned from the social media space. And I think AI has, in fact, learned a great deal from it.
Before there was AI ethics, there was bioethics, there was climate change ethics. But the same kinds of approaches, methodologies, and frameworks that we used to think about something like bioethics can be applied to this very different kind of technology: how we need to be thinking about the long-term societal implications of a technology that seems really cool at first but, once it scales, can have very different implications.
We need to ask what kinds of effects those have on people, and whether those really are the values that we want to be embedding into our technological tools. Technology companies, especially, are starting to take quite seriously that there are other perspectives besides the question of what we are technologically capable of building.
My job didn't exist. Companies didn't hire ethicists. I think that's one really important shift. We're in this really unique position with AI where we get to start at these early foundational stages. So we get to put our values in, or rather we get to be deliberate about our values: at a foundational level, what ethics means, what values we ought to care about, and how we should prioritize or make trade-offs between them.
It includes bringing in experts from civil society and other areas, certainly academia, who have been thinking about these questions a lot and, I know, have a lot to give. It includes regulators, the people who are directly impacted, the users, and then, yeah, the people who are building it, too. Bringing all of those voices together will never produce exact consensus, but it can give us some idea of the direction we want to go, and then we have to be ready to iterate as those views change and as the technology changes.
I'm a big believer in ethics being implementable. I prefer terms like ethics-forward decision-making, but I think what's most important, at least when working in ethics within industry and a tech company, is making sure that ethics isn't just grand statements: a list of principles where we can say, you know, we care about fairness and, oh, we care about safety.
The kinds of ethics that I think are most useful, and that you see most often in AI ethics, are normative ethics, so questions about what the world should be, and practical or applied ethics, which we can pursue through things like democratic fora, deliberative polls, regular polling, and qualitative and quantitative research. I think the biggest risk is opportunity cost.
Again, going back to the idea that there are so many things we can learn from what's happened before, a lot of people make comparisons between AI and nuclear technology, where people pulled back from attempts at really investing in and building out nuclear infrastructure for energy.
My fear is something like that happening to AI, people being very focused on the terrifying existential risks, “this is the end of humanity” and trying to pull back AI as much as possible. There are real opportunity costs. There are real people who are not benefiting, who could be benefiting from AI.
Values change. Societal norms and expectations evolve. But I want to make sure that we’re doing our part to create AI that is non-harmful but also really helpful and can really enable human flourishing and advancement.