Artificial intelligence is here, and humans are struggling to keep up. When an AI can decide if you live or die, we all need to start paying attention to the ethics involved. But what does a moral machine look like?


by Emily Fuller, a graduate of the International MBA and the Master in Business Analytics & Big Data at IE University.

What does it mean to be “good”? If you think back to when you were a child, chances are this was made clear in school or by your parents. It was probably simple: turning in your homework, playing nice with friends, sharing with your siblings… Most humans can agree on the basic tenets of morality, and we find these ideas embedded in the rules and regulations of society. Stealing and murder, for example, are generally frowned upon, and our laws reflect that.

But what happens when we don’t learn the rules, or we disregard them? The question matters more than ever, because we’re not just dealing with humans anymore. Advances in technology mean that artificial intelligence now makes millions of decisions for us every day. And ethics in AI matters.

AI is older than you think…

AI may seem like a new buzzword, but it has old roots. The philosopher René Descartes questioned whether machines could think in his 1637 Discourse on the Method—long before your phone recommended the next song to sing in the shower or which tie to buy to match those new shoes. Today, machines not only “think,” they act. While recommending purchases seems mostly harmless, AI’s actions are not always so benign. Instead of just keeping you entertained by playing you in chess, AI can drive your car, approve you for credit, and decide whether you’re a criminal. And when an AI’s decisions are the only thing standing between you and your safety, rather than just a checkmate, we all need to start paying attention.

Machines can think, but how smart are they?

When technology can make critical decisions, we have a responsibility to make sure those decisions are the right ones. We’re already seeing the impact of faulty AI. Amazon recently came under fire for a gender-biased recruiting tool that discriminated against female job applicants. And facial recognition used in law enforcement remains racially biased, meaning innocent people have ended up doing time for crimes they never committed.


Why? Because machines think much like children: they are taught to make decisions based on what they learn.

The key difference here, though, is that machines don’t distinguish between “good” and “bad.” They are taught by humans, data, and code to act without regard for right or wrong. What a moral map for a machine would look like is still very much unclear, and is the topic of growing debate as the stakes of AI reasoning continue to rise.

In light of these troubling cases of AI gone rogue, companies, governments, and academic institutions around the globe are grappling with the issue, working to devise frameworks for ethical AI. But with so many cooks in the conscientious kitchen, getting multiple stakeholders to agree on universal standards has proven difficult. The discord is so widespread that most bodies are simply drafting their own AI principles. Among the latest to emerge are the U.S. Department of Defense’s principles for AI in warfare, which have sparked even more debate and controversy.

Finding common ground

Despite the disagreement among global players over universal standards, there is one thing they all seem to have in common: the importance of algorithmic fairness. Making sure AI is unbiased and neutral is essential to designing technology properly across cultures.
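
To see what “algorithmic fairness” can mean in practice, consider demographic parity: does a system make favorable decisions for different groups at similar rates? The Python sketch below is a minimal illustration with made-up loan decisions; real fairness audits use richer metrics and real outcome data.

```python
# A minimal sketch of a demographic parity check (hypothetical data).
# Demographic parity asks: does the model approve each group at a
# similar rate? Large gaps are a red flag worth investigating.

def approval_rate(decisions):
    """Fraction of favorable (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

# Hypothetical loan-approval decisions (1 = approved) for two groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]
group_b = [0, 1, 0, 0, 1, 0, 0, 1]

rate_a = approval_rate(group_a)
rate_b = approval_rate(group_b)
gap = abs(rate_a - rate_b)

print(f"Group A approval rate: {rate_a:.2f}")  # 0.75
print(f"Group B approval rate: {rate_b:.2f}")  # 0.38
print(f"Demographic parity gap: {gap:.2f}")    # 0.38

# A gap this large would not prove discrimination on its own, but it
# is exactly the kind of signal an ethics review should flag.
```

A check like this is only a starting point; which fairness metric is the “right” one is itself part of the standards debate described above.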

Unfortunately, this task is far from simple. Some industries, such as healthcare, face stringent regulations requiring them to expose how their technology works; others face no such rules. This means companies can design AI without disclosing their proprietary methods. Although this intellectual property (IP) is often a company’s financial backbone, the lack of transparency makes it easier for algorithmic bias to slip through the cracks.

So what can we do now, while policymakers struggle to reach a consensus?

1. Support transparency and open-source code

Hold companies accountable for being open about their design principles and business practices so people can judge whether those practices are fair. Open-source code also means more innovation and more opportunity for new business models built on something other than proprietary data.

2. Ensure diverse teams for the development of AI

Teams that are diverse in gender, race, religion, sexuality, and other dimensions will help reduce bias by training technology on a wider cross-section of data, as the sketch after this list illustrates.

3. Engage in critical thinking

Interdisciplinary education is key to understanding morality.
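
To make point 2 concrete, here is a minimal Python sketch of a training-data representation audit. The records, fields, and values are invented purely for illustration; a real audit would cover many more attributes, and their intersections.

```python
# A minimal sketch of a training-data representation audit
# (all records and fields here are hypothetical).
from collections import Counter

# Invented training examples for a hiring model; in practice these
# would come from your real dataset.
training_data = [
    {"gender": "female", "label": "hire"},
    {"gender": "male",   "label": "hire"},
    {"gender": "male",   "label": "hire"},
    {"gender": "male",   "label": "no_hire"},
    {"gender": "male",   "label": "hire"},
    {"gender": "female", "label": "no_hire"},
]

counts = Counter(example["gender"] for example in training_data)
total = len(training_data)

for group, count in counts.items():
    print(f"{group}: {count}/{total} ({count / total:.0%})")

# If one group makes up only a third of the examples, the model has
# far less evidence about that group. Widening the cross-section of
# data is the first step toward reducing that imbalance.
```

Diverse teams tend to catch exactly this kind of gap, because they are more likely to ask whose data is missing in the first place.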

Next time you take a course in Python, consider adding a philosophy course, too. Just because machines can think now doesn’t mean we should stop thinking for ourselves. There are steps we can take to ensure AI is as unbiased as possible—maybe even more so than humans.