Ethical Challenges in the Data World

Analytics has enabled us to find hidden patterns all around us—how consumers’ buying habits are changing, how users behave on webpages, where disease outbreaks are headed, which online purchases are likely fraudulent, and what the best routes are for shipping packages. Yet amid all this discussion of the benefits of technological advancement in the data world, we tend to overlook or downplay the harm these technologies may cause. As they become an integral part of our lives, we need to be aware of the challenges that may arise and tread carefully.

Each arena of new technology has a downside, and data tech, the rapidly evolving world of big data, machine learning, and the like, is no exception. So long as we are careful, we can get the most out of our data and try to mitigate any unwanted or not-so-obvious byproducts. Beyond the typical privacy concerns and the exaggerated “AI robots taking our jobs and ruling the world” scenarios, what hidden challenges can we expect? Some of the less-noted but nonetheless important concerns include:

  • Lack of accountability when using black-box algorithms.
  • Ill effects of misleading data.
  • Bias and discrimination in data and algorithms.

Who is to blame?

Many algorithms work like black boxes: we don’t know exactly what is happening every step of the way and we cannot always explain the reasoning behind the results. This could become risky as more aspects of our lives become controlled by data, so algorithm creators and users must be responsible and accountable for what their algorithms do. Algorithms can and do make mistakes, as well as ethically wrong decisions.

Take the case of banks issuing loans in the United States. The approval or denial, as well as the final interest rate, is more often than not determined by an algorithm that weighs the risk behind each applicant. As this algorithm becomes more complicated and falls into black-box territory, can we confidently justify its decisions? The creators and users of these algorithms need to be held accountable, and clear protocols may need to be defined to make that accountability possible.
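To make this concrete, here is a minimal, purely illustrative Python sketch. It uses synthetic data and invented feature names rather than any bank’s actual system, and it simply shows how a black-box risk score can drive a loan decision without offering a human-readable reason.

# Purely illustrative: a hypothetical black-box credit-risk model.
# The data is synthetic and the features are invented for this sketch.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical applicant features: income, debt ratio, credit history length.
X = rng.normal(size=(1000, 3))
# Synthetic "defaulted" labels with a noisy relationship to the features.
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=1000) < 0).astype(int)

# Hundreds of decision trees: often accurate, but hard to explain case by case.
model = GradientBoostingClassifier(n_estimators=300).fit(X, y)

applicant = np.array([[0.2, 1.1, -0.3]])     # one new, hypothetical applicant
risk = model.predict_proba(applicant)[0, 1]  # estimated probability of default
print("Estimated default risk:", round(risk, 2))
print("Decision:", "deny" if risk > 0.5 else "approve")
# The model returns a score and a decision, but no human-readable reason --
# which is exactly where the question of accountability begins.

Even in this toy example, answering “why was this applicant denied?” requires extra work beyond the model’s output, and that gap is what accountability protocols would need to close.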

 

Misconstrued figures

As data gains more power to shape our lives, we must be aware of its trustworthiness, its accuracy, and whether it is being put to appropriate use. Are the results of our study accurate? Does the data match its final purpose? If you are using Facebook data, for instance, keep in mind that it is not a representative picture of the population: people tend to present an inflated reality online, posting only the most positive elements of their lives. We must also ask whether data is being used for unethical purposes, a question that has surfaced in recent debates over whether data was used to manipulate voters during political elections.

Even more concerning lately has been the appearance of deepfakes—advanced machine-learning techniques used to superimpose images and videos onto originals in order to create new, surprisingly realistic clips. Similar technology has been used in the creation of 3D movies, but as the process becomes more mainstream, the spread of this misleading material could have dire consequences.

Algorithmic bias

Algorithms tend to reflect the beliefs and assumptions of their creators, reinforcing, however unconsciously, discrimination against people who are different from them. A simple example is facial recognition software: many of these applications struggle to recognize the faces of people with dark skin. This is usually attributed to a lack of diversity in the training datasets used to teach the applications, which in turn mirrors a lack of diversity among the teams that build them.
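To show the mechanism rather than any real system, here is a minimal Python sketch with entirely synthetic data and made-up groups. It illustrates how a model trained on an unbalanced dataset tends to be far less accurate for the under-represented group.

# Purely illustrative: unbalanced training data leading to unequal error rates.
# The "groups" and features here are entirely synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, shift):
    """Generate synthetic samples for one group; each group has its own pattern."""
    X = rng.normal(loc=shift, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(scale=1.0, size=n) > shift * 5).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
X_a, y_a = make_group(2000, shift=0.0)
X_b, y_b = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Evaluate on fresh samples from each group.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X_test, y_test = make_group(1000, shift)
    print(f"Accuracy on {name}: {model.score(X_test, y_test):.2f}")
# The model performs well on the well-represented group and roughly at chance
# on the other -- the same mechanism behind facial recognition systems that
# fail on faces missing from their training data.

Collecting more diverse training data, or giving more weight to the under-represented group, is one straightforward way to start narrowing this gap.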

Through her Poet of Code Movement, the Ghanaian-American computer scientist Joy Adowaa Buolamwini has worked to challenge this deep-rooted problem, known as the “coded gaze,” which is found throughout the data field but especially in facial recognition software. Movements such as these bring hope for more balance.

If we remain mindful of the potential dangers along the way, the benefits of data tech can far outweigh any damage it might cause. We should keep in mind the famous lesson from the Spider-Man comics, “With great power comes great responsibility,” and take care to use our newfound data powers properly.

 

© IE Insights.