The impact of AI on our society in the years to come will be profound. How can we ensure that this impact is a positive and ethical one? In this post, we look at why ethical AI is so important and the challenges of operationalizing it.

In thinking about how artificial intelligence works, it is not difficult to arrive at the analogy of a human brain: learning over time from the information it is provided, seeking patterns in that information to optimize its ability to apply those learnings to similar or never-before-seen problems. However, the power of AI lies in its ability to process vastly greater volumes of information, including streaming data, to detect patterns that might otherwise never be detectable by the human brain. This kind of superpower can be useful when processing over one hundred billion transactions per year and seeking, in real time, to detect costly fraud. This is how, using artificial intelligence technologies such as smart agents, neural networks, and case-based reasoning, Brighterion has been able to transform how fraud is detected and prevented across payment, healthcare and credit risk lifecycle ecosystems.
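To make the pattern-detection idea concrete, here is a minimal sketch of how an unsupervised anomaly detector might score transactions as they stream in. The features, model choice and numbers are illustrative assumptions for this post, not a description of Brighterion's actual technology.

```python
# A minimal, hypothetical sketch of real-time transaction scoring.
# Feature names and values are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic historical transactions: amount, hour of day, merchant risk.
normal = np.column_stack([
    rng.lognormal(3.5, 0.8, 10_000),   # typical purchase amounts
    rng.integers(6, 23, 10_000),       # mostly daytime hours
    rng.uniform(0.0, 0.3, 10_000),     # low-risk merchants
])

# Learn what "normal" looks like from past behaviour.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

def score_transaction(amount, hour, merchant_risk):
    """Return an anomaly score; lower means more suspicious."""
    return model.score_samples([[amount, hour, merchant_risk]])[0]

# Each incoming transaction is scored in real time as it arrives.
print(score_transaction(amount=40.0, hour=14, merchant_risk=0.1))    # routine
print(score_transaction(amount=9_500.0, hour=3, merchant_risk=0.9))  # flagged
```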

As AI continues to enable, improve and automate a growing number of tasks and processes across different industries, it is not only shifting how companies conduct business; it is also increasingly curating our daily experiences and shaping how we as individuals interact with our world. With our interactions no longer limited to the online world, thanks to connected devices, autonomous vehicles, robotics and other technologies, the line between the virtual and physical worlds is starting to blur. All of this is bound to generate even greater volumes of data (more than 2.5 quintillion bytes are already created every day), continuing to drive the case for AI's growth. Despite all the advancements so far, it is clear we are still at the cusp of what some refer to as the new age of AI, and there is no doubt that the impact of AI on our society in the years to come will be profound.

The question that many are now wrestling with is: how do we make sure that we end up on the right side of history and that AI's profound impact will be a positive one?

There is no innovation without risk, and many of the risks in AI have been documented and have become part of mainstream conversations, including in boardrooms, classrooms and the offices of regulators. Many are understandably worried that AI can easily be misused by bad actors. Yet, at least in the context of the private sector, much of the harm publicized to date has been the result of well-intentioned solutions developed or utilized by well-meaning actors. Many incidents in the long and fast-growing list of headlines about harm caused by AI involve unintended racial, gender, age or socioeconomic bias in the outcomes of AI solutions deployed across a variety of use cases.

Later posts in this blog series will explore how machines end up producing discriminatory outcomes, but a high-level explanation can be gleaned by going back to the human brain versus AI analogy. Just as we humans become biased by the information our brains are exposed to from childhood (e.g., most nurses I have seen in my lifetime have been women; therefore, a woman is what I may picture when I imagine a nurse), AI, which learns from the data it is provided, will also pick up any bias, prejudice and historical inequality reflected in that data. While some may argue the problem is not new, especially given the human brain analogy, we hear more about it in the context of AI because of AI's ability to discover, codify and scale the bias in the data, along with its potential discriminatory effects. The problem is further exacerbated by the lack of transparency around AI, whether awareness of its very use or understanding of how it arrives at its outcomes. The topic of bias in data and AI is explored in later posts in this series, but the key takeaway is that the ease with which AI can cause unintended harm at scale is a cause for concern for any responsible organization looking to innovate with data.
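To illustrate the mechanism, consider a small synthetic sketch: a model trained on historically biased decisions reproduces the disparity even when the protected attribute is excluded from its inputs, because a correlated proxy feature leaks it. Every feature, coefficient and number here is an invented assumption for illustration.

```python
# A minimal sketch of how a model inherits bias from its training data.
# The dataset, features and group labels are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)    # protected attribute (0 or 1)
skill = rng.normal(0, 1, n)      # true, group-independent ability
# Historical decisions favoured group 1 regardless of skill:
hired = (skill + 0.8 * group + rng.normal(0, 0.5, n)) > 0.5

# A proxy feature correlated with group (think: zip code) leaks group info.
proxy = group + rng.normal(0, 0.3, n)

# The protected attribute itself is deliberately NOT an input.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: selection rate {pred[group == g].mean():.2f}")
# Even without the group label as input, the proxy lets the model
# reproduce the historical disparity, and do so at scale.
```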

As the need for data and AI ethics becomes widely acknowledged, a growing number of organizations around the world have been establishing ethical data and AI principles and guidelines. While a commitment to protect individuals' rights and promote ethical outcomes is common across such principles and guidelines, what is ethical and fair can vary by context and culture. The term 'fairness' itself does not have a one-to-one translation in other languages, yet more than 20 different mathematical notions of fairness have been proposed around the globe. With regulation still catching up to this emerging area of concern, there is no standard approach to the highly complex and critical undertaking of promoting ethical AI. Further, even with clearly defined ethical criteria, operationalizing those principles and guidelines is no small task.
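As a small illustration of why those mathematical notions can conflict, the sketch below computes two common fairness criteria, demographic parity and equal opportunity, on a toy set of decisions. The labels and predictions are invented for this example.

```python
# Two of the many competing mathematical notions of fairness,
# computed on tiny illustrative arrays.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # actual outcomes
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])  # model decisions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected attribute

def selection_rate(pred, mask):
    """Share of positive decisions within a group."""
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    """Share of actual positives the model accepts within a group."""
    positives = mask & (true == 1)
    return pred[positives].mean()

# Demographic parity: are decision rates equal across groups?
dp_gap = abs(selection_rate(y_pred, group == 0)
             - selection_rate(y_pred, group == 1))

# Equal opportunity: are true positive rates equal across groups?
eo_gap = abs(true_positive_rate(y_true, y_pred, group == 0)
             - true_positive_rate(y_true, y_pred, group == 1))

print(f"demographic parity gap: {dp_gap:.2f}")  # 0.00 here
print(f"equal opportunity gap:  {eo_gap:.2f}")  # 0.33 here
# The same decisions satisfy one criterion while violating the other,
# which is why the "right" definition of fairness depends on context.
```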

The next few posts in this blog series will get into the specifics of what it takes to operationalize ethical AI, but if it is already starting to sound like a complicated and difficult-to-scale process, rest assured that it is. AI, Software 2.0, Industry 4.0 or, more broadly, any innovation with data should not be perceived as a shortcut to propel a business or our society forward. It is complex and requires a principled approach, one that recognizes the pitfalls of data and the risks of using data that may be biased or of insufficient quality. Such risks are further amplified in AI, where unintended consequences are not linear in their impact: AI, which learns from data, not only perpetuates the problems and biases it picks up, it amplifies and scales them.

AI governance, like any risk management-focused process, is a significant undertaking without guarantees. Yet, in the absence of such processes and systematic approaches for promoting responsible and human-centered innovation, organizations face the risk of unintended outcomes and the long-term harm they impose, not just on the organizations themselves but on our society.

In the end, do the benefits of AI outweigh the risks? Absolutely, but only if those risks are mitigated through a principled, thoughtful and systematic approach that places humans and their rights at the center of the innovation process. Promoting trustworthy and human-centered AI that protects individuals from harm is not only the right thing to do; it is the only way to innovate sustainably at a time when trust is a valuable asset and a critical ingredient of innovation. It is also an opportunity to deliver on the promise of AI and its potential to have a truly positive impact on our society by augmenting decision-making to help build a fairer, more inclusive and decent world.


Joining Mastercard in 2019 as VP of Data Strategy for Strategic Growth, AI Governance and Data for Social Impact, Julia previously served as Managing Director of Data Products for Burgiss, a private capital data and analytics provider. Julia has an MBA in financial engineering from the University of Toronto and is a graduate of Harvard University's Business Analytics Program.