Financial organizations and their customers want to learn more about how AI works, often describing the “black box” of AI as a complete mystery. With black box models, it’s not clear how scores are generated or for what reasons. Explainable AI, or “white box” modeling, takes away the enigma by providing transparent reasons for outcomes affecting banking customers’ finances.

It comes as no surprise that explainable AI and machine learning (ML) models are becoming increasingly important. Like other developers and users of AI, financial institutions (FIs) must develop and deploy models responsibly, in compliance with applicable legal and regulatory requirements. That may mean complying with different regulations and guidance on explainability in each geography in which an FI operates. While certainly a challenge, this should not undermine confidence in deploying AI and ML – especially as data scientists become better equipped to develop and train transparent models.

In this article, we will look at how this regulatory landscape creates opportunities for FIs to establish model governance and ethics standards that focus on explainability, bias mitigation and transparency.

Explaining the unexplainable: the black box of AI

It’s late at night and your banking customer has just left the ER with their sick child, prescription in hand. They visit the 24-hour pharmacy, wait for the prescription to be filled and use their credit card to pay. The transaction is declined. They know their payments are up to date, and the clerk says they’ll have to call their bank in the morning. They go home with a crying child and no medication. You get a call the next morning, asking “What went wrong?”

It could be several things, from insufficient funds or suspected fraud to delayed payment processing or simply a false decline. Some issues are traceable, and others are not. This can be difficult to explain to a customer and may erode trust in the relationship with the FI. Customers often come away without a clear answer, and some vow to change FIs.

“‘Insufficient funds’ is one of the most common reasons for declined transactions, even when that may not be the case,” says Amyn Dhala, Brighterion Chief Product Officer and Global Head of Product for AI Express at Mastercard.

In the AI development world, a black box is a solution whose users know the inputs and the final output but have no visibility into the process that produces the decision. Often it isn’t until a mistake is made that problems start to surface.

A closed environment leaves room for error

When an AI model is developed for use in the banking sector, it is trained with historical, anonymized and aggregated data, so it learns to predict events and score transactions based on historical patterns. Once the model goes into production, it receives millions of data points (drawn from approved sources) that then interact in billions of ways, producing outputs faster than any team of humans could achieve.
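
As a rough illustration of that training-then-scoring flow, here is a minimal sketch in Python with scikit-learn; the synthetic data and features are invented and do not reflect any production system:

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for historical, anonymized, aggregated transactions.
    # In reality the columns might capture amount, velocity, merchant risk, etc.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 4))
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=10_000) > 2.5).astype(int)

    X_train, X_live, y_train, _ = train_test_split(X, y, random_state=0)

    # The model learns historical patterns...
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # ...then scores incoming events in production, far faster than a human team.
    fraud_probabilities = model.predict_proba(X_live[:5])[:, 1]
    print(fraud_probabilities.round(3))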

The risk is that the machine learning model may be generating these outputs in a closed environment, understood only by the team that originally built the model. This lack of explainability was cited as the second-highest concern, after regulation and compliance, by 32 percent of financial executives responding to the 2022 LendIt annual survey on AI.

How explainable AI/white box models work

Explainable AI, sometimes known as a “white box” model, lifts the veil on machine learning models by assigning reason codes to outputs and making them visible to an FI’s users. Users can review these codes to both explain and verify outcomes. For example, if an account manager or fraud investigator suspects that several outputs exhibit a similar bias, developers can alter the model to remove it.
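
One simple way to generate such codes (a sketch only, not Brighterion’s actual method, with invented feature names and code labels) is to rank each feature’s contribution to a score and report the top contributors:

    # Hypothetical per-feature contributions to one transaction's score,
    # e.g. coefficient * value from a linear model, or SHAP values.
    contributions = {
        "txn_amount_vs_history": 0.9,
        "merchant_category_risk": 0.4,
        "time_since_last_txn": -0.2,
        "geo_distance_from_home": 0.7,
    }

    # Invented mapping from model features to human-readable reason codes.
    REASON_CODES = {
        "txn_amount_vs_history": "R01: amount unusually high for this cardholder",
        "merchant_category_risk": "R02: high-risk merchant category",
        "geo_distance_from_home": "R03: far from the cardholder's usual locations",
        "time_since_last_txn": "R04: unusual transaction velocity",
    }

    # Attach the two strongest upward drivers of the score to the output.
    top = sorted(contributions, key=contributions.get, reverse=True)[:2]
    print([REASON_CODES[f] for f in top])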

“Good explainable AI is simple to understand yet highly personalized for each given event,” Dhala says. “It must operate in a highly scalable environment processing potentially billions of events while satisfying the needs of the model custodians (developers) and the customers who are impacted. At the same time, the model must comply with regulatory and privacy requirements as per the use case and country.”

To ensure the integrity of the process, an essential component of building the model is privacy by design. Rather than reviewing a model after development, systems engineers consider and incorporate privacy at each stage of design and development. So, while reasons for outcomes are personalized, consumers’ privacy is proactively protected, embedded in the design and build of the model.
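
A minimal sketch of the idea (not Mastercard’s actual design; the salt configuration is invented) is to pseudonymize identifiers at ingestion so the model never sees raw personal data:

    import hashlib
    import os

    # Hypothetical deployment secret; in practice this would come from a vault.
    SALT = os.environ.get("PIPELINE_SALT", "dev-only-salt").encode()

    def pseudonymize(account_id: str) -> str:
        """Replace a raw identifier with a salted one-way hash at ingestion."""
        return hashlib.sha256(SALT + account_id.encode()).hexdigest()

    record = {"account_id": "1234567890", "amount": 42.50, "merchant": "PHARMACY"}

    # Downstream training and scoring only ever see the pseudonymized record.
    model_input = {**record, "account_id": pseudonymize(record["account_id"])}
    print(model_input)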

AI transparency helps prevent bias in banking

Dhala says the path to AI transparency is responsible AI model governance. This overarching umbrella determines how an organization regulates access to the model and its data, implements policies, and monitors the activities and outputs of its AI models. Responsible model governance creates the framework for ethical, fair and transparent AI in banking, protecting against bias.

“It’s important that you don’t cause or make predictions based on discriminatory factors,” he says. “Do significant reviews and follow guidelines to ensure no sensitive data is used in the model, such as zip code, gender, or age.”
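
That review can be partly automated. Here is a trivial sketch, with invented column names, that stops the sensitive fields Dhala mentions from reaching training:

    import pandas as pd

    SENSITIVE_FIELDS = {"zip_code", "gender", "age"}  # per the guidance above

    df = pd.DataFrame({
        "zip_code": ["94103"], "gender": ["F"], "age": [34],
        "txn_amount": [42.50], "merchant_category": ["5912"],
    })

    # Flag and drop any sensitive columns before the model ever sees them.
    leaked = SENSITIVE_FIELDS & set(df.columns)
    if leaked:
        print(f"Dropping sensitive fields before training: {sorted(leaked)}")
    features = df.drop(columns=list(leaked))
    print(features.columns.tolist())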

For example, Mastercard developed the Five pillars of AI strategy to provide a framework for its technology operation. Ethical, responsible AI and AI for Good are important facets of AI development at Mastercard and form a structure from which its governance model is built.

“When AI algorithms are implemented, we need to implement certain governance to ensure compliance,” says Rohit Chauhan, Executive Vice President, Artificial Intelligence for Mastercard. “Responsible AI algorithms minimize bias and are understandable, so people feel comfortable that AI is deployed responsibly, and that they understand it.”

Meeting the demands of AI regulations

Model custodians must be watchful to ensure the model continues to perform in compliance with applicable laws and regulations, such as the Equal Credit Opportunity Act (ECOA) in the U.S. and the Artificial Intelligence Act in the EU.

As AI adoption grows, model explainability is increasingly important and is drawing stricter government scrutiny. Lenders need to consider two major forms of explainability:

  • Global explainability identifies the variables that contribute most across all predictions made by the model. Global explainability makes it easier to conclude that the model as a whole is sound.
  • Local explainability identifies the variables that contribute most to an individual prediction. For example, if someone applies for a loan and is rejected, which factors weighed most heavily against the applicant? This also helps to identify opportunities for alternative data sets to increase approvals while minimizing risk. Both forms are illustrated in the sketch after this list.
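
As a rough sketch of the distinction, using the open-source shap library on synthetic data (the loan features are invented and nothing below reflects Brighterion’s implementation):

    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingClassifier

    # Synthetic applicants: the columns might stand for income stability,
    # debt-to-income ratio and credit history length.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] - X[:, 1] > 0.5).astype(int)  # synthetic approve/reject label

    model = GradientBoostingClassifier(random_state=0).fit(X, y)
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # one contribution per variable per prediction

    # Global explainability: each variable's average influence over all predictions.
    print(np.abs(shap_values).mean(axis=0))

    # Local explainability: the contributions behind one applicant's decision.
    print(shap_values[0])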

Payments fraud: understanding why a transaction is flagged

When declining a transaction, it’s important to be certain – false positives are cumbersome for both the customer and the bank. The reason code must be easy to understand for a bank’s management team and its customers, both merchants and cardholders.

If a transaction is scored at a certain fraud level (e.g., 900 on a scale of 0-999), the AI model will provide reasons for that score. It might conclude that the transaction shows an anomaly that could indicate fraud, or that a legitimate transaction that should have been approved was flagged for a specific reason, such as the size of the transaction.
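
A scored response might look something like the following sketch; the probability-to-score mapping, threshold and reason codes are invented for illustration:

    def to_score(fraud_probability: float) -> int:
        """Map a model probability onto the 0-999 scale described above."""
        return round(fraud_probability * 999)

    # Hypothetical output for one flagged transaction.
    response = {
        "score": to_score(0.901),  # -> 900, above a bank-chosen review threshold
        "reasons": [
            "R01: amount unusually high for this cardholder",
            "R03: far from the cardholder's usual locations",
        ],
    }
    print(response)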

Building explainability into the AI model

For FIs, it’s important to choose an AI partner with extensive experience in building models that are designed in compliance with applicable regulations. This expertise is fundamental to building an explainable AI model that meets regulatory requirements at scale. Paired with each customer’s unique business challenge, this know-how ensures models are built effectively using a variety of AI/ML tools.

Today, models can be built in a matter of weeks, and implementation is highly efficient. Brighterion has a proven process, AI Express, that helps its customers develop, test and prepare models for deployment in under two months.

This highly collaborative process begins with Brighterion’s team working with customers to understand their business goals and challenges, determine desired outcomes, and establish how the data will be collected.

The development team then builds a model framework that supports the customer’s model governance and conforms to regulatory requirements, with explainability embedded throughout the model’s algorithms. The team omits data elements that could lead to problematic results or bias, and associates reason codes with every score. Scoring over 150 billion events annually for more than 2,000 organizations worldwide, Brighterion operates within a very secure model governance framework.

“We try to make it as simple as possible so users can look up reasons for scores,” Dhala explains. “The model must be accurate and explainable, so it must achieve both objectives. These two components balance the model.”

Dhala adds that these processes are very easy to replicate from one model to the next. “We are very experienced across sectors of financial services and what banks need across their spectrum of customers,” Dhala says. “We have a good feel for what banks need.”

Responsible AI in banking: thoughtful, transparent and explainable

AI’s increasing role in financial services has left some banks and their customers concerned about opaque “black box” decision-making. The remedy is transparent AI that protects individuals’ privacy.

Not only does explainable AI provide users with predictive scores, but it also helps them understand the reasons behind those predictions. This allows credit risk managers, for instance, to manage portfolios on a one-to-one level and develop more personalized strategies to improve their borrowers’ experiences. Merchant acquirers can understand trends before they cause problems, from fraud attacks to merchant instability.

By partnering with Mastercard, you can deploy a transparent Brighterion AI model with explainable predictions that benefit both your bank and your customers.