

The Impact of Achieving Responsible AI in Finance

Last Updated on January 16, 2021 by Editorial Team

Author(s): Anusha Sethuraman


To take advantage of AI and machine learning, financial institutions have to navigate implementing complex new technology in one of the most regulated industries in the world. In October 2020, Fiddler’s 3rd annual Explainable AI Summit brought together panelists from the financial services industry to discuss the impact and growth of responsible AI and the evolving ways in which model risk is evaluated. We’ve distilled the main points below, and you can watch the entire recorded conversation here.

Risk management for financial models

In 2011, the Federal Reserve published a document called SR 11–7 that remains the standard regulatory document for model risk management (MRM). MRM teams are a key function at financial institutions, assessing risk and validating models before they go into production. With the emergence of AI and ML models, the MRM field has evolved and continues to evolve to incorporate new tools and processes. Compared to traditional statistical models, AI models are more complex and less transparent (they’re often compared to a “black box”), with more risks to consider across several key areas:

  • Design and interpretation: Is the model serving its intended purpose? Are the people interpreting the model’s results aware of any assumptions made by the people who designed the model?
  • Data: Do the data sources for the model meet privacy regulations? Are there any data quality issues?
  • Monitoring and incident response: Do the model’s predictions continue to perform well in production? How do we respond when there is a failure?
  • Transparency and bias: Are the model’s decisions explainable to a compliance or regulatory body? Have we ensured that the model is not inherently biased against certain groups of people?
  • Governance: Who is responsible for the model? Does it have any codependencies in the institution’s internal “model ecosystem”?

Design and interpretation

The interpretation of the model’s inputs and outputs is often far more important than the exact machine learning method used to derive the results. In fact, validation is less about proving that the model is correct (since there is no such thing as a 100% correct model), and more about proving that it’s not wrong. Wrong decisions can come from incorrect assumptions or a lack of understanding of the model’s limitations.

Imagine that you have data on aggregate consumer spending for the restaurant industry, and you want to design a model that will predict revenue. The data scientist might decide to simply aggregate the spending data by quarter, compare it to the company’s quarterly reports, and derive the revenue prediction. But a financial analyst will know that this approach doesn’t make sense for every company. For example, Chipotle owns all of its stores, but McDonald’s is a franchise business. While every dollar spent at Chipotle flows directly into revenue, at McDonald’s, consumer spending is not directly, or even necessarily linearly, correlated with revenue.
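To make the distinction concrete, here is a minimal sketch of the adjustment a modeler might apply; the spending figures and the franchise take rate below are illustrative placeholders, not real data:

```python
import pandas as pd

# Hypothetical quarterly card-spend data for two restaurant chains.
spend = pd.DataFrame({
    "quarter": ["2020Q1", "2020Q2"],
    "chipotle_spend": [1.35e9, 1.10e9],
    "mcdonalds_spend": [9.80e9, 8.20e9],
})

# Chipotle owns its stores, so aggregate consumer spend maps roughly
# one-to-one onto reported revenue.
spend["chipotle_revenue_est"] = spend["chipotle_spend"]

# McDonald's collects rent and royalties from franchisees, so only a
# fraction of each consumer dollar reaches corporate revenue. The take
# rate here is an illustrative placeholder, not a real figure.
FRANCHISE_TAKE_RATE = 0.15
spend["mcdonalds_revenue_est"] = spend["mcdonalds_spend"] * FRANCHISE_TAKE_RATE
```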

Data

Traditional financial models were bounded by something called the “curse of dimensionality”: the humans who built these models could only handle so many variables before the complexity became unmanageable. Machine learning models, on the other hand, have an almost endless appetite for data.

As a result, financial institutions often feed their models with diverse, high-cardinality data sets that might hold clues to how markets are behaving (e.g. clickstream data, consumer transactions, business purchase data). Organizations must make sure that they are using this data in compliance with privacy laws. Quality is another key issue, particularly when working with unusual, bespoke data sources. Financial institutions must also defend against malicious actors who seek to use ML data as an attack vector.
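As a hedged sketch of what automated checks on a bespoke feed might look like before it reaches a model (the file name, the event_time column, and the assertion are all hypothetical):

```python
import pandas as pd

def basic_quality_checks(df: pd.DataFrame) -> dict:
    """Run simple data-quality checks on an incoming feature table."""
    return {
        # Missing values can silently degrade model performance.
        "null_rate": df.isna().mean().to_dict(),
        # Exact duplicates often indicate a broken ingestion pipeline.
        "duplicate_rows": int(df.duplicated().sum()),
        # Stale feeds are a common failure mode for market data.
        "latest_timestamp": str(df["event_time"].max()),
    }

# Hypothetical clickstream extract with an 'event_time' column.
feed = pd.read_parquet("clickstream_2020q4.parquet")
report = basic_quality_checks(feed)
assert report["duplicate_rows"] == 0, "duplicate records in feed"
```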

Monitoring and incident response

Once a model is deployed to production, the finance industry and its regulators are looking for stability and high-quality predictions. However, production can be full of issues like data drift, broken data pipelines, latency problems, or computational bottlenecks.
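One widely used way to detect data drift is the population stability index (PSI), which compares a feature’s distribution in production against its distribution at training time. A minimal sketch, assuming train_scores and production_scores are arrays of a model input or output; the 0.2 threshold is a common rule of thumb, and alert_model_owner is a hypothetical hook:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a training-time sample and a production sample."""
    # Bin edges come from the training (expected) distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Guard against empty bins before taking logs.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

psi = population_stability_index(train_scores, production_scores)
if psi > 0.2:  # conventional "significant shift" rule of thumb
    alert_model_owner(psi)  # hypothetical paging hook
```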

Just as we prepare for planes to crash, it’s important to prepare for models to fail. Models can fail in complex and unpredictable ways, and existing regulations may not always address the requirements around responding to failures. It’s important for financial institutions to develop contingency plans. One way that MRM teams are doing this is by getting involved in the entire model lifecycle, from design to deployment and production monitoring, rather than just being involved at the validation stage.
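One common contingency pattern is to keep a simpler, pre-approved fallback model behind the primary ML model. Everything named in this sketch (ml_model, rules_model, the sanity check, the incident hook) is a hypothetical stand-in:

```python
def score_with_fallback(features):
    """Serve the primary ML model, but fall back to a simpler,
    pre-approved rule-based model if the primary fails its checks."""
    try:
        score = ml_model.predict(features)
        if not passes_sanity_checks(score):  # e.g., range or drift checks
            raise ValueError("score failed sanity checks")
        return score, "primary"
    except Exception:
        log_incident(features)  # hypothetical incident-response hook
        return rules_model.predict(features), "fallback"
```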

Governance

Model governance is a broader category of risk. Outside of validating a single model, financial institutions need to manage the interdependencies between their models and data. However, since they lack good tools to manage their models in a centralized way (and there may be incentives to develop models “under the radar,” outside of regulations), many financial institutions struggle to track all of the models that they are currently using. Model ownership is also not always clearly defined, and owners may not know who all their users are. When downstream dependencies aren’t inventoried, a change in one model can break another without anyone noticing.
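A minimal sketch of the kind of centralized inventory that makes such dependencies visible; the fields, owners, and model names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One entry in a centralized model inventory."""
    model_id: str
    owner: str                                    # a named, accountable person
    upstream: list = field(default_factory=list)  # models this one consumes

inventory = {
    "credit_score_v3": ModelRecord("credit_score_v3", owner="jane.doe"),
    "limit_assignment_v1": ModelRecord(
        "limit_assignment_v1", owner="john.roe",
        upstream=["credit_score_v3"],
    ),
}

def impacted_by(model_id):
    """List every model that consumes model_id, so a planned change can
    be announced to downstream owners before it breaks their models."""
    return [m.model_id for m in inventory.values() if model_id in m.upstream]

print(impacted_by("credit_score_v3"))  # -> ['limit_assignment_v1']
```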

Transparency and bias

Regulators require that the outputs from AI/ML models can be explained, which is a challenge, since these are highly complex, multi-dimensional systems. Regulatory concerns are not as difficult to mitigate today as they were even several years ago, thanks to the adoption of new explainability techniques. While three or four years ago credit decisioning wouldn’t have been possible with AI, today it is possible with the right explainable AI tools in place.
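Post-hoc attribution methods such as SHAP are one family of such techniques. A hedged sketch of producing per-decision attributions for a tree-based credit model, where the model choice and the applicant data (X_train, y_train, X_applicants) are placeholder assumptions:

```python
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Placeholder credit model; X_train and y_train are assumed to exist.
model = GradientBoostingClassifier().fit(X_train, y_train)

# SHAP decomposes each applicant's score into additive per-feature
# contributions, which can back an adverse-action explanation.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_applicants)

# For a denied applicant, the most negative contributions identify
# the features that pushed the decision toward denial.
```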

Model risk managers also use explainable AI techniques to investigate issues of bias at the level of both the data and the model outputs. Bias in ML is a real problem, leading recently to accusations of gender discrimination in Apple’s algorithmically determined credit card limits and to an investigation of UnitedHealth’s algorithms for racial discrimination in patient care. Linear models can be biased, too, but machine learning models are more likely to hide the underlying biases in the data, and they might introduce specific, localized discrimination. As with many other areas of risk, financial institutions have needed to update their existing validation processes to handle the differences between machine learning and more traditional predictive models.
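One simple output-level check is an adverse impact ratio across groups, screened against the four-fifths rule of thumb borrowed from US employment law. The applications table and its columns in this sketch are hypothetical:

```python
import pandas as pd

def adverse_impact_ratio(df, group_col, approved_col, reference_group):
    """Approval rate of each group relative to a reference group."""
    rates = df.groupby(group_col)[approved_col].mean()
    return rates / rates[reference_group]

# Hypothetical scored applications with a 0/1 'approved' outcome.
ratios = adverse_impact_ratio(applications, "gender", "approved", "male")
flagged = ratios[ratios < 0.8]  # four-fifths rule of thumb
```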

The future of AI/ML in finance

Finance’s existing model validation infrastructure and its culture of working within regulations and constraints mean that, over the next few years, these institutions are perhaps even better positioned than big tech to achieve responsible AI.

Automating model validation

One change we can expect to see is more automation in model validation. At many financial institutions, especially smaller ones with fewer resources, validation can still feel stuck in the 20th century. There are a lot of manual steps involved: validators generate their own independent scenario tests, data quality is reviewed by hand, and so on. With careful oversight and advanced tooling, it may be possible to validate models with the help of AI, by comparing predictions against benchmark models. This would reduce the overhead required for model risk management, allowing validators to focus on higher-level tasks.
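A sketch of what that benchmark comparison could look like; the models, scenario data, and tolerance are all illustrative assumptions:

```python
import numpy as np

def divergent_scenarios(candidate, benchmark, scenarios, tolerance=0.05):
    """Flag scenarios where a candidate model diverges materially from
    an approved benchmark model; flagged cases still go to a human."""
    cand = candidate.predict(scenarios)
    bench = benchmark.predict(scenarios)
    gap = np.abs(cand - bench)
    return scenarios[gap > tolerance * np.abs(bench)]
```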

More applications for AI

With the availability of large-scale data, and advancements in explainable AI to help mitigate regulatory concerns, the finance industry has pushed ahead in adopting AI in the past few years across areas like fraud analysis and credit line assignments. Even where AI isn’t yet trusted to make decisions in finance, it’s being used to narrow the field of potential decisions. For example, when a firm is looking to make investments, AI can be used to surface the top recommendations and help the firm prioritize its time.

Retail banking will probably continue to see the earliest adoption of new AI techniques, since there is more access to data in this line of business than other types of financial services. Investment banking will likely be next to adopt AI, with asset and wealth management and commercial banking following behind.

Explainable AI remains a priority

Financial stakeholders are demanding and will continue to demand explainability — whether it’s regulators needing to know how a model made its credit decisions, or clients demanding explanations for a model’s trading decisions. As an example of banks’ commitment to this area, J.P. Morgan has developed a Machine Learning Center of Excellence with a research branch that investigates methodologies around explainability and a development branch that advises model designers on the best ways to develop effective and explainable models.

Conclusion

The financial industry operates under an extreme level of government regulations and public scrutiny, which can be a challenge for implementing AI — but it may also be a blessing in disguise. To get responsible AI right, organizations need to have a culture of creating transparent models, understanding data privacy, addressing discrimination, and testing and monitoring relentlessly. While there is still more work to be done, financial institutions may be even better prepared than big tech to achieve responsible AI.

This article was based on a conversation that brought together panelists from financial institutions, as part of Fiddler’s 3rd annual Explainable AI Summit on October 21, 2020. You can view the recorded conversation here.

Panelists:

Michelle Allade, Head of Bank Model Risk Management, Alliance Data Card Services

Patrick Hall, Visiting Professor at GWU, Principal Scientist, bnh.ai and Advisor to H2O.ai

Jon Hill, Professor of Model Risk Management, NYU Tandon School of Financial Risk Engineering

Alexander Izydorczyk, Head of Data Science, Coatue Management

Pavan Wadhwa, Managing Director, JPMorgan Chase & Co.

Moderated by Krishna Gade, Founder and CEO, Fiddler

Originally published at https://blog.fiddler.ai on December 15, 2020.



Published via Towards AI
