The Impact of Achieving Responsible AI in Finance
Author(s): Anusha Sethuraman
To take advantage of AI and machine learning, financial institutions have to navigate implementing complex new technology in one of the most regulated industries in the world. In October 2020, Fiddler's 3rd annual Explainable AI Summit brought together panelists from the financial services industry to discuss the impact and growth of responsible AI and the evolving ways in which model risk is evaluated. We've distilled the main points below, and you can watch the entire recorded conversation here.
Risk management for financial models
In 2011, the Federal Reserve published a document called SR 11-7 that remains the standard regulatory document for model risk management (MRM). MRM teams are a key function at financial institutions, assessing risk and validating models before they go into production. With the emergence of AI and ML models, the MRM field has evolved and continues to evolve to incorporate new tools and processes. Compared to traditional statistical models, AI models are more complex and less transparent (they're often compared to a "black box"), with more risks to consider across several key areas:
- Design and interpretation: Is the model serving its intended purpose? Are the people interpreting the model's results aware of any assumptions made by the people who designed the model?
- Data: Do the data sources for the model meet privacy regulations? Are there any data quality issues?
- Monitoring and incident response: Do the model's predictions continue to perform well in production? How do we respond when there is a failure?
- Transparency and bias: Are the model's decisions explainable to a compliance or regulatory body? Have we ensured that the model is not inherently biased against certain groups of people?
- Governance: Who is responsible for the model? Does it have any codependencies in the institution's internal "model ecosystem"?
Design and interpretation
The interpretation of the model's inputs and outputs is often far more important than the exact machine learning method used to derive the results. In fact, validation is less about proving that the model is correct (since there is no such thing as a 100% correct model) and more about proving that it's not wrong. Wrong decisions can come from incorrect assumptions or a lack of understanding of the model's limitations.
Imagine that you have data on aggregate consumer spending for the restaurant industry, and you want to design a model that will predict revenue. The data scientist might decide to simply aggregate the spending data by quarter, compare it to the company's quarterly reports, and derive the revenue prediction. But the financial analyst will know that this approach doesn't make sense. For example, Chipotle owns all of its stores, but McDonald's is a franchise business. While every dollar spent at Chipotle is indeed directly connected to revenue, at McDonald's, dollar spend is not directly, or even necessarily linearly, correlated to revenue.
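To make the gap between the two approaches concrete, here is a minimal pandas sketch. The spend figures, franchise share, and royalty take rate below are all invented for illustration; the point is only that the spend-to-revenue mapping depends on the business model.

```python
import pandas as pd

# Hypothetical quarterly card-spend data; names and figures are illustrative.
spend = pd.DataFrame({
    "quarter": ["2020Q1", "2020Q2"],
    "mcdonalds_spend": [9_800.0, 8_300.0],  # $M, mostly franchised stores
})

# Naive approach: treat every dollar of consumer spend as revenue.
spend["naive_revenue"] = spend["mcdonalds_spend"]

# Analyst's view: for a franchisor, only a royalty/rent slice of franchise
# sales flows through as revenue. The 0.82 franchise share and 0.15
# effective take rate are purely illustrative assumptions.
FRANCHISE_SHARE, TAKE_RATE = 0.82, 0.15
spend["adjusted_revenue"] = (
    spend["mcdonalds_spend"] * FRANCHISE_SHARE * TAKE_RATE   # royalty slice
    + spend["mcdonalds_spend"] * (1 - FRANCHISE_SHARE)       # company-owned
)
print(spend[["quarter", "naive_revenue", "adjusted_revenue"]])
```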
Data
Traditional financial models were bounded by something called the "curse of dimensionality," meaning that the humans who built these models could only handle a certain amount of data and variables before the complexity became unmanageable. Machine learning models, on the other hand, have an almost endless appetite for data.
As a result, financial institutions often feed their models with diverse, high-cardinality data sets that might hold clues to how markets are behaving (e.g., clickstream data, consumer transactions, business purchase data). Organizations must make sure that they are using this data in compliance with privacy laws. Quality is another key issue, particularly when working with unusual, bespoke data sources. Financial institutions must also defend against malicious actors who seek to use ML data as an attack vector.
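As a sketch of what a basic intake screen for a bespoke data source might look like, the snippet below checks for known PII columns, missingness, cardinality, and dead columns. The column names, PII list, and specific checks are illustrative assumptions, not any institution's actual process.

```python
import pandas as pd

def screen_feature_frame(df: pd.DataFrame, pii_columns: set) -> dict:
    """Basic intake checks for a new data source (illustrative only)."""
    report = {}
    # Privacy: flag any columns on a known PII list before they reach a model.
    report["pii_leaks"] = sorted(set(df.columns) & pii_columns)
    # Quality: missingness and cardinality per column.
    report["null_fraction"] = df.isna().mean().round(3).to_dict()
    report["cardinality"] = df.nunique().to_dict()
    # Constant columns carry no signal and often indicate a broken pipeline.
    report["constant_columns"] = [c for c in df.columns if df[c].nunique() <= 1]
    return report

# Hypothetical clickstream extract; names are illustrative.
df = pd.DataFrame({"user_email": ["a@x.com", "b@y.com"],
                   "clicks": [12, 7], "region": ["US", "US"]})
print(screen_feature_frame(df, pii_columns={"user_email", "ssn"}))
```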
Monitoring and incident response
Once a model is deployed to production, the finance industry and its regulators are looking for stability and high-quality predictions. However, production can be full of issues like data drift, broken data pipelines, latency problems, or computational bottlenecks.
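One widely used way to quantify data drift in finance is the population stability index (PSI), which compares a feature's live distribution against its training-time baseline. Below is a minimal NumPy sketch; the bin count, synthetic data, and 0.25 alarm threshold are common conventions used here for illustration, not recommendations from the panel.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample."""
    # Bin edges come from the baseline (training-time) distribution.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    # Floor the proportions to avoid division by zero / log(0).
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 10_000)   # feature at training time
live = rng.normal(0.4, 1, 10_000)     # same feature, shifted in production
print(f"PSI = {psi(baseline, live):.3f}")  # > 0.25 is a common alarm level
```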
Just as we prepare for planes to crash, it's important to prepare for models to fail. Models can fail in complex and unpredictable ways, and existing regulations may not always address the requirements around responding to failures, so financial institutions need to develop their own contingency plans. One way that MRM teams are doing this is by getting involved in the entire model lifecycle, from design to deployment and production monitoring, rather than just being involved at the validation stage.
Governance
Model governance is a broader category of risk. Beyond validating a single model, financial institutions need to manage the interdependencies between their models and data. However, since they lack good tools to manage their models in a centralized way (and there may be incentives to develop models "under the radar," outside of regulations), many financial institutions struggle to track all of the models they are currently using. Model ownership is also not always clearly defined, and owners may not know who all their users are. When downstream dependencies aren't inventoried, a change in one model can break another without anyone noticing.
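A minimal sketch of the kind of centralized inventory that makes such dependencies visible is shown below. The registry schema, model names, and owners are hypothetical; real governance tooling would add versioning, approvals, and audit trails.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One entry in a central model inventory (schema is illustrative)."""
    name: str
    owner: str
    upstream: list = field(default_factory=list)  # models/data it consumes

REGISTRY = {
    "consumer_spend_features": ModelRecord("consumer_spend_features", "data-eng"),
    "revenue_forecast": ModelRecord("revenue_forecast", "quant-research",
                                    upstream=["consumer_spend_features"]),
    "credit_limit": ModelRecord("credit_limit", "risk",
                                upstream=["revenue_forecast"]),
}

def downstream_of(name: str) -> list:
    """Everything that would be affected if `name` changes."""
    hits = [m.name for m in REGISTRY.values() if name in m.upstream]
    for h in list(hits):
        hits += downstream_of(h)
    return hits

# Changing the feature pipeline silently touches two models downstream.
print(downstream_of("consumer_spend_features"))  # ['revenue_forecast', 'credit_limit']
```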
Transparency and bias
Regulators require that the outputs from AI/ML models can be explained, which is a challenge, since these are highly complex, multi-dimensional systems. Regulatory concerns are not as difficult to mitigate today as they were even several years ago, thanks to the adoption of new explainability techniques. While credit decisioning with AI wouldn't have been possible three or four years ago, today it is possible with the right explainable AI tools in place.
Model risk managers also use explainable AI techniques to investigate issues of bias at the level of both the data and the model outputs. Bias in ML is a real problem, leading recently to accusations of gender discrimination in Apple's algorithmically determined credit card limits and UnitedHealth's algorithms being investigated for racial discrimination in patient care. Linear models can be biased, too. But machine learning models are more likely to hide the underlying biases in the data, and they might introduce specific, localized discrimination. As with many other areas of risk, financial institutions have needed to update their existing validation processes to handle the differences between machine learning and more traditional predictive models.
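To make this concrete, here is a small sketch using the open-source SHAP library for per-decision feature attributions, followed by a simple demographic parity check. The data, model, protected attribute, and every threshold are synthetic and purely illustrative.

```python
import numpy as np
import pandas as pd
import shap  # pip install shap
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for credit data; features and labels are illustrative.
rng = np.random.default_rng(0)
X = pd.DataFrame({"income": rng.normal(60, 15, 500),
                  "debt_ratio": rng.uniform(0, 1, 500),
                  "tenure_years": rng.integers(0, 30, 500)})
y = (X["income"] / 60 - X["debt_ratio"] + rng.normal(0, 0.3, 500) > 0).astype(int)
model = GradientBoostingClassifier().fit(X, y)

# Per-decision attributions: the kind of evidence a validator or regulator
# would review to explain an individual approval or denial.
explainer = shap.Explainer(model, X)
attributions = explainer(X.iloc[:5])
print(attributions.values.round(2))

# A simple group-fairness check (demographic parity difference) on a
# hypothetical protected attribute that the model itself never sees.
group = rng.integers(0, 2, 500)
approvals = model.predict(X)
dpd = abs(approvals[group == 0].mean() - approvals[group == 1].mean())
print(f"demographic parity difference = {dpd:.3f}")
```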
The future of AI/ML in finance
In the next few years, finance's existing model validation infrastructure and its culture of working within regulations and constraints may leave these institutions even better positioned than big tech to achieve responsible AI.
Automating model validation
One change we can expect to see is more automation in model validation. At many financial institutions, especially smaller ones with fewer resources, the way validation happens can still feel stuck in the 20th century. There are many manual steps involved: validators generate their own independent scenario tests, data quality is reviewed by hand, and so on. With careful oversight and advanced tooling, it may be possible to validate models with the help of AI, by comparing predictions against benchmark models. This would reduce the overhead required for model risk management, allowing validators to focus on higher-level tasks.
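A minimal sketch of that idea: automatically comparing a candidate model against a simple, well-understood benchmark on held-out data, and escalating to a human validator when it underperforms. The models, synthetic data, and tolerance below are illustrative, not a prescribed validation standard.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for a validation data set; all numbers illustrative.
rng = np.random.default_rng(1)
X = rng.normal(size=(2_000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 2_000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

candidate = RandomForestClassifier(random_state=1).fit(X_tr, y_tr)
benchmark = LogisticRegression().fit(X_tr, y_tr)  # simple, transparent baseline

cand_auc = roc_auc_score(y_te, candidate.predict_proba(X_te)[:, 1])
bench_auc = roc_auc_score(y_te, benchmark.predict_proba(X_te)[:, 1])

# Automated gate: flag for human review rather than auto-approve.
TOLERANCE = 0.02  # illustrative threshold
status = "pass" if cand_auc >= bench_auc - TOLERANCE else "escalate to validator"
print(f"candidate AUC={cand_auc:.3f}, benchmark AUC={bench_auc:.3f} -> {status}")
```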
More applications for AI
With the availability of large-scale data, and advancements in explainable AI to help mitigate regulatory concerns, the finance industry has pushed ahead in adopting AI in the past few years across areas like fraud analysis and credit line assignments. Even where AI isn't yet trusted to make decisions in finance, it's being used to narrow the field of potential decisions. For example, in a situation where a firm is looking to make investments, AI can be used to surface the top recommendations and help the firm prioritize its time.
Retail banking will probably continue to see the earliest adoption of new AI techniques, since there is more access to data in this line of business than in other types of financial services. Investment banking will likely be next to adopt AI, with asset and wealth management and commercial banking following behind.
Explainable AI remains a priority
Financial stakeholders are demanding and will continue to demand explainability, whether it's regulators needing to know how a model made its credit decisions or clients demanding explanations for a model's trading decisions. As an example of banks' commitment to this area, J.P. Morgan has developed a Machine Learning Center of Excellence with a research branch that investigates methodologies around explainability and a development branch that advises model designers on the best ways to develop effective and explainable models.
Conclusion
The financial industry operates under an extreme level of government regulation and public scrutiny, which can be a challenge for implementing AI, but it may also be a blessing in disguise. To get responsible AI right, organizations need a culture of creating transparent models, understanding data privacy, addressing discrimination, and testing and monitoring relentlessly. While there is still more work to be done, financial institutions may be even better prepared than big tech to achieve responsible AI.
This article was based on a conversation that brought together panelists from financial institutions, as part of Fiddler's 3rd annual Explainable AI Summit on October 21, 2020. You can view the recorded conversation here.
Panelists:
Michelle Allade, Head of Bank Model Risk Management, Alliance Data Card Services
Patrick Hall, Visiting Professor at GWU, Principal Scientist at bnh.ai, and Advisor to H2O.ai
Jon Hill, Professor of Model Risk Management, NYU Tandon School of Financial Risk Engineering
Alexander Izydorczyk, Head of Data Science, Coatue Management
Pavan Wadhwa, Managing Director, JPMorgan Chase & Co.
Moderated by Krishna Gade, Founder and CEO, Fiddler
Originally published at https://blog.fiddler.ai on December 15, 2020.