
Supporting Responsible Use of AI in Financial Services

Last Updated on May 24, 2022 by Editorial Team

Author(s): Amit Paka

(Photo by Etienne Martin on Unsplash)


Federal Reserve Governor Lael Brainard recently spoke at the AI Symposium about the responsible use of AI in financial services. The speech provides important insights that can be early indicators of how the Fed guidelines for AI governance might look. Financial services companies that are already leveraging AI to provide new or enhanced customer experiences can review these remarks to get a head start and ensure their AI operations are better prepared. A full transcript of the speech can be found here. This post summarizes the key points of the speech and how teams should think about its applicability to their ML practices, i.e., MLOps.

Benefits of AI to Financial Services

The Fed broadly embraces AI’s benefits in combating fraud and enabling better credit availability. AI lets companies respond faster and better to fraud, which is escalating with the increased digitization of financial services. Machine learning (ML) models for credit risk and credit decisions, built with both traditional and alternative data, can extend more accurate and fairer credit decisions to many more people outside the current credit framework (see the joint Fed statement opening up alternative data). However, the Fed cautions that historical data carrying racial bias might perpetuate that bias if used in opaque AI models without proper guardrails and protections. AI systems need to make a positive impact as well as protect previously marginalized classes.

AI’s Black Box Problems

The key problem is a lack of ML model transparency, and the Fed outlines the reasons behind it:

  1. Unlike statistical models that are designed by humans, ML models are trained on data automatically by algorithms.
  2. As a result of this automated generation, ML models can absorb complex nonlinear interactions from the data that humans cannot otherwise discern.

This complexity obscures how a model converts input to output, and it gets worse for deep learning models, making them difficult to explain and reason about.
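To make this concrete, here is a toy sketch (my illustration, not an example from the speech): a pure interaction effect such as XOR carries no signal a linear model can see, yet a tree ensemble absorbs it automatically, and opaquely, from the data.

```python
# Toy illustration: a pure nonlinear interaction (XOR) that a linear model
# cannot represent but a tree ensemble learns automatically from data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 2)).astype(float)
y = np.logical_xor(X[:, 0], X[:, 1]).astype(int)  # label is the interaction itself

linear = LogisticRegression().fit(X, y)
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

print("linear accuracy:", linear.score(X, y))  # ~0.5: the linear model sees no signal
print("forest accuracy:", forest.score(X, y))  # ~1.0: captured, but hard to inspect
```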

The Importance of Context

The Fed outlines how context is key to understanding and explaining models. Even as the AI research community has made advances in explaining models, explanations depend on who is asking for them and on the type of prediction the model makes. For example, an explanation given to a technical model developer would be far more detailed than one given to a compliance officer. In addition, the end user needs to receive an easy-to-understand, actionable explanation of a model’s decision. For example, if a loan applicant is denied, understanding how the decision was made, along with suggestions on actions to increase their approval odds, will enable them to make changes and reapply.
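As a rough sketch of such an actionable explanation, the snippet below searches for the smallest income increase that would flip a denial into an approval for a fitted scikit-learn-style classifier. The feature layout, step size, and label convention (1 = approved) are all assumptions made for illustration; production counterfactual methods are considerably more careful.

```python
import numpy as np

def suggest_income_change(model, applicant, income_idx=0, step=1000.0, max_steps=100):
    """Find the smallest income increase (in `step` increments) that flips
    a denial into an approval; returns None if no flip is found."""
    original = np.asarray(applicant, dtype=float)
    candidate = original.copy()
    for _ in range(max_steps):
        if model.predict(candidate.reshape(1, -1))[0] == 1:  # 1 = approved (assumed)
            return candidate[income_idx] - original[income_idx]
        candidate[income_idx] += step
    return None  # no actionable change within the search budget
```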

For financial services teams adopting AI, this highlights the need for an ML system that caters to all the stakeholders of AI, not just the model developers. It needs to address these stakeholders’ varying degrees of ML comprehension and allow model explanations to be surfaced appropriately to the end user.

Key banking use cases, especially credit lending, are regulated by a host of laws including the Equal Credit Opportunity Act (ECOA), the Fair Housing Act (FHA), the Civil Rights Act, and the Immigration Reform Act. These laws require the AI models and the data powering them to be understood and assessed to address any unwanted bias. Even if protected attributes like race are not used in model development, models can unknowingly absorb relationships with the protected class from correlated data inputs, i.e., proxy bias. Enabling model development under these stringent constraints, while promoting equitable outcomes and financial inclusion, is therefore an active topic of study.
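A crude first screen for proxy bias, sketched below under the assumption of a pandas DataFrame of numeric model inputs and a separately held protected attribute (never fed to the model), is to flag inputs that correlate strongly with the protected class; real proxy analysis goes well beyond pairwise correlation.

```python
import pandas as pd

def proxy_screen(features: pd.DataFrame, protected: pd.Series, threshold: float = 0.3):
    """Flag numeric inputs whose absolute correlation with the protected
    attribute exceeds `threshold` (a heuristic cutoff, chosen for illustration)."""
    encoded = protected.astype("category").cat.codes  # crude numeric encoding
    corr = features.corrwith(encoded).abs().sort_values(ascending=False)
    return corr[corr > threshold]  # candidate proxies, worth a closer look
```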

Financial services firms are already well set up to assess statistical models for bias. To meet the same requirement for ML models, AI teams need an updated bias testing process, with tooling to evaluate and mitigate AI bias in the context of the use case.
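As one example of such tooling, below is a minimal sketch of the disparate impact ratio (the "four-fifths rule" common in US lending and employment analysis); the 0.8 cutoff is an industry convention, not a Fed requirement.

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of favorable-outcome rates between the worst- and best-treated
    groups, where prediction 1 is the favorable outcome."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
    return min(rates.values()) / max(rates.values())

# Ratios below ~0.8 are conventionally treated as a red flag for adverse impact.
print(disparate_impact([1, 0, 1, 1, 1, 1], ["a", "a", "a", "b", "b", "b"]))  # 0.667
```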

Bank management needs confidence that its models are robust, since they make critical decisions. They need to ensure a model will behave correctly when confronted with real-world data that can have more complex interactions. Explanations are a critical tool for giving model development and assessment teams this confidence. Not all ML systems, however, need the same level of understanding. For example, a lower threshold for transparency would suffice for secondary challenger systems used in conjunction with the primary AI system.
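One common way banks build confidence against shifting real-world data is input drift monitoring; below is a minimal sketch of the Population Stability Index (PSI) for a single continuous feature, with thresholds that are industry rules of thumb rather than anything from the speech.

```python
import numpy as np

def psi(train_col, live_col, bins=10):
    """Population Stability Index: how far live data has drifted from the
    training distribution (assumes a continuous feature)."""
    train_col, live_col = np.asarray(train_col, float), np.asarray(live_col, float)
    edges = np.quantile(train_col, np.linspace(0, 1, bins + 1))
    expected = np.histogram(train_col, edges)[0] / len(train_col)
    # clip live values into the training range so out-of-range points still count
    actual = np.histogram(np.clip(live_col, edges[0], edges[-1]), edges)[0] / len(live_col)
    expected = np.clip(expected, 1e-6, None)  # avoid log(0) in empty bins
    actual = np.clip(actual, 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

# Rules of thumb: PSI < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 material shift.
```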

As teams scale their ML development, their process will need to provide a robust collection of validation and monitoring tools so that model developers and IT can ensure compliance with regulatory and risk requirements from guidelines like SR 11–7 and OCC Bulletin 2011–12. Banks have started to introduce AI validators in their second line of defense to enable model validation.

Forms of Explanations

The speech outlines how explanations can differ based on the complexity and structure of the model, and banks are advised to choose an appropriate level of model transparency for the use case. Some models can be developed as fully ‘interpretable’ but potentially less accurate; a logistic regression model’s decision, for example, can be explained by the weights it assigns to its inputs. Other models are more complex and accurate but not inherently interpretable. In that case, explanations are obtained with model-agnostic techniques that probe the model with varying inputs and observe the change in its output. While these ‘post-hoc’ explanations can enable understanding in certain use cases, they may not be as reliable as explanations from an inherently interpretable model. One of the key questions banks will therefore face is whether a model-agnostic explanation is acceptable or an interpretable model is necessary. An accurate model explanation, however, does not guarantee a robust and fair model; that can only be developed over time and with experience.
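A minimal sketch of both forms, assuming scikit-learn: the logistic regression’s coefficients are its explanation, while the opaque gradient-boosted model is probed post hoc by perturbing one input at a time and watching the predicted probability, a deliberately crude stand-in for model-agnostic methods such as SHAP or LIME.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# Inherently interpretable: each weight states an input's direction and strength.
interpretable = LogisticRegression().fit(X, y)
print("coefficients:", interpretable.coef_[0])

# Post hoc and model-agnostic: nudge one feature, observe the output change.
opaque = GradientBoostingClassifier(random_state=0).fit(X, y)
x0 = X[0]
base = opaque.predict_proba(x0.reshape(1, -1))[0, 1]
for j in range(X.shape[1]):
    perturbed = x0.copy()
    perturbed[j] += X[:, j].std()  # one-standard-deviation nudge to feature j
    delta = opaque.predict_proba(perturbed.reshape(1, -1))[0, 1] - base
    print(f"feature {j}: change in predicted probability = {delta:+.3f}")
```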

Explainable AI, a recent research advancement, is the technology that opens up the AI black box so humans can understand what is going on inside AI models and ensure AI-driven decisions are transparent, accountable, and trustworthy. This explainability powers the explanations of model outputs. Financial services companies need platforms in place that allow their teams to generate explanations for a wide range of models, explanations that can be consumed across a variety of internal and external teams.

Expectations for Banks

The Fed speech ends with a commitment to support the development of responsible AI and a call for feedback from experts in the field on transparency techniques and their risk implications.

As the Fed seeks input, it is clear that financial services teams deploying AI models need to explore ways to bolster their ML development with updated processes and tools that bring transparency to model understanding, robustness, and fairness, so they are better prepared for the upcoming guidelines.


Supporting Responsible Use of AI in Financial Services was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.

Published via Towards AI
