
Explainable Monitoring for Successful Impact with AI Deployments

Last Updated on January 29, 2021 by Editorial Team

Author(s): Anusha Sethuraman

Training and deploying ML models are relatively fast and cheap, but operationalization (maintaining, monitoring, and governing models over time) is difficult and expensive. An Explainable ML Monitoring system extends traditional monitoring to provide deep model insights with actionable steps. As part of Fiddler's 3rd annual Explainable AI Summit in October 2020, we brought together a panel of technical and product leaders to discuss operationalizing machine learning systems, and the key role that monitoring and explainability have to play in an organization's AI stack.

The shift to operationalization

As Natalia Burina (AI Product Leader, Facebook) noted, "There's been a shift towards operations with the rise of MLOps. A recent report gave the figure that 25% of the top 20 fastest-growing GitHub projects of Q2 2020 concerned ML infrastructure, tooling, and operations." Abhishek Gupta (Engineering Lead, Facebook; ex-Head of Engineering, Hired, Inc.) predicts that over the next 2 to 5 years, we will see more and more tools that "SaaSify" aspects of ML operationalization.

These innovations are a response to more organizations trying, and often struggling, to get their ML projects "out of the lab." As Peter Skomoroch (Machine Learning Advisor) explained, companies invested in data infrastructure during the big data push years ago to power analytics on their sites. Now they're trying to use that data for machine learning, and running into challenges. Traditional engineering processes are built around software that the team writes, tests, and then deploys to the site; it might be A/B tested for effectiveness, but the software itself isn't changing. The same can't be said for machine learning, where model behavior shifts as the incoming data shifts. Monitoring and explainability are therefore key components of a successful AI system.

Case in point: COVID-19

Kenny Daniel (Co-founder and CTO, Algorithmia) shared that "In the data science communities that I run in, there's a picture of a time series, any time series, and it looks normal, and then COVID hit." Moral of the story: if you don't have a way of recognizing when the macro environment has shifted, you're going to have problems. Airlines experienced this: at the start of the pandemic, their prices dropped dramatically because the algorithms mistakenly thought that was the way to get people flying again.
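The macro-shift problem Daniel describes is, at its core, a distribution-drift question: are today's inputs still drawn from the world the model was trained on? Below is a minimal sketch of one common way to check, comparing a window of live feature values against a training-time reference sample with a two-sample Kolmogorov-Smirnov test. The function name, threshold, and synthetic data are illustrative assumptions, not any particular vendor's API.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(reference: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Flag drift when the live distribution differs significantly from the reference."""
    result = ks_2samp(reference, live)
    return result.pvalue < alpha

# Illustrative data: a stable pre-pandemic demand signal vs. a shocked window.
rng = np.random.default_rng(0)
reference = rng.normal(loc=100.0, scale=10.0, size=5_000)  # training-time baseline
live = rng.normal(loc=60.0, scale=25.0, size=1_000)        # post-shift production window

if feature_drifted(reference, live):
    print("Drift detected: review or retrain before trusting model outputs.")
```

In practice a monitoring system would run a check like this per feature and per prediction window, and surface which features moved the most, so the team knows where to look first.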

Many companies had to rapidly retrain their models when COVID hit. Gupta described the situation at Hired as "surreal" as they saw a sudden drop in hiring and a surge in candidates, resulting in their models behaving in less-than-ideal ways. (Gupta has since moved on to an engineering lead role at Facebook.)

Monitoring and explainability

All the panelists agreed that monitoring is especially important for machine learning systems, and that most companies' current tools are not sufficient. "You have to assume that things will go wrong and your machine learning team will be under the gun to fix it, quickly," said Skomoroch. "If you have a model that you can't interrogate, where you can't determine why the accuracy is dropping, that's a very stressful situation."

This is even more important for high-stakes use cases involving fairness and vulnerable groups, Burina said, adding that "Debugging models is something that's developing. We don't have in the industry a very good way of doing this like we have in traditional software." Skomoroch agreed: "That's why I think stuff like Fiddler is pretty exciting, because a lot of this is currently done manually and ad hoc; there are notebooks flying around in emails. We really need to have benchmarks that we're looking at consistently and continuously."
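The benchmarks Skomoroch wants checked "consistently and continuously" can start as something as simple as a scheduled job that scores recent, labeled traffic and alerts when a metric drops below an agreed floor. The sketch below assumes delayed ground-truth labels arrive and are joined to logged predictions; the names, threshold, and data are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    window: str
    accuracy: float

def check_benchmark(y_true: list[int], y_pred: list[int],
                    window: str, min_accuracy: float = 0.90) -> BenchmarkResult:
    """Score one evaluation window and alert if accuracy falls below the floor."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    accuracy = correct / len(y_true)
    if accuracy < min_accuracy:
        # In a real system this would page on-call or open a ticket, not just print.
        print(f"[ALERT] {window}: accuracy {accuracy:.1%} is below the {min_accuracy:.0%} floor")
    return BenchmarkResult(window=window, accuracy=accuracy)

# Illustrative hourly window of logged predictions joined with later-arriving labels.
result = check_benchmark(y_true=[1, 0, 1, 1, 0, 1], y_pred=[1, 0, 0, 1, 0, 0],
                         window="2020-10-05T14:00")
```

Running the same check on every window, rather than in one-off notebooks, is what turns an ad hoc investigation into a benchmark.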

Gupta said that in his opinion, "ML monitoring and the ability to drill down and explain are inextricably linked." When you have both, you get faster detection and resolution of issues, and ML engineers develop better intuition about which models and features need more work. Gupta explained that "Fiddler's tool and explainable monitoring has been a game changer and a step-function improvement to how we monitor and react to challenges that we see in the marketplace."

Monolithic solutions vs best-in-breed approach

The panelists unanimously agreed that the trend in the AI tooling stack is towards a more heterogeneous, "best-in-breed" approach that combines open source, custom software, and various vendor solutions, rather than one tool that does it all.

According to Daniel, "The more valuable and the more important the project is, the more you really want to have the best component for each bit." In traditional software, that means combining different solutions for CI/CD, testing, monitoring, and observability, and the same logic applies for ML. After all, "You can't build the end-to-end solution and expect to succeed in an industry that's evolving so quickly. You need to be able to switch out parts of the car while you're driving it, because the things that were popular two years ago are not today."

Components of an ML tooling stack are increasingly outsourced rather than built in-house. The task for companies now is to pick high-quality tools that are specifically geared towards their domain and use case. "For companies that are serious from the get-go," said Burina, "they should really consider best-of-breed solutions, because that's going to be their competitive advantage."

Stakeholders for AI

What are all the different personas that might care about a model and its outputs? Data scientists and engineers are one group, of course. Product managers care about how well a model fits with business strategy and purpose. Legal teams, regulators, and end users may all require access to this information as well. And C-suite leadership often wants to know, at a high level, how models are doing.

As Skomoroch put it, "There's a whole world of people who don't really understand what you [data scientists] do day to day, and the whole team is kind of a black box to them. So there's a side benefit to having something like Fiddler, having this observability and monitoring happening, which is they have something to look at where they can see: what's the progress? What's happening with our machine learning models?" Gupta observed that having ML monitoring and explainability provides "a shared understanding of the levers and tradeoffs, and having a conversation at that level of abstraction goes a long way."

Algorithmic bias and fairness

One of the most important use cases for explainable AI and monitoring, and one that stakeholders have a shared interest in, is preventing issues with bias and fairness. "Unwanted consequences can creep in at any part of the pipeline," said Burina. "Companies must think about it holistically, from design to development, and they really should have continuous monitoring for bias and fairness."

Continuous monitoring can help teams "trust but verify," according to Gupta. With many people working asynchronously to improve the collective performance of an AI system, individual biases can creep in over time, even though no single person controls how the system behaves at the macro level. This is where explainable monitoring can really help.
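One concrete form of the continuous bias monitoring Burina and Gupta describe is tracking a fairness metric over logged predictions, sliced by a protected attribute. The sketch below computes a demographic parity gap, the spread in positive-prediction rates across groups; the column names, threshold, and data are hypothetical and will differ by use case.

```python
import pandas as pd

def demographic_parity_gap(log: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = log.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Illustrative prediction log with a protected attribute attached.
log = pd.DataFrame({
    "group":    ["a", "a", "a", "b", "b", "b"],
    "approved": [1,   1,   0,   0,   0,   1],
})

gap = demographic_parity_gap(log, group_col="group", pred_col="approved")
if gap > 0.10:  # alert threshold, chosen per application and policy
    print(f"Fairness alert: {gap:.0%} gap in approval rates between groups.")
```

Run on every prediction window, a check like this makes "trust but verify" auditable rather than ad hoc.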

Who is ultimately responsible for making sure AI isn't biased? After all, as Daniel noted, "Just because it's in an AI black box doesn't mean nobody's responsible. Somebody still needs to be responsible." In Skomoroch's opinion, having a dedicated role like a chief data science officer or a director focused on AI ethics can be a good choice. This person can make sure that nothing falls through the cracks when work moves from one team to the next. Burina also proposed a new industry-wide role of "model quality scientist: someone who would challenge the model, check it for robustness, including anything potentially adversarial... someone who would approve deployment, really making it a more rigorous process."

At Fiddler we've heard about bias concerns from many of the customers we've engaged with. In response, we've been trying to put together a high-level framework that can showcase where there could be bias, and allow customers to take action from those insights: whether they might want to retrain a model, balance their data set, or continuously monitor over time and use those insights to adjust their applications.

Interested in listening to the full panel discussion? You can watch the live recording here. Panelists:

Peter Skomoroch, Machine Learning Advisor

Abhishek Gupta, Engineering Lead, Facebook; ex-Head of Engineering, Hired

Natalia Burina, AI Product Leader, Facebook

Kenny Daniel, Co-Founder and CTO, Algorithmia

Moderated by Rob Harrell, Senior Product Manager, Fiddler

Originally published at https://blog.fiddler.ai on January 20, 2021.

