
Explainable Monitoring for Successful Impact with AI Deployments
Author(s): Anusha Sethuraman
Training and deploying ML models are relatively fast and cheap, but operationalization (maintaining, monitoring, and governing models over time) is difficult and expensive. An explainable ML monitoring system extends traditional monitoring to provide deep model insights with actionable steps. As part of Fiddler's 3rd annual Explainable AI Summit in October 2020, we brought together a panel of technical and product leaders to discuss operationalizing machine learning systems, and the key role that monitoring and explainability have to play in an organization's AI stack.
The shift to operationalization
As Natalia Burina (AI Product Leader, Facebook) noted, "There's been a shift towards operations with the rise of MLOps. A recent report gave the figure that 25% of the top 20 fastest-growing GitHub projects of Q2 2020 concerned ML infrastructure, tooling, and operations." Abhishek Gupta (Engineering Lead, Facebook; ex-Head of Engineering, Hired, Inc.) predicts that over the next 2–5 years, we will see more and more tools that "SaaSify" aspects of ML operationalization.
These innovations are a response to more organizations trying, and often struggling, to get their ML projects "out of the lab." As Peter Skomoroch (Machine Learning Advisor) explained, because of the push around big data years ago, companies have already been investing in data infrastructure to power analytics on their sites. Now they're trying to use this data for machine learning, but running into challenges. Traditional engineering processes are built around software that the team writes, tests, and then deploys to the site; it might be A/B tested for effectiveness, but the software itself isn't changing. The same can't be said for machine learning, whose behavior depends on data that shifts over time. Monitoring and explainability are therefore key components of a successful AI system.

Case in point: COVID-19
Kenny Daniel (Co-founder and CTO, Algorithmia) shared, "In the data science communities that I run in, there's a picture of a time series, any time series, and it looks normal, and then COVID hit." Moral of the story: if you don't have a way of recognizing when the macro environment has shifted, you're going to have problems. Airlines experienced this: at the start of the pandemic, their prices dropped dramatically, because the algorithms mistakenly thought that was the way to get people flying again.
Many companies had to rapidly retrain their models when COVID hit. Gupta described the situation at Hired as "surreal": they saw a sudden drop in hiring and a surge in candidates, which led their models to behave in less-than-ideal ways. (Gupta has since moved on to an engineering lead role at Facebook.)
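One concrete way to catch this kind of macro shift is to compare the distribution of a model's recent inputs or predictions against a reference window from before the shock. The sketch below uses the population stability index (PSI), a common drift statistic; the window sizes, bin count, alert threshold, and the airline-style price data are illustrative assumptions, not details from the panel.

```python
import numpy as np

def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a reference and a current sample.

    Bin edges come from the reference window; a small epsilon avoids
    division by zero when a bin is empty in either window.
    """
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_frac = np.histogram(current, bins=edges)[0] / len(current)
    eps = 1e-6
    ref_frac = np.clip(ref_frac, eps, None)
    cur_frac = np.clip(cur_frac, eps, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

# Hypothetical example: predicted ticket prices before vs. during a shock.
rng = np.random.default_rng(0)
baseline_preds = rng.normal(loc=300, scale=40, size=5_000)  # pre-shock window
recent_preds = rng.normal(loc=180, scale=60, size=5_000)    # post-shock window

drift = psi(baseline_preds, recent_preds)
if drift > 0.2:  # 0.2 is a commonly cited "significant shift" rule of thumb
    print(f"PSI={drift:.2f}: prediction distribution has shifted, investigate.")
```

In practice the same check can run per feature and per segment on a schedule, so that a shift like the one above raises an alert before the business impact does.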
Monitoring and explainability
All the panelists agreed that monitoring is especially important for machine learning systems, and that most companies' current tools are not sufficient. "You have to assume that things will go wrong and your machine learning team will be under the gun to fix it, quickly," said Skomoroch. "If you have a model that you can't interrogate, where you can't determine why the accuracy is dropping, that's a very stressful situation."
This is even more important for high-stakes use cases where you're dealing with fairness and vulnerable groups, Burina said, adding that "debugging models is something that's developing. We don't have a very good way of doing this in the industry like we have in traditional software." Skomoroch agreed: "That's why I think stuff like Fiddler is pretty exciting, because a lot of this is currently done manually and ad hoc; there are notebooks flying around in emails. We really need to have benchmarks that we're looking at consistently and continuously."
Gupta said that in his opinion, "ML monitoring and the ability to drill down and explain are inextricably linked." When you have both, you get faster detection and resolution of issues, and ML engineers develop better intuition about which models and features need more work. Gupta explained that "Fiddler's tool and explainable monitoring has been a game-changer and a step-function improvement to how we monitor and react to challenges that we see in the marketplace."
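One way to read that link in practice: if each prediction is logged together with its per-feature attributions (from whatever explanation method is in use, such as SHAP values), then a metric drop can be traced to the features whose influence changed. The snippet below is a minimal, hypothetical sketch of that drill-down; the attribution arrays and feature names are illustrative and not tied to any particular tool.

```python
import numpy as np

# Hypothetical attribution logs: one row per prediction, one column per feature.
# Each entry is the feature's signed contribution to that prediction's score.
feature_names = ["years_experience", "skills_match", "location", "salary_expectation"]

baseline_attrib = np.abs(np.random.default_rng(1).normal(size=(10_000, 4)))
current_attrib = baseline_attrib.copy()
current_attrib[:, 3] *= 3.0  # simulate one feature dominating after a market shift

# Compare each feature's share of total attribution between the two windows.
baseline_share = baseline_attrib.mean(axis=0) / baseline_attrib.mean(axis=0).sum()
current_share = current_attrib.mean(axis=0) / current_attrib.mean(axis=0).sum()

for name, before, after in zip(feature_names, baseline_share, current_share):
    flag = "  <-- investigate" if abs(after - before) > 0.05 else ""
    print(f"{name:20s} baseline={before:.2f} current={after:.2f}{flag}")
```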
Monolithic solutions vs. a best-of-breed approach
The panelists unanimously agreed that the trend in the AI tooling stack is towards a more heterogeneous, "best-of-breed" approach that combines open source, custom software, and various vendor solutions, rather than one tool that does it all.
According to Daniel, "The more valuable and the more important the project is, the more you really want to have the best component for each bit." In traditional software, that means combining different solutions for CI/CD, testing, monitoring, and observability, and the same logic applies to ML. After all, "You can't build the end-to-end solution and expect to succeed in an industry that's evolving so quickly. You need to be able to switch out parts of the car while you're driving it, because the things that were popular two years ago are not today."
Components of an ML tooling stack are increasingly outsourced rather than built in-house. The task for companies now is to pick high-quality tools that are specifically geared towards their domain and use case. "For companies that are serious from the get-go," said Burina, "they should really consider best-of-breed solutions, because that's going to be their competitive advantage."
Stakeholders for AI
What are all the different personas that might care about a model and its outputs? Data scientists and engineers, of course, are one group. Product managers care about how a model fits with business strategy and purpose. Legal teams, regulators, and end users may all require access to this information as well. And C-suite leadership often wants to know how models are doing at a high level.
As Skomoroch put it, "There's a whole world of people who don't really understand what you [data scientists] do day to day, and the whole team is kind of a black box to them. So there's a side benefit to having something like Fiddler, having this observability and monitoring happening, which is they have something to look at where they can see: what's the progress? What's happening with our machine learning models?" Gupta observed that having ML monitoring and explainability provides "a shared understanding of the levers and tradeoffs, and having a conversation at that level of abstraction goes a long way."
Algorithmic bias and fairness
One of the most important use cases for explainable AI and monitoring, and one that stakeholders have a shared interest in, is preventing issues with bias and fairness. "Unwanted consequences can creep in at any part of the pipeline," said Burina. "Companies must think about it holistically, from design to development, and they really should have continuous monitoring for bias and fairness."
Continuous monitoring can help teams "trust but verify," according to Gupta. With many people working asynchronously to improve the collective performance of an AI system, individual bias can creep in over time, even though no single person controls at the macro level how the system behaves. This is where explainable monitoring can really help.
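As a concrete illustration of what "continuous monitoring for bias" can mean, the sketch below computes a simple group fairness metric (the gap in positive-prediction rates between groups) over each batch of logged decisions and flags windows where it exceeds a threshold. The group labels, threshold, and data are hypothetical; real deployments would choose metrics appropriate to their domain and legal context.

```python
import numpy as np

def demographic_parity_gap(preds: np.ndarray, groups: np.ndarray) -> float:
    """Difference in positive-prediction rate between the groups present."""
    rates = [preds[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

# Hypothetical daily batches of binary decisions and a protected attribute.
rng = np.random.default_rng(42)
for day in range(1, 4):
    groups = rng.integers(0, 2, size=2_000)           # 0/1 group membership
    base_rate = 0.30 + (0.08 * (day == 3)) * groups   # group 1 drifts on day 3
    preds = (rng.random(2_000) < base_rate).astype(int)
    gap = demographic_parity_gap(preds, groups)
    status = "ALERT" if gap > 0.05 else "ok"
    print(f"day {day}: parity gap = {gap:.3f} [{status}]")
```

Run continuously, a check like this surfaces slow-building disparities that no single model change would have flagged on its own.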
Who is ultimately responsible for making sure AI isn't biased? After all, as Daniel noted, "Just because it's in an AI black box doesn't mean nobody's responsible. Somebody still needs to be responsible." In Skomoroch's opinion, having a dedicated role such as a chief data science officer or a director focused on AI ethics can be a good choice; this person can make sure that nothing falls through the cracks when work moves from one team to the next. Burina also proposed a new industry-wide role of "model quality scientist": someone who would "challenge the model, check it for robustness, including anything potentially adversarial... someone who would approve deployment, really making it a more rigorous process."
At Fiddler, we've heard about bias concerns from many of the customers we've engaged with. In response, we've been working on a high-level framework that can show where bias could arise and allow customers to act on those insights: whether they want to retrain a model, rebalance their data set, or continuously monitor over time and use those insights to adjust their applications.
Interested in listening to the full panel discussion? You can watch the live recording here.
Panelists:
Peter Skomoroch, Machine Learning Advisor
Abhishek Gupta, Engineering Lead, Facebook; ex-Head of Engineering, Hired
Natalia Burina, AI Product Leader, Facebook
Kenny Daniel, Co-Founder and CTO, Algorithmia
Moderated by Rob Harrell, Senior Product Manager, Fiddler
Originally published at https://blog.fiddler.ai on January 20, 2021.