Explainable Monitoring for Successful Impact with AI Deployments
Last Updated on January 29, 2021 by Editorial Team
Author(s): Anusha Sethuraman
Training and deploying ML models are relatively fast and cheap, but operationalization (maintaining, monitoring, and governing models over time) is difficult and expensive. An explainable ML monitoring system extends traditional monitoring to provide deep model insights with actionable steps. As part of Fiddler's third annual Explainable AI Summit in October 2020, we brought together a panel of technical and product leaders to discuss operationalizing machine learning systems and the key role that monitoring and explainability play in an organization's AI stack.
The shift to operationalization
As Natalia Burina (AI Product Leader, Facebook) noted, "There's been a shift towards operations with the rise of MLOps. A recent report gave the figure that 25% of the top 20 fastest-growing GitHub projects of Q2 2020 concerned ML infrastructure, tooling, and operations." Abhishek Gupta (Engineering Lead, Facebook; ex-Head of Engineering, Hired, Inc.) predicts that over the next two to five years, we will see more and more tools that "SaaSify" aspects of ML operationalization.
These innovations are a response to more organizations trying (and often struggling) to get their ML projects "out of the lab." As Peter Skomoroch (Machine Learning Advisor) explained, thanks to the big data push of years past, companies have already invested in data infrastructure to power analytics on their sites. Now they're trying to use this data for machine learning, but running into challenges. Traditional engineering processes are built around software that the team writes, tests, and then deploys to the site; it might be A/B tested for effectiveness, but the software itself isn't changing. The same can't be said for machine learning, where model behavior shifts with the data. Monitoring and explainability are therefore key components of a successful AI system.
Case in point: COVID-19
Kenny Daniel (Co-founder and CTO, Algorithmia) shared that, "In the data science communities that I run in, there's a picture of a time series, any time series, and it looks normal, and then COVID hit." Moral of the story: if you don't have a way of recognizing when the macro environment has shifted, you're going to have problems. Airlines experienced this: at the start of the pandemic, their prices dropped dramatically, because the algorithms mistakenly thought that was the way to get people flying again.
Many companies had to rapidly retrain their models when COVID hit. Gupta described the situation at Hired as "surreal" as they saw a sudden drop in hiring and a surge in candidates, resulting in their models behaving in less-than-ideal ways. (Gupta has since moved on to an engineering lead role at Facebook.)
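Detecting that kind of macro shift usually comes down to comparing the distribution of live inputs against a training-time reference. The sketch below is a minimal illustration of one common approach, the Population Stability Index (PSI); the feature (ticket prices), the synthetic data, and the 0.2 alert threshold are assumptions for demonstration, not anything the panelists prescribed.

```python
import numpy as np

def population_stability_index(reference, current, bins=10):
    """Compare a live feature distribution against a training-time reference.
    Larger PSI means larger drift; ~0.2+ is a common 'significant shift' rule of thumb."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    # Clip live values into the reference range so out-of-range points
    # land in the outermost bins instead of being dropped.
    current = np.clip(current, edges[0], edges[-1])
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    ref_pct = np.clip(ref_pct, 1e-6, None)  # avoid log(0) on empty bins
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

# Hypothetical example: ticket prices seen at training time vs. last week's traffic.
train_prices = np.random.normal(400, 50, size=10_000)
recent_prices = np.random.normal(150, 60, size=2_000)
psi = population_stability_index(train_prices, recent_prices)
if psi > 0.2:
    print(f"Input drift detected (PSI={psi:.2f}): review the model or retrain")
```

A check like this would run per feature on every scoring window; the hard part, as the COVID examples show, is making sure someone is watching the alerts and can act on them.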
Monitoring and explainability
All the panelists agreed that monitoring is especially important for machine learning systems, and that most companies' current tools are not sufficient. "You have to assume that things will go wrong and your machine learning team will be under the gun to fix it quickly," said Skomoroch. "If you have a model that you can't interrogate, where you can't determine why the accuracy is dropping, that's a very stressful situation."
This is even more important for high-stakes use cases where you're dealing with fairness and vulnerable groups, Burina said, adding that "Debugging models is something that's still developing. We don't have a very good way of doing this in the industry, like we have in traditional software." Skomoroch agreed: "That's why I think stuff like Fiddler is pretty exciting, because a lot of this is currently done manually and ad hoc; there are notebooks flying around in emails. We really need benchmarks that we're looking at consistently and continuously."
Gupta said that in his opinion, "ML monitoring and the ability to drill down and explain are inextricably linked." When you have both, you get faster detection and resolution of issues, and ML engineers develop better intuition about which models and features need more work. Gupta explained that "Fiddler's tool and explainable monitoring has been a game changer and a step-function improvement to how we monitor and react to challenges that we see in the marketplace."
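What that pairing looks like in practice varies by stack, but a simplified, generic sketch (not Fiddler's actual API) might alert on an accuracy drop over recent traffic and then rank features by how much the model's reliance on them has changed since training; the 0.80 accuracy floor, the top-3 cutoff, and the use of permutation importance here are illustrative assumptions.

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.metrics import accuracy_score

def drill_down_on_drop(model, X_recent, y_recent, baseline_importance,
                       feature_names, accuracy_floor=0.80):
    """Alert on an accuracy drop and surface the features whose importance
    has shifted most since training, as a starting point for debugging."""
    acc = accuracy_score(y_recent, model.predict(X_recent))
    if acc >= accuracy_floor:
        return acc, []  # healthy, nothing to explain
    # Accuracy dropped: recompute importances on recent traffic and compare
    # with the importances measured on held-out data at training time.
    result = permutation_importance(model, X_recent, y_recent,
                                    n_repeats=10, random_state=0)
    shift = result.importances_mean - np.asarray(baseline_importance)
    ranked = sorted(zip(feature_names, shift), key=lambda pair: -abs(pair[1]))
    return acc, ranked[:3]  # top suspects to investigate first
```

Here baseline_importance would be computed once on validation data when the model ships; the point is simply that the monitoring alert and the explanation live in the same loop, rather than being reconstructed ad hoc after the fact.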
Monolithic solutions vs. best-of-breed approach
The panelists unanimously agreed that the trend in the AI tooling stack is towards a more heterogeneous, "best-of-breed" approach that combines open source, custom software, and various vendor solutions, rather than one tool that does it all.
According to Daniel, "The more valuable and the more important the project is, the more you really want to have the best component for each bit." In traditional software, that means combining different solutions for CI/CD, testing, monitoring, and observability, and the same logic applies to ML. After all, "You can't build the end-to-end solution and expect to succeed in an industry that's evolving so quickly. You need to be able to switch out parts of the car while you're driving it, because the things that were popular two years ago are not today."
Components of an ML tooling stack are increasingly outsourced rather than built in-house. The task for companies now is to pick high-quality tools that are specifically geared towards their domain and use case. "For companies that are serious from the get-go," said Burina, "they should really consider best-of-breed solutions, because that's going to be their competitive advantage."
Stakeholders for AI
What are all the different personas that might care about a model and its outputs? Data scientists and engineers are one obvious group. Product managers care about how a model fits the business strategy and purpose. Legal teams, regulators, and end users may all need access to this information as well. And C-suite leadership often wants to know, at a high level, how models are doing.
As Skomoroch put it, "There's a whole world of people who don't really understand what you [data scientists] do day to day, and the whole team is kind of a black box to them. So there's a side benefit to having something like Fiddler, having this observability and monitoring happening, which is they have something to look at where they can see: what's the progress? What's happening with our machine learning models?" Gupta observed that having ML monitoring and explainability provides "a shared understanding of the levers and tradeoffs, and having a conversation at that level of abstraction goes a long way."
Algorithmic bias and fairness
One of the most important use cases for explainable AI and monitoring, and one that stakeholders have a shared interest in, is preventing issues with bias and fairness. "Unwanted consequences can creep in at any part of the pipeline," said Burina. "Companies must think about it holistically, from design to development, and they really should have continuous monitoring for bias and fairness."
Continuous monitoring can help teams "trust but verify," according to Gupta. With many people working asynchronously to improve the collective performance of an AI system, bias can creep in over time, even though no single person controls how the system behaves at the macro level. This is where explainable monitoring can really help.
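To make "continuous monitoring for bias and fairness" concrete, the sketch below computes a disparate-impact ratio on each batch of predictions; the loan-approval framing, group labels, synthetic data, and the 0.8 threshold (the common four-fifths rule of thumb) are illustrative assumptions, not recommendations from the panel.

```python
import numpy as np

def disparate_impact_ratio(y_pred, group, privileged):
    """Ratio of each group's positive-outcome rate to a reference group's rate.
    Values well below 1.0 are a signal to investigate the model for bias."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    base_rate = y_pred[group == privileged].mean()
    return {g: y_pred[group == g].mean() / base_rate
            for g in np.unique(group) if g != privileged}

# Hypothetical batch of loan-approval predictions with a protected attribute.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
for g, ratio in disparate_impact_ratio(preds, groups, privileged="A").items():
    if ratio < 0.8:  # four-fifths rule of thumb used as a screening threshold
        print(f"Group {g}: positive rate is {ratio:.0%} of the reference group; review for bias")
```

Which fairness metric is appropriate depends on the application; the point is that the check runs continuously on production predictions rather than once before launch.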
Who is ultimately responsible for making sure AI isn't biased? After all, as Daniel noted, "Just because it's in an AI black box doesn't mean nobody's responsible. Somebody still needs to be responsible." In Skomoroch's opinion, having a dedicated role such as a chief data science officer or a director focused on AI ethics can be a good choice. This person can make sure that nothing falls through the cracks when work moves from one team to the next. Burina also proposed a new industry-wide role of "model quality scientist: someone who would challenge the model, check it for robustness, including anything potentially adversarial ... someone who would approve deployment, really making it a more rigorous process."
At Fiddler we've heard about bias concerns from many of the customers we've engaged with. In response, we've been putting together a high-level framework that can show where there could be bias and allow customers to take action on those insights: whether that means retraining a model, rebalancing a data set, or continuously monitoring over time and using those insights to adjust their applications.
Interested in listening to the full panel discussion? You can watch the live recording here. Panelists:
Peter Skomoroch, Machine Learning Advisor
Abhishek Gupta, Engineering Lead, Facebook; ex-Head of Engineering, Hired
Natalia Burina, AI Product Leader, Facebook
Kenny Daniel, Co-Founder and CTO, Algorithmia
Moderated by Rob Harrell, Senior Product Manager, Fiddler
Originally published at https://blog.fiddler.ai on January 20, 2021.