
31 Questions that Shape Fortune 500 ML Strategy

Last Updated on June 14, 2023 by Editorial Team

Author(s): Anirudh Mehta

Originally published on Towards AI.

Source: Image by the author.

In May 2021, Khalid Salama, Jarek Kazmierczak, and Donna Schut from Google published a white paper titled "Practitioners Guide to MLOps". The white paper goes into great depth on the concept of MLOps, its lifecycle, capabilities, and practices. There are hundreds of blogs written on the same topic. As such, my intention with this blog is not to duplicate those definitions but rather to encourage you to question and evaluate your current ML strategy.

I have listed a few critical questions that I often pose to myself and the relevant stakeholders on the modernization journey. While ML algorithms and code play a crucial role in success, they are just a small piece of a much larger puzzle.

Source: Image by the author.

To consistently achieve the same success, a vast array of cross-cutting concerns needs to be addressed. Thus, I have grouped the questions under the different stages of an ML delivery pipeline. The questions are not targeted at a particular role owning a given stage; they apply to everyone involved in the process.

Key Objectives:

Before diving into the questions, it’s important to understand the evaluation lens through which they are written. If you have additional objectives, you may want to add more questions to the list.

Automation
✓ The system must emphasize automation.
✓ The goal should be to automate all aspects, from data acquisition and processing to training, deployment, and monitoring.

Collaboration
✓ The system should promote collaboration between data scientists, engineers, and the operations team.
✓ It should allow data scientists to effectively share artifacts and the lineage created during the model-building process.

Reproducibility
✓ The system should allow for easy replication of the current state and progress.

Governance & Compliance
✓ The system must ensure data privacy, security, and compliance with relevant regulations and policies.

Critical Questions:

Now that we have defined the objectives, it's time to look at the key questions for evaluating the effectiveness of your current AI strategy.

Data Acquisition & Exploration (EDA)
Data is a fundamental building block of any ML system. Data scientists must understand it and identify and address common issues like duplication, missing data, imbalance, and outliers. A significant share of a data scientist's time goes into this data exploration. Thus, our strategy should support and accelerate these activities and answer the following questions; a short data-quality sketch follows the list:

☐ [Automation] Does the existing platform help data scientists quickly analyze and visualize the data and automatically detect common issues?
☐ [Automation] Does the existing platform allow integrating and visualizing relationships between datasets from multiple sources?
☐ [Collaboration] How can multiple data scientists collaborate in real time on the same dataset?
☐ [Reproducibility] How do you track and manage different versions of acquired datasets?
☐ [Governance & Compliance] How do you ensure that data privacy and security considerations have been addressed during acquisition?
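
To make the automation question concrete, here is a minimal sketch of how a platform (or even a notebook) could automatically surface duplication, missing values, and outliers. The file name and columns are hypothetical placeholders, not part of any specific platform.

```python
import pandas as pd

# Hypothetical dataset; replace with your own source.
df = pd.read_csv("customers.csv")

# Automatically surface common data-quality issues.
report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_per_column": df.isna().sum().to_dict(),
}

# Flag numeric outliers with a simple IQR rule.
for col in df.select_dtypes(include="number").columns:
    q1, q3 = df[col].quantile([0.25, 0.75])
    iqr = q3 - q1
    mask = (df[col] < q1 - 1.5 * iqr) | (df[col] > q3 + 1.5 * iqr)
    report[f"outliers_{col}"] = int(mask.sum())

print(report)
```

A mature platform would run checks like these automatically on every newly acquired dataset version rather than relying on ad hoc notebook code.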

Data Transformation & Feature Engineering
After gaining an understanding of the data, the next step is to build and scale the transformations across the dataset. Here are some key questions to consider during this phase; a sketch of a reusable transformation pipeline follows the list:

☐ [Automation] How can the transformation steps be effectively scaled to the entire dataset?
☐ [Automation] How can the transformation steps be applied in real time to live data before inference?
☐ [Collaboration] How can a data scientist share and discover engineered features to avoid duplicated effort?
☐ [Reproducibility] How do you track and manage different versions of transformed datasets?
☐ [Reproducibility] Where are the transformation steps and associated code stored?
☐ [Governance & Compliance] How do you track the lineage of data as it moves through transformation stages to ensure reproducibility and auditability?
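
As one possible answer to scaling transformations and reusing them at inference time, the sketch below captures the steps in a scikit-learn Pipeline and persists the fitted object. The columns, data, and file name are illustrative assumptions.

```python
import pandas as pd
import joblib
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical training frame; replace with your real dataset.
train_df = pd.DataFrame({
    "age": [34, None, 52],
    "income": [48000.0, 61000.0, None],
    "region": ["east", "west", "east"],
})

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), ["age", "income"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["region"]),
])

# Fit once on training data, then persist the fitted object so the
# exact same steps run on live data before inference.
preprocess.fit(train_df)
joblib.dump(preprocess, "preprocess-v1.joblib")  # version in the filename
```

Because the fitted transformer is persisted as a versioned artifact, the serving path can load the identical steps, and the artifact itself becomes part of the lineage you track.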

Experiments, Model Training & Evaluation
Model training is an iterative process in which data scientists explore and experiment with different combinations of settings and algorithms to find the best possible model. Here are some key questions to consider during this phase; a short experiment-tracking sketch follows the list:

☐ [Automation] How can data scientists automatically partition the data for training, validation, and testing purposes?
☐ [Automation] Does the existing platform help accelerate the evaluation of multiple standard algorithms and the tuning of hyperparameters?
☐ [Collaboration] How can a data scientist share experiments, configurations, and trained models?
☐ [Reproducibility] How can you ensure the reproducibility of experiment outputs?
☐ [Reproducibility] How do you track and manage different versions of trained models?
☐ [Governance & Compliance] How do you track model decision boundaries so that model decisions can be explained?
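
A common way to address the sharing and reproducibility questions is an experiment tracker. The sketch below uses MLflow's tracking API as one example; the dataset is synthetic and the parameter choices are hypothetical.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=42)
# A fixed random_state makes the partitioning reproducible.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(random_state=42, **params)
    model.fit(X_train, y_train)

    # Log parameters, metrics, and the model artifact so teammates
    # can discover, compare, and reproduce the experiment later.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")
```

Each run records its configuration, metrics, and model artifact in one place, which is exactly the shared, versioned record these questions are probing for.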

Deployment & Serving
In order to realize the business value of a model, it needs to be deployed. Depending on the nature of your business, it may be distributed, deployed in-house, on the cloud, or at the edge. Effective management of the deployment is crucial to ensure uptime and optimal performance. Here are some key questions to consider during this phase; a minimal serving sketch follows the list:

☐ [Automation] How do you ensure that the deployed models can scale with increasing workloads?
☐ [Automation] How are new versions rolled out, and what is the process for comparing them against the running version? (A/B testing, canary, shadow, etc.)
☐ [Automation] Are there mechanisms to roll back or revert deployments if issues arise?
☐ [Collaboration] How can multiple data scientists understand the impact of their version before releasing it? (A/B testing, canary, shadow, etc.)
☐ [Reproducibility] How do you package your ML models for serving in the cloud or at the edge?
☐ [Governance & Compliance] How do you track the predicted decisions for auditability and accountability?
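
As a minimal illustration of packaging a model for serving, the sketch below wraps a persisted model in a FastAPI endpoint. The model file, feature names, and version label are hypothetical assumptions, not a prescribed layout.

```python
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model-v1.joblib")  # hypothetical packaged model artifact


class Features(BaseModel):
    age: float
    income: float


@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([[features.age, features.income]])
    # Returning the version label helps trace which model made the call;
    # a real service would also log the request and prediction for audit.
    return {"model_version": "v1", "prediction": int(prediction[0])}
```

Run it locally with `uvicorn app:app`. In practice, you would put such a service behind the rollout mechanisms mentioned above (canary, shadow, A/B) and persist every prediction for auditability.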

Model Pipeline, Monitoring & Continuous Improvement
As we have seen, going from raw data to actionable insights involves a complex series of steps. However, by orchestrating, monitoring, and reacting throughout the workflow, we can scale, adapt, and make the process more efficient. Here are some key questions to consider during this phase; a small drift-detection sketch follows the list:

☐ [Automation] How is the end-to-end process of training and deploying the models managed currently?
☐ [Automation] How can you detect data or concept drift relative to the historical baseline?
☐ [Automation] How do you determine when a model needs to be retrained or updated?
☐ [Collaboration] What are the agreed metrics to measure the effectiveness of each stage and new deployments?
☐ [Reproducibility] Are there automated pipelines to handle the end-to-end process of retraining and updating models to incorporate feedback and enhancements?
☐ [Governance & Compliance] How do you ensure data quality and integrity throughout the process?
☐ [Governance & Compliance] How do you budget and plan for the infrastructure required to build your models?
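
For the drift question above, one simple sketch compares a live feature distribution against the training baseline with a two-sample Kolmogorov-Smirnov test. The data here is synthetic, and the threshold is an illustrative assumption.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
# Hypothetical feature values: training baseline vs. live traffic.
baseline = rng.normal(loc=0.0, scale=1.0, size=5000)
live = rng.normal(loc=0.3, scale=1.0, size=5000)  # deliberately shifted

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
# live distribution has drifted away from the training baseline.
stat, p_value = ks_2samp(baseline, live)
if p_value < 0.01:
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}); consider retraining.")
```

A monitoring pipeline would run checks like this on a schedule per feature and trigger alerts or retraining jobs when drift is detected.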

An MLOps system streamlines and brings structure to your strategy, allowing you to answer these questions. It provides the capability to version-control and track various artifacts through dataset, feature, metadata, and model repositories.

In upcoming blogs, I will demonstrate how to implement these MLOps best practices using a simple case study on AWS, GCP, or Azure, or using open-source technologies.


Published via Towards AI
