
Streamline ML Workflow with MLflow — Part I

Last Updated on March 14, 2024 by Editorial Team

Author(s): ronilpatil

Originally published on Towards AI.

PoA — Experiment Tracking 🔄 | Model Registry 🏷️ | Model Serving 🚀

Photo by 夜 咔罗 on Unsplash

Hi folks! In this blog, I’ll explain how we can leverage MLflow to track machine learning experiments, register a model, and serve that model in production. We’ll also create a REST endpoint and a Streamlit web app so that users can easily interact with our model. So let’s begin!

Table of Contents
Introduction
Project Structure using Cookie-cutter
Data Gathering
Implement Logger
Data Preprocessing
Feature Engineering
Model Training
Model Tuning
Stitch the Workflow using DVC
Tune ML Model using Streamlit
GitHub Repository
Conclusion

Introduction

MLflow is an open-source platform that helps ML engineers and data scientists manage the entire machine learning lifecycle, from experimentation and development to deployment and monitoring. It provides tools for tracking experiments, packaging code, sharing models, and deploying them into production environments. It streamlines the process of building and using machine learning models, making it easier to track progress, reproduce results, and deploy models into real-world applications. It also provides a centralized interface for managing these tasks, along with visualization tools.

Alternatives such as DVC, Kubeflow, or Pachyderm are also available, but in this blog our major focus will be on MLflow.
I don’t want to dive deep into MLflow theory here; the official docs are very user-friendly and up-to-date, so you can refer to them.

We’ll mostly focus on the hands-on part and build a Wine Quality Prediction model, so roll up your sleeves and let’s dive into it practically.

Wine Quality Prediction E2E Workflow

Project Structure

Here I used a cookie-cutter template to create a standardized project structure. If you’re not familiar with it, please go through my blog below, where I explain it in depth.

Data Alchemy: Transformative ML Workflows with DVC & DVClive

Mastering Data Versioning & Model Experimentation using DVC & DVCLive

pub.towardsai.net

Data Gathering

I’ve kept the dataset on Google Drive, so below I implemented the code to pull the data from Drive and save it to local storage. The dataset is available here⬇️.
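The original gist isn’t embedded here, but as a rough sketch (the Drive file ID and local paths below are placeholders, not the actual ones from the repo), the data-pull step could look like this:

```python
import os
import pandas as pd

# placeholder Google Drive file ID and output path — swap in your own
FILE_ID = "<your-drive-file-id>"
RAW_PATH = "data/raw/winequality.csv"


def load_data(file_id: str, output_path: str) -> pd.DataFrame:
    """Pull the CSV from Google Drive and save a local copy."""
    url = f"https://drive.google.com/uc?export=download&id={file_id}"
    df = pd.read_csv(url)
    os.makedirs(os.path.dirname(output_path), exist_ok=True)
    df.to_csv(output_path, index=False)
    return df


if __name__ == "__main__":
    df = load_data(FILE_ID, RAW_PATH)
    print(f"dataset shape: {df.shape}")
```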

Implement Logger

You may notice that I’ve used infologger to log the details. I’ve implemented code that creates a log file for each execution and stores the logs in it. Below is the code, along with a snapshot of the log file for better understanding.
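The exact implementation isn’t reproduced here, but a minimal sketch of such a per-execution logger (the logs directory and format string are assumptions) could look like this:

```python
import logging
import os
from datetime import datetime


def create_log(name: str = "infologger") -> logging.Logger:
    """Create a logger that writes to a fresh timestamped file on every execution."""
    os.makedirs("logs", exist_ok=True)
    log_file = os.path.join("logs", f"{datetime.now():%d-%m-%Y_%H-%M-%S}.log")

    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)

    handler = logging.FileHandler(log_file)
    handler.setFormatter(
        logging.Formatter("[%(asctime)s] %(levelname)s - %(module)s - %(message)s")
    )
    logger.addHandler(handler)
    return logger


infologger = create_log()
infologger.info("logger initialised")
```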

Log File’s Snapshot

Source: Image by Author

Data Preprocessing

The dataset we’re using is almost clean, but in the real world that won’t be the case; we need to deal with a lot of messy data, so be prepared for it! The major problem here is the imbalanced dataset. We need to balance it, otherwise the later stages become very challenging. To solve this problem, I used the oversampling technique SMOTE (Synthetic Minority Over-sampling Technique) to balance the dataset and avoid introducing bias into the model. Internally, it uses the K-Nearest Neighbors algorithm to generate synthetic samples.
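As a minimal sketch of this balancing step with imbalanced-learn (the file path and column name are assumed from the standard wine-quality dataset, and k_neighbors=3 is an illustrative choice):

```python
import pandas as pd
from imblearn.over_sampling import SMOTE

df = pd.read_csv("data/raw/winequality.csv")  # placeholder path
X = df.drop(columns=["quality"])
y = df["quality"]

# SMOTE synthesises new minority-class samples using k-nearest neighbours
smote = SMOTE(random_state=42, k_neighbors=3)
X_resampled, y_resampled = smote.fit_resample(X, y)

print(y.value_counts())            # imbalanced original distribution
print(y_resampled.value_counts())  # balanced distribution after oversampling
```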

Feature Engineering

Feature engineering plays a crucial role in building an efficient machine-learning model. I’ve created some informative and relevant features that can improve model performance. Well-engineered features capture important patterns and relationships in the data, leading to more accurate predictions. I used some domain knowledge to generate a few new features.
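The exact features aren’t shown here, but purely as an illustration, domain-driven features on the wine-quality columns might look like simple ratios and interactions of related measurements (all three features below are hypothetical examples, not the ones from the repo):

```python
import pandas as pd


def build_features(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative, hypothetical features derived from the raw wine measurements."""
    df = df.copy()
    # ratio of free to total sulfur dioxide
    df["free_sulfur_ratio"] = df["free sulfur dioxide"] / df["total sulfur dioxide"]
    # combined acidity signal
    df["total_acidity"] = df["fixed acidity"] + df["volatile acidity"] + df["citric acid"]
    # interaction between alcohol and density
    df["alcohol_density"] = df["alcohol"] * df["density"]
    return df
```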

Model Training

Once we are done with data preprocessing and feature engineering, it’s time to experiment with model training and come up with a model that performs well.

Below is a small code snippet for tracking model-training experiments with MLflow. I’ve added comments for better understanding.
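Something along these lines (the server URI and experiment name are assumptions, since the original gist isn’t embedded here):

```python
import mlflow

# point the client at the running tracking server
mlflow.set_tracking_uri("http://localhost:5000")
# create the experiment if it doesn't exist, otherwise reuse it
mlflow.set_experiment("wine-quality")

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", 0.87)  # dummy value for illustration
```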

Here is the actual model-training code snippet:
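The original snippet isn’t embedded here either, so treat the following as a hedged sketch of the training step with tracking wired in (paths, parameters, and the RandomForest choice are assumptions):

```python
import mlflow
import mlflow.sklearn
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("wine-quality")

df = pd.read_csv("data/processed/winequality_features.csv")  # placeholder path
X = df.drop(columns=["quality"])
y = df["quality"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

params = {"n_estimators": 200, "max_depth": 10, "random_state": 42}

with mlflow.start_run(run_name="random_forest"):
    model = RandomForestClassifier(**params).fit(X_train, y_train)
    preds = model.predict(X_test)

    # log hyperparameters and evaluation metrics for this run
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, preds))
    mlflow.log_metric("f1_macro", f1_score(y_test, preds, average="macro"))

    # store the fitted model as an artifact of this run
    mlflow.sklearn.log_model(model, "model")
```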

Note: Before executing any MLflow code, make sure your MLflow server is up and running, because the server acts as the hub for experiment tracking, the model registry, and artifact storage. When we run our MLflow code, it internally communicates with the MLflow server to log parameters, metrics, and artifacts. If the server isn’t running, your code won’t be able to log these details. The Model Registry fully relies on the MLflow server, so features like collaboration and sharing, and managing models and project artifacts among team members, won’t be possible either. Without the server running, our MLflow code won’t have access to these essential features. Therefore, it must be up!

Turn the Server Up⬆️

Drilling down the server command:
mlflow server : command to start the server
--backend-store-uri sqlite:///mlflow.db : use SQLite as the backend store for the metadata of experiments, runs, parameters, metrics, and tags. Right now we’re using SQLite, but as soon as we move toward deployment we can switch to Azure SQL Database, Amazon RDS, or any other cloud provider’s database.
--default-artifact-root ./artifacts : store artifacts such as models, plots, and other files in the artifacts directory.
--host localhost : host on which the MLflow server will run
-p 5000 : port
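Putting those flags together, the full command looks like this:

```bash
mlflow server \
  --backend-store-uri sqlite:///mlflow.db \
  --default-artifact-root ./artifacts \
  --host localhost \
  -p 5000
```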

Note: Once the server is up, visit http://localhost:5000 to open the MLflow UI.

Experiments vs Runs

An experiment is a container that holds a collection of runs related to a particular machine-learning task. Each experiment has a name, an experiment ID, a description, and tags. To create an experiment, we can use the MLflow tracking API or the MLflow UI.

Runs are executions of model training or evaluation processes within an experiment. A run is a single iteration of training/tuning a model with specific parameters, data, or code. Each run has a unique name, a run ID, a description, and tags. Runs capture metadata such as parameters, metrics, artifacts, and plots. To create a run, we can use the MLflow tracking API or the MLflow UI.
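For example, creating an experiment and a run through the tracking API could look roughly like this (the names, tags, and description below are purely illustrative):

```python
import mlflow

mlflow.set_tracking_uri("http://localhost:5000")

# create an experiment with tags (raises an error if the name already exists)
exp_id = mlflow.create_experiment(
    "wine-quality-v2",
    tags={"project": "wine-quality", "stage": "experimentation"},
)

# start a run inside that experiment, with a name and a description
with mlflow.start_run(
    experiment_id=exp_id,
    run_name="smote-baseline",
    description="Baseline model trained on SMOTE-balanced data",
) as run:
    print("run id:", run.info.run_id)
    mlflow.set_tag("author", "ronilpatil")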

Below are snapshots of the experiments and runs sections of the MLflow UI for better understanding.

Source: Image by Author

Inside each run, the UI shows the following details.

Source: Image by Author

Model Tuning

Model training might involve trying out various algorithm/parameter combinations to optimize the model, and it’s very rare for a model to perform optimally out of the box. This is where model tuning comes into the picture: it uses statistical techniques to find the hyperparameters that optimize model performance.

I used Hyperopt, a hyperparameter-optimization library, to fine-tune the model. I won’t dive deep into it here; we’ll discuss it in future blogs.

While logging the parameters and metrics, I also logged the confusion-matrix image as an artifact. Below is its implementation.
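The full implementation isn’t reproduced here, so the following is a sketch of tuning with Hyperopt and logging each trial, plus the confusion-matrix figure, to MLflow; the search space, paths, and model choice are assumptions:

```python
import matplotlib.pyplot as plt
import mlflow
import pandas as pd
from hyperopt import STATUS_OK, Trials, fmin, hp, tpe
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import ConfusionMatrixDisplay, accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("wine-quality-tuning")

df = pd.read_csv("data/processed/winequality_features.csv")  # placeholder path
X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns=["quality"]), df["quality"], test_size=0.2, random_state=42
)

search_space = {
    "n_estimators": hp.quniform("n_estimators", 50, 300, 25),
    "max_depth": hp.quniform("max_depth", 3, 15, 1),
}


def objective(params):
    params = {"n_estimators": int(params["n_estimators"]), "max_depth": int(params["max_depth"])}
    with mlflow.start_run(nested=True):
        model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)
        preds = model.predict(X_test)
        acc = accuracy_score(y_test, preds)

        mlflow.log_params(params)
        mlflow.log_metric("accuracy", acc)

        # log the confusion matrix as an image artifact of this trial
        fig, ax = plt.subplots()
        ConfusionMatrixDisplay.from_predictions(y_test, preds, ax=ax)
        mlflow.log_figure(fig, "confusion_matrix.png")
        plt.close(fig)

    # Hyperopt minimises the loss, so return the negative accuracy
    return {"loss": -acc, "status": STATUS_OK}


with mlflow.start_run(run_name="hyperopt_search"):
    best = fmin(fn=objective, space=search_space, algo=tpe.suggest, max_evals=20, trials=Trials())
    print("best params:", best)
```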

Stitch the Workflow 🧵

We’ve covered the data gathering, data preprocessing, feature engineering, model training, and model tuning stages. These are very common steps in any machine learning workflow. We can automate their execution to achieve reproducibility and versioning of machine learning experiments and data-processing workflows. Let’s create a DVC pipeline to execute these stages efficiently.

Put this dvc.yaml file into the root directory and run the dvc repro command in bash/shell. A sketch of the file is shown below; refer to the linked blog for more details.
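The stage names, script paths, and outputs below are assumptions based on the steps covered above, not the exact contents of the repository’s dvc.yaml:

```yaml
stages:
  data_gathering:
    cmd: python src/data/load_dataset.py
    deps:
      - src/data/load_dataset.py
    outs:
      - data/raw/winequality.csv
  preprocessing:
    cmd: python src/data/preprocess.py
    deps:
      - src/data/preprocess.py
      - data/raw/winequality.csv
    outs:
      - data/interim/winequality_balanced.csv
  feature_engineering:
    cmd: python src/features/build_features.py
    deps:
      - src/features/build_features.py
      - data/interim/winequality_balanced.csv
    outs:
      - data/processed/winequality_features.csv
  train_model:
    cmd: python src/models/train_model.py
    deps:
      - src/models/train_model.py
      - data/processed/winequality_features.csv
    params:
      - train_model.n_estimators
      - train_model.max_depth
```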

Data Alchemy: Transformative ML Workflows with DVC & DVClive

Mastering Data Versioning & Model Experimentation using DVC & DVCLive

pub.towardsai.net

All the configuration parameters are stored in params.yaml, which makes it easier to manage and update the configuration without directly modifying the code. Below is a snippet of params.yaml.
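As an illustration only (the actual keys and values in the repository may differ), params.yaml might look something like:

```yaml
data_gathering:
  file_id: <google-drive-file-id>
  raw_path: data/raw/winequality.csv

preprocessing:
  smote_k_neighbors: 3
  random_state: 42

train_model:
  n_estimators: 200
  max_depth: 10
  test_size: 0.2

tune_model:
  max_evals: 20
```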

Tune ML Model using Streamlit

This is a useful and effective way to experiment with the model’s parameters and quickly observe how different parameters influence the model’s behavior. Instead of running the entire ML pipeline each time a parameter changes, users can quickly experiment with different parameters using the Streamlit web app. The model, parameters, and metrics are also logged with MLflow so they can be used later. The code is added below; do go through it.
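The full app isn’t reproduced here, but a hedged sketch of such a Streamlit tuner (widget labels, parameter ranges, paths, and experiment names are assumptions) could look like this:

```python
import mlflow
import mlflow.sklearn
import pandas as pd
import streamlit as st
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_tracking_uri("http://localhost:5000")

st.title("Wine Quality Model Tuner")

# let the user pick the experiment and describe the run
experiment_name = st.text_input("Experiment name", value="wine-quality")
run_description = st.text_area("Run description", value="manual tuning via Streamlit")

# expose a few hyperparameters as widgets
n_estimators = st.slider("n_estimators", min_value=50, max_value=500, value=200, step=25)
max_depth = st.slider("max_depth", min_value=2, max_value=20, value=10)

if st.button("Train & log to MLflow"):
    df = pd.read_csv("data/processed/winequality_features.csv")  # placeholder path
    X_train, X_test, y_train, y_test = train_test_split(
        df.drop(columns=["quality"]), df["quality"], test_size=0.2, random_state=42
    )

    # set_experiment creates the experiment if it doesn't exist yet
    mlflow.set_experiment(experiment_name)
    with mlflow.start_run(description=run_description):
        model = RandomForestClassifier(
            n_estimators=n_estimators, max_depth=max_depth, random_state=42
        ).fit(X_train, y_train)
        acc = accuracy_score(y_test, model.predict(X_test))

        mlflow.log_params({"n_estimators": n_estimators, "max_depth": max_depth})
        mlflow.log_metric("accuracy", acc)
        mlflow.sklearn.log_model(model, "model")

    st.success(f"Run logged — test accuracy: {acc:.3f}")
```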

A snapshot of the Streamlit app is added below.

Source: Image by Author

Here, I added the option to either create a new experiment or use an existing one to log the model/run. Additionally, we may include run and experiment descriptions, which help us understand the context, objectives, and outcomes of each experiment or run.

GitHub Repo

The codebase is available here; just fork it and start experimenting with it.

GitHub – ronylpatil/mlflow-pipeline: Built an E2E MLFlow Pipeline

Built an E2E MLFlow Pipeline. Contribute to ronylpatil/mlflow-pipeline

github.com

Conclusion

This is just Part I; the next part will be super interesting, so stay tuned!

If this blog has sparked your curiosity or ignited new ideas, follow me on Medium, GitHub & connect on LinkedIn, and let’s keep the curiosity alive.

Your questions, feedback, and perspectives are not just welcomed but celebrated. Feel free to reach out with any queries or share your thoughts.

Thank you 🙌 &
Keep pushing boundaries 🚀


Published via Towards AI
