
How Can Lean Methodology Add Value to Your Machine Learning Deployments?

Last Updated on July 17, 2023 by Editorial Team

Author(s): Antonis Stellas

Originally published on Towards AI.


Machine learning (ML) is revolutionizing the way companies operate, but deploying ML models can be challenging. In this article, we will explore how lean methodology can add value to your machine-learning deployments.

So, you are a data scientist, and you need to implement and deploy a solution for a company. Let’s assume you begin by designing the steps of the project and probably start with a Jupyter notebook implementing the core pipeline. If this is a machine learning project, you will perform the classic steps of preprocessing, feature engineering, and model training/testing. A critical question that arises here is:

1) How long will you spend on the notebook?

Eventually, you will have to move to the deployment part, where you will convert your pipeline to individual scripts and publish your model to be used in the real world.

And let’s say you decide to deploy it. Another critical question arises:

2) How sure are you that your model will behave as expected?

These questions might sound obvious. However, when a data scientist/ML engineer is in an actual project, some important hidden factors should be considered before answering.

In this short blog, I will highlight factors borrowed from the entrepreneurial world that will answer these two questions and eventually add value to your next ML project!

Regarding question 1): How long will you spend on the notebook?

Possible answers:

A) Put the bare minimum required into each part of the pipeline and complete a full deployment cycle.

Or

B) Stay in the notebook until the model accuracy satisfies your needs, and only then move on to deployment.

Choice A) is an agile or lean approach, while B) is more like a waterfall approach.

From an entrepreneurship point of view, A) can be connected to what is called the Build-Measure-Learn cycle.

Build-Measure-Learn cycle:

The Build-Measure-Learn cycle is a core part of the Lean Start-up methodology [1], a framework for developing and growing businesses. I list the three phases below and connect each to its corresponding ML step:

  • Build: Develop a new product or feature and get it ready for testing. In ML, you build a solution/app or improve an existing one.
  • Measure: Test the product or feature with a small group of customers or users to see how it performs, collecting data on how it is used and how well it meets the needs of the target market. In ML, you can do something similar: observe real-world performance and collect the cases with low accuracy.
  • Learn: Analyze the data collected in the Measure phase and use it to inform your decision-making. Based on what you learn, you can continue developing and improving the product, pivot to a different approach, or abandon it altogether. In ML, you can enrich your test set and spot any bottleneck in the deployment steps or infrastructure.

The Build-Measure-Learn cycle is designed to be iterative, so you can repeat the cycle as many times as necessary until you have a product that meets the needs of your customers and achieves your business goals.
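To make this loop concrete, here is a minimal, self-contained sketch of one Build-Measure-Learn iteration in Python, using scikit-learn on synthetic data. The 0.6 confidence threshold and the rule for flagging cases are illustrative assumptions, not a prescribed workflow:

```python
# A minimal sketch of one Build-Measure-Learn iteration for an ML model.
# The confidence threshold and flagging rule are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Build: train an initial model on the data we have today.
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)

# Measure: score "live" traffic and flag low-confidence predictions.
X_live = rng.normal(loc=0.5, size=(200, 4))        # real-world inputs
confidence = model.predict_proba(X_live).max(axis=1)
flagged = X_live[confidence < 0.6]                 # cases worth reviewing

# Learn: the flagged cases (after hand-labeling) enrich the test set
# and inform the next Build step -- retrain, pivot, or stop.
print(f"{len(flagged)} of {len(X_live)} live cases need review")
```

The flagged cases are exactly the real-world feedback that the Learn phase feeds into the next Build.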

These cycles are very important for a business. They are how you learn what your customers really want, so you can improve those specific parts of your product.

The importance of feedback for ML applications

A machine learning application that will end up in the real world needs the same feedback as a product in a business [2]. Neither starts with all the answers: a business doesn’t know how customers will react, and an ML product doesn’t know how its data or its users will shift. However, both can use the feedback cycles of option A) to improve. Following the waterfall approach of option B) is a more engineering-minded way of thinking.

Let’s face it: most of us are engineers and love to optimize what we make. But too many optimizations might have you working on things that, in the end, don’t matter to the customer/stakeholder. If you want to bring actual value, you have to measure it, and deployment feedback helps you do that. There is nothing inherently wrong with the waterfall approach; it can be very valuable when you have access to immediate feedback or know exactly how your actions will play out. In most cases, though, there is no immediate feedback. We must wait for the response of the real world and see how our solution behaves. We can’t sit in our garage, at our laptop, all day designing the “best” solution on assumptions.

In an ML solution, you can spend a lot of time improving your model accuracy by 1%. But when you deploy it, this 1% might not matter if:

  • The data is drifting (see next section).
  • You will need to spend more time on deployment infrastructure (if you haven’t implemented it already).
  • Your customer/stakeholder does not require it (yet).
  • You have not linked the performance metric with the business metrics. Chip Huyen, in her book Designing Machine Learning Systems [3], mentions that projects are short-lived when data scientists focus heavily on ML hacking rather than on business metrics.

So, next time, consider where you will get more value: from a 1% accuracy improvement or from a full feedback cycle?

Regarding question 2): How sure are you that your model will behave as expected?

You trained your model with a “fixed” dataset. When you deploy it to the real world, you might get what is called data drift. To explain this concept simply, imagine it is the year 2000 and you train a model to recognize car brands from images. Your model has a high success rate back then. However, if you keep the same model today, its accuracy will obviously drop, because new brands that your model does not know, such as Tesla, have appeared. So, you will have to update your model when you detect data drift [4]. How do you detect it?

Entrepreneurs measure their product’s performance using specific key performance indicators (KPIs). As a data scientist, you also need to create and monitor such indicators. The closest equivalent is a model performance metric; when you want to connect it with business performance, you can derive KPIs from it [5].
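As a hedged illustration of such a KPI, the sketch below translates a model performance metric into a business-facing number; every volume and cost figure in it is an invented assumption for the example:

```python
# Illustrative only: turning a model metric into a business-facing KPI.
# All volumes and cost figures below are invented assumptions.
tickets_routed = 10_000        # assumed monthly volume handled by the model
accuracy = 0.92                # measured model performance metric
cost_per_manual_ticket = 4.0   # assumed cost of human routing (USD)
cost_per_misroute = 6.0        # assumed cost of fixing a wrong routing (USD)

saved = tickets_routed * accuracy * cost_per_manual_ticket
lost = tickets_routed * (1 - accuracy) * cost_per_misroute
print(f"Monthly KPI: net savings = ${saved - lost:,.0f}")
```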

Once you build your model on a fixed set of data, you deploy it to the real world, where the data being generated cannot be controlled. You cannot know what input you will receive. As with stock prices, we don’t know the future and, most importantly, cannot simulate it (which, of course, is why a predictive model has value in the first place). Thus, you need to let your dataset “learn” the new data and re-“build” the model (when you decide) after measuring the severity of the drift.
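As one simple sketch of drift detection, you can compare a feature’s distribution in the training data against recent production data with a two-sample Kolmogorov-Smirnov test (tools like Evidently AI [4] package more complete versions of this idea). The significance threshold below is an assumption for illustration:

```python
# A minimal drift check: compare a feature's training distribution to its
# live distribution with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
feature_train = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training data
feature_live = rng.normal(loc=0.4, scale=1.0, size=1_000)   # shifted live data

stat, p_value = ks_2samp(feature_train, feature_live)
if p_value < 0.01:  # illustrative significance threshold
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}): consider rebuilding")
else:
    print("No significant drift detected")
```

In practice, you would likely run a check like this per feature on a schedule, and treat repeated alarms as the trigger for the next Build-Measure-Learn cycle.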

The lessons in this blog are easier said than done; everyone struggles differently. However, you can keep them in mind while working on a personal project, or check against them in your team’s project during stand-up or scrum meetings (another iteration).

Summary

ML model deployment can be challenging, and implementing the Build-Measure-Learn cycle, a core part of the Lean Start-up methodology, can be beneficial. Since we do not know how our product/application will behave in the real world, we shouldn’t wait to make it perfect based on weak assumptions. Instead, we should adopt a strategy of releasing as soon as basic requirements are covered. That is when we get the actual feedback that will help us improve our product/application. We gain value by doing cycles of work and checking our ML performance and assumptions frequently.

References

[1] Eric Ries, The Lean Startup | Crown Business

[2] Building Data Products with Machine Learning at Zendesk | Zendesk

[3] Chip Huyen, Designing Machine Learning Systems (Chapter 2) | O’Reilly

[4] https://www.evidentlyai.com/blog/data-and-prediction-drift | Evidently AI

[5] Improving your machine learning model performance is sometimes futile. Here’s why | Towards Data Science


