
Deploying Machine Learning Projects as Serverless APIs

Last Updated on August 1, 2023 by Editorial Team

Author(s): Dolu Solana

Originally published on Towards AI.

A guide to deploying production-ready Data Science projects as serverless APIs with Azure Functions.

Photo by Milad Fakurian on Unsplash

Introduction

When done right, Machine Learning and Data Science have the potential to improve business processes, and today millions of people have taken the rise of AI as a call to action to create more value for themselves and the world. However, even the most well-built model cannot provide value when left sitting statically in a Jupyter Notebook.

One way to ensure your models are valuable to the business is to make them constantly available. To do this, we can expose our models as APIs. These APIs can feed into a larger system, e.g., a recommendation model feeding into an e-commerce application, or they can feed a dedicated system or application, e.g., a segmentation model powering a web-based dashboard for marketing decision-making, or a document extraction model backing a dedicated internal document-processing application.

In this article, we will build a simple Azure Function that retrains and applies a K-means model on a retail dataset whenever it is called.

This article follows this structure:

  1. Standard APIs vs. Serverless APIs: Here we explain what serverless means.
  2. Introducing the Model: We introduce the model we will deploy.
  3. Functions In Azure: Demonstrating the process of configuring functions in VS Code.
  4. Testing: Then we discuss testing the function locally.
  5. Creating Azure Resources for Functions and Deploying: We then demonstrate how to create the resources needed to run the function on Azure.

Standard APIs vs. Serverless APIs

Serverless software refers to any software that abstracts away the hassle of configuring and managing your own infrastructure: you just bring your code and run it, plug and play. The best part is that you only pay for what you use.

This is in contrast to traditional APIs, where you have to configure and manage a server for your code, and that server keeps incurring costs even when the API is idle.
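For contrast, here is a minimal, hedged sketch of what a traditional, always-on API for a model might look like. It assumes Flask is installed and uses a placeholder prediction; it is not part of the serverless deployment built in this article.

# A minimal sketch of a traditional, always-on API (assumes Flask is installed).
# Unlike a serverless function, this process keeps running on a server you manage
# and pay for, even while no requests are coming in.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/predict", methods=["GET"])
def predict():
    # placeholder for real model inference logic
    return jsonify({"prediction": "example"})

if __name__ == "__main__":
    # the server stays up and listening until you shut it down yourself
    app.run(host="0.0.0.0", port=5000)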

Serverless APIs are a no-brainer in many ways and offer several cost and flexibility advantages over traditional APIs. So why do we still use regular APIs? Here are some drawbacks of serverless computing you should consider when developing a machine learning system of your own:

  1. Serverless systems usually have a cold-start problem. This means they may take longer than a regular API to start up and execute a task.
  2. They are usually built for short-running tasks that take about 5–10 minutes, although some providers allow up to an hour.

Introducing the Model

Enough theory. In this section, we will begin building up to the final cloud function, but first, we have to briefly introduce the model we will be building.

Here we will be using a segmentation model. We will assume that a data scientist has already figured out the optimal parameters, and all we have to do is set up the retraining pipeline.

To learn more about developing a clustering model from scratch, check out the references.
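As a rough, hedged illustration of where such pre-computed parameters might come from, the sketch below shows one way the fixed initial centroids (used later as optimal_init) could be produced offline; the file name and number of clusters are illustrative assumptions, not part of the original pipeline.

# Hedged sketch: deriving fixed initial centroids from historical RFM data offline.
# Assumes a CSV of historical, strictly positive RFM values (as enforced in preprocess.py).
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

historical = pd.read_csv("historical_rfm.csv", index_col="Customer_id")  # hypothetical file

# log-transform and scale, mirroring the steps in train.py's pipeline
scaled = StandardScaler().fit_transform(np.log(historical))

# fit K-means with several random restarts and keep the learned centers
kmeans = KMeans(n_clusters=4, n_init=10, random_state=42).fit(scaled)
optimal_init = kmeans.cluster_centers_  # these centers play the role of optimal_init
print(optimal_init)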

The code for retraining has been modularized into four scripts, namely:

  • get_data.py: Retrieves the data through an SQL query. Here we use pandas' read_sql function to run a query on a given database engine that will be configured in app.py.
import pandas as pd

def get_data(engine):
    # Read in data
    query = """
    SELECT
        Customer_id
        ,Recency
        ,Frequency
        ,MonetaryValue
        ,Customer_Activation_date
    FROM RFM_table
    """

    # call data from try and except block to help debug database errors
    try:
        data = pd.read_sql(query, engine)
        print("connection success")
    except Exception as ex:
        print("Connection could not be made because: \n", ex)

    return data
  • preprocess.py: Preprocesses the data, performs feature engineering, and removes unwanted data.
from datetime import datetime
from sklearn.preprocessing import StandardScaler

def pre_process(data):
    # create new column Tenure, which represents the number of days since a customer first joined
    data['Tenure'] = (datetime.now() - data['Customer_Activation_date']).dt.days
    data = data.drop("Customer_Activation_date", axis=1)
    data = data[~(data.MonetaryValue < 1)]
    data = data.set_index("Customer_id")

    return data
  • train.py: Retrains the model with new data.
# Import packages
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler, FunctionTransformer
from sklearn.cluster import KMeans
from sklearn.pipeline import Pipeline


def train(data, optimal_init):
    # K-means initialized with the pre-computed optimal centroids
    koptimal = KMeans(n_clusters=4, init=optimal_init, n_init=1)
    # define transformers
    ## log transform
    log_transform = FunctionTransformer(np.log)
    ## standard scaler
    scaler = StandardScaler()
    # define pipeline steps
    steps = [("log_transform", log_transform), ("scaler", scaler), ("kmeans", koptimal)]
    # define pipeline
    kmeans_pipe = Pipeline(steps=steps)
    # fit model
    kmeans_pipe.fit(data)
    # predict and add cluster labels
    data["cluster_labels"] = kmeans_pipe.predict(data)

    return data
  • app.py: Ties all the previous modules together, performs clustering on the data and returns a dataframe with the clusters as a new column. It also contains optimal init parameters.
# Import packages
import pandas as pd
import numpy as np
from sklearn.cluster import KMeans
from sklearn.pipeline import Pipeline
from get_data import get_data
from preprocess import pre_process
from train import train
import configparser
from sqlalchemy import create_engine
import pymysql


config = configparser.ConfigParser()
config.read('db.cfg')

# define the database credentials
user = config['<DB_NAME>']['USERNAME']
password = config['<DB_NAME>']['PASSWORD']
host = config['<DB_NAME>']['HOST']
port = config['<DB_NAME>']['PORT']
database = config['<DB_NAME>']['DATABASE']


def get_connection():
    return create_engine(
        url=f"mysql+pymysql://{user}:{password}@{host}:{port}/{database}"
    )


if __name__ == '__main__':
    try:
        # connect to the engine
        engine = get_connection()
        print(f"Connection to the {host} for user {user} created successfully.")
        # the try-except block allows us to debug connection problems efficiently
    except Exception as ex:
        print("Connection could not be made due to the following error: \n", ex)


# set optimal centroid init
optimal_init = np.array([[ 0.53110261, -1.65631567, -0.46662182, -0.36120566],
                         [ 0.36156456,  0.35716547, -0.50222171, -0.47586815],
                         [-0.09470791,  0.41834683,  1.39116768,  1.17794181],
                         [-1.66140645,  0.42296787, -0.10251636,  0.02357708]])


def main():
    RFM = get_data(engine)
    RFM = pre_process(RFM)
    RFM = train(RFM, optimal_init)
    return RFM
  • Assess.py (Bonus): Contains a handy set of custom-defined functions that help assess the performance of the clustering model.
# Assess clusters
# Assumes RFM (the clustered dataframe returned by app.py) and RFM_normalize
# (a scaled copy of it) are available in the session.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt


def cluster_stats(cluster):
    # Group the data by cluster
    grouped = RFM.groupby(cluster)

    # Calculate average RFM values and segment sizes per cluster value
    return grouped.agg({
        'Recency': ['median', 'min', 'max'],
        'Frequency': ['median', 'min', 'max'],
        'MonetaryValue': ['median', 'min', 'max'],
        'Tenure': ['median', 'count']
    }).round(1)


# prepare data for the snake plot
normalized_melt = pd.melt(RFM_normalize.reset_index(),
                          id_vars=["Customer_id", "cluster_label3", "cluster_label4", "cluster_label5"],
                          value_vars=["Recency", "Frequency", "MonetaryValue", "Tenure"],
                          var_name="Attribute",
                          value_name="value")


# define the snake plot
def snakeplot(df_melt, cluster):
    sns.pointplot(x="Attribute", y="value", hue=cluster, data=df_melt)
    plt.title(f'Snake Plot for {cluster}')


def relative_imp(data, cluster):
    # extract cluster labels
    cluster_labels = data.loc[:, cluster]

    # extract RFM values
    data = data.iloc[:, :4]

    # add cluster labels to the RFM values
    data[cluster] = cluster_labels

    # find the average value within each cluster
    cluster_mean = data.groupby(cluster).mean()

    # find the population mean
    population_mean = data.mean()[:-1]

    # find the relative importance score
    relativeimp_score = cluster_mean / population_mean - 1
    relativeimp_score = relativeimp_score.round(2)
    return relativeimp_score
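As a hedged usage sketch (assuming the helpers above are loaded in the same session as the clustered RFM dataframe returned by app.py's main()), they might be called like this:

# Hedged usage sketch for the Assess.py helpers. Assumes RFM is the clustered
# dataframe produced by main() in app.py, with the "cluster_labels" column added by train().
summary = cluster_stats("cluster_labels")     # per-cluster RFM summary table
print(summary)

scores = relative_imp(RFM, "cluster_labels")  # deviation of each cluster from the population mean
print(scores)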

Functions In Azure

Now that we have introduced the model and its code, we can talk about converting that code into an Azure Function. But first, we have to configure the function. There are several ways to do this; for me, the most convenient is the VS Code extension for Function Apps. Alternatively, you can use the Azure command line or the Azure Portal. To follow along, you will need:

  • An Azure subscription; you can get a free one here
  • Visual Studio Code installed on your system
  • A Python Runtime

We can set up our function quickly with the following steps:

  1. Install the Azure Functions extension: Go to the Extensions tab and search for "functions"; Azure Functions will appear in the results, just click Install. (Fig 1.1)
Fig 1.1 Extensions Tab

2. Go to the Azure icon (Fig 1.2), and under the Resources section, choose the option to sign in to your Azure account.

3. While still in the Azure tab, go to the Workspaces section and choose "Create New Function". (Lightning icon in Fig 1.2)

Fig 1.2 Azure and Function Icon

4. After you click the icon to create a new function, a box will pop up at the top of your screen, asking in which folder to create the function project. Choose an empty folder that will store all of your function's code.

Fig 1.3 Create Function: Choose Project Folder

5. After choosing a folder, you will be prompted to pick a language for your function's runtime; here we pick Python, because our function's code is in Python.

Fig 1.4 Select Language.

6. Next, we have to select a Python interpreter. Azure Functions uses the chosen interpreter's runtime to build a virtual environment for the function. As such, it is best to choose an interpreter from a custom environment you built (e.g., a virtual environment). You can do this by providing the full path to the Python interpreter in that environment.

Fig 1.5 Select Interpreter

7. Next, we select the trigger type for our function. Triggers are the switch that makes the function run. They can be HTTP-based, timer-based, and more.

Fig 1.6 Select Trigger

8. Next, we create a name for our function and choose an authorization level; for learning purposes, you can just choose 'Anonymous'. After this, we are done with the configuration, and the function project will take a few moments to be created.

Fig 1.7 Authorization Level.

Now, our function has been configured. The VS Code extension for Azure Functions provides some templates, including the configuration as a JSON file and an init file containing the code to 'initialize' the function. Now, all we have to do is add some logic. We will use the code for the model we introduced earlier.
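For reference, the generated configuration for an HTTP-triggered Python function is a function.json file that typically looks roughly like the sketch below; the exact contents depend on the trigger and authorization level chosen during setup.

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get", "post"]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}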

Previously, the code was built to run in our local environment. It was also built to be modular, that is, spread across multiple files where each file performs one logical operation, with all of these files tied together and orchestrated by app.py. To convert the code into an Azure Function, we need to add each of these Python files to our function's folder. We also need to convert app.py into the function's init file, which involves the following steps:

  1. Import the Azure Functions func object and the logging module (logging is optional but advised).
import azure.functions as func
import logging

2. In the main function definition, accept a func.HttpRequest as an argument and specify func.HttpResponse as the output, i.e.:

# Former main function definition
def main():

# New main function definition
def main(req: func.HttpRequest) -> func.HttpResponse:

3. Ensure that the main function returns an HTTP response: Previously, our main function returned the data set as a dataframe. We still want it to return the data set, but dataframes can't be sent over the web (HTTP) as they are, so we must first convert the result to JSON.

# Former main function
def main():
    RFM = get_data(engine)
    RFM = pre_process(RFM)
    RFM = train(RFM, optimal_init)
    return RFM


# New main function
def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    RFM = pd.read_csv('RFM.csv')
    RFM = pre_process(RFM)
    RFM = train(RFM, optimal_init)
    # convert the data to JSON
    resp = RFM.to_json()
    # wrap the JSON in an HTTP response
    return func.HttpResponse(resp)

4. Finally, we rename the file app.py to __init__.py.

Testing the Function

Now, our function is ready to be tested. To run the function locally, we first activate the virtual environment for our project, navigate into the project folder, and then run the following command in the terminal:

func host start

The above command runs the function, and if it starts successfully, your terminal should look something like this:

Now the function is running but still waiting for requests. To actually test the function's logic, we need to send a GET or POST request. We can do this with the help of the Python requests library, as seen in the following script, which I call consume.py:

import pandas as pd
import requests

r = requests.get('http://localhost:7071/api/CusCluster')
rfm_clusters = pd.DataFrame(r.json())
rfm_clusters.to_csv("rfm_clusters.csv")

Deployment

Now that the function has been fully tested, it's time to deploy it. We can also do this straight from VS Code; however, we need to ensure we have these two components ready:

  1. A requirements file: To make sure that all the packages used to run the function locally are available, in their correct versions, when the function is deployed, we use a requirements file. To create one, run the command below in the terminal while your virtual environment is activated:
pip freeze > requirements.txt

2. A Function App resource in Azure: To deploy the project, we first need to create a Function App resource in Azure; you can think of this as a container that holds the environment needed to run the function's code. We can do this straight from VS Code by navigating to the 'Resources' section of the Azure tab, clicking on the plus icon, and then following these steps:

A. Select a subscription: You'll be prompted to choose an Azure subscription; if you have only one active subscription in your Azure account, this step will be skipped.

B. Enter a globally unique name for the function app: Choose a name that makes up a valid URL path. The name will be validated to ensure that it is unique.

C. Select a runtime stack: Choose the language version that you used to run the function locally.

D. Select a location for new resources: Choose a location where your resource will run from; for the best performance, choose a location close to where your function will be called from.

Once our resources are ready, we can deploy the function code by navigating to the Azure tab and, in the Workspace section, selecting the deploy button (Fig 2.1).

Fig 2.1 Deploy Button (Cloud Icon)

After the deployment is complete, you can click on "View output" to view the deployment results.

Fig 2.2 View Output

Now, we have officially built and deployed our first Azure Function.
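Once deployed, the function can be called just like the local version by pointing consume.py at the function app's public URL. Below is a hedged sketch; the app name is a placeholder, and if you chose an authorization level other than Anonymous, you would also need to pass a function key via the code query parameter.

# Hedged sketch: calling the deployed function instead of the local one.
# <your-app-name> is a placeholder for the globally unique name chosen in step B above.
import pandas as pd
import requests

url = "https://<your-app-name>.azurewebsites.net/api/CusCluster"
r = requests.get(url)  # add params={"code": "<function-key>"} if the auth level is not Anonymous
rfm_clusters = pd.DataFrame(r.json())
rfm_clusters.to_csv("rfm_clusters.csv")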

Conclusions and Next Steps

  • In this article, we learned how to create and deploy Azure Functions using VS Code, but you can also build them straight from the Azure Portal or with other tools such as IntelliJ, Eclipse, and the Azure Functions Core Tools.
  • Here, we discussed building a serverless API, i.e., an Azure Function with an HTTP trigger; however, functions can be triggered in a variety of ways, including timers, queues, and even changes in blob storage. Read more here.
  • Before deploying to production, you need to consider concepts such as logging, security, and monitoring. All of these are streamlined by Azure; you can read more in the documentation here.
  • If you encounter any problems running the instructions in this tutorial, you can reach out to me on LinkedIn.

References

[1] Aurélien Géron, Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2019.
[2] Official Documentation for Azure Functions.


Published via Towards AI
