

Cloud Computing   Machine Learning

Deploying Machine Learning Models as API using AWS

Last Updated on June 12, 2020 by Editorial Team

Author(s): Tharun Kumar Tallapalli


A guide to accessing SageMaker machine learning model endpoints through an API using a Lambda function.

As a machine learning practitioner, I build models. But building a model alone is never sufficient for real-time products: ML models need to be integrated with web or mobile applications. One of the best ways to do this is to deploy the model as an API and request inferences from it whenever required.

The main advantage of deploying a model as an API is that ML engineers can keep their code separate from the rest of the codebase and update the model without disturbing the web or app developers.

ARCHITECTURE (Designed using apps.diagrams.net)

Workflow: the client sends a request to the API. The API trigger invokes the Lambda function, which in turn invokes the SageMaker endpoint and returns the prediction to the client through the API.

In this article, I will build a simple classification model and test the deployed model's API using Postman.

Let’s get started! The steps we’ll be following are:

  1. Building SageMaker Model Endpoint.
  2. Creating a Lambda Function.
  3. Deploying as API.
  4. Testing with Postman.

Building a SageMaker Model Endpoint

Let’s build an Iris Species Prediction Model.
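The notebook code originally embedded here is not recoverable from this page, so here is a sketch of the data-loading step, assuming scikit-learn and pandas are available in the SageMaker notebook (the variable names are my own):

```python
# Load the Iris dataset into a pandas DataFrame.
import pandas as pd
from sklearn.datasets import load_iris

iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df["species"] = iris.target  # integer class labels: 0, 1, 2

print(df.shape)  # (150, 5)
```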


Note: when training SageMaker classification models, the target variable must be in the first column, and if it is continuous, it should be converted into discrete classes.
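For the Iris data the target is already discrete, so only the column order needs attention. A sketch of that step (variable names are my own, not from the original gist):

```python
import pandas as pd
from sklearn.datasets import load_iris

iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df["species"] = iris.target  # already discrete (0, 1, 2), so no binning needed

# SageMaker's built-in algorithms expect the target in the FIRST column
# when training from CSV, so move it to the front.
df = df[["species"] + list(iris.feature_names)]
```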


Dataset Structure

1. Create training and validation datasets to train and test the model.
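A sketch of the split-and-upload step, assuming a 70/30 split and SageMaker's CSV convention (no header, no index); the bucket and prefix names in the helper are placeholders:

```python
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df.insert(0, "species", iris.target)  # target variable in the first column

# Train/validation split, stratified so each species is represented in both.
train_df, val_df = train_test_split(
    df, test_size=0.3, random_state=42, stratify=df["species"]
)

# SageMaker's built-in XGBoost reads CSV with no header row and no index.
train_df.to_csv("train.csv", header=False, index=False)
val_df.to_csv("validation.csv", header=False, index=False)

def upload_to_s3(bucket: str, prefix: str) -> None:
    """Upload both files to S3; bucket/prefix are placeholders for your own."""
    import boto3
    s3 = boto3.client("s3")
    s3.upload_file("train.csv", bucket, f"{prefix}/train/train.csv")
    s3.upload_file("validation.csv", bucket, f"{prefix}/validation/validation.csv")
```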


2. To train the model, get the image URI of the algorithm container in the current region.
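With the SageMaker Python SDK (v2) this is a one-liner via `sagemaker.image_uris.retrieve`; the version string below is an example, not necessarily what the author used:

```python
def xgboost_image_uri(region: str, version: str = "1.2-1") -> str:
    """Return the ECR image URI of SageMaker's built-in XGBoost container.

    The version string is an example; pick one supported in your region.
    """
    import sagemaker  # SageMaker Python SDK (v2)
    return sagemaker.image_uris.retrieve("xgboost", region=region, version=version)

# Inside a SageMaker notebook the region can come from the session, e.g.:
# container = xgboost_image_uri(boto3.Session().region_name)
```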


3. Set the hyperparameters for the model (you can get good hyperparameters from a SageMaker Autopilot experiment, or set your own manually).
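For multi-class classification with the built-in XGBoost algorithm, the hyperparameters might look like this; the exact values here are illustrative, not the author's:

```python
# Illustrative hyperparameters for a 3-class problem.
hyperparameters = {
    "objective": "multi:softmax",  # output the predicted class directly
    "num_class": 3,                # three Iris species
    "max_depth": 5,
    "eta": 0.2,
    "num_round": 100,              # number of boosting rounds
}

def apply_hyperparameters(estimator) -> None:
    """Attach them to a sagemaker.estimator.Estimator instance."""
    estimator.set_hyperparameters(**hyperparameters)
```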


4. Fit the model with the training and validation data.
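Training runs against the two S3 channels uploaded earlier; a sketch (bucket/prefix names are placeholders):

```python
def channel_uri(bucket: str, prefix: str, channel: str) -> str:
    # S3 location of one data channel.
    return f"s3://{bucket}/{prefix}/{channel}/"

def fit_model(estimator, bucket: str, prefix: str) -> None:
    """Kick off a training job with CSV train/validation channels."""
    from sagemaker.inputs import TrainingInput
    estimator.fit({
        "train": TrainingInput(channel_uri(bucket, prefix, "train"),
                               content_type="text/csv"),
        "validation": TrainingInput(channel_uri(bucket, prefix, "validation"),
                                    content_type="text/csv"),
    })
```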


5. Now create an Endpoint for the Model.
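Deploying the fitted estimator creates the endpoint; the endpoint name and instance type below are example choices, not the author's:

```python
def deploy_endpoint(estimator, endpoint_name: str = "iris-species-endpoint"):
    """Create a real-time endpoint and return a Predictor."""
    from sagemaker.serializers import CSVSerializer
    return estimator.deploy(
        initial_instance_count=1,
        instance_type="ml.m5.large",
        endpoint_name=endpoint_name,
        serializer=CSVSerializer(),  # send comma-separated feature rows
    )
```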


You can view the Endpoint Configurations in SageMaker UI.

SageMaker Endpoints

Creating Lambda Function

Now we have a SageMaker model endpoint. Let’s look at how to call it from Lambda. We use the SageMaker Runtime API: in boto3, that is the sagemaker-runtime client and its invoke_endpoint() action. From the AWS Lambda console, choose Create function.

  1. Create a new role so that the Lambda function has permission to invoke the SageMaker endpoint.
Lambda Function initial setup (Photo by Author)


2. ENDPOINT_NAME is an environment variable that holds the name of the SageMaker model endpoint we just deployed.

Environment Variable (Photo by Author)

Deploying API

1. Open the Amazon API Gateway console. Choose Create API and select REST API (since we will send a POST request and receive a response).

(Photo by Author)

2. Name your API and choose Regional as the endpoint type (since it only needs to be accessed from within your region).

Creating REST API (Photo by Author)

3. Create a resource from the Actions drop-down list and give it a name like “irispredict”. Click Create Resource.

Creating Resource (Photo by Author)

4. When the resource is created, from the same drop-down list, choose Create Method to create a POST method.

Adding POST Method (Photo by Author)

5. On the screen that appears, do the following:

  • For the Integration type, choose Lambda Function.
  • For Lambda Function, enter the name of the function created.
Connecting Lambda Function with API Gateway (Photo by Author)

6. API Structure will look something like the following image:

(Photo by Author)

7. From Actions select Deploy API. On the page that appears, create a new stage. Call it “species” and click on Deploy.

Deploying API (Photo by Author)

8. A window appears showing the newly created stage. Go to the POST method; an invoke URL will have been generated, which is the final API endpoint.

API URL (Photo by Author)

Testing With Postman

Postman is a popular API client that makes it easy for developers to create, share, test, and document APIs.

1. Before invoking the API through Postman, add your AWS access key and secret key in the Authorization section (choose AWS Signature as the auth type).

(Photo by Author)

2. Test with Postman: provide the input as JSON in the Body tab, click Send, and the prediction returned by the API is displayed.

Testing with Postman (Photo by Author)

Conclusion

We have now successfully deployed a machine learning model as an API using Lambda, a serverless component. The API can be invoked with a single request, making inferences easily available to users and developers.

Final thoughts

I will follow up on deploying ML models as web applications using Elastic Beanstalk and other AWS services. Till then, Stay Home, Stay Safe, and keep exploring!

Get in Touch

I hope you found the article insightful. I would love to hear your feedback so I can improve it and come back better! If you would like to get in touch, connect with me on LinkedIn. Thanks for reading!



Deploying Machine Learning Models as API using AWS was originally published in Towards AI — Multidisciplinary Science Journal on Medium.

