
Deploying Machine Learning Models as API using AWS


Last Updated on January 6, 2023 by Editorial Team


Author(s): Tharun Kumar Tallapalli


A guide to accessing SageMaker machine learning model endpoints through an API using a Lambda function.

As a machine learning practitioner, I used to build models. But just building models is never sufficient for real-world products: ML models need to be integrated with web or mobile applications. One of the best ways to solve this problem is to deploy the model as an API and request inferences from it whenever required.

The main advantage of deploying a model as an API is that ML engineers can keep their code separate from the rest of the application and update the model without disrupting the web or app developers.

Architecture (designed using apps.diagrams.net)

Workflow: the client sends a request to the API. The API trigger invokes the Lambda function, which calls the SageMaker endpoint and returns the predictions to the client through the API.

In this article, I will build a simple classification model and test the deployed model's API using Postman.

Let’s get started! The steps we’ll be following are:

  1. Building a SageMaker Model Endpoint.
  2. Creating a Lambda Function.
  3. Deploying as an API.
  4. Testing with Postman.

Building a SageMaker Model Endpoint

Let’s build an Iris Species Prediction Model.

Note: while training SageMaker classification models, the target variable must be the first column, and if it is continuous, convert it into discrete values.

Dataset Structure

1. Create training and validation datasets to train and test the model.
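The data preparation above (a discrete, integer-encoded target in the first column, then a train/validation split) can be sketched with pandas; the column values, split ratio, and file names here are illustrative, not from the article:

```python
import pandas as pd

# A tiny stand-in for the Iris dataset (illustrative values only).
df = pd.DataFrame({
    "sepal_length": [5.1, 7.0, 6.3, 4.9, 5.8],
    "sepal_width": [3.5, 3.2, 3.3, 3.0, 2.7],
    "petal_length": [1.4, 4.7, 6.0, 1.4, 5.1],
    "petal_width": [0.2, 1.4, 2.5, 0.2, 1.9],
    "species": ["setosa", "versicolor", "virginica", "setosa", "virginica"],
})

# Encode the string target as discrete integer classes.
df["species"] = df["species"].astype("category").cat.codes

# Move the target to the FIRST column, as SageMaker's built-in
# algorithms expect for CSV training data.
df = df[["species"] + [c for c in df.columns if c != "species"]]

# 80/20 train/validation split.
train = df.sample(frac=0.8, random_state=42)
validation = df.drop(train.index)

# SageMaker CSV channels expect no header row and no index column.
train.to_csv("train.csv", header=False, index=False)
validation.to_csv("validation.csv", header=False, index=False)
```

The two CSV files would then be uploaded to S3 so the training job can read them.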

2. To train the model, get the image URI of the algorithm in the current region.

3. Set the hyperparameters for the model (you can get good hyperparameters for your model from a SageMaker Autopilot experiment, or set your own manually).

4. Fit the model with the training and validation data.

5. Now create an endpoint for the model.
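Steps 2 through 5 can be sketched with the SageMaker Python SDK. This is only an outline: the role ARN, bucket, instance types, and endpoint name are placeholders, and since the article does not name the algorithm, built-in XGBoost is assumed here.

```python
import sagemaker
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
region = session.boto_region_name
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder
bucket = "my-iris-bucket"  # placeholder

# 2. Image URI of the built-in algorithm in the current region.
image_uri = sagemaker.image_uris.retrieve("xgboost", region, version="1.5-1")

estimator = sagemaker.estimator.Estimator(
    image_uri,
    role,
    instance_count=1,
    instance_type="ml.m5.large",
    output_path=f"s3://{bucket}/output",
    sagemaker_session=session,
)

# 3. Hyperparameters (set manually here; Autopilot can suggest these).
estimator.set_hyperparameters(objective="multi:softmax", num_class=3, num_round=100)

# 4. Fit with the train and validation channels (headerless CSVs in S3).
estimator.fit({
    "train": TrainingInput(f"s3://{bucket}/train.csv", content_type="text/csv"),
    "validation": TrainingInput(f"s3://{bucket}/validation.csv", content_type="text/csv"),
})

# 5. Create a real-time endpoint for the model.
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.t2.medium",
    endpoint_name="iris-species-endpoint",  # placeholder name
)
```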

You can view the endpoint configuration in the SageMaker UI.

SageMaker Endpoints

Creating LambdaΒ Function

Now that we have a SageMaker model endpoint, let's look at how to call it from Lambda. Lambda talks to the SageMaker Runtime API, using boto3's sagemaker-runtime client and its invoke_endpoint() method. From the AWS Lambda console, choose Create function.

  1. Create a new role so that the Lambda function has permission to invoke the SageMaker endpoint.
Lambda Function initial setup (Photo by Author)
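A minimal sketch of the permission this role needs is below; sagemaker:InvokeEndpoint is the relevant IAM action. The wildcard Resource is for illustration only; in production, restrict it to your endpoint's ARN.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sagemaker:InvokeEndpoint",
      "Resource": "*"
    }
  ]
}
```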

2. ENDPOINT_NAME is an environment variable that holds the name of the SageMaker model endpoint we just deployed.

Environment Variable (Photo by Author)
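A minimal sketch of the Lambda handler, assuming the client sends JSON like {"data": "5.1,3.5,1.4,0.2"} and the endpoint returns a class index as a numeric string; the names and the optional client parameter (added so the handler is easy to test) are illustrative, not from the article:

```python
import json
import os

_runtime = None

def _client():
    # Create the SageMaker Runtime client lazily; boto3 ships with Lambda.
    global _runtime
    if _runtime is None:
        import boto3
        _runtime = boto3.client("sagemaker-runtime")
    return _runtime

def lambda_handler(event, context, client=None):
    client = client or _client()
    # With a non-proxy API Gateway integration, the JSON body arrives as
    # the event itself; with proxy integration it is under "body".
    payload = json.loads(event["body"]) if "body" in event else event
    response = client.invoke_endpoint(
        EndpointName=os.environ.get("ENDPOINT_NAME", "iris-species-endpoint"),
        ContentType="text/csv",
        Body=payload["data"],
    )
    result = response["Body"].read().decode("utf-8").strip()
    # XGBoost multi:softmax returns the class index as a float string, e.g. "2.0".
    return {"statusCode": 200, "body": json.dumps({"prediction": int(float(result))})}
```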

Deploying API

1. Open the Amazon API Gateway console. Choose Create API and select REST API (since we send a POST request and receive a response).

(Photo by Author)

2. Name your API and choose Regional as the endpoint type (it will be accessed from within your region).

Creating REST API (Photo by Author)

3. Create a resource from the Actions drop-down list, giving it a name like "irispredict". Click Create Resource.

Creating Resource (Photo by Author)

4. When the resource is created, choose Create Method from the same drop-down list to create a POST method.

Adding POST Method (Photo by Author)

5. On the screen that appears, do the following:

  • For the Integration type, choose Lambda Function.
  • For Lambda Function, enter the name of the function created.
Connecting Lambda Function with API Gateway (Photo by Author)

6. API Structure will look something like the following image:

(Photo by Author)

7. From Actions, select Deploy API. On the page that appears, create a new stage, call it "species", and click Deploy.

Deploying API (Photo by Author)

8. A window appears with the new stage. Go to the POST method; an invoke URL is generated, which is the final API endpoint.

API URL (Photo by Author)

Testing With Postman

Postman is a popular API client that makes it easy for developers to create, share, test, and document APIs.

1. Before invoking the API through Postman, add your AWS access key and secret key in the Authorization section.

(Photo by Author)

2. Test the API. Provide the input as JSON in the Body tab; when you click Send, the request goes to the API and the output is displayed.

Testing with Postman (Photo by Author)
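The same request can be sent from Python's standard library instead of Postman; the invoke URL below is a placeholder for the one generated from your "species" stage, and the payload shape matches what the Lambda function reads:

```python
import json
from urllib import request

# Placeholder invoke URL; replace with the one from your stage.
API_URL = "https://abc123.execute-api.us-east-1.amazonaws.com/species/irispredict"

def build_payload(features: str) -> bytes:
    """JSON body in the shape the Lambda function expects."""
    return json.dumps({"data": features}).encode("utf-8")

def predict(features: str) -> dict:
    """POST a CSV feature string to the API and return the parsed response."""
    req = request.Request(
        API_URL,
        data=build_payload(features),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example call (requires the deployed API):
# predict("5.1,3.5,1.4,0.2")
```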

Conclusion

We have now successfully deployed a machine learning model as an API using Lambda, a serverless component. The API can be invoked with a single request, making inferences easily available to users and developers.

Final thoughts

I will get back to you on deploying ML models as web applications using Elastic Beanstalk and other AWS services. Till then, stay home, stay safe, and keep exploring!

Get in Touch

I hope you found the article insightful. I would love to hear your feedback so I can improve and come back better! If you would like to get in touch, connect with me on LinkedIn. Thanks for reading!



Deploying Machine Learning Models as API using AWS was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.

