Deploying a TensorFlow Model with TensorFlow Serving and Docker

Last Updated on January 20, 2026 by Editorial Team

Author(s): Samith Chimminiyan

Originally published on Towards AI.

TensorFlow Serving is a powerful tool for deploying machine learning models in a production environment. It allows for easy scaling and management of models, as well as the ability to serve multiple models at once. One of the most convenient ways to use TensorFlow Serving is through the use of Docker containers.

Developers and organisations can use TensorFlow Serving to deploy models in a modular fashion on top of the TensorFlow architecture, which lets them manage model versions, reduce latency, and scale dynamically. This makes it well suited to production workloads with a high volume of real-time predictions and demanding model-management requirements.

Key Features

Now that we know how important it is to deploy ML models efficiently, let’s look at the key features of TensorFlow Serving that help accomplish that task and make it stand out:

  • Scalable Model Serving Framework
  • Seamless Integration with TensorFlow
  • Efficient TensorFlow Architecture for Performance Optimisation

In this article, we will go through the process of deploying a TensorFlow model using TensorFlow Serving in a Docker container.

The first step in deploying a TensorFlow model using TensorFlow Serving in a Docker container is to export the model in the SavedModel format. Once the model has been exported, it can be served with the TensorFlow Serving base image by mounting the exported model into the container. The following sections show how to pull the serving image and run it against your model.
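TensorFlow Serving expects each exported model to live under a numbered version directory, e.g. /models/my_model/1/, containing the saved_model.pb file and variables/ directory that tf.saved_model.save produces. As a minimal sketch of that layout (the model name my_model and version 1 are illustrative, and the files here are empty placeholders standing in for a real export):

```python
from pathlib import Path
import tempfile

# Illustrative model name and version; TensorFlow Serving scans
# ${MODEL_BASE_PATH}/${MODEL_NAME}/<version>/ for SavedModels.
base = Path(tempfile.mkdtemp())
version_dir = base / "my_model" / "1"

# In a real export you would call tf.saved_model.save(model, str(version_dir)),
# which writes saved_model.pb and a variables/ subdirectory here.
(version_dir / "variables").mkdir(parents=True)
(version_dir / "saved_model.pb").touch()

print(sorted(p.relative_to(base).as_posix() for p in version_dir.rglob("*")))
```

When a new version directory (e.g. 2/) appears under my_model/, TensorFlow Serving picks it up automatically and serves the newest version by default.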

Serving with Docker

Once you have Docker installed, you can pull the latest TensorFlow Serving docker image by running:

docker pull tensorflow/serving

This will pull down a minimal Docker image with TensorFlow Serving installed.

See the Docker Hub tensorflow/serving repo for other versions of images you can pull.

Running a serving image

The serving images (both CPU and GPU) have the following properties:

  • Port 8500 exposed for gRPC
  • Port 8501 exposed for the REST API
  • Optional environment variable MODEL_NAME (defaults to model)
  • Optional environment variable MODEL_BASE_PATH (defaults to /models)

When the serving image starts, it runs ModelServer as follows:

tensorflow_model_server --port=8500 --rest_api_port=8501 \
--model_name=${MODEL_NAME} --model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME}

To serve with Docker, you’ll need:

  • An open port on your host to serve on
  • A SavedModel to serve
  • A name for your model that your client will refer to

What you’ll do is run the Docker container, publish the container’s ports to your host’s ports, and mount your host’s path to the SavedModel at the path where the container expects models.

Let’s look at an example:

docker run -p 8501:8501 \
--mount type=bind,source=/path/to/my_model/,target=/models/my_model \
-e MODEL_NAME=my_model -t tensorflow/serving

In this case, we’ve started a Docker container, published the REST API port 8501 to our host’s port 8501, and taken a model we named my_model and bound it to the default model base path (${MODEL_BASE_PATH}/${MODEL_NAME} = /models/my_model). Finally, we’ve filled in the environment variable MODEL_NAME with my_model, and left MODEL_BASE_PATH to its default value.

This will run in the container:

tensorflow_model_server --port=8500 --rest_api_port=8501 \
--model_name=my_model --model_base_path=/models/my_model

If we wanted to publish the gRPC port, we would use -p 8500:8500. You can have both gRPC and REST API ports open at the same time, or choose to only open one or the other.

Passing additional arguments

tensorflow_model_server supports many additional arguments that you could pass to the serving Docker containers. For example, if we wanted to pass a model config file instead of specifying the model name, we could do the following:

docker run -p 8500:8500 -p 8501:8501 \
--mount type=bind,source=/path/to/my_model/,target=/models/my_model \
--mount type=bind,source=/path/to/my/models.config,target=/models/models.config \
-t tensorflow/serving --model_config_file=/models/models.config

This approach works for any of the other command-line arguments that tensorflow_model_server supports.
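For reference, a minimal models.config uses the model server’s protobuf text format; this sketch mirrors the hypothetical my_model example above:

```
model_config_list {
  config {
    name: 'my_model'
    base_path: '/models/my_model'
    model_platform: 'tensorflow'
  }
}
```

Additional config blocks can be added to the list to serve several models from one container.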

How to run HTTP requests

With the container from the earlier example running, TensorFlow Serving is available at port 8501. We can now get predictions from the model by sending a POST request to the following URL:

http://localhost:8501/v1/models/{model_name}/versions/{version_number}:predict

You can send the request with a command-line tool such as curl or with the Postman application.

The response will be a JSON object containing the model’s predictions.
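As a sketch of such a request from Python using only the standard library (assuming the my_model container from the earlier example is listening on localhost:8501, and that the model accepts a batch of numeric inputs; the input values here are placeholders):

```python
import json
import urllib.request

# Hypothetical endpoint matching the my_model example above; omitting the
# /versions/{n} segment targets the latest served version.
url = "http://localhost:8501/v1/models/my_model:predict"

# TensorFlow Serving's REST API accepts a JSON body with an
# "instances" list, one entry per input example.
payload = json.dumps({"instances": [[1.0, 2.0, 3.0]]}).encode("utf-8")

request = urllib.request.Request(
    url,
    data=payload,
    headers={"Content-Type": "application/json"},
)

def predict():
    # Requires the serving container to be running; the response body
    # is a JSON object with a "predictions" list.
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["predictions"]
```

The same payload works with curl by passing it as the request body with a Content-Type of application/json.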

Conclusion

In conclusion, deploying a TensorFlow model using TensorFlow Serving in a Docker container is a convenient and efficient way to serve machine learning models in a production environment. It allows for easy scaling and management of models, as well as the ability to serve multiple models at once. By following the steps outlined in this article, you can easily deploy the Universal Sentence Encoder, or any other TensorFlow model, using TensorFlow Serving in a Docker container.
