Deploying a TensorFlow Model with TensorFlow Serving and Docker
Last Updated on January 20, 2026 by Editorial Team
Author(s): Samith Chimminiyan
Originally published on Towards AI.
TensorFlow Serving is a powerful tool for deploying machine learning models in a production environment. It allows for easy scaling and management of models, as well as the ability to serve multiple models at once. One of the most convenient ways to use TensorFlow Serving is through the use of Docker containers.
Developers and organisations can use TensorFlow Serving to deploy models in a modular fashion on top of the TensorFlow architecture: it manages model versions, keeps latencies low, and scales dynamically. This makes it well suited to production workloads with a high volume of real-time predictions and a need for robust model management.
Key Features
Now that we know how important it is to deploy ML models efficiently, let’s look at the key features of TensorFlow Serving that help accomplish that task and make it stand out:
- Scalable Model Serving Framework
- Seamless Integration with the TensorFlow Ecosystem
- Efficient TensorFlow Architecture for Performance Optimisation
In this article, we will go through the process of deploying a TensorFlow model using TensorFlow Serving in a Docker container.
The first step in deploying a TensorFlow model with TensorFlow Serving in a Docker container is to export the model in the SavedModel format. Once the model has been exported, it can be served from the official TensorFlow Serving image by mounting the exported model into the container. The following sections show how to pull the image and run it with your model.
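As a minimal sketch of the export step (the directory path, the version number, and the presence of a trained `model` object are assumptions for illustration), exporting to the versioned layout TensorFlow Serving expects might look like:

```python
import os

# TensorFlow Serving expects each model version to live in a numbered
# subdirectory under the model's base directory.
MODEL_DIR = "/tmp/my_model"  # assumed export location
VERSION = 1
export_path = os.path.join(MODEL_DIR, str(VERSION))

# With a trained model in hand, the export itself would be:
#   import tensorflow as tf
#   tf.saved_model.save(model, export_path)  # writes saved_model.pb + variables/

print(export_path)  # /tmp/my_model/1
```

The numbered subdirectory matters: TensorFlow Serving scans the base path for version directories and serves the highest-numbered one by default.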
Serving with Docker
Once you have Docker installed, you can pull the latest TensorFlow Serving docker image by running:
docker pull tensorflow/serving
This will pull down a minimal Docker image with TensorFlow Serving installed.
See the Docker Hub tensorflow/serving repo for other versions of images you can pull.
Running a serving image
The serving images (both CPU and GPU) have the following properties:
- Port 8500 exposed for gRPC
- Port 8501 exposed for the REST API
- Optional environment variable MODEL_NAME (defaults to model)
- Optional environment variable MODEL_BASE_PATH (defaults to /models)
When the serving image runs ModelServer, it runs it as follows:
tensorflow_model_server --port=8500 --rest_api_port=8501 \
--model_name=${MODEL_NAME} --model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME}
To serve with Docker, you’ll need:
- An open port on your host to serve on
- A SavedModel to serve
- A name for your model that your client will refer to
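For reference, the SavedModel path you mount is expected to contain one numbered subdirectory per model version; a typical layout (version 1 assumed) looks like this:

```
/path/to/my_model/
└── 1/
    ├── saved_model.pb
    └── variables/
```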
What you’ll do is run the Docker container, publish the container’s ports to your host’s ports, and mount your host’s path to the SavedModel to where the container expects models.
Let’s look at an example:
docker run -p 8501:8501 \
--mount type=bind,source=/path/to/my_model/,target=/models/my_model \
-e MODEL_NAME=my_model -t tensorflow/serving
In this case, we’ve started a Docker container, published the REST API port 8501 to our host’s port 8501, and taken a model we named my_model and bound it to the default model base path (${MODEL_BASE_PATH}/${MODEL_NAME} = /models/my_model). Finally, we’ve filled in the environment variable MODEL_NAME with my_model, and left MODEL_BASE_PATH to its default value.
This will run in the container:
tensorflow_model_server --port=8500 --rest_api_port=8501 \
--model_name=my_model --model_base_path=/models/my_model
If we wanted to publish the gRPC port, we would use -p 8500:8500. You can have both gRPC and REST API ports open at the same time, or choose to only open one or the other.
Passing additional arguments
tensorflow_model_server supports many additional arguments that you could pass to the serving Docker containers. For example, if we wanted to pass a model config file instead of specifying the model name, we could do the following:
docker run -p 8500:8500 -p 8501:8501 \
--mount type=bind,source=/path/to/my_model/,target=/models/my_model \
--mount type=bind,source=/path/to/my/models.config,target=/models/models.config \
-t tensorflow/serving --model_config_file=/models/models.config
This approach works for any of the other command-line arguments that tensorflow_model_server supports.
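As a sketch, a minimal models.config for the single model above (the name and path match the assumptions of the earlier example) could look like this:

```
model_config_list {
  config {
    name: 'my_model'
    base_path: '/models/my_model'
    model_platform: 'tensorflow'
  }
}
```

A config file like this becomes useful once you want to serve several models from one container: you add one `config` block per model.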
How to run HTTP requests
The docker run command above starts the TensorFlow Serving container and makes the REST API available on port 8501. We can now get predictions from the model (for example, sentence encodings if you are serving the Universal Sentence Encoder) by sending a POST request to the following URL:
http://localhost:8501/v1/models/{model_name}:predict
or, to target a specific version:
http://localhost:8501/v1/models/{model_name}/versions/{version_number}:predict
Because this is a POST request, you can’t simply open the URL in a browser; use an HTTP client such as curl or the Postman application instead.
The response will be a JSON object containing the model’s predictions (for the Universal Sentence Encoder, the encoded sentences).
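As a sketch using only the Python standard library (the model name and the input values are placeholders for whatever your model expects), a predict request could be built and sent like this:

```python
import json
import urllib.request

# Target the model served by the container started earlier
# (the name "my_model" is an assumption for illustration).
url = "http://localhost:8501/v1/models/my_model:predict"

# The REST API expects a JSON body with an "instances" list,
# one entry per input example.
payload = json.dumps({"instances": [[1.0, 2.0, 5.0]]}).encode("utf-8")

request = urllib.request.Request(
    url,
    data=payload,
    headers={"Content-Type": "application/json"},
)

# With the serving container running, sending the request returns a
# JSON body of the form {"predictions": [...]}:
# with urllib.request.urlopen(request) as response:
#     result = json.loads(response.read())
```

The same request can be issued from the command line with curl by POSTing the JSON payload to the URL above.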
Conclusion
In conclusion, deploying a TensorFlow model using TensorFlow Serving in a Docker container is a convenient and efficient way to serve machine learning models in a production environment. It allows for easy scaling and management of models, as well as the ability to serve multiple models at once. By following the steps outlined in this article, you can easily deploy the Universal Sentence Encoder, or any other TensorFlow model, using TensorFlow Serving in a Docker container.