Deploying Models with Xinference
Author(s): zhaozhiming
Originally published on Towards AI.
Today, let's explore Xinference, a deployment and inference tool for Large Language Models (LLMs). It stands out for quick deployment, ease of use, and efficient inference; it supports a wide range of open-source models and provides both a WebGUI and API endpoints for convenient model deployment and inference. Let's dive into Xinference together!
Xorbits Inference (Xinference) is a powerful and comprehensive distributed inference framework suitable for a wide variety of models. With Xinference, you can effortlessly deploy your own or cutting-edge open-source models with just one click. Whether you're a researcher, developer, or data scientist, Xinference connects you with the latest AI models, unlocking more possibilities. Below is a comparison of Xinference with other model deployment and inference tools:
Xinference supports two installation methods: Docker image and local installation. For those interested in the Docker method, please refer to the official Docker Installation Documentation. Here, we will focus on local installation.
First, install Xinference's Python dependencies:
pip install "xinference[all]"
Since Xinference depends on many third-party libraries, the installation might take some time. Once completed, you can start the Xinference service with the following command:
xinference-local
Upon successful startup, you can access the Xinference WebGUI at http://localhost:9997 (the default port).
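By default the service listens only on the local interface. If you need to expose it to other machines or use a different port, the xinference-local command accepts host and port options; a minimal sketch, assuming the --host and --port flags of the current CLI:
xinference-local --host 0.0.0.0 --port 9997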
Note: During the installation of Xinference, it might install a different version of PyTorch (due to its dependency on…).
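Once the service is running, models can also be launched and queried without the WebGUI, via the command line and the OpenAI-compatible REST API. The following is a sketch based on the commands and endpoint described in the Xinference documentation; qwen-chat is only an example model name, and the model field in the request should be the model UID printed by the launch command:
# Launch a built-in chat model on the local Xinference server (example values)
xinference launch --model-name qwen-chat --size-in-billions 7 --model-format pytorch --endpoint http://localhost:9997
# Call the launched model through the OpenAI-compatible chat completions endpoint
curl http://localhost:9997/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen-chat", "messages": [{"role": "user", "content": "Hello"}]}'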
Published via Towards AI