
<div align="center"> <img src="./assets/xorbits-logo.png" width="180px" alt="xorbits" />

Xorbits Inference: Model Serving Made Easy 🤖

<p align="center"> <a href="https://inference.top/">Xinference Cloud</a> · <a href="https://github.com/xorbitsai/enterprise-docs/blob/main/README.md">Xinference Enterprise</a> · <a href="https://inference.readthedocs.io/en/latest/getting_started/installation.html#installation">Self-hosting</a> · <a href="https://inference.readthedocs.io/">Documentation</a> </p>


<p align="center"> <a href="./README.md"><img alt="README in English" src="https://img.shields.io/badge/English-454545?style=for-the-badge"></a> <a href="./README_zh_CN.md"><img alt="简体中文版自述文件" src="https://img.shields.io/badge/中文介绍-d9d9d9?style=for-the-badge"></a> <a href="./README_ja_JP.md"><img alt="日本語のREADME" src="https://img.shields.io/badge/日本語-d9d9d9?style=for-the-badge"></a> </p> </div> <br />

Xorbits Inference (Xinference) is a powerful and versatile library designed to serve language, speech recognition, and multimodal models. With Xorbits Inference, you can effortlessly deploy and serve your own or state-of-the-art built-in models using just a single command. Whether you are a researcher, developer, or data scientist, Xorbits Inference empowers you to unleash the full potential of cutting-edge AI models.

<div align="center"> <i><a href="https://join.slack.com/t/xorbitsio/shared_invite/zt-1z3zsm9ep-87yI9YZ_B79HLB2ccTq4WA">👉 Join our Slack community!</a></i> </div>

🔥 Hot Topics

Framework Enhancements

New Models

Integrations

Key Features

🌟 Model Serving Made Easy: Simplify the process of serving large language, speech recognition, and multimodal models. You can set up and deploy your models for experimentation and production with a single command.

⚡️ State-of-the-Art Models: Experiment with cutting-edge built-in models using a single command. Inference provides access to state-of-the-art open-source models!

🖥 Heterogeneous Hardware Utilization: Make the most of your hardware resources with ggml. Xorbits Inference intelligently utilizes heterogeneous hardware, including GPUs and CPUs, to accelerate your model inference tasks.

⚙️ Flexible API and Interfaces: Offer multiple interfaces for interacting with your models, supporting an OpenAI-compatible RESTful API (including Function Calling API), RPC, CLI and WebUI for seamless model management and interaction; see the example after this list.

🌐 Distributed Deployment: Excel in distributed deployment scenarios, allowing the seamless distribution of model inference across multiple devices or machines.

🔌 Built-in Integration with Third-Party Libraries: Xorbits Inference seamlessly integrates with popular third-party libraries including LangChain, LlamaIndex, Dify, and Chatbox.
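To illustrate the OpenAI-compatible RESTful API mentioned above, here is a minimal sketch that sends a chat request to a locally running Xinference server. It assumes the server listens on the default port 9997 and that a chat model is already launched; the model name `qwen2.5-instruct` is a placeholder for whatever model you are actually serving.

```bash
# Minimal sketch: chat with a served model via the OpenAI-compatible endpoint.
# Replace "qwen2.5-instruct" with the name/UID of a model you have launched.
curl http://localhost:9997/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "qwen2.5-instruct",
        "messages": [{"role": "user", "content": "What is the largest animal?"}]
      }'
```

Because the endpoint follows the OpenAI API shape, existing OpenAI client SDKs can typically be pointed at the same base URL (`http://localhost:9997/v1`) without code changes.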

Why Xinference

| Feature | Xinference | FastChat | OpenLLM | RayLLM |
|------------------------------------------------|:---:|:---:|:---:|:---:|
| OpenAI-Compatible RESTful API                   | ✅ | ✅ | ✅ | ✅ |
| vLLM Integrations                               | ✅ | ✅ | ✅ | ✅ |
| More Inference Engines (GGML, TensorRT)         | ✅ | ❌ | ✅ | ✅ |
| More Platforms (CPU, Metal)                     | ✅ | ✅ | ❌ | ❌ |
| Multi-node Cluster Deployment                   | ✅ | ❌ | ❌ | ✅ |
| Image Models (Text-to-Image)                    | ✅ | ✅ | ❌ | ❌ |
| Text Embedding Models                           | ✅ | ❌ | ❌ | ❌ |
| Multimodal Models                               | ✅ | ❌ | ❌ | ❌ |
| Audio Models                                    | ✅ | ❌ | ❌ | ❌ |
| More OpenAI Functionalities (Function Calling)  | ✅ | ❌ | ❌ | ❌ |

Using Xinference

Staying Ahead

Star Xinference on GitHub and be instantly notified of new releases.


Getting Started

Jupyter Notebook

The lightest way to experience Xinference is to try our Jupyter Notebook on Google Colab.

Docker

Nvidia GPU users can start the Xinference server using the Xinference Docker image. Before running the command below, ensure that both Docker and CUDA are set up on your system.

```bash
docker run --name xinference -d -p 9997:9997 -e XINFERENCE_HOME=/data -v </on/your/host>:/data --gpus all xprobe/xinference:latest xinference-local -H 0.0.0.0
```
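Once the container is up, you can sanity-check it from the host. The sketch below assumes the default port mapping from the command above and uses the OpenAI-compatible model-listing endpoint:

```bash
# List the models currently served (empty until you launch one).
curl http://localhost:9997/v1/models
```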

K8s via Helm

Ensure that you have GPU support in your Kubernetes cluster, then install as follows.

```bash
# add repo
helm repo add xinference https://xorbitsai.github.io/xinference-helm-charts

# update indexes and query xinference versions
helm repo update xinference
helm search repo xinference/xinference --devel --versions

# install xinference
helm install xinference xinference/xinference -n xinference --version 0.0.1-v<xinference_release_version>
```
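After the release is installed, one way to reach the API from your workstation is port-forwarding. The service name below is an assumption based on the release name used above, so verify the actual name first:

```bash
# Hypothetical service name; check with: kubectl get svc -n xinference
kubectl port-forward -n xinference svc/xinference 9997:9997
```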

For more customized installation methods on K8s, please refer to the documentation.

Quick Start

Install Xinference with pip as follows. (For more options, see the Installation page.)

```bash
pip install "xinference[all]"
```

To start a local instance of Xinference, run the following command:

```bash
xinference-local
```

Once Xinference is running, there are multiple ways you can try it: via the web UI, via cURL, via the command line, or via Xinference's Python client. Check out our docs for the guide.
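For example, with the command line you might launch a built-in model and then list the running models. The model name below is a placeholder, and the exact flags can vary between Xinference versions, so treat this as a sketch rather than the canonical invocation:

```bash
# Launch a built-in chat model (placeholder name; flags may vary by version).
xinference launch --model-name qwen2.5-instruct

# Show the models currently running on this server.
xinference list
```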


Getting involved

| Platform | Purpose |
|----------|---------|
| GitHub Issues | Reporting bugs and filing feature requests. |
| Slack | Collaborating with other Xorbits users. |
| Twitter | Staying up-to-date on new features. |

Citation

If this work is helpful, please kindly cite it as:

```bibtex
@inproceedings{lu2024xinference,
    title = "Xinference: Making Large Model Serving Easy",
    author = "Lu, Weizheng and Xiong, Lingfeng and Zhang, Feng and Qin, Xuye and Chen, Yueguo",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-demo.30",
    pages = "291--300",
}
```

Contributors

<a href="https://github.com/xorbitsai/inference/graphs/contributors"> <img src="https://contrib.rocks/image?repo=xorbitsai/inference" /> </a>

Star History

Star History Chart