
<div align="center"> <img src="docs/en/_static/image/lmdeploy-logo.svg" width="450"/>


📘Documentation | 🛠️Quick Start | 🤔Reporting Issues

English | 简体中文 | 日本語


</div>


Introduction

LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the MMRazor and MMDeploy teams. Its core features are efficient inference, effective quantization, and effortless distributed serving.

Performance


For detailed inference benchmarks on more devices and under more settings, please refer to the benchmark documentation.

Supported Models

<table>
<tbody>
<tr align="center" valign="middle">
<td><b>LLMs</b></td>
<td><b>VLMs</b></td>
</tr>
<tr valign="top">
<td align="left" valign="top">
<ul>
<li>Llama (7B - 65B)</li>
<li>Llama2 (7B - 70B)</li>
<li>Llama3 (8B, 70B)</li>
<li>Llama3.1 (8B, 70B)</li>
<li>InternLM (7B - 20B)</li>
<li>InternLM2 (7B - 20B)</li>
<li>InternLM2.5 (7B)</li>
<li>Qwen (1.8B - 72B)</li>
<li>Qwen1.5 (0.5B - 110B)</li>
<li>Qwen1.5-MoE (0.5B - 72B)</li>
<li>Qwen2 (0.5B - 72B)</li>
<li>Baichuan (7B)</li>
<li>Baichuan2 (7B - 13B)</li>
<li>Code Llama (7B - 34B)</li>
<li>ChatGLM2 (6B)</li>
<li>GLM4 (9B)</li>
<li>CodeGeeX4 (9B)</li>
<li>Falcon (7B - 180B)</li>
<li>Yi (6B - 34B)</li>
<li>Mistral (7B)</li>
<li>DeepSeek-MoE (16B)</li>
<li>DeepSeek-V2 (16B, 236B)</li>
<li>Mixtral (8x7B, 8x22B)</li>
<li>Gemma (2B - 7B)</li>
<li>Dbrx (132B)</li>
<li>StarCoder2 (3B - 15B)</li>
<li>Phi-3-mini (3.8B)</li>
<li>Phi-3.5-mini (3.8B)</li>
<li>Phi-3.5-MoE (16x3.8B)</li>
</ul>
</td>
<td>
<ul>
<li>LLaVA (1.5, 1.6) (7B - 34B)</li>
<li>InternLM-XComposer2 (7B, 4khd-7B)</li>
<li>InternLM-XComposer2.5 (7B)</li>
<li>Qwen-VL (7B)</li>
<li>DeepSeek-VL (7B)</li>
<li>InternVL-Chat (v1.1 - v1.5)</li>
<li>InternVL2 (1B - 76B)</li>
<li>MiniGeminiLlama (7B)</li>
<li>CogVLM-Chat (17B)</li>
<li>CogVLM2-Chat (19B)</li>
<li>MiniCPM-Llama3-V-2_5</li>
<li>MiniCPM-V-2_6</li>
<li>Phi-3-vision (4.2B)</li>
<li>Phi-3.5-vision (4.2B)</li>
<li>GLM-4V (9B)</li>
</ul>
</td>
</tr>
</tbody>
</table>

LMDeploy has developed two inference engines, TurboMind and PyTorch, each with a different focus. The former strives for ultimate inference performance, while the latter, developed purely in Python, aims to lower the barrier to entry for developers.

They differ in the types of supported models and the inference data type. Please refer to this table for each engine's capability and choose the proper one that best fits your actual needs.

Quick Start

Installation

It is recommended to install lmdeploy using pip in a conda environment (Python 3.8 - 3.12):

conda create -n lmdeploy python=3.8 -y
conda activate lmdeploy
pip install lmdeploy

The default prebuilt package has been compiled with CUDA 12 since v0.3.0. For instructions on installing on a CUDA 11+ platform, or on building from source, please refer to the installation guide.

Offline Batch Inference

import lmdeploy

# Build an inference pipeline from a HuggingFace model ID, then run a batch of prompts.
pipe = lmdeploy.pipeline("internlm/internlm2-chat-7b")
response = pipe(["Hi, pls intro yourself", "Shanghai is"])
print(response)

[!NOTE] By default, LMDeploy downloads models from HuggingFace. If you would like to use models from ModelScope, install it with pip install modelscope and set the environment variable:

export LMDEPLOY_USE_MODELSCOPE=True

For more information about the inference pipeline, please refer to the pipeline documentation.
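The ModelScope switch above can also be set from Python. A minimal sketch, assuming the flag is read at import time: it must be in place before lmdeploy is imported (the import is shown commented out, since it requires the package and a GPU-ready environment):

```python
import os

# Set the switch first, so model IDs resolve against ModelScope
# instead of the default HuggingFace hub.
os.environ["LMDEPLOY_USE_MODELSCOPE"] = "True"

# import lmdeploy                                      # requires lmdeploy installed
# pipe = lmdeploy.pipeline("internlm/internlm2-chat-7b")

print(os.environ["LMDEPLOY_USE_MODELSCOPE"])  # → True
```

Setting the variable in the shell (as shown above) is equivalent; the in-process form is convenient in notebooks where exporting a variable beforehand is awkward.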

Tutorials

Please review the getting_started section for the basic usage of LMDeploy.

For detailed user guides and advanced guides, please refer to our tutorials.

Third-party projects

Contributing

We appreciate all contributions to LMDeploy. Please refer to CONTRIBUTING.md for the contributing guidelines.

Acknowledgement

Citation

@misc{2023lmdeploy,
    title={LMDeploy: A Toolkit for Compressing, Deploying, and Serving LLM},
    author={LMDeploy Contributors},
    howpublished = {\url{https://github.com/InternLM/lmdeploy}},
    year={2023}
}

License

This project is released under the Apache 2.0 license.