
LLMC: Towards Accurate and Efficient LLM Compression

<img src="./imgs/llmc.png" alt="llmc" style="zoom:35%;" />

[ English | 中文 | 日本語 ]

LLMC is an off-the-shelf tool for compressing LLMs. It leverages state-of-the-art compression algorithms to improve efficiency and reduce model size without compromising performance.

The English doc is here.

The Chinese doc is here.

The Docker Hub repository is here.

Aliyun Docker: `registry.cn-hangzhou.aliyuncs.com/yongyang/llmcompression:[tag]`

You can download a Docker image that runs llmc with either of the following commands. Users in mainland China are advised to pull from the Aliyun registry.

Docker Hub:

```shell
docker pull llmcompression/llmc:pure-latest
```

Aliyun Docker:

```shell
docker pull registry.cn-hangzhou.aliyuncs.com/yongyang/llmcompression:pure-latest
```
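Once an image is pulled, one way to start an interactive container is shown below. This is only a sketch: the `--gpus all` flag assumes the NVIDIA Container Toolkit is installed on the host, and the container name is a placeholder.

```shell
# Start an interactive shell in the llmc image with GPU access.
# Requires the NVIDIA Container Toolkit; drop --gpus all for a CPU-only session.
docker run -it --gpus all --name llmc llmcompression/llmc:pure-latest /bin/bash
```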


Latest News


Highlight Features

Usage

Please refer to the 🚀 Quick Start section in the documentation.
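For orientation, a compression run is driven by a YAML config plus a launch command. The sketch below is illustrative only: the field names and values (model path, method options) are assumptions modeled on the example configs shipped under configs/, so start from one of those rather than from this snippet.

```yaml
# Illustrative config sketch -- field names are assumptions; see configs/ and
# the Quick Start docs for the authoritative schema.
base:
    seed: 42
model:
    type: Llama              # must match a model type registered in llmc/models/
    path: /path/to/model     # placeholder: a local HF-format checkpoint
    torch_dtype: auto
quant:
    method: Awq              # pick any algorithm from the supported list below
    weight:
        bit: 4
        granularity: per_group
        group_size: 128
save:
    save_path: ./save
```

The repo's run scripts wrap the actual launch; assuming the repo root is on PYTHONPATH, a single-GPU run boils down to something like:

```shell
python -m llmc --config path/to/your_config.yml   # placeholder config path
```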

Supported Model List

✅ BLOOM

✅ LLaMA

✅ LLaMA V2

✅ StarCoder

✅ OPT

✅ Falcon

✅ InternLM2

✅ Mistral

✅ LLaMA V3

✅ Mixtral

✅ Qwen V2

✅ LLaVA

✅ InternLM2.5

✅ StableLM

✅ Gemma2

✅ Phi2

✅ Phi 1.5

✅ MiniCPM

✅ SmolLM

✅ DeepSeekv2.5

✅ LLaMA V3.2 Vision

✅ Qwen MOE

✅ Qwen2-VL

✅ InternVL2

You can add your own model type by referring to the files under llmc/models/*.py, as sketched below.
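The files there follow a registry pattern: each model is a class that registers itself and tells llmc where its transformer blocks and embedding layers live. The sketch below only illustrates that shape; the hook and registry names are assumptions inferred from llmc/models/llama.py, so mirror a real model file when adding your own.

```python
# Hypothetical sketch of a new model type, modeled on llmc/models/llama.py.
# Method and registry names are assumptions -- verify them against base_model.py.
from llmc.utils.registry_factory import MODEL_REGISTRY

from .base_model import BaseModel


@MODEL_REGISTRY
class MyModel(BaseModel):
    def find_blocks(self):
        # Point llmc at the stack of transformer blocks to compress.
        self.blocks = self.model.model.layers

    def find_embed_layers(self):
        # Embedding layers are handled separately from block-wise compression.
        self.embed_tokens = self.model.model.embed_tokens
```

The `type:` field of a config then selects the class by its registered name.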

Supported Backend List

✅ VLLM

✅ LightLLM

✅ Sglang

✅ MLC-LLM

✅ AutoAWQ
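Export for these backends is driven from the config's save section rather than a separate tool. The keys below follow the pattern of the repo's example configs but are assumptions; confirm the exact schema in the documentation for your target backend.

```yaml
save:
    save_vllm: True          # assumed key: emit a checkpoint vLLM can load directly
    save_path: ./save_vllm   # placeholder output directory
```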

Supported Algorithm List

Quantization

✅ Naive

✅ AWQ

✅ GPTQ

✅ SmoothQuant

✅ OS+

✅ OmniQuant

✅ NormTweaking

✅ AdaDim

✅ QUIK

✅ SpQR

✅ DGQ

✅ OWQ

✅ LLM.int8()

✅ HQQ

✅ QuaRot

✅ SpinQuant (see this branch)

✅ TesseraQ

Pruning

✅ Naive (Magnitude)

✅ Wanda

✅ ShortGPT

Acknowledgments

We developed our code with reference to the following repos:

Star History

Star History Chart

Citation

If you find our LLM-QBench paper or the llmc toolkit useful or relevant to your research, please cite our paper:

```
@misc{llmc,
    author = {llmc contributors},
    title = {llmc: Towards Accurate and Efficient LLM Compression},
    year = {2024},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/ModelTC/llmc}},
}

@misc{gong2024llmqbench,
    title = {LLM-QBench: A Benchmark Towards the Best Practice for Post-training Quantization of Large Language Models},
    author = {Ruihao Gong and Yang Yong and Shiqiao Gu and Yushi Huang and Yunchen Zhang and Xianglong Liu and Dacheng Tao},
    year = {2024},
    eprint = {2405.06001},
    archivePrefix = {arXiv},
    primaryClass = {cs.LG}
}

@misc{gong2024llmcbenchmarkinglargelanguage,
    title = {LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit},
    author = {Ruihao Gong and Yang Yong and Shiqiao Gu and Yushi Huang and Chentao Lv and Yunchen Zhang and Xianglong Liu and Dacheng Tao},
    year = {2024},
    eprint = {2405.06001},
    archivePrefix = {arXiv},
    primaryClass = {cs.LG},
    url = {https://arxiv.org/abs/2405.06001}
}
```