AutoAWQ

<p align="center"> | <a href="https://github.com/casper-hansen/AutoAWQ/issues/32"><b>Roadmap</b></a> | <a href="https://github.com/casper-hansen/AutoAWQ/tree/main/examples"><b>Examples</b></a> | <a href="https://github.com/casper-hansen/AutoAWQ/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22"><b>Issues: Help Wanted</b></a> | </p> <p align="center" style="margin-bottom: 0px;"> <a href="https://huggingface.co/models?search=awq"> <img alt="Huggingface - Models" src="https://img.shields.io/badge/🤗_1000+_models_available-8A2BE2"> </a> <a href="https://github.com/casper-hansen/AutoAWQ/releases"> <img alt="GitHub - Releases" src="https://img.shields.io/github/release/casper-hansen/AutoAWQ.svg"> </a> <a href="https://pypi.org/project/autoawq/"> <img alt="PyPI - Downloads" src="https://static.pepy.tech/badge/autoawq/month"> </a> </p> <div align="center" style="color: white;"> <p>Supported by</p> <a href="https://runpod.io/?utm_source=referral&utm_medium=autoAWQ"> <img src="https://github.com/aadil-runpod/rp-logo/assets/164108768/a8fc546d-cbab-44c4-9a5a-dfb6c400ad24" alt="RunPod Logo" width="100" border="0"> </a> </div>

AutoAWQ is an easy-to-use package for 4-bit quantized models. Compared to FP16, AutoAWQ speeds up models by 3x and reduces memory requirements by 3x. AutoAWQ implements the Activation-aware Weight Quantization (AWQ) algorithm for quantizing LLMs, building on and improving the original work from MIT.

Latest News 🔥

Install

Prerequisites

Install from PyPI

There are a few ways to install AutoAWQ:

1. Default:
    - `pip install autoawq`
    - NOTE: The default installation includes no external kernels and relies on Triton for inference.
2. From release with kernels:
3. From main branch for Intel CPU and Intel XPU optimized performance:
    - `pip install autoawq[cpu]`
    - NOTE: torch 2.4.0 or newer is required.

Usage

Under examples, you can find examples of how to quantize, run inference, and benchmark AutoAWQ models.

INT4 GEMM vs INT4 GEMV vs FP16

There are two versions of the AWQ kernels: GEMM and GEMV. Both names refer to how the underlying matrix multiplication is executed. In short, GEMV is fastest for decoding at batch size 1, while GEMM is the better choice at larger batch sizes and longer contexts; the benchmarks below illustrate the trade-off.
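The kernel version is selected at quantization time via the `version` key of the quantization config (the other keys shown here match the Quantization example in this README):

```python
# Two otherwise identical quantization configs that differ only in which
# kernel the resulting model will use (GEMM vs. GEMV).
gemm_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}
gemv_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMV"}

# Either dict is then passed to model.quantize(tokenizer, quant_config=...).
print(gemm_config["version"], gemv_config["version"])
```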

Compute-bound vs Memory-bound

At small batch sizes (e.g. a 7B model at batch size 1), we are memory-bound: generation speed is limited by how quickly the GPU can move the weights through memory, and this is essentially what caps tokens per second. Being memory-bound is what makes quantized models faster, because the weights are roughly 3x smaller and can therefore be read from memory that much faster. This is the opposite of being compute-bound, where most of the generation time is spent on the matrix multiplications themselves.

In the compute-bound scenario, which occurs at higher batch sizes, a W4A16 quantized model will not give you a speed-up, because the overhead of dequantization slows down overall generation. AWQ quantized models store the weights in INT4 but perform FP16 operations during inference, so every forward pass effectively converts INT4 -> FP16 on the fly.
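As a back-of-the-envelope sketch of the memory-bound case (the bandwidth and model-size figures here are illustrative assumptions, not benchmarks): decoding must read every weight once per generated token, so memory bandwidth divided by model size gives a rough upper bound on tokens per second.

```python
def decode_tokens_per_sec(n_params: float, bytes_per_weight: float, bandwidth_gb_s: float) -> float:
    """Rough upper bound on decode speed: every weight is read once per token."""
    model_bytes = n_params * bytes_per_weight
    return bandwidth_gb_s * 1e9 / model_bytes

# Illustrative 7B model on a GPU with an assumed ~1 TB/s of memory bandwidth.
fp16 = decode_tokens_per_sec(7e9, 2.0, 1000)  # FP16: 2 bytes per weight
int4 = decode_tokens_per_sec(7e9, 0.5, 1000)  # W4:   0.5 bytes per weight
print(f"FP16 ~{fp16:.0f} tok/s, INT4 ~{int4:.0f} tok/s")
```

With these assumed numbers the pure-INT4 bound is 4x the FP16 bound; in practice the gain is closer to the ~3x stated above, since scales, zero points, and non-quantized layers add overhead.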

Fused modules

Fused modules account for a large part of the speedup you get from AutoAWQ. The idea is to combine multiple layers into a single operation, making them more efficient. Fused modules are a set of custom modules that work separately from Huggingface models. They are compatible with model.generate() and other Huggingface methods, but activating them comes with some inflexibility in how you can use your model.

Examples

More examples can be found in the examples directory.

<details> <summary>Quantization</summary>

Expect this to take 10-15 minutes on smaller 7B models, and around 1 hour for 70B models.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = 'mistralai/Mistral-7B-Instruct-v0.2'
quant_path = 'mistral-instruct-v0.2-awq'
quant_config = { "zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM" }

# Load model
model = AutoAWQForCausalLM.from_pretrained(
    model_path, low_cpu_mem_usage=True, use_cache=False
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Quantize
model.quantize(tokenizer, quant_config=quant_config)

# Save quantized model
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)

print(f'Model is quantized and saved at "{quant_path}"')
```
</details>

<details> <summary>Inference</summary>

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
from awq.utils.utils import get_best_device

device = get_best_device()

quant_path = "TheBloke/zephyr-7B-beta-AWQ"

# Load model
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Convert prompt to tokens
prompt_template = """\
<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>"""

prompt = "You're standing on the surface of the Earth. "\
        "You walk one mile south, one mile west and one mile north. "\
        "You end up exactly where you started. Where are you?"

tokens = tokenizer(
    prompt_template.format(prompt=prompt),
    return_tensors='pt'
).input_ids.to(device)

# Generate output
generation_output = model.generate(
    tokens,
    streamer=streamer,
    max_seq_len=512
)
```

</details>

Benchmarks

These benchmarks showcase the speed and memory usage of processing context (prefill) and generating tokens (decoding). The results include speed at various batch sizes and different versions of the AWQ kernels. We have aimed to test models fairly using the same benchmarking tool, which you can use to reproduce the results. Note that speed may vary not only between GPUs but also between CPUs; what matters most is a GPU with high memory bandwidth and a CPU with a high single-core clock speed.

| Model Name | Size | Version | Batch Size | Prefill Length | Decode Length | Prefill tokens/s | Decode tokens/s | Memory (VRAM) |
|---|---|---|---|---|---|---|---|---|
| Vicuna | 7B | 🟢GEMV | 1 | 64 | 64 | 639.65 | 198.848 | 4.50 GB (19.05%) |
| Vicuna | 7B | 🟢GEMV | 1 | 2048 | 2048 | 1123.63 | 133.191 | 6.15 GB (26.02%) |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| Mistral | 7B | 🔵GEMM | 1 | 64 | 64 | 1093.35 | 156.317 | 4.35 GB (18.41%) |
| Mistral | 7B | 🔵GEMM | 1 | 2048 | 2048 | 3897.02 | 114.355 | 5.55 GB (23.48%) |
| Mistral | 7B | 🔵GEMM | 8 | 64 | 64 | 4199.18 | 1185.25 | 4.35 GB (18.41%) |
| Mistral | 7B | 🔵GEMM | 8 | 2048 | 2048 | 3661.46 | 829.754 | 16.82 GB (71.12%) |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| Mistral | 7B | 🟢GEMV | 1 | 64 | 64 | 531.99 | 188.29 | 4.28 GB (18.08%) |
| Mistral | 7B | 🟢GEMV | 1 | 2048 | 2048 | 903.83 | 130.66 | 5.55 GB (23.48%) |
| Mistral | 7B | 🔴GEMV | 8 | 64 | 64 | 897.87 | 486.46 | 4.33 GB (18.31%) |
| Mistral | 7B | 🔴GEMV | 8 | 2048 | 2048 | 884.22 | 411.893 | 16.82 GB (71.12%) |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| TinyLlama | 1B | 🟢GEMV | 1 | 64 | 64 | 1088.63 | 548.993 | 0.86 GB (3.62%) |
| TinyLlama | 1B | 🟢GEMV | 1 | 2048 | 2048 | 5178.98 | 431.468 | 2.10 GB (8.89%) |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| Llama 2 | 13B | 🔵GEMM | 1 | 64 | 64 | 820.34 | 96.74 | 8.47 GB (35.83%) |
| Llama 2 | 13B | 🔵GEMM | 1 | 2048 | 2048 | 2279.41 | 73.8213 | 10.28 GB (43.46%) |
| Llama 2 | 13B | 🔵GEMM | 3 | 64 | 64 | 1593.88 | 286.249 | 8.57 GB (36.24%) |
| Llama 2 | 13B | 🔵GEMM | 3 | 2048 | 2048 | 2226.7 | 189.573 | 16.90 GB (71.47%) |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| MPT | 7B | 🔵GEMM | 1 | 64 | 64 | 1079.06 | 161.344 | 3.67 GB (15.51%) |
| MPT | 7B | 🔵GEMM | 1 | 2048 | 2048 | 4069.78 | 114.982 | 5.87 GB (24.82%) |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| Falcon | 7B | 🔵GEMM | 1 | 64 | 64 | 1139.93 | 133.585 | 4.47 GB (18.92%) |
| Falcon | 7B | 🔵GEMM | 1 | 2048 | 2048 | 2850.97 | 115.73 | 6.83 GB (28.88%) |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| CodeLlama | 34B | 🔵GEMM | 1 | 64 | 64 | 681.74 | 41.01 | 19.05 GB (80.57%) |
| CodeLlama | 34B | 🔵GEMM | 1 | 2048 | 2048 | 1072.36 | 35.8316 | 20.26 GB (85.68%) |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| DeepSeek | 33B | 🔵GEMM | 1 | 64 | 64 | 1160.18 | 40.29 | 18.92 GB (80.00%) |
| DeepSeek | 33B | 🔵GEMM | 1 | 2048 | 2048 | 1012.1 | 34.0093 | 19.87 GB (84.02%) |

Multi-GPU

GPU: 2x NVIDIA GeForce RTX 4090

| Model | Size | Version | Batch Size | Prefill Length | Decode Length | Prefill tokens/s | Decode tokens/s | Memory (VRAM) |
|---|---|---|---|---|---|---|---|---|
| Mixtral | 46.7B | 🔵GEMM | 1 | 32 | 32 | 149.742 | 93.406 | 25.28 GB (53.44%) |
| Mixtral | 46.7B | 🔵GEMM | 1 | 64 | 64 | 1489.64 | 93.184 | 25.32 GB (53.53%) |
| Mixtral | 46.7B | 🔵GEMM | 1 | 128 | 128 | 2082.95 | 92.9444 | 25.33 GB (53.55%) |
| Mixtral | 46.7B | 🔵GEMM | 1 | 256 | 256 | 2428.59 | 91.5187 | 25.35 GB (53.59%) |
| Mixtral | 46.7B | 🔵GEMM | 1 | 512 | 512 | 2633.11 | 89.1457 | 25.39 GB (53.67%) |
| Mixtral | 46.7B | 🔵GEMM | 1 | 1024 | 1024 | 2598.95 | 84.6753 | 25.75 GB (54.44%) |
| Mixtral | 46.7B | 🔵GEMM | 1 | 2048 | 2048 | 2446.15 | 77.0516 | 27.98 GB (59.15%) |
| Mixtral | 46.7B | 🔵GEMM | 1 | 4096 | 4096 | 1985.78 | 77.5689 | 34.65 GB (73.26%) |

CPU

| Model | Version | Batch Size | Prefill Length | Decode Length | Prefill tokens/s | Decode tokens/s | Memory |
|---|---|---|---|---|---|---|---|
| TinyLlama 1B | gemm | 1 | 32 | 32 | 817.86 | 70.93 | 1.94 GB (0.00%) |
| TinyLlama 1B | gemm | 1 | 2048 | 2048 | 5279.15 | 36.83 | 2.31 GB (0.00%) |
| Falcon 7B | gemm | 1 | 32 | 32 | 337.51 | 26.41 | 9.57 GB (0.01%) |
| Falcon 7B | gemm | 1 | 2048 | 2048 | 546.71 | 18.8 | 13.46 GB (0.01%) |
| Mistral 7B | gemm | 1 | 32 | 32 | 343.08 | 28.46 | 9.74 GB (0.01%) |
| Mistral 7B | gemm | 1 | 2048 | 2048 | 1135.23 | 13.23 | 10.35 GB (0.01%) |
| Vicuna 7B | gemm | 1 | 32 | 32 | 340.73 | 28.86 | 9.59 GB (0.01%) |
| Vicuna 7B | gemm | 1 | 2048 | 2048 | 1143.19 | 11.14 | 10.98 GB (0.01%) |
| Llama 2 13B | gemm | 1 | 32 | 32 | 220.79 | 18.14 | 17.46 GB (0.02%) |
| Llama 2 13B | gemm | 1 | 2048 | 2048 | 650.94 | 6.54 | 19.84 GB (0.02%) |
| DeepSeek Coder 33B | gemm | 1 | 32 | 32 | 101.61 | 8.58 | 40.80 GB (0.04%) |
| DeepSeek Coder 33B | gemm | 1 | 2048 | 2048 | 245.02 | 3.48 | 41.72 GB (0.04%) |
| Phind CodeLlama 34B | gemm | 1 | 32 | 32 | 102.47 | 9.04 | 41.70 GB (0.04%) |
| Phind CodeLlama 34B | gemm | 1 | 2048 | 2048 | 237.57 | 3.48 | 42.47 GB (0.04%) |

Reference

If you find AWQ useful or relevant to your research, you can cite their paper:

```
@article{lin2023awq,
  title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration},
  author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Dang, Xingyu and Han, Song},
  journal={arXiv},
  year={2023}
}
```