<div align="center">

Intel® Neural Compressor

<h3> An open-source Python library supporting popular model compression techniques on all mainstream deep learning frameworks (TensorFlow, PyTorch, and ONNX Runtime)</h3>


Architecture   |   Workflow   |   LLMs Recipes   |   Results   |   Documentation


<div align="left">

Intel® Neural Compressor aims to provide popular model compression techniques such as quantization, pruning (sparsity), distillation, and neural architecture search on mainstream frameworks such as TensorFlow, PyTorch, and ONNX Runtime, as well as on Intel extensions such as Intel Extension for TensorFlow and Intel Extension for PyTorch. In particular, the tool provides key features, typical examples, and open collaborations, as outlined below:

What's New

Installation

Install Framework

Install torch for CPU

pip install torch --index-url https://download.pytorch.org/whl/cpu

Use Docker Image with torch installed for HPU

https://docs.habana.ai/en/latest/Installation_Guide/Bare_Metal_Fresh_OS.html#bare-metal-fresh-os-single-click

Note: There is a version mapping between Intel Neural Compressor and the Gaudi Software Stack; please refer to this table and make sure to use a matched combination.

Install torch/intel_extension_for_pytorch for Intel GPU

https://intel.github.io/intel-extension-for-pytorch/index.html#installation

Install torch for other platforms

https://pytorch.org/get-started/locally

Install tensorflow

pip install tensorflow

Install from PyPI

# Install 2.X API + Framework extension API + PyTorch dependency
pip install neural-compressor[pt]
# Install 2.X API + Framework extension API + TensorFlow dependency
pip install neural-compressor[tf]
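
A quick import can verify the installation (a minimal sanity check; it assumes the package exposes a __version__ attribute, as recent releases do):

# Verify that Intel Neural Compressor is importable and print its version
import neural_compressor
print(neural_compressor.__version__)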

Note: Further installation methods can be found in the Installation Guide. Check out our FAQ for more details.

Getting Started

Setting up the environment:

pip install "neural-compressor>=2.3" "transformers>=4.34.0" torch torchvision

After successfully installing these packages, try your first quantization program.

FP8 Quantization

The following example code demonstrates FP8 quantization, which is supported by the Intel Gaudi2 AI Accelerator.

To try it on Intel Gaudi2, a docker image with the Gaudi Software Stack is recommended; please refer to the following script for environment setup. More details can be found in the Gaudi Guide.

# Run a container with an interactive shell
docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.17.0/ubuntu22.04/habanalabs/pytorch-installer-2.3.1:latest

Run the example:

import torch
import torchvision.models as models
from neural_compressor.torch.quantization import (
    FP8Config,
    prepare,
    convert,
)

model = models.resnet18()
qconfig = FP8Config(fp8_config="E4M3")
model = prepare(model, qconfig)

# Customer-defined calibration: run representative data through the prepared
# model so that measurement statistics can be collected (the random data here
# is only a placeholder; use real samples in practice)
def calib_func(model):
    model.eval()
    with torch.no_grad():
        for _ in range(10):
            model(torch.randn(1, 3, 224, 224))

calib_func(model)
model = convert(model)
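
The converted model can then be used for inference like any other PyTorch module. A minimal sketch, assuming a Gaudi device is available and habana_frameworks is installed to register the hpu device:

import torch
import habana_frameworks.torch.core as htcore  # registers the "hpu" device with PyTorch

model = model.to("hpu").eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224).to("hpu"))
print(logits.shape)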

Weight-Only Large Language Model Loading (LLMs)

The following example code demonstrates loading a weight-only quantized large language model on the Intel Gaudi2 AI Accelerator.

import torch
from neural_compressor.torch.quantization import load

model_name = "TheBloke/Llama-2-7B-GPTQ"
model = load(
    model_name_or_path=model_name,
    format="huggingface",
    device="hpu",
    torch_dtype=torch.bfloat16,
)

Note: Intel Neural Compressor converts the model from the auto-gptq format to the HPU format on the first load and saves hpu_model.safetensors to the local cache directory for subsequent loads, so the first load may take a while.
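
Once loaded, the model behaves like a regular transformers model, so text generation works as usual. A minimal sketch, assuming the transformers package is installed and a Gaudi device is available:

from transformers import AutoTokenizer

# Tokenize a prompt, move it to the HPU device, and generate a completion
tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer("What is a large language model?", return_tensors="pt").to("hpu")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))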

Documentation

<table class="docutils"> <thead> <tr> <th colspan="8">Overview</th> </tr> </thead> <tbody> <tr> <td colspan="2" align="center"><a href="./docs/source/3x/design.md#architecture">Architecture</a></td> <td colspan="2" align="center"><a href="./docs/source/3x/design.md#workflows">Workflow</a></td> <td colspan="2" align="center"><a href="https://intel.github.io/neural-compressor/latest/docs/source/api-doc/apis.html">APIs</a></td> <td colspan="1" align="center"><a href="./docs/source/3x/llm_recipes.md">LLMs Recipes</a></td> <td colspan="1" align="center"><a href="./examples/3.x_api/README.md">Examples</a></td> </tr> </tbody> <thead> <tr> <th colspan="8">PyTorch Extension APIs</th> </tr> </thead> <tbody> <tr> <td colspan="2" align="center"><a href="./docs/source/3x/PyTorch.md">Overview</a></td> <td colspan="2" align="center"><a href="./docs/source/3x/PT_DynamicQuant.md">Dynamic Quantization</a></td> <td colspan="2" align="center"><a href="./docs/source/3x/PT_StaticQuant.md">Static Quantization</a></td> <td colspan="2" align="center"><a href="./docs/source/3x/PT_SmoothQuant.md">Smooth Quantization</a></td> </tr> <tr> <td colspan="2" align="center"><a href="./docs/source/3x/PT_WeightOnlyQuant.md">Weight-Only Quantization</a></td> <td colspan="2" align="center"><a href="./docs/source/3x/PT_FP8Quant.md">FP8 Quantization</a></td> <td colspan="2" align="center"><a href="./docs/source/3x/PT_MXQuant.md">MX Quantization</a></td> <td colspan="2" align="center"><a href="./docs/source/3x/PT_MixedPrecision.md">Mixed Precision</a></td> </tr> </tbody> <thead> <tr> <th colspan="8">Tensorflow Extension APIs</th> </tr> </thead> <tbody> <tr> <td colspan="3" align="center"><a href="./docs/source/3x/TensorFlow.md">Overview</a></td> <td colspan="3" align="center"><a href="./docs/source/3x/TF_Quant.md">Static Quantization</a></td> <td colspan="2" align="center"><a href="./docs/source/3x/TF_SQ.md">Smooth Quantization</a></td> </tr> </tbody> <thead> <tr> <th colspan="8">Other Modules</th> </tr> </thead> <tbody> <tr> <td colspan="4" align="center"><a href="./docs/source/3x/autotune.md">Auto Tune</a></td> <td colspan="4" align="center"><a href="./docs/source/3x/benchmark.md">Benchmark</a></td> </tr> </tbody> </table>

Note: Starting from the 3.0 release, we recommend using the 3.X API. Compression techniques applied during training, such as QAT, pruning, and distillation, are currently only available in the 2.X API.

Selected Publications/Events

Note: View Full Publication List.

Additional Content

Communication