
<p align="center"> <img src="imgs/resnet18_TC.png" width="840"> <br /> <br /> </p>

HAWQ: Hessian AWare Quantization

HAWQ is an advanced quantization library written for PyTorch. HAWQ enables low-precision and mixed-precision uniform quantization, with direct hardware implementation through TVM.
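The core operation behind uniform quantization is mapping floating-point values onto an evenly spaced integer grid and back. The following is a minimal sketch of symmetric uniform quantization for illustration only, not HAWQ's actual implementation:

```python
def uniform_quantize(x, num_bits=8, x_min=-1.0, x_max=1.0):
    """Symmetric uniform quantization: map a float onto a signed integer
    grid with 2**num_bits levels, then dequantize back to float."""
    qmax = 2 ** (num_bits - 1) - 1            # e.g. 127 for 8 bits
    scale = max(abs(x_min), abs(x_max)) / qmax  # float range -> integer range
    q = round(x / scale)                      # quantize to the integer grid
    q = max(-qmax - 1, min(qmax, q))          # clamp to the representable range
    return q, q * scale                       # integer code and dequantized float

q, x_hat = uniform_quantize(0.3, num_bits=4)
```

Lower bit-widths mean a coarser grid (fewer levels), which is why 4-bit models in the tables below trade some accuracy for size and speed.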

For more details, please see the HAWQ, HAWQ-V2, and HAWQ-V3 papers listed under Related Works below.

Installation

git clone https://github.com/Zhen-Dong/HAWQ.git
cd HAWQ
pip install -r requirements.txt

Getting Started

Quantization-Aware Training

An example of running uniform 8-bit quantization-aware training for ResNet50 on ImageNet:

export CUDA_VISIBLE_DEVICES=0
python quant_train.py -a resnet50 --epochs 1 --lr 0.0001 --batch-size 128 --data /path/to/imagenet/ --pretrained --save-path /path/to/checkpoints/ --act-range-momentum=0.99 --wd 1e-4 --data-percentage 0.0001 --fix-BN --checkpoint-iter -1 --quant-scheme uniform8
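The `--act-range-momentum=0.99` flag above controls how activation ranges are tracked during training; a common approach is an exponential moving average of each batch's observed min/max. A minimal sketch of that idea (not HAWQ's exact code):

```python
def update_act_range(running_min, running_max, batch_min, batch_max,
                     momentum=0.99):
    """Exponential moving average of the activation range. A momentum close
    to 1 makes the range stable against outlier batches."""
    new_min = momentum * running_min + (1 - momentum) * batch_min
    new_max = momentum * running_max + (1 - momentum) * batch_max
    return new_min, new_max

# One update: the running range moves only 1% toward the batch range.
lo, hi = update_act_range(0.0, 1.0, -1.0, 2.0, momentum=0.99)
```

The tracked range then determines the quantization scale for activations at inference time.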

The commands for other quantization schemes and for other networks are shown in the model zoo.
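Quantization-aware training of this kind typically inserts a quantize-dequantize ("fake quantization") step into the forward pass and uses the straight-through estimator (STE) for the backward pass. The toy example below illustrates the mechanism on a single scalar weight; it is a sketch of the general technique, not HAWQ's implementation:

```python
def fake_quant(w, num_bits=4, w_max=1.0):
    """Quantize-dequantize a weight so the forward pass sees quantization error."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = w_max / qmax
    q = max(-qmax - 1, min(qmax, round(w / scale)))
    return q * scale

def qat_step(w, target, lr=0.1, num_bits=4):
    """One SGD step on the loss (fake_quant(w) - target)**2."""
    w_q = fake_quant(w, num_bits)
    # Straight-through estimator: treat d(w_q)/dw as 1, so the gradient of
    # the loss with respect to w is simply 2 * (w_q - target).
    grad = 2.0 * (w_q - target)
    return w - lr * grad

# Train a single weight toward 0.5; it settles on the nearest 4-bit grid points.
w = 0.0
for _ in range(50):
    w = qat_step(w, target=0.5)
```

In a real model the same idea is applied per tensor (or per channel), with learned or tracked scales, which is what `quant_train.py` automates.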

Inference Acceleration

Experimental Results

The results below correspond to Table I and Table II in HAWQ-V3: Dyadic Neural Network Quantization.

ResNet18 on ImageNet

| Model | Quantization | Model Size (MB) | BOPS (G) | Accuracy (%) | Inference Speed (batch=8, ms) | Download |
|-------|--------------|-----------------|----------|--------------|-------------------------------|----------|
| ResNet18 | Floating Points | 44.6 | 1858 | 71.47 | 9.7 (1.0x) | resnet18_baseline |
| ResNet18 | W8A8 | 11.1 | 116 | 71.56 | 3.3 (3.0x) | resnet18_uniform8 |
| ResNet18 | Mixed Precision | 6.7 | 72 | 70.22 | 2.7 (3.6x) | resnet18_bops0.5 |
| ResNet18 | W4A4 | 5.8 | 34 | 68.45 | 2.2 (4.4x) | resnet18_uniform4 |
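The BOPS column measures bit operations: in the convention used by the HAWQ-V3 paper, a multiply-accumulate between a `w_bits` weight and an `a_bits` activation costs `w_bits * a_bits` bit operations. A quick sanity check of the numbers above (the MAC count is an approximation):

```python
def layer_bops(macs, w_bits, a_bits):
    """Bit operations for a layer: each MAC between a w_bits weight and an
    a_bits activation counts as w_bits * a_bits bit operations."""
    return macs * w_bits * a_bits

# ResNet18 needs roughly 1.81e9 MACs per 224x224 image; at 32-bit floats that
# is about 1.81e9 * 32 * 32 ~= 1.85e12 bit ops, consistent with the ~1858
# GBOPS in the table. Moving to W4A4 scales the per-MAC cost by (4*4)/(32*32)
# = 1/64; the table's 34 G is higher than 1858/64 because some layers are
# kept at higher precision.
fp32_bops = layer_bops(1.81e9, 32, 32)
```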

ResNet50 on ImageNet

| Model | Quantization | Model Size (MB) | BOPS (G) | Accuracy (%) | Inference Speed (batch=8, ms) | Download |
|-------|--------------|-----------------|----------|--------------|-------------------------------|----------|
| ResNet50 | Floating Points | 97.8 | 3951 | 77.72 | 26.2 (1.0x) | resnet50_baseline |
| ResNet50 | W8A8 | 24.5 | 247 | 77.58 | 8.5 (3.1x) | resnet50_uniform8 |
| ResNet50 | Mixed Precision | 18.7 | 154 | 75.39 | 6.9 (3.8x) | resnet50_bops0.5 |
| ResNet50 | W4A4 | 13.1 | 67 | 74.24 | 5.8 (4.5x) | resnet50_uniform4 |

More results for different quantization schemes and different models (along with the corresponding commands and important notes) are available in the model zoo.
To download the quantized models through wget, please refer to the command given in the model zoo.
Checkpoints in the model zoo are saved in floating-point precision. To shrink the memory size, BitPack can be applied to the weight_integer tensors, or directly to the quantized_checkpoint.pth.tar file.
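BitPack's actual interface may differ; the pure-Python sketch below only illustrates the underlying idea, namely that low-bit integer tensors stored in floating-point checkpoints waste space, and two 4-bit values can share one byte:

```python
def pack4(values):
    """Pack a list of 4-bit unsigned ints (0..15), two per byte."""
    assert all(0 <= v < 16 for v in values)
    if len(values) % 2:
        values = values + [0]  # pad to an even count
    return bytes((hi << 4) | lo for hi, lo in zip(values[0::2], values[1::2]))

def unpack4(data, n):
    """Recover the first n 4-bit values from packed bytes."""
    out = []
    for b in data:
        out.append(b >> 4)
        out.append(b & 0x0F)
    return out[:n]
```

Compared with storing each 4-bit weight as a 32-bit float, this packing alone gives an 8x size reduction, which is where the model-size savings in the tables above come from.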

Related Works

License

THIS SOFTWARE WAS DEPOSITED IN THE BAIR OPEN RESEARCH COMMONS REPOSITORY ON FEB 1, 2023.

HAWQ is released under the MIT license.