LQ-Nets

By Dongqing Zhang, Jiaolong Yang, Dongqiangzi Ye, and Gang Hua.

Microsoft Research Asia (MSRA).

Introduction

This repository contains the training code for LQ-Nets, introduced in our ECCV 2018 paper:

D. Zhang*, J. Yang*, D. Ye*, and G. Hua. LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks. ECCV 2018. (*: equal contribution)
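
At its core, LQ-Nets replace fixed quantization grids with learned ones: each quantized value is an inner product v^T e between a learnable floating-point basis v in R^K and a binary code e (codes in {-1, +1}^K for weights, {0, 1}^K for activations), and the basis is trained jointly with the network. The NumPy sketch below illustrates the idea on a weight vector; it is not the repository's TensorFlow implementation, and the basis initialization and iteration count are assumptions made for the example.

import itertools
import numpy as np

def all_codes(K):
    # All 2^K binary codes; weights use {-1, +1} codes in the paper.
    return np.array(list(itertools.product([-1.0, 1.0], repeat=K)))

def quantize(x, v, codes):
    # Snap each element of x to its nearest representable level v^T e.
    levels = codes @ v                          # (2^K,) representable values
    idx = np.argmin(np.abs(x[:, None] - levels), axis=1)
    return levels[idx], codes[idx]

def fit_basis(x, K=2, iters=10):
    # Alternate code assignment and least-squares basis refitting, in the
    # spirit of the paper's quantization-error-minimization update.
    codes = all_codes(K)
    v = np.abs(x).mean() * 2.0 ** np.arange(K)  # heuristic init (assumption)
    for _ in range(iters):
        _, E = quantize(x, v, codes)            # (N, K) code matrix
        v, *_ = np.linalg.lstsq(E, x, rcond=None)  # argmin_v ||E v - x||^2
    return v

w = np.random.default_rng(0).standard_normal(4096)
v = fit_basis(w, K=2)
q, _ = quantize(w, v, all_codes(2))
print("levels:", np.unique(q), "MSE:", float(np.mean((w - q) ** 2)))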

Dependencies
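
The training code is written in Python and builds on TensorFlow, using the Tensorpack training framework (inferred from the tensorpack-style imagenet.py interface under Usage; exact version requirements are not listed here).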

Usage

Download the ImageNet dataset and decompress it into a directory structure like the following (a small sanity-check script is sketched after the tree):

dir/
  train/
    n01440764/
      n01440764_10026.JPEG
      ...
    ...
  val/
    ILSVRC2012_val_00000001.JPEG
    ...
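
Before launching training, a quick structural check can catch layout mistakes early. The helper below is hypothetical (not part of this repository) and only verifies the directory shape sketched above.

import os

def check_imagenet_layout(root):
    # Verify the expected train/<wnid>/*.JPEG and val/*.JPEG structure.
    for split in ("train", "val"):
        if not os.path.isdir(os.path.join(root, split)):
            raise FileNotFoundError("missing directory: %s/%s" % (root, split))
    synsets = [d for d in os.listdir(os.path.join(root, "train")) if d.startswith("n")]
    val_imgs = [f for f in os.listdir(os.path.join(root, "val")) if f.upper().endswith(".JPEG")]
    print("train/: %d synset folders (ILSVRC2012 has 1000)" % len(synsets))
    print("val/:   %d images (ILSVRC2012 has 50000)" % len(val_imgs))

check_imagenet_layout("/PATH/TO/IMAGENET")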

To train a quantized "pre-activation" ResNet-18 with 1-bit weights and 2-bit activations (--qw and --qa set the weight and activation bit-widths), simply run

python imagenet.py --gpu 0,1,2,3 --data /PATH/TO/IMAGENET --mode preact --depth 18 --qw 1 --qa 2 --logdir_id w1a2 

After training, the resulting model will be stored in ./train_log/w1a2.

For more options, run python imagenet.py -h.

Results

ImageNet Experiments

Quantizing both weights and activations

Model                Bit-width (W/A)   Top-1 (%)   Top-5 (%)
ResNet-18            1/2               62.6        84.3
ResNet-18            2/2               64.9        85.9
ResNet-18            3/3               68.2        87.9
ResNet-18            4/4               69.3        88.8
ResNet-34            1/2               66.6        86.9
ResNet-34            2/2               69.8        89.1
ResNet-34            3/3               71.9        90.2
ResNet-50            1/2               68.7        88.4
ResNet-50            2/2               71.5        90.3
ResNet-50            3/3               74.2        91.6
ResNet-50            4/4               75.1        92.4
AlexNet              1/2               55.7        78.8
AlexNet              2/2               57.4        80.1
DenseNet-121         2/2               69.6        89.1
VGG-Variant          1/2               67.1        87.6
VGG-Variant          2/2               68.8        88.6
GoogLeNet-Variant    1/2               65.6        86.4
GoogLeNet-Variant    2/2               68.2        88.1

Quantizing weights only (activations kept in 32-bit floating point)

Model        Bit-width (W/A)   Top-1 (%)   Top-5 (%)
ResNet-18    2/32              68.0        88.0
ResNet-18    3/32              69.3        88.8
ResNet-18    4/32              70.0        89.1
ResNet-50    2/32              75.1        92.3
ResNet-50    4/32              76.4        93.1
AlexNet      2/32              60.5        82.7

More results can be found in the paper.
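
As a rough sense of scale for the bit-width columns above: quantizing weights from 32-bit floats to W bits shrinks weight storage by about 32/W, ignoring layers that are typically kept at full precision and the negligible overhead of the learned basis vectors. A back-of-the-envelope check (the ResNet-18 parameter count below is the commonly cited approximate figure, not taken from this repository):

def weight_storage_mb(num_params, bits):
    # Megabytes needed to store num_params weights at the given bit-width.
    return num_params * bits / 8 / 1e6

resnet18_params = 11.7e6  # approximate, commonly cited parameter count
for bits in (32, 4, 3, 2, 1):
    print("%2d-bit weights: %6.1f MB" % (bits, weight_storage_mb(resnet18_params, bits)))
# 32-bit ~46.8 MB vs. 2-bit ~2.9 MB: roughly a 16x reduction at W=2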

Citation

If you use our code or models in your research, please cite our paper with

@inproceedings{ZhangYangYeECCV2018,
    author = {Zhang, Dongqing and Yang, Jiaolong and Ye, Dongqiangzi and Hua, Gang},
    title = {LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks},
    booktitle = {European Conference on Computer Vision (ECCV)},
    year = {2018}
}