# Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks

By Yikai Wang, Yi Yang, Fuchun Sun, Anbang Yao.

This is a PyTorch implementation of our ICCV 2021 paper "Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks", improved with a two-stage training pipeline.

<p align="center"><img src="intro.png" width="800" /></p>

## Dataset

Following this repository, download and prepare the ILSVRC2012 (ImageNet) dataset.

## Requirements

Python with PyTorch and CUDA-enabled GPUs (the training commands below use 4 GPUs).

## Training

(1) Step 1: binarize activations (or skip this step by using our Step 1 model `checkpoint_ba.pth.tar`):

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 python train.py --data=path/to/ILSVRC2012/ --batch_size=512 --learning_rate=1e-3 --epochs=256 --weight_decay=1e-5
```
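For context, Step 1 keeps full-precision weights but binarizes activations during training. Below is a minimal sketch of activation binarization with a straight-through estimator (STE), a standard BNN technique; the class name and the clipped-identity backward are illustrative assumptions, not this repo's exact code:

```python
import torch
import torch.nn as nn

class BinaryActivation(nn.Module):
    """Forward: sign(x) in {-1, +1}. Backward: gradients flow through a
    clipped identity (straight-through estimator)."""
    def forward(self, x):
        out_forward = torch.sign(x)
        # Clipped identity, used only to define the backward pass.
        out_backward = torch.clamp(x, -1.0, 1.0)
        # detach() trick: the forward value is sign(x), while gradients
        # bypass the non-differentiable sign and flow through the clamp.
        return out_backward + (out_forward - out_backward).detach()

x = torch.randn(4, 8, requires_grad=True)
y = BinaryActivation()(x)  # values in {-1, +1}, gradients still flow
```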

(2) Step 2: binarize weights + activations:

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 python train.py --data=path/to/ILSVRC2012/ --batch_size=512 --learning_rate=1e-3 --epochs=256 --weight_decay=0 --bit-num=5
```
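`--bit-num` controls the sub-bit weight compression: each 3×3 binary kernel is restricted to a subset of 2^bit-num of the 2^9 = 512 possible binary kernels, so storing a kernel costs bit-num bits, i.e. bit-num/9 bits per weight (`--bit-num=5` gives 5/9 ≈ 0.56W in the table below). Here is a minimal sketch of this storage idea using a nearest-codeword assignment; the function, codebook, and shapes are illustrative assumptions, and the paper learns the kernel subset rather than fixing it:

```python
import torch

def subbit_quantize(weight, codebook):
    """weight: (out_c, in_c, 3, 3) real-valued; codebook: (K, 9) in {-1, +1}.
    Snaps each binarized 3x3 kernel to its nearest codeword, so only a
    log2(K)-bit index (plus the shared codebook) is stored per kernel."""
    out_c, in_c = weight.shape[:2]
    flat = torch.sign(weight).reshape(-1, 9)  # binarize, flatten kernels
    # For +/-1 vectors, maximizing the dot product with a codeword is
    # equivalent to minimizing Hamming distance.
    scores = flat @ codebook.t()              # (N, K)
    idx = scores.argmax(dim=1)                # per-kernel codeword index
    return codebook[idx].reshape(out_c, in_c, 3, 3)

# Example: a random codebook with 2**5 = 32 codewords (~0.56 bits/weight).
codebook = torch.sign(torch.randn(32, 9))
w = torch.randn(64, 64, 3, 3)
w_q = subbit_quantize(w, codebook)
```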

## Results

This implementation is based on the ResNet-18 backbone of ReActNet.

| Bit-Width | Top-1 Acc | Top-5 Acc | #Params | Bit-OPs | Model & Log |
| --- | --- | --- | --- | --- | --- |
| 1W / 1A | 65.7% | 86.3% | 10.99 Mbit | 1.677 G | Google Drive |
| 0.67W / 1A | 63.4% | 84.5% | 7.324 Mbit | 0.883 G | Google Drive |
| 0.56W / 1A | 62.1% | 83.8% | 6.103 Mbit | 0.501 G | Google Drive |
| 0.44W / 1A | 60.7% | 82.7% | 4.882 Mbit | 0.297 G | Google Drive |
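The #Params column tracks the per-weight bit-width almost exactly: scaling the 1-bit model's 10.99 Mbit by bit-num/9 reproduces the sub-bit rows, assuming essentially all binarized weights sit in 3×3 kernels. A quick check:

```python
# Sanity check of the #Params column: sub-bit storage scales the
# 1-bit weight size (10.99 Mbit) by k/9 bits per weight for 3x3 kernels.
for k, reported in [(6, 7.324), (5, 6.103), (4, 4.882)]:
    estimate = 10.99 * k / 9
    print(f"k={k}: {estimate:.3f} Mbit (reported {reported} Mbit)")
# k=6: 7.327 Mbit (reported 7.324 Mbit)
# k=5: 6.106 Mbit (reported 6.103 Mbit)
# k=4: 4.884 Mbit (reported 4.882 Mbit)
```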

## Citation

If you find our code useful for your research, please consider citing:

```
@inproceedings{wang2021snn,
    title={Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks},
    author={Wang, Yikai and Yang, Yi and Sun, Fuchun and Yao, Anbang},
    booktitle={International Conference on Computer Vision (ICCV)},
    year={2021}
}
```

## License

SNN is released under the MIT License.