
LSQ and LSQ+<br>

LSQ+ net (LSQplus net) and LSQ net<br>

Commit log<br>

2023-01-08: Dorefa and Pact, https://github.com/ZouJiu1/Dorefa_Pact<br>
2022-01-18: add torch.nn.Parameter .data, retrain models<br>

I am not the author of the papers; this is an unofficial implementation of LSQ+ (LSQplus) and LSQ. The original papers are LSQ+ at arxiv.org/abs/2004.09576 and LSQ at arxiv.org/abs/1902.08153.<br>

pytorch==1.8.1<br>

You should train a 32-bit float model first; you can then fine-tune a low bit-width quantization (QAT) model by loading the trained 32-bit float model.<br>
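For reference, a minimal sketch of that two-stage workflow, assuming a hypothetical checkpoint name and using torchvision's resnet18 only as a stand-in for the revised quantized model (these are not the repository's exact scripts):

```python
import torch
from torchvision.models import resnet18

# Hypothetical checkpoint name; use whatever your float-training script saves.
FLOAT_CKPT = "resnet18_float_cifar10.pth"

# Stage 1: train the 32-bit float ResNet-18 on CIFAR-10 and save its weights:
#   torch.save(float_model.state_dict(), FLOAT_CKPT)

# Stage 2: build the quantized model (one of the lsq*/lsqplus* variants, not
# shown here), load the float weights with strict=False because the quantizer
# parameters (s, beta) are absent from the float checkpoint, then fine-tune.
qat_model = resnet18(num_classes=10)  # replace with the quantized ResNet-18
state = torch.load(FLOAT_CKPT, map_location="cpu")
qat_model.load_state_dict(state, strict=False)

optimizer = torch.optim.SGD(qat_model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=5e-4)
# ... continue with the usual CIFAR-10 training loop at lower learning rates.
```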

The dataset used for training is CIFAR-10, and the model is a revised ResNet-18.<br>

Version introduction

lsqplus_quantize_V1.py: initializes s and beta of the activation quantization according to LSQ+ (LSQ+: Improving low-bit quantization through learnable offsets and better initialization)<br>
lsqplus_quantize_V2.py: initializes s and beta of the activation quantization according to the min and max values of the activations<br>
lsqquantize_V1.py: initializes s of the activation quantization according to LSQ (Learned Step Size Quantization)<br>
lsqquantize_V2.py: initializes s of the activation quantization to 1<br>
lsqplus_quantize_V2.py gives the best result on the CIFAR-10 dataset; a sketch of the quantizer itself is shown below<br>
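The variants differ mainly in how s (and beta) are initialized; the quantization itself uses a learnable step size as in LSQ/LSQ+. Below is a minimal, unofficial sketch of an LSQ+-style activation quantizer; the class and function names and the unsigned [0, 2^bits - 1] range are assumptions for illustration, not the repository's exact code:

```python
import torch
import torch.nn as nn

def grad_scale(x, scale):
    # Pass x forward unchanged, but scale its gradient (LSQ gradient-scale trick).
    return (x - x * scale).detach() + x * scale

def round_ste(x):
    # Round in the forward pass, identity gradient in the backward pass (STE).
    return (x.round() - x).detach() + x

class ActQuantizerLSQPlus(nn.Module):
    """Sketch of an LSQ+-style activation quantizer: learnable step size s
    and learnable offset beta, asymmetric unsigned range [0, 2^bits - 1]."""
    def __init__(self, bits=8):
        super().__init__()
        self.Qn, self.Qp = 0, 2 ** bits - 1
        self.s = nn.Parameter(torch.tensor(1.0))     # step size
        self.beta = nn.Parameter(torch.tensor(0.0))  # offset

    def forward(self, x):
        g = 1.0 / (x.numel() * self.Qp) ** 0.5       # gradient scale from the LSQ paper
        s = grad_scale(self.s, g)
        beta = grad_scale(self.beta, g)
        xq = round_ste(torch.clamp((x - beta) / s, self.Qn, self.Qp))
        return xq * s + beta                          # dequantized value used downstream
```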

Training results

For the table below, all runs use a_bit=8 and w_bit=8.

| version | weight per_channel | learning rate (by epoch) | A s initial | A beta initial | best epoch | Accuracy (%) | models |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Float 32bit | - | <=66: 0.1<br><=86: 0.01<br><=99: 0.001<br><=112: 0.0001 | - | - | 112 | 92.6 | https://www.aliyundrive.com/s/6B2AZ45fFjx |
| lsqplus_quantize_V1 | × | <=31: 0.1<br><=61: 0.01<br><=81: 0.001<br><112: 0.0001 | 1 | -1e-9 | 90 | 90.3 | https://www.aliyundrive.com/s/FNZRhoTe8uW |
| lsqplus_quantize_V2 | × | as before | - | - | 87 | 92.8 | https://www.aliyundrive.com/s/WDH3ZnEa7vy |
| lsqplus_quantize_V1 | √ | as before | - | - | 96 | 91.19 | https://www.aliyundrive.com/s/JATsi4vdurp |
| lsqplus_quantize_V2 | √ | as before | - | - | 69 | 92.8 | https://www.aliyundrive.com/s/LRWHaBLQGWc |
| lsqquantize_V1 | × | as before | - | - | 102 | 91.89 | https://www.aliyundrive.com/s/nR1KZZRuB23 |
| lsqquantize_V2 | × | as before | - | - | 69 | 91.82 | https://www.aliyundrive.com/s/7fjmViqUvh4 |
| lsqquantize_V1 | √ | as before | - | - | 108 | 91.29 | https://www.aliyundrive.com/s/ |
| lsqquantize_V2 | √ | as before | - | - | 72 | 91.72 | https://www.aliyundrive.com/s/7nGvMVZcKp7 |
All models: https://www.aliyundrive.com/s/hng9XsvhYru<br>

A denotes activation. I use a moving average method to initialize s and beta.<br>
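As an illustration only, one way such a moving-average initialization can be written; the function names, momentum value, and min/max-to-(s, beta) mapping below are assumptions, not the repository's exact code:

```python
import torch

@torch.no_grad()
def update_init_stats(x, running_min, running_max, momentum=0.1):
    # Moving average of per-batch activation min/max, collected only during
    # the first training steps before the quantizer parameters are fixed.
    batch_min, batch_max = x.min(), x.max()
    running_min = (1 - momentum) * running_min + momentum * batch_min
    running_max = (1 - momentum) * running_max + momentum * batch_max
    return running_min, running_max

def init_s_beta(running_min, running_max, bits=8):
    # Map the observed range onto the unsigned grid [0, 2^bits - 1]:
    # s is the step size, beta is the offset (zero point in real units).
    Qp = 2 ** bits - 1
    s = (running_max - running_min) / Qp
    beta = running_min
    return s, beta
```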

LEARNED STEP SIZE QUANTIZATION<br> LSQ+: Improving low-bit quantization through learnable offsets and better initialization<br>

References<br>

https://github.com/666DZY666/micronet<br> https://github.com/hustzxd/LSQuantization<br> https://github.com/zhutmost/lsq-net<br> https://github.com/Zhen-Dong/HAWQ<br> https://github.com/KwangHoonAn/PACT<br> https://github.com/Jermmy/pytorch-quantization-demo<br>