Loss-aware-weight-quantization

Implementation of the ICLR 2018 paper "Loss-aware Weight Quantization of Deep Networks", tested with a GTX TITAN X, Python 2.7, Theano 0.9.0, and Lasagne 0.2.dev1.

This repository is divided into two subrepositories:

Requirements

This software is built on top of the BinaryConnect implementation and has the same requirements.

Example training commands on the War and Peace dataset:

python warpeace.py --method="LATa" --lr_start=0.002  --len=100
python warpeace.py --method="LAQ_linear" --lr_start=0.002  --len=100

If you find loss-aware weight quantization useful in your research, please consider citing the following papers:


@InProceedings{hou2017loss,
	title={Loss-aware Binarization of Deep Networks},
	author={Hou, Lu and Yao, Quanming and Kwok, James T.},
	booktitle={International Conference on Learning Representations},
	year={2017}
}

@InProceedings{hou2018loss,
	title={Loss-aware Weight Quantization of Deep Networks},
	author={Hou, Lu and Kwok, James T.},
	booktitle={International Conference on Learning Representations},
	year={2018}
}

@InProceedings{hou2019analysis,
	title={Analysis of Quantized Models},
	author={Hou, Lu and Zhang, Ruiliang and Kwok, James T.},
	booktitle={International Conference on Learning Representations},
	year={2019}
}