Trained Ternary Quantization (TTQ)

TensorFlow implementation of the paper:

Trained Ternary Quantization, by Zhu et al.

This implementation is based on tensorpack; thanks to that framework, the implementation was straightforward.
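
For orientation: TTQ ternarizes each layer's full-precision weights to {-Wn, 0, +Wp}, where the scaling factors Wp and Wn are learned during training and the threshold is a fraction t of the layer's largest absolute weight (the --t flag in the usage below). Here is a minimal TensorFlow sketch of the forward quantization step under those assumptions; it is illustrative only, not the repo's exact code, which also routes gradients through the quantizer:

```python
# Minimal sketch of TTQ's forward ternarization (illustrative; the actual
# implementation also handles gradient estimation through the quantizer).
import tensorflow as tf

def ternarize(w, wp, wn, t=0.05):
    # w: full-precision weight tensor; wp/wn: trained positive/negative
    # scaling factors; t: threshold fraction (the --t flag below).
    delta = t * tf.reduce_max(tf.abs(w))   # per-layer threshold
    pos = tf.cast(w > delta, w.dtype)      # mask of weights mapped to +wp
    neg = tf.cast(w < -delta, w.dtype)     # mask of weights mapped to -wn
    return wp * pos - wn * neg             # remaining weights map to 0
```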

Experimental Results

Error rate (%) of fine-tuned TTQ ResNet models on CIFAR-10:

| Network   | Full Precision | TTQ  |
|-----------|----------------|------|
| ResNet-20 | 8.23           | 8.87 |
| ResNet-32 | 7.67           | 7.63 |
| ResNet-44 | 7.18           | 7.02 |
| ResNet-56 | 6.80           | 6.44 |

Error rate (%) of a TTQ AlexNet model trained from scratch on ImageNet:

| Metric      | Full Precision | TTQ  |
|-------------|----------------|------|
| Top-1 error | 42.8           | 42.5 |
| Top-5 error | 19.7           | 20.3 |

Dependencies

pip install --user -r requirements.txt
pip install --user -r opt-requirements.txt  # optional dependencies; install later if needed
export PYTHONPATH=$PYTHONPATH:`readlink -f path/to/tensorpack`

Usage

cd examples/Ternary-Net/
python ./tw-cifar10-resnet.py --gpu 0,1 [--load MODEL_PATH] [--t threshold] [--n NSIZE]

Note: we used 2 GPUs for training. A pretrained full-precision model can be obtained using /examples/ResNet/.

cd examples/Ternary-Net/
python ./p-cifar10-resnet.py --gpu 0,1 [--load MODEL_PATH] [--p sparsity] [--n NSIZE]
cd examples/Ternary-Net/
python ./tw-imagenet-alexnet.py --gpu 0,1,2,3 --data IMAGENET_PATH [--t threshold]

Note: we used 4 GPUs for training.
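
The --p flag above targets a weight sparsity rather than a threshold fraction. A plausible sketch, assuming it follows the paper's constant-sparsity heuristic (the script itself may compute this differently): pick the threshold so that a fraction p of each layer's weights quantize to zero.

```python
# Sketch of a sparsity-controlled ternarization threshold (assumption:
# this mirrors the paper's constant-sparsity heuristic; p-cifar10-resnet.py
# may implement it differently).
import numpy as np

def sparsity_threshold(w, p=0.5):
    # Return delta such that roughly a fraction p of the weights satisfy
    # |w| <= delta and therefore quantize to zero.
    flat = np.sort(np.abs(w).ravel())
    k = int(p * flat.size)                # number of weights to zero out
    return flat[k - 1] if k > 0 else 0.0  # k-th smallest magnitude
```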

Logs

Some training logs can be found here.

Support

Please use GitHub issues for anything related to the code. For general questions about the paper, email the authors.

Citation

If you use our code or models in your research, please cite:

@article{zhu2016trained,
  title={Trained Ternary Quantization},
  author={Zhu, Chenzhuo and Han, Song and Mao, Huizi and Dally, William J},
  journal={arXiv preprint arXiv:1612.01064},
  year={2016}
}