XNOR-Net-Pytorch

This is a PyTorch implementation of XNOR-Net. I implemented Binarized Neural Networks (BNN) for the following datasets and networks:

| Dataset  | Network                  | Accuracy                     | Floating-point accuracy    |
|----------|--------------------------|------------------------------|----------------------------|
| MNIST    | LeNet-5                  | 99.23%                       | 99.34%                     |
| CIFAR-10 | Network-in-Network (NIN) | 86.28%                       | 89.67%                     |
| ImageNet | AlexNet                  | Top-1: 44.87%, Top-5: 69.70% | Top-1: 57.1%, Top-5: 80.2% |

MNIST

I implemented the LeNet-5 architecture for the MNIST dataset, using the dataset reader provided by torchvision. To run the training:

$ cd <Repository Root>/MNIST/
$ python main.py
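
For reference, the MNIST data is read through torchvision. A minimal sketch of such a loader is shown below; the batch size and normalization constants are illustrative assumptions, not necessarily the values used by main.py.

import torch
from torchvision import datasets, transforms
# Sketch of a torchvision MNIST pipeline; the values below are assumptions.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,)),
])
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('./data', train=True, download=True, transform=transform),
    batch_size=128, shuffle=True)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('./data', train=False, transform=transform),
    batch_size=128, shuffle=False)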

The pretrained model can be downloaded here. To evaluate the pretrained model:

$ cp <Pretrained Model> <Repository Root>/MNIST/models/
$ python main.py --pretrained models/LeNet_5.best.pth.tar --evaluate
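
Internally, evaluating a pretrained model comes down to restoring the saved weights. A minimal sketch, assuming the .pth.tar checkpoint follows the common convention of storing the weights under a 'state_dict' key and that model is the instantiated LeNet-5 network:

import torch
# Restore a saved checkpoint and switch to evaluation mode.
# The 'state_dict' key is an assumption based on the usual .pth.tar layout.
checkpoint = torch.load('models/LeNet_5.best.pth.tar', map_location='cpu')
model.load_state_dict(checkpoint['state_dict'])
model.eval()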

CIFAR-10

I implemented the NIN architecture for the CIFAR-10 dataset. Download the training and validation datasets here and uncompress the .zip file. To run the training:

$ cd <Repository Root>/CIFAR_10/
$ ln -s <Datasets Root> data
$ python main.py

The pretrained model can be downloaded here. To evaluate the pretrained model:

$ cp <Pretrained Model> <Repository Root>/CIFAR_10/models/
$ python main.py --pretrained models/nin.best.pth.tar --evaluate

ImageNet

I implemented AlexNet for the ImageNet dataset.

Dataset

The training supports loading the ImageNet data through torchvision.
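
With torchvision, the ImageNet splits are typically read from image folders. A minimal sketch of such a pipeline follows; the directory layout (data/train) and the preprocessing values are illustrative assumptions, not the repository's exact settings.

import torch
from torchvision import datasets, transforms
# Sketch of a torchvision ImageFolder pipeline for ImageNet; the folder
# layout and transform values are assumptions.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
train_set = datasets.ImageFolder('data/train', transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    normalize,
]))
train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=256, shuffle=True, num_workers=8)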

If you have installed Caffe, you can download the preprocessed dataset here and uncompress it. To set up the dataset:

$ cd <Repository Root>/ImageNet/networks/
$ ln -s <Datasets Root> data

AlexNet

To train the network:

$ cd <Repository Root>/ImageNet/networks/
$ python main.py # add "--caffe-data" if you are training with the Caffe dataset

The pretrained models can be downloaded here: pretrained with the Caffe dataset; pretrained with torchvision. To evaluate the pretrained model:

$ cp <Pretrained Model> <Repository Root>/ImageNet/networks/
$ python main.py --resume alexnet.baseline.pth.tar --evaluate # add "--caffe-data" if you are evaluating with the Caffe dataset

The training log can be found here: log - Caffe dataset; log - torchvision.

Todo

Notes

Gradient of the scaled sign function

In the paper, the backward gradient through the scaled sign function $\widetilde{W}_i = \alpha \cdot sign(W_i)$, with $\alpha = \frac{1}{n}\sum_j \left|W_j\right|$, is given as

$$\frac{\partial C}{\partial W_i}=\frac{\partial C}{\partial \widetilde{W}_i}\left(\frac{1}{n}+\frac{\partial sign(W_i)}{\partial W_i}\cdot \alpha \right)$$

However, this equation is inaccurate: $\alpha$ depends on every element of $W$, so the chain rule also produces cross terms. The correct backward gradient should be

$$\frac{\partial C}{\partial W_i}=\sum_{j}\frac{\partial C}{\partial \widetilde{W}_j}\cdot\frac{\partial \widetilde{W}_j}{\partial W_i}=\frac{\partial C}{\partial \widetilde{W}_i}\cdot\frac{\partial sign(W_i)}{\partial W_i}\cdot\alpha+\frac{sign(W_i)}{n}\sum_{j}\frac{\partial C}{\partial \widetilde{W}_j}\cdot sign(W_j)$$

Details about this correction can be found in the notes (section 1).
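
As an illustration of the corrected formula, here is a minimal sketch of the forward binarization and the corrected weight gradient for a single filter. It follows the equations above with the usual straight-through estimator for d sign(W)/dW (1 for |W| <= 1, 0 otherwise); it is a sketch of the math, not the repository's actual code, and grad_wtilde stands for dC/dW~.

import torch
def binarize(weight):
    # Forward: W~ = alpha * sign(W), with alpha = mean(|W|) over the filter.
    alpha = weight.abs().mean()
    return alpha * weight.sign()
def corrected_weight_grad(weight, grad_wtilde):
    # Backward for one filter with n elements, following the corrected formula:
    # dC/dW_i = alpha * dsign(W_i)/dW_i * dC/dW~_i
    #           + sign(W_i)/n * sum_j sign(W_j) * dC/dW~_j
    n = weight.numel()
    alpha = weight.abs().mean()
    ste = (weight.abs() <= 1).float()               # straight-through estimator for dsign/dW
    cross = (weight.sign() * grad_wtilde).sum() / n  # contribution through alpha
    return alpha * ste * grad_wtilde + weight.sign() * cross

In practice this per-filter computation would be applied to each output channel of a convolutional weight tensor, since each channel has its own scaling factor.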