SNN_Calibration

PyTorch Implementation of Spiking Neural Networks Calibration, ICML 2021

Paper Version 1: [PMLR], [arXiv].

Paper Version 2: [IJCV].

When converting ANNs to SNNs, conventional methods minimize the distance in parameter space, while we focus on minimizing the distance in network output space:

(Introduction figure: minimization in parameter space vs. minimization in network output space.)
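In symbols (notation ours, not from the paper): conventional conversion keeps the converted weights close to the original ones, while calibration keeps the two networks' outputs close:

```latex
% Conventional ANN-SNN conversion: stay close in parameter space
\min_{W_{\mathrm{SNN}}} \; \lVert W_{\mathrm{SNN}} - W_{\mathrm{ANN}} \rVert^{2}

% SNN Calibration: stay close in network output space
\min_{W_{\mathrm{SNN}}} \; \lVert f_{\mathrm{SNN}}(x;\, W_{\mathrm{SNN}}) - f_{\mathrm{ANN}}(x;\, W_{\mathrm{ANN}}) \rVert^{2}
```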

Feature Comparison of SNN calibration:

| Features | SNN Direct Training | ANN-SNN Conversion | SNN Calibration |
| --- | --- | --- | --- |
| Accuracy (T < 100) | High | Low | High |
| Scalability to ImageNet | Tiny | Large | Large |
| Training Speed | Slow | Fast | Fast |
| # Required Data | Full set (1.2M for ImageNet) | ~1000 | ~1000 |
| Inference Speed | Fast | Slow | Fast |

Requirements

PyTorch 1.8

For ImageNet experiments, please make sure that you can initialize a distributed environment.

For CIFAR experiments, one GPU would suffice.

Update (May 31, 2022): New version of the code

We released a new version of the paper and will update the code to match the experiments in that paper.

Update (Jan 14, 2022): ImageNet experiments

For ImageNet experiments, please first download the checkpoints from Google Drive.

We recommend initializing a distributed learning environment and utilizing multi-GPU calibration.

For reproducibility, 8 GPUs are used per run, and distributed environments are highly encouraged.

For example:

```shell
python main_cal_imagenet.py --arch res34 --T 32 --usebn --calib light --dpath PATH/TO/DATA
```
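Conceptually, the light pipeline corrects each layer's bias using the expected error between the ANN's activations and the SNN's spike-rate outputs on the small calibration set. Below is a toy sketch of that idea; the function name, the data, and the plain mean-error update are our simplification for illustration, not the repo's actual code:

```python
import numpy as np

def calibrate_bias(bias, ann_out, snn_out):
    """Shift a layer's bias by the expected ANN-SNN output error.

    ann_out, snn_out: (batch, channels) activations of the same layer
    from the source ANN and the converted SNN on a calibration batch.
    """
    residual = (ann_out - snn_out).mean(axis=0)  # per-channel mean error
    return bias + residual

# Toy check: if the SNN systematically undershoots by 0.5,
# the calibrated bias absorbs that offset.
ann = np.array([[1.0, 2.0], [3.0, 4.0]])
snn = ann - 0.5
new_bias = calibrate_bias(np.zeros(2), ann, snn)
```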

Pre-training ANN on CIFAR10&100

Train an ANN model with main_train_cifar.py

```shell
python main_train_cifar.py --dataset CIFAR10 --arch VGG16 --usebn
```

Pre-trained results:

| Dataset | Model | Random Seed | Accuracy |
| --- | --- | --- | --- |
| CIFAR10 | VGG16 | 1000 | 95.76 |
| CIFAR10 | ResNet-20 | 1000 | 95.68 |
| CIFAR100 | VGG16 | 1000 | 77.98 |
| CIFAR100 | ResNet-20 | 1000 | 76.52 |

SNN Calibration on CIFAR10&100

Calibrate an SNN with main_cal_cifar.py.

```shell
python main_cal_cifar.py --dataset CIFAR10 --arch VGG16 --T 16 --usebn --calib advanced --dpath PATH/TO/DATA
```

`--T` sets the number of time steps, and `--calib` selects the calibration method: use `none`, `light`, or `advanced` for experiments.
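For intuition on `--T`: an integrate-and-fire neuron with reset-by-subtraction fires at a rate that approximates a clipped ReLU of its input, and a larger `T` resolves finer activation values. A toy sketch in plain Python (our illustration, not code from this repo):

```python
def if_neuron_rate(x, T=16, v_th=1.0):
    """Firing rate of an integrate-and-fire neuron driven by a
    constant input x for T time steps (reset-by-subtraction)."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += x             # integrate the input
        if v >= v_th:      # threshold crossing -> emit a spike
            spikes += 1
            v -= v_th      # soft reset keeps the residual charge
    return spikes / T      # rate in [0, 1], approx. min(max(x / v_th, 0), 1)
```

With `T = 16`, an input of 0.25 yields exactly 4 spikes, i.e. a rate of 0.25; inputs at or above the threshold saturate at a rate of 1.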

The calibration will run 5 times, and return the mean accuracy as well as the standard deviation.
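Aggregating the five runs amounts to the following (the accuracy values here are made up for illustration, not results from the paper):

```python
import statistics

accs = [93.30, 93.25, 93.41, 93.28, 93.36]  # hypothetical 5-run accuracies
mean_acc = statistics.mean(accs)
std_acc = statistics.stdev(accs)  # sample standard deviation over runs
print(f"{mean_acc:.2f} +/- {std_acc:.2f}")
```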

Example results:

| Architecture | Dataset | T | Random Seed | Calibration | Mean Acc | Std. |
| --- | --- | --- | --- | --- | --- | --- |
| VGG16 | CIFAR10 | 16 | 1000 | None | 64.52 | 4.12 |
| VGG16 | CIFAR10 | 16 | 1000 | Light | 93.30 | 0.08 |
| VGG16 | CIFAR10 | 16 | 1000 | Advanced | 93.65 | 0.25 |
| ResNet-20 | CIFAR10 | 16 | 1000 | None | 67.88 | 3.63 |
| ResNet-20 | CIFAR10 | 16 | 1000 | Light | 93.89 | 0.20 |
| ResNet-20 | CIFAR10 | 16 | 1000 | Advanced | 94.33 | 0.12 |
| VGG16 | CIFAR100 | 16 | 1000 | None | 2.69 | 0.76 |
| VGG16 | CIFAR100 | 16 | 1000 | Light | 65.26 | 0.99 |
| VGG16 | CIFAR100 | 16 | 1000 | Advanced | 70.91 | 0.65 |
| ResNet-20 | CIFAR100 | 16 | 1000 | None | 39.27 | 2.85 |
| ResNet-20 | CIFAR100 | 16 | 1000 | Light | 73.89 | 0.15 |
| ResNet-20 | CIFAR100 | 16 | 1000 | Advanced | 74.48 | 0.16 |

If you find this repo helpful, please consider citing the following articles:

```
@article{li2022converting,
  title={Converting Artificial Neural Networks to Spiking Neural Networks via Parameter Calibration},
  author={Li, Yuhang and Deng, Shikuang and Dong, Xin and Gu, Shi},
  journal={arXiv preprint arXiv:2205.10121},
  year={2022}
}

@article{li2024error,
  title={Error-Aware Conversion from ANN to SNN via Post-training Parameter Calibration},
  author={Li, Yuhang and Deng, Shikuang and Dong, Xin and Gu, Shi},
  journal={International Journal of Computer Vision},
  pages={1--24},
  year={2024},
  publisher={Springer}
}
```