SGLFormer: Spiking Global-Local-Fusion Transformer with high performance (paper: https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2024.1371290)

Our models achieve state-of-the-art performance among directly trained SNNs on several datasets (e.g., 83.73% on ImageNet, 96.76% on CIFAR10, 82.26% on CIFAR100, 82.9% on CIFAR10-DVS) as of March 2024.

Reference

If you find this repo useful, please consider citing:

@ARTICLE{10.3389/fnins.2024.1371290,
  AUTHOR={Zhang, Han and Zhou, Chenlin and Yu, Liutao and Huang, Liwei and Ma, Zhengyu and Fan, Xiaopeng and Zhou, Huihui and Tian, Yonghong},
  TITLE={SGLFormer: Spiking Global-Local-Fusion Transformer with high performance},
  JOURNAL={Frontiers in Neuroscience},
  VOLUME={18},
  YEAR={2024},
  URL={https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2024.1371290},
  DOI={10.3389/fnins.2024.1371290},
  ISSN={1662-453X}
}

Our code is based on ...

The code will be made public after the paper is published.

Results on ImageNet-1K

| Model | Resolution | T | Param. | Top-1 Acc. (%) |
| --- | --- | --- | --- | --- |
| Swin Transformer | 224x224 | - | 88M | 83.5 |
| SGLFormer-8-384 | 224x224 | 4 | 16.25M | 79.44 |
| SGLFormer-8-512 | 224x224 | 4 | 28.67M | 82.28 |
| SGLFormer-8-512* | 224x224 | 4 | 28.67M | 81.93 |
| SGLFormer-8-768* | 224x224 | 4 | 64.02M | 83.73 |

Results on CIFAR10/CIFAR100

| Model | T | Param. | CIFAR10 Top-1 Acc. (%) | CIFAR100 Top-1 Acc. (%) |
| --- | --- | --- | --- | --- |
| SGLFormer-4-384 | 4 | 8.85/8.88M | 96.76 | 82.26 |

Results on CIFAR10-DVS/DVS128

| Model | Dataset | T | Param. | Top-1 Acc. (%) |
| --- | --- | --- | --- | --- |
| SGLFormer-3-256 | CIFAR10-DVS | 10 | 2.48M | 82.9 |
| SGLFormer-3-256 | CIFAR10-DVS | 16 | 2.58M | 82.6 |
| SGLFormer-3-256 | DVS128 | 10 | 2.08M | 97.2 |
| SGLFormer-3-256 | DVS128 | 16 | 2.17M | 98.6 |

Requirements

- timm==0.3.2 (for ImageNet) / timm==0.6.12 (for the other datasets)
- cupy==9.6.0
- torch==1.10.0
- cuda==11.3.1
- cudnn==8.2.1
- spikingjelly==0.0.0.0.12
- pyyaml==5.3.1
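
To catch version mismatches early (e.g., the two different timm pins), a small check like the one below can print the installed versions of the pinned packages. This helper is an assumption for illustration, not part of this repo; note that CUDA-specific cupy wheels may register under a different distribution name (e.g., cupy-cuda113).

```python
# check_env.py - hypothetical helper, not part of the SGLFormer repo.
# Prints installed versions of the pinned dependencies so mismatches
# are visible before training starts.
from importlib.metadata import version, PackageNotFoundError

PINNED = {
    "timm": "0.3.2 (ImageNet) / 0.6.12 (others)",
    "torch": "1.10.0",
    "cupy": "9.6.0",          # may be installed as a CUDA-specific wheel, e.g. cupy-cuda113
    "spikingjelly": "0.0.0.0.12",
    "pyyaml": "5.3.1",
}

for pkg, expected in PINNED.items():
    try:
        print(f"{pkg}: installed {version(pkg)}, expected {expected}")
    except PackageNotFoundError:
        print(f"{pkg}: NOT INSTALLED, expected {expected}")
```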

Data preparation: organize ImageNet into the following folder structure (you can extract ImageNet with this script); a minimal loading sketch follows the folder tree below.

│imagenet/
├──train/
│  ├── n01440764
│  │   ├── n01440764_10026.JPEG
│  │   ├── n01440764_10027.JPEG
│  │   ├── ......
│  ├── ......
├──val/
│  ├── n01440764
│  │   ├── ILSVRC2012_val_00000293.JPEG
│  │   ├── ILSVRC2012_val_00002138.JPEG
│  │   ├── ......
│  ├── ......
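
With this layout in place, the train/ and val/ splits can be consumed by a standard ImageFolder-style loader. The sketch below is illustrative only: the root path "imagenet/" and the transform choices are assumptions, and the repo's own training scripts may instead build their data pipeline through timm.

```python
# Minimal sketch of loading the ImageNet layout shown above with torchvision.
# Paths and transforms are illustrative; the repo's training code may differ.
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),   # matches the 224x224 resolution in the tables
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("imagenet/train", transform=transform)
val_set = datasets.ImageFolder("imagenet/val", transform=transform)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=64,
                                           shuffle=True, num_workers=8)
val_loader = torch.utils.data.DataLoader(val_set, batch_size=64,
                                         shuffle=False, num_workers=8)

images, labels = next(iter(val_loader))
print(images.shape)  # torch.Size([64, 3, 224, 224])
```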