# Synaptic OPerations (SyOPs) counter for spiking neural networks

This script is designed to compute the theoretical amount of synaptic operations in spiking neural networks (SNNs), including accumulate (AC) and multiply-accumulate (MAC) operations. It can also compute the number of parameters and print the per-layer computational cost of a given network. This tool is still under construction. Comments, issues, contributions, and collaborations are all welcome!
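
The distinction matters because a binary spike only gates an addition, while a real-valued activation also requires a multiplication. A rough back-of-the-envelope illustration of the difference (the variables below are hypothetical and not part of the syops API):

```python
# Hypothetical count for one fully connected layer: real-valued inputs (as in
# ANNs) cost one MAC per connection; binary spikes (as in SNNs) cost one AC
# per connection, and only when a spike is actually present.
n_in, n_out = 784, 256
firing_rate = 0.1                   # average fraction of inputs spiking per step

macs = n_in * n_out                 # dense real-valued input: 200,704 MACs
acs = firing_rate * n_in * n_out    # sparse binary input: ~20,070 ACs on average
print(macs, acs)
```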

Supported layers:

- Conv1d/2d/3d (including grouping)
- ConvTranspose1d/2d/3d (including grouping)
- Linear
- BatchNorm1d/2d/3d, GroupNorm, InstanceNorm1d/2d/3d, LayerNorm
- Activations (ReLU, PReLU, ELU, LeakyReLU, ReLU6)
- Poolings (AvgPool1d/2d/3d, MaxPool1d/2d/3d, and adaptive variants)
- Upsample

Experimental support:

- RNN, LSTM, GRU (NLH layout is assumed)
- RNNCell, LSTMCell, GRUCell
- MultiheadAttention

Requirements: PyTorch >= 1.1, torchvision >= 0.3, spikingjelly <= 0.0.0.0.12

## Usage

### Install the latest version

From PyPI:

```bash
pip install syops
```

From this repository:

```bash
pip install --upgrade git+https://github.com/iCGY96/syops-counter
```

### Example

```python
import torch
from spikingjelly.activation_based import surrogate, neuron, functional
from spikingjelly.activation_based.model import spiking_resnet
from syops import get_model_complexity_info

dataloader = ...
with torch.cuda.device(0):
    net = spiking_resnet.spiking_resnet18(pretrained=True, spiking_neuron=neuron.IFNode,
                                          surrogate_function=surrogate.ATan(), detach_reset=True)
    # With as_strings=True, `ops` bundles the formatted totals for overall
    # synaptic operations, ACs, and MACs; unpack it to report them separately.
    ops, params = get_model_complexity_info(net, (3, 224, 224), dataloader, as_strings=True,
                                            print_per_layer_stat=True, verbose=True)
    syops, acs, macs = ops
    print('{:<30}  {:<8}'.format('Computational complexity OPs:', syops))
    print('{:<30}  {:<8}'.format('Computational complexity ACs:', acs))
    print('{:<30}  {:<8}'.format('Computational complexity MACs:', macs))
    print('{:<30}  {:<8}'.format('Number of parameters:', params))
```
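
The `dataloader = ...` placeholder stands for any iterable of input batches: the counter runs real data through the network so that spike activity (and hence the AC count) reflects actual firing rates. A minimal stand-in for quick experiments might look like the sketch below; the dummy tensors are illustrative and not part of the repo:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Random images shaped like the (3, 224, 224) inputs above; labels are unused
# for complexity measurement but keep the usual (input, target) batch layout.
dummy_images = torch.randn(8, 3, 224, 224)
dummy_labels = torch.zeros(8, dtype=torch.long)
dataloader = DataLoader(TensorDataset(dummy_images, dummy_labels), batch_size=4)
```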

## Benchmark

| Model | Input Resolution | Params(M) | ACs(G) | MACs(G) | Energy (mJ) | Acc@1 | Acc@5 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| spiking_resnet18 | 224x224 | 11.69 | 0.10 | 0.14 | 0.734 | 62.32 | 84.05 |
| sew_resnet18 | 224x224 | 11.69 | 0.50 | 2.75 | 13.10 | 63.18 | 84.53 |
| DSNN18 (AAP) | 224x224 | 11.69 | 1.69 | 0.20 | 2.44 | 63.46 | 85.14 |
| resnet18 | 224x224 | 11.69 | 0.00 | 1.82 | 8.372 | 69.76 | 89.08 |
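
The Energy column is consistent with the widely used 45 nm CMOS estimates of 4.6 pJ per MAC and 0.9 pJ per AC; the helper below is an observation about the table, not a function from the repo:

```python
# Energy (mJ) reconstructed from the table's giga-operation counts, assuming
# 0.9 pJ/AC and 4.6 pJ/MAC: E = 0.9 * ACs(G) + 4.6 * MACs(G).
def energy_mj(acs_g: float, macs_g: float) -> float:
    return 0.9 * acs_g + 4.6 * macs_g

print(energy_mj(0.10, 0.14))  # 0.734 -> spiking_resnet18
print(energy_mj(0.00, 1.82))  # 8.372 -> resnet18
```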

## Citation

If you find our work useful for your research, please consider giving it a star :star: and a citation :beer::

```bibtex
@article{chen2023training,
  title={Training Full Spike Neural Networks via Auxiliary Accumulation Pathway},
  author={Chen, Guangyao and Peng, Peixi and Li, Guoqi and Tian, Yonghong},
  journal={arXiv preprint arXiv:2301.11929},
  year={2023}
}
```

## Acknowledgements

This repository is developed based on [ptflops](https://github.com/sovrasov/flops-counter.pytorch).