SSR

Structured Sparsity Regularization

We propose a novel filter pruning scheme, termed Structured Sparsity Regularization (SSR), to simultaneously speed up the computation and reduce the memory overhead of CNNs. The pruned models can be well supported by various off-the-shelf deep learning libraries.
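
The idea behind structured sparsity regularization is to add a group-wise penalty over whole filters to the training objective, so that entire filters are driven toward zero and can then be removed. Below is a minimal illustrative sketch of such an l2,1 group-sparsity penalty in PyTorch; the function and hyper-parameter names (`group_sparsity_penalty`, `lambda_ssr`) are placeholders for illustration and are not this repository's implementation.

```python
# Illustrative sketch only: a group-wise (l2,1) sparsity penalty over conv filters,
# added to the usual task loss during training.
import torch
import torch.nn as nn

def group_sparsity_penalty(model: nn.Module):
    """Sum of per-filter l2 norms (an l2,1 norm) over all Conv2d layers."""
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            # weight shape: (out_channels, in_channels, kH, kW);
            # each output filter is treated as one group.
            w = m.weight.view(m.weight.size(0), -1)
            penalty = penalty + w.norm(dim=1).sum()
    return penalty

# Inside the training loop (lambda_ssr is a hypothetical trade-off weight):
#   loss = criterion(model(x), y) + lambda_ssr * group_sparsity_penalty(model)
```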

Citation

If you find our project useful in your research, please consider citing:

@article{lin2019towards,
  title={Towards Compact ConvNets via Structure-Sparsity Regularized Filter Pruning},
  author={Lin, Shaohui and Ji, Rongrong and Li, Yuchao and Deng, Cheng and Li, Xuelong},
  journal={arXiv preprint arXiv:1901.07827},
  year={2019}
}

Running

1. Download the dataset (MNIST)

python dataset/download_and_convert_mnist.py 

2. Training and testing (see the filter-selection sketch below)

./run.sh
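
After training with the sparsity penalty, filters whose weights have been pushed close to zero can be removed to obtain the compact model. The following is a hedged sketch of such a selection step; the threshold value and function name are assumptions for illustration, not this repository's code.

```python
# Illustrative sketch only: pick the output filters of a conv layer whose
# l2 norm stays above a small threshold; the rest are candidates for pruning.
import torch
import torch.nn as nn

def filters_to_keep(conv: nn.Conv2d, threshold: float = 1e-3):
    """Indices of output filters whose l2 norm exceeds the threshold."""
    norms = conv.weight.detach().view(conv.weight.size(0), -1).norm(dim=1)
    return (norms > threshold).nonzero(as_tuple=True)[0]
```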

Experimental results

| Method | #Filter/Node | FLOPs | #Param. | CPU (ms) | Speedup | Top-1 Err. ↑ |
|--------|--------------|-------|---------|----------|---------|--------------|
| LeNet | 20-50-500 | 2.3M | 0.43M | 26.4 | - | 0% |
| SSL [23] | 3-15-175 | 162K | 45K | 7.3 | 3.62× | 0.05% |
| SSL [23] | 2-11-134 | 91K | 26K | 6.0 | 4.40× | 0.20% |
| TE [42] | 2-12-127 | 95K | 27K | 5.7 | 4.62× | 0.02% |
| TE [42] | 2-7-99 | 65K | 13K | 5.5 | 4.80× | 0.20% |
| CGES [57] | - | 332K | 156K | - | - | 0.01% |
| CGES+ [57] | - | - | 43K | - | - | 0.04% |
| GSS [43] | 3-11-109 | 119K | 21K | 6.7 | 3.94× | 0.08% |
| GSS [43] | 3-8-82 | 95K | 12K | 5.6 | 4.71× | 0.20% |
| SSR-L2,1 | 3-11-108 | 118K | 21K | 6.6 | 4.00× | 0.05% |
| SSR-L2,1 | 2-8-77 | 67K | 11K | 4.8 | 5.50× | 0.18% |

Note

[23] W. Wen, C. Wu, Y. Wang, et al. Learning structured sparsity in deep neural networks. In NIPS, 2016.

[42] P. Molchanov, S. Tyree, T. Karras, et al. Pruning convolutional neural networks for resource efficient inference. In ICLR, 2017.

[43] A. Torfi and R. A. Shirvani. Attention-based guided structured sparsity of deep neural networks. arXiv preprint arXiv:1802.09902, 2018.

[57] J. Yoon and S. J. Hwang. Combined group and exclusive sparsity for deep neural networks. In ICML, 2017.