# SlimConv
This repository contains the PyTorch code for the paper "SlimConv: Reducing Channel Redundancy in Convolutional Neural Networks by Features Recombining" (TIP 2021).
## Requirements
## Pretrained models on ImageNet
Some pretrained models are released on Google Drive, including Sc-ResNet-50, Sc-ResNet-50 (cosine), Sc-ResNet-101, Sc-ResNet-50 (k=8/3), and Sc-ResNeXt-101 (32x3d).
## Note
You can use our module in your own tasks to reduce parameters and FLOPs while improving performance. Just replace each 3x3 convolution with `slim_conv_3x3` and change the input channel number of the next convolutional layer, as in the sketch below.
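A minimal sketch of the swap, assuming `slim_conv_3x3(in_channels, k)` outputs roughly `in_channels / k` channels (the import path, signature, and channel arithmetic here are illustrative; check the repo source for the actual definition):

```python
import torch.nn as nn
from slim_conv import slim_conv_3x3  # illustrative import path; see this repo

class SlimBottleneck(nn.Module):
    """Illustrative bottleneck whose 3x3 conv is swapped for SlimConv."""

    def __init__(self, channels=256, k=4/3):
        super().__init__()
        # Before: nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        # After: SlimConv; assuming it emits int(channels / k) channels,
        # the next layer's in_channels must shrink to match.
        self.conv2 = slim_conv_3x3(channels, k)
        self.conv3 = nn.Conv2d(int(channels / k), channels, kernel_size=1, bias=False)

    def forward(self, x):
        return self.conv3(self.conv2(x))
```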
## Comparison with SOTA on ImageNet
Y: Yes, N: No. We use the tool supplied by DMCP to count FLOPs here.
| Method | Manual | Top-1 Error (%) | FLOPs (10^9) | Params (10^6) |
|---|---|---|---|---|
| Sc-ResNeXt-101 (32x3d, k=2) (ours) | Y | 21.18 | 4.58 | 23.70 |
| DMCP-ResNet-50 | N | 23.50 | 2.80 | 23.18 |
| Sc-ResNet-50 (k=4/3) (ours) | Y | 22.77 | 2.65 | 16.76 |
| DMCP-ResNet-50 | N | 25.90 | 1.10 | 14.47 |
| Ghost-ResNet-50 (s=2) | Y | 24.99 | 2.15 | 13.95 |
| Sc-ResNet-50 (k=8/3) (ours) | Y | 24.48 | 1.88 | 12.10 |
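The DMCP counter itself is not bundled here; as a rough sanity check, a generic profiler such as `thop` reports multiply-accumulate counts in the same 10^9 range (a sketch, using torchvision's stock ResNet-50 as a stand-in model):

```python
import torch
from torchvision.models import resnet50
from thop import profile  # pip install thop

model = resnet50()
dummy = torch.randn(1, 3, 224, 224)  # standard ImageNet input
macs, params = profile(model, inputs=(dummy,))
print(f"MACs: {macs / 1e9:.2f}e9, Params: {params / 1e6:.2f}e6")
```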
## Compression ratio of ResNet-50 with SlimConv on CIFAR-100
Just adjust the compression factor k of SlimConv, for example:
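A hedged sketch; `sc_resnet50` is a hypothetical builder standing in for the repo's model constructor, assumed to forward `k` to every SlimConv unit:

```python
from models import sc_resnet50  # hypothetical import; see the repo's model files

# Larger k compresses more: fewer channels survive recombination,
# so parameters and FLOPs drop at some cost in accuracy.
model_mild = sc_resnet50(k=4/3)   # milder compression (cf. 16.76M params on ImageNet)
model_heavy = sc_resnet50(k=8/3)  # heavier compression (cf. 12.10M params on ImageNet)
```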
## Citation
If you use our code or method in your work, please cite the following:
```bibtex
@article{qiu2021slimconv,
  title={SlimConv: Reducing Channel Redundancy in Convolutional Neural Networks by Features Recombining},
  author={Qiu, Jiaxiong and Chen, Cai and Liu, Shuaicheng and Zhang, Hengyu and Zeng, Bing},
  journal={IEEE Transactions on Image Processing},
  year={2021},
  publisher={IEEE}
}
```
Please direct any questions to Jiaxiong Qiu at qiujiaxiong727@gmail.com.