SlimConv

This repository contains the PyTorch code for the paper SlimConv: Reducing Channel Redundancy in Convolutional Neural Networks by Features Recombining (TIP 2021).

Requirements

Pretrained models on ImageNet

Some pretrained models are released on Google Drive, including Sc-ResNet-50, Sc-ResNet-50 (cosine), Sc-ResNet-101, Sc-ResNet-50 (k=8/3), and Sc-ResNeXt-101 (32x3d).

Note

You can use our module in your own tasks to reduce parameters and FLOPs while improving performance.

Just replace each 3x3 convolution with slim_conv_3x3, and reduce the input channel number of the following conv layer accordingly.
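To make the idea concrete, here is a minimal, hypothetical sketch of the feature-recombining principle: SE-style channel weights produce two complementary copies of the input, one copy is flipped along the channel axis, and each copy is folded in half by adding its two channel halves, so the 3x3 convolution sees half as many input channels. The class name SlimConvSketch, the reduction ratio 16, and the single-path simplification are all assumptions for illustration; this is not the repository's slim_conv_3x3.

```python
import torch
import torch.nn as nn

class SlimConvSketch(nn.Module):
    """Illustrative sketch of channel recombination (not the official module)."""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        # SE-style channel weighting (reduction ratio 16 is an assumption)
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_channels, in_channels // 16, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels // 16, in_channels, 1),
            nn.Sigmoid(),
        )
        # After folding, the 3x3 conv only needs half the input channels
        self.conv = nn.Conv2d(in_channels // 2, out_channels, 3, padding=1)

    def forward(self, x):
        w = self.fc(x)                       # per-channel weights in (0, 1)
        top = x * w                          # weighted copy
        bot = torch.flip(x * (1 - w), [1])   # complementary copy, channels flipped
        c = x.shape[1] // 2
        # Fold each copy in half: add its first and second channel halves
        top = top[:, :c] + top[:, c:]
        bot = bot[:, :c] + bot[:, c:]
        # The real module convolves both paths separately and concatenates;
        # a single path is enough to show where the channel saving comes from.
        return self.conv(top + bot)
```

Because the module's output channel count shrinks relative to a plain 3x3 conv block, the next layer in the network must be declared with the correspondingly smaller input channel number, as noted above.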

Comparison with SOTA on ImageNet

Y: Yes, N: No. FLOPs are counted with the tool supplied by DMCP.

| Method | Manual | Top-1 Error (%) | FLOPs (10^9) | Params (10^6) |
|---|---|---|---|---|
| Sc-ResNeXt-101 (32x3d, k=2) (ours) | Y | 21.18 | 4.58 | 23.70 |
| DMCP-ResNet-50 | N | 23.50 | 2.80 | 23.18 |
| Sc-ResNet-50 (k=4/3) (ours) | Y | 22.77 | 2.65 | 16.76 |
| DMCP-ResNet-50 | N | 25.90 | 1.10 | 14.47 |
| Ghost-ResNet-50 (s=2) | Y | 24.99 | 2.15 | 13.95 |
| Sc-ResNet-50 (k=8/3) (ours) | Y | 24.48 | 1.88 | 12.10 |

Compressed ratio of ResNet-50 with SlimConv on CIFAR-100

Just adjust the k of SlimConv. (Figure: compressed ratio of ResNet-50 for different values of k.)

Citation

If you use our code or method in your work, please cite the following:

@article{qiu2021slimconv,
  title={SlimConv: Reducing Channel Redundancy in Convolutional Neural Networks by Features Recombining},
  author={Qiu, Jiaxiong and Chen, Cai and Liu, Shuaicheng and Zhang, Hengyu and Zeng, Bing},
  journal={IEEE Transactions on Image Processing},
  year={2021},
  publisher={IEEE}
}

Please direct any questions to Jiaxiong Qiu at qiujiaxiong727@gmail.com.