Squeeze-and-Excitation Networks <sub>(paper)</sub>

By Jie Hu<sup>[1]</sup>, Li Shen<sup>[2]</sup>, Gang Sun<sup>[1]</sup>.

Momenta<sup>[1]</sup> and University of Oxford<sup>[2]</sup>.

Approach

<div align="center"> <img src="https://github.com/hujie-frank/SENet/blob/master/figures/SE-pipeline.jpg"> </div> <p align="center"> Figure 1: Diagram of a Squeeze-and-Excitation building block. </p> <div align="center">  <img src="https://github.com/hujie-frank/SENet/blob/master/figures/SE-Inception-module.jpg" width="420"> <img src="https://github.com/hujie-frank/SENet/blob/master/figures/SE-ResNet-module.jpg" width="420"> </div> <p align="center"> Figure 2: Schema of SE-Inception and SE-ResNet modules. We set r=16 in all our models. </p>
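The SE block itself consists of only a few operations. A minimal PyTorch sketch of the structure in Figure 1, using the r=16 default set in all our models, might look like the following; the class name and the FC-layer formulation are illustrative, not the released Caffe definition:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation block (Figure 1): global average pooling
    ("squeeze"), a two-layer bottleneck with reduction ratio r
    ("excitation"), and channel-wise rescaling of the input."""

    def __init__(self, channels: int, r: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // r),  # reduce: C -> C/r
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),  # restore: C/r -> C
            nn.Sigmoid(),                        # per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))           # squeeze: (N, C, H, W) -> (N, C)
        w = self.fc(s).view(n, c, 1, 1)  # excitation: per-channel weights
        return x * w                     # scale: reweight each channel
```

The block wraps any (N, C, H, W) feature map, e.g. `SEBlock(256)(torch.randn(1, 256, 56, 56))`, which is what lets it drop into Inception and ResNet modules as in Figure 2.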

Implementation

In this repository, Squeeze-and-Excitation Networks are implemented in Caffe.

Augmentation

| Method | Settings |
| --- | --- |
| Random Mirror | True |
| Random Crop | 8% ~ 100% |
| Aspect Ratio | 3/4 ~ 4/3 |
| Random Rotation | -10° ~ 10° |
| Pixel Jitter | -20 ~ 20 |
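For readers reproducing this pipeline outside Caffe, a rough torchvision approximation of the settings above could look as follows; the 224x224 training crop size and the noise-based interpretation of pixel jitter are assumptions, since the original augmentation was implemented in Caffe:

```python
import torch
import torchvision.transforms as T

# Approximate torchvision version of the augmentation table (assumptions noted).
train_transform = T.Compose([
    T.RandomResizedCrop(224, scale=(0.08, 1.0), ratio=(3/4, 4/3)),  # crop 8%~100%, aspect 3/4~4/3
    T.RandomHorizontalFlip(),                                       # random mirror
    T.RandomRotation(10),                                           # rotation in [-10°, 10°]
    T.PILToTensor(),                                                # uint8 tensor in [0, 255]
    # Pixel jitter: add uniform noise in [-20, 20] to each pixel, then clamp.
    T.Lambda(lambda x: (x.float()
                        + torch.empty_like(x, dtype=torch.float32).uniform_(-20, 20)
                        ).clamp(0, 255).to(torch.uint8)),
])
```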


Trained Models

Table 1. Single-crop validation error (%) on ImageNet-1k (center 224x224 crop from an image resized so that its shorter side is 256). SENet-154 is one of the models we used in the ILSVRC 2017 Image Classification Challenge, where we won 1st place (team name: WMW).

| Model | Top-1 (%) | Top-5 (%) | Size | Caffe Model (GoogleDrive) | Caffe Model (BaiduYun) |
| --- | --- | --- | --- | --- | --- |
| SE-BN-Inception | 23.62 | 7.04 | 46 M | GoogleDrive | BaiduYun |
| SE-ResNet-50 | 22.37 | 6.36 | 107 M | GoogleDrive | BaiduYun |
| SE-ResNet-101 | 21.75 | 5.72 | 189 M | GoogleDrive | BaiduYun |
| SE-ResNet-152 | 21.34 | 5.54 | 256 M | GoogleDrive | BaiduYun |
| SE-ResNeXt-50 (32 x 4d) | 20.97 | 5.54 | 105 M | GoogleDrive | BaiduYun |
| SE-ResNeXt-101 (32 x 4d) | 19.81 | 4.96 | 187 M | GoogleDrive | BaiduYun |
| SENet-154 | 18.68 | 4.47 | 440 M | GoogleDrive | BaiduYun |
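The single-crop protocol from the caption is straightforward to express; a torchvision sketch is below. Mean/std normalization is deliberately left as a placeholder, since the released Caffe models use their own BGR mean subtraction:

```python
import torchvision.transforms as T

# Single-crop evaluation as in Table 1: resize the shorter side to 256,
# then take the center 224x224 crop.
val_transform = T.Compose([
    T.Resize(256),      # shorter side -> 256, aspect ratio preserved
    T.CenterCrop(224),  # central 224x224 crop
    T.ToTensor(),       # float tensor in [0, 1]; add model-specific normalization here
])
```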

Here we obtain better performance than that reported in the paper. We re-trained the SENets described in the paper on a single GPU server with 8 NVIDIA Titan X cards, using a mini-batch size of 256 and an initial learning rate of 0.1, training for more epochs. In contrast, the results reported in the paper were obtained by training the networks with a larger batch size (1024) and learning rate (0.6) across 4 servers.
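In illustrative PyTorch form, the single-server recipe above is roughly the following; the momentum, weight decay, and step schedule are common ImageNet defaults assumed here, not values stated on this page:

```python
import torch
import torchvision

# Sketch of the single-server recipe: mini-batch 256, initial LR 0.1.
# Momentum 0.9, weight decay 1e-4, and the 30-epoch step decay are
# typical ImageNet defaults assumed for illustration.
model = torchvision.models.resnet50()  # stand-in; an SE variant would go here
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)
```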

Third-party re-implementations

  1. Caffe. SE modules are integrated with a modified ResNet-50 that uses stride 2 in the 3x3 convolution instead of the first 1x1 convolution, which obtains better performance: Repository. (The general SE-residual integration pattern is sketched after this list.)
  2. TensorFlow. SE modules are integrated with a pre-activation ResNet-50, following the setup in fb.resnet.torch: Repository.
  3. TensorFlow. A simple TensorFlow implementation of SENets on CIFAR-10: Repository.
  4. MatConvNet. All the released SENets are imported into MatConvNet: Repository.
  5. MXNet. SE modules are integrated with ResNeXt; more architectures are coming soon: Repository.
  6. PyTorch. An implementation of SENets in PyTorch: Repository.
  7. Chainer. An implementation of SENets in Chainer: Repository.
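Most of these re-implementations follow the integration pattern of the SE-ResNet module in Figure 2: apply the SE block to the residual branch before the identity summation. A self-contained PyTorch sketch is below; the names and the 1x1-convolution formulation of the FC layers are illustrative choices, not taken from any of the repositories above:

```python
import torch
import torch.nn as nn

class SEResidualBlock(nn.Module):
    """Residual block with an SE module on the residual branch, applied
    before the identity summation (Figure 2, SE-ResNet)."""

    def __init__(self, channels: int, r: int = 16):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        # SE module written with 1x1 convolutions, equivalent to FC layers.
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                # squeeze
            nn.Conv2d(channels, channels // r, 1),  # reduce: C -> C/r
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // r, channels, 1),  # restore: C/r -> C
            nn.Sigmoid(),                           # channel gates
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.branch(x)
        y = y * self.se(y)       # excitation: rescale residual channels
        return self.relu(x + y)  # identity summation, then ReLU
```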

Citation

If you use Squeeze-and-Excitation Networks in your research, please cite the paper:

@inproceedings{hu2018senet,
  title={Squeeze-and-Excitation Networks},
  author={Jie Hu and Li Shen and Gang Sun},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
  year={2018}
}