Squeeze-and-Excitation Networks

This directory contains code to evaluate the classification models released by the authors of the paper:

Squeeze-and-Excitation Networks,
Jie Hu, Li Shen, Gang Sun, arXiv 2017

This code is based on the original implementation (which uses Caffe).

Pretrained Models

Each of the Squeeze-and-Excitation networks released by the authors has been imported into MatConvNet and can be downloaded here:

SE Networks

The run_se_benchmarks.m script evaluates each of these models on the ImageNet validation set. It downloads the models automatically if you have not already done so (note that these evaluations require a copy of the ImageNet data). The results of the evaluations are given below. There are minor differences from the original scores (listed under official) due to variations in preprocessing (full details of the evaluation can be found here):

| model | top-1 error (official) | top-5 error (official) |
|---|---|---|
| SE-ResNet-50-mcn | 22.30 (22.37) | 6.30 (6.36) |
| SE-ResNet-101-mcn | 21.59 (21.75) | 5.81 (5.72) |
| SE-ResNet-152-mcn | 21.38 (21.34) | 5.60 (5.54) |
| SE-BN-Inception-mcn | 24.16 (23.62) | 7.35 (7.04) |
| SE-ResNeXt-50-32x4d-mcn | 21.01 (20.97) | 5.58 (5.54) |
| SE-ResNeXt-101-32x4d-mcn | 19.73 (19.81) | 4.98 (4.96) |
| SENet-mcn | 18.67 (18.68) | 4.50 (4.47) |

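To make the evaluation setup concrete, the sketch below classifies a single image with one of the imported models using MatConvNet's DagNN wrapper. The model filename and the variable names (`data`, `prob`) are assumptions and may differ in the released files; the preprocessing follows the standard MatConvNet pretrained-model recipe rather than the exact pipeline used by run_se_benchmarks.m.

```matlab
% Hedged sketch: classify one image with an imported SE model.
% The .mat filename and the 'data'/'prob' variable names are assumptions.
net = dagnn.DagNN.loadobj(load('SE-ResNet-50-mcn.mat')) ;
net.mode = 'test' ;

im = single(imread('peppers.png')) ;  % any RGB test image
im_ = imresize(im, net.meta.normalization.imageSize(1:2)) ;
im_ = bsxfun(@minus, im_, net.meta.normalization.averageImage) ;

net.eval({'data', im_}) ;
scores = squeeze(net.vars(net.getVarIndex('prob')).value) ;
[bestScore, best] = max(scores) ;
fprintf('predicted class %d (score %.3f)\n', best, bestScore) ;
```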
There may be some difference in how the Inception network should be preprocessed relative to the others (this model exhibits a noticeable degradation). To give some idea of the relative computational burden of each model, estimates are provided below:

| model | input size | param memory | feature memory | flops |
|---|---|---|---|---|
| SE-ResNet-50 | 224 x 224 | 107 MB | 103 MB | 4 GFLOPs |
| SE-ResNet-101 | 224 x 224 | 189 MB | 155 MB | 8 GFLOPs |
| SE-ResNet-152 | 224 x 224 | 255 MB | 220 MB | 11 GFLOPs |
| SE-BN-Inception | 224 x 224 | 46 MB | 43 MB | 2 GFLOPs |
| SE-ResNeXt-50-32x4d | 224 x 224 | 105 MB | 132 MB | 4 GFLOPs |
| SE-ResNeXt-101-32x4d | 224 x 224 | 187 MB | 197 MB | 8 GFLOPs |
| SENet | 224 x 224 | 440 MB | 347 MB | 21 GFLOPs |

Each estimate corresponds to a forward pass with a single-element batch. This table was generated with convnet-burden - the repo lists the assumptions used to produce the estimates. Clicking on a model name should give a more detailed breakdown.

Dependencies

This code uses the following two modules:

Both of these can be set up directly with vl_contrib (i.e. run vl_contrib install <module-name>, then vl_contrib setup <module-name>).

Notes

Installation

The easiest way to use this module is to install it with the vl_contrib package manager:

```matlab
vl_contrib('install', 'mcnSENets') ;
vl_contrib('setup', 'mcnSENets') ;
vl_contrib('test', 'mcnSENets') ; % optional
```

The ordering of the ImageNet labels differs from the standard ordering commonly found in Caffe, PyTorch etc. These are remapped automatically in the evaluation code. The mapping between the synset indices can be found here.
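Such a remapping amounts to applying a permutation of the 1000 class indices to the model's output. A minimal sketch, assuming the mapping is stored as a 1000-element permutation vector (the file name `mapping.mat` and variable name `labelMap` are illustrative, not the actual file shipped with this repo):

```matlab
% Hedged sketch: remap class scores from the model's label ordering to the
% standard Caffe/PyTorch synset ordering via an index permutation.
% 'mapping.mat' and 'labelMap' are illustrative names.
tmp = load('mapping.mat') ;               % assumed to hold labelMap: 1000x1 permutation
scoresStandard = scores(tmp.labelMap) ;   % scores: 1000x1 vector of class scores
```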