Gradually Updated Neural Networks for Large-Scale Image Recognition

Torch implementation for gradually updated neural networks:
Gradually Updated Neural Networks for Large-Scale Image Recognition
Siyuan Qiao, Zhishuai Zhang, Wei Shen, Bo Wang, Alan Yuille
In Thirty-fifth International Conference on Machine Learning (ICML), 2018.

The code is built on fb.resnet.torch.

@inproceedings{Gunn,
   title = {Gradually Updated Neural Networks for Large-Scale Image Recognition},
   author = {Siyuan Qiao and Zhishuai Zhang and Wei Shen and Bo Wang and Alan L. Yuille},
   booktitle = {International Conference on Machine Learning (ICML)},
   year = {2018}
}

Introduction

State-of-the-art network architectures usually increase depth by cascading convolutional layers or building blocks. Gradually Updated Neural Networks (GUNN) offer an alternative way to increase depth: they introduce computation orderings over the channels within convolutional layers or blocks and gradually compute the outputs channel group by channel group. The added orderings increase the depth and the learning capacity of the networks without any additional computation cost, and they also eliminate the overlap singularities, so the networks converge faster and perform better.

<img src="intro.png"/>
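To make the idea concrete, below is a minimal Lua/Torch sketch of the channel-wise gradual update described above. It is illustrative only, not the repository's actual GUNN module: the helper name `gradualUpdate`, the segment size, and the toy dimensions are assumptions made for the example.

```lua
require 'nn'

-- Sketch of the channel-wise gradual update (illustrative, not the repo's module).
-- The C channels are split into K segments; each segment is recomputed from the
-- current feature map, which already holds the previously updated segments, so
-- later channels effectively sit deeper in the computation at no extra cost.
local function gradualUpdate(convs, input, segment)
   -- convs:   table of K convolution modules, each mapping C channels -> `segment` channels
   -- input:   tensor of size (N, C, H, W)
   -- segment: channels per segment, with C = K * segment
   local x = input:clone()
   for k = 1, #convs do
      local first = (k - 1) * segment + 1
      -- recompute only the k-th group of channels from the partially updated x
      x:narrow(2, first, segment):copy(convs[k]:forward(x))
   end
   return x
end

-- toy usage: 16 channels updated in 4 segments of 4 channels each
local C, K, seg = 16, 4, 4
local convs = {}
for k = 1, K do
   convs[k] = nn.SpatialConvolution(C, seg, 3, 3, 1, 1, 1, 1)
end
local y = gradualUpdate(convs, torch.randn(2, C, 8, 8), seg)
print(y:size())  -- still (2, 16, 8, 8), but computed group by group
```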

Usage

Install Torch and the required packages following the instructions here.

Training on CIFAR

th main.lua -netType gunn-15 -dataset cifar10 -batchSize 64 -nGPU 4 -nThreads 8 -shareGradInput true -nEpochs 300

For CIFAR-100, please change cifar10 to cifar100 after -dataset.

Training on ImageNet

th main.lua -netType gunn-18 -dataset imagenet -batchSize 256 -nGPU 4 -nThreads 16 -shareGradInput true -nEpochs 120 -data [data folder]

Results

| Model   | Parameters | CIFAR-10 error (%) | CIFAR-100 error (%) |
|---------|------------|--------------------|---------------------|
| GUNN-15 | 1.6M       | 4.15               | 20.45               |
| GUNN-24 | 29.6M      | 3.21               | 16.69               |

| Model        | Parameters | ImageNet Top-1 error (%) | ImageNet Top-5 error (%) |
|--------------|------------|--------------------------|--------------------------|
| GUNN-18      | 28.9M      | 21.65                    | 5.87                     |
| Wide GUNN-18 | 45.6M      | 20.59                    | 5.52                     |