Group Normalization

As part of the implementation series of Joseph Lim's group at USC, our motivation is to accelerate (or sometimes delay) research in the AI community by promoting open-source projects. To this end, we implement state-of-the-art research papers and publicly share them with concise reports. Please visit our group's GitHub site for other projects.

This project is implemented by Shao-Hua Sun, and the code has been reviewed by Te-Lin Wu before being published.

Descriptions

This project includes a TensorFlow implementation of Group Normalization proposed in the paper Group Normalization by Wu et al. Batch Normalization (BN) has been widely employed in training deep neural networks to alleviate internal covariate shift [1]. Specifically, BN aims to transform the inputs of each layer so that they have a mean output activation of zero and a standard deviation of one. While BN has demonstrated its effectiveness in a variety of fields including computer vision, natural language processing, speech processing, robotics, etc., BN's performance decreases substantially when the training batch size becomes smaller, which limits its benefit in tasks that require small batches due to memory constraints.

Motivated by this phenomenon, the Group Normalization (GN) technique is proposed. Instead of normalizing along the batch dimension, GN divides the channels into groups and computes the mean and variance within each group. Therefore, GN's computation is independent of the batch size, and so is its accuracy. The experiment section of the paper demonstrates the effectiveness of GN in a wide range of visual tasks, including image classification (ImageNet), object detection and segmentation (COCO), and video classification (Kinetics). This repository is simply a toy repository for those who want to quickly test GN and compare it against BN.

<img src="figure/gn.png" height="250"/>

The illustration from the original GN paper. Each cube represents a 4D tensor of feature maps. Note that the spatial dimensions are combined into a single dimension for visualization. N denotes the batch axis, C denotes the channel axis, and H, W denote the spatial axes. The values in blue are normalized by the same mean and variance, computed by aggregating the values of these pixels.
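
To make the grouping concrete, here is a minimal TensorFlow sketch of the GN computation described above. This is a simplified illustration, not necessarily the exact code in this repository: it assumes NHWC inputs with static spatial shapes, TF 1.x-style variables, a channel count divisible by the number of groups, and the paper's default of 32 groups; the function and variable names are illustrative.

    import tensorflow as tf

    def group_norm(x, groups=32, eps=1e-5, scope='group_norm'):
        # x: feature map of shape [N, H, W, C] (NHWC layout assumed here).
        with tf.variable_scope(scope):
            _, H, W, C = x.get_shape().as_list()
            G = min(groups, C)  # assumes C is divisible by G
            # Split the C channels into G groups and normalize within each group,
            # i.e., over the (H, W, C // G) values belonging to that group.
            x = tf.reshape(x, [-1, H, W, G, C // G])
            mean, var = tf.nn.moments(x, [1, 2, 4], keep_dims=True)
            x = (x - mean) / tf.sqrt(var + eps)
            x = tf.reshape(x, [-1, H, W, C])
            # Learnable per-channel scale and offset, as in BN.
            gamma = tf.get_variable('gamma', [1, 1, 1, C], initializer=tf.ones_initializer())
            beta = tf.get_variable('beta', [1, 1, 1, C], initializer=tf.zeros_initializer())
            return x * gamma + beta

Because the statistics are computed per sample and per group rather than across the batch, the same graph behaves identically for any batch size, which is the property examined in the experiments below.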

Based on the implementation in this repository, GN is around 20% slower than BN on datasets such as CIFAR-10 and SVHN, probably because of the extra reshape and transpose operations. However, when the network goes deeper and the number of channels increases, GN becomes even slower due to the larger group size: the model using GN is around 4 times slower than the one using BN when trained on ImageNet. This is not reported in the original GN paper.

*This code is still being developed and subject to change.*

Prerequisites

Usage

Datasets

Download the MNIST, Fashion MNIST, SVHN, and CIFAR-10 datasets:

$ python download.py --dataset MNIST Fashion SVHN CIFAR10

Train models on Tiny ImageNet

Train models on ImageNet

Train models with downloaded datasets:

Specify the type of normalization with --norm_type batch or --norm_type group, and specify the batch size with --batch_size BATCH_SIZE.

$ python trainer.py --dataset MNIST --learning_rate 1e-3
$ python trainer.py --dataset Fashion --prefix test
$ python trainer.py --dataset SVHN --batch_size 128
$ python trainer.py --dataset CIFAR10 
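
For example, to compare GN and BN under a small batch size, the flags described above can be combined as follows (an illustrative invocation; check trainer.py for the exact defaults):

$ python trainer.py --dataset CIFAR10 --norm_type group --batch_size 4
$ python trainer.py --dataset CIFAR10 --norm_type batch --batch_size 4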

Train and test your own datasets:

$ mkdir datasets/YOUR_DATASET
$ python trainer.py --dataset YOUR_DATASET
$ python evaler.py --dataset YOUR_DATASET

Results

CIFAR-10

| Color    | Batch Size |
|----------|------------|
| Orange   | 1          |
| Blue     | 2          |
| Sky blue | 4          |
| Red      | 8          |
| Green    | 16         |
| Pink     | 32         |
<img src="figure/cifar_group_acc.png" height="250"/>

SVHN

| Color    | Batch Size |
|----------|------------|
| Pink     | 1          |
| Blue     | 2          |
| Sky blue | 4          |
| Green    | 8          |
| Red      | 16         |
| Orange   | 32         |
<img src="figure/svhn_group_acc.png" height="250"/>

ImageNet

The training runs are still ongoing...

| Color  | Norm Type           |
|--------|---------------------|
| Orange | Group Normalization |
| Blue   | Batch Normalization |
<img src="figure/imagenet_ongoing.png" height="250"/>

Conclusion

Group Normalization divides the channels into groups and computes the mean and variance within each group, so its performance is independent of the training batch size, which is verified by this implementation. However, on smaller image datasets such as CIFAR-10 and SVHN, the performance of Batch Normalization also does not vary much with the batch size. The ImageNet experiments are ongoing, and the results will be updated later.

Related works

Author

Shao-Hua Sun / @shaohua0116 @ Joseph Lim's research lab @ USC