
Fast AutoAugment (Accepted at NeurIPS 2019)

Official Fast AutoAugment implementation in PyTorch.

<p align="center"> <img src="etc/search.jpg" height=350> </p>

Results

CIFAR-10 / 100

Search : 3.5 GPU Hours (1428x faster than AutoAugment), WResNet-40x2 on Reduced CIFAR-10

Test error rates (%).

| Model (CIFAR-10) | Baseline | Cutout | AutoAugment | Fast AutoAugment<br/>(transfer/direct) |   |
|---|---|---|---|---|---|
| Wide-ResNet-40-2 | 5.3 | 4.1 | 3.7 | 3.6 / 3.7 | Download |
| Wide-ResNet-28-10 | 3.9 | 3.1 | 2.6 | 2.7 / 2.7 | Download |
| Shake-Shake(26 2x32d) | 3.6 | 3.0 | 2.5 | 2.7 / 2.5 | Download |
| Shake-Shake(26 2x96d) | 2.9 | 2.6 | 2.0 | 2.0 / 2.0 | Download |
| Shake-Shake(26 2x112d) | 2.8 | 2.6 | 1.9 | 2.0 / 1.9 | Download |
| PyramidNet+ShakeDrop | 2.7 | 2.3 | 1.5 | 1.8 / 1.7 | Download |

| Model (CIFAR-100) | Baseline | Cutout | AutoAugment | Fast AutoAugment<br/>(transfer/direct) |   |
|---|---|---|---|---|---|
| Wide-ResNet-40-2 | 26.0 | 25.2 | 20.7 | 20.7 / 20.6 | Download |
| Wide-ResNet-28-10 | 18.8 | 18.4 | 17.1 | 17.3 / 17.3 | Download |
| Shake-Shake(26 2x96d) | 17.1 | 16.0 | 14.3 | 14.9 / 14.6 | Download |
| PyramidNet+ShakeDrop | 14.0 | 12.2 | 10.7 | 11.9 / 11.7 | Download |

ImageNet

Search : 450 GPU Hours (33x faster than AutoAugment), ResNet-50 on Reduced ImageNet

Top-1 / Top-5 error rates (%).

| Model | Baseline | AutoAugment | Fast AutoAugment |   |
|---|---|---|---|---|
| ResNet-50 | 23.7 / 6.9 | 22.4 / 6.2 | 22.4 / 6.3 | Download |
| ResNet-200 | 21.5 / 5.8 | 20.0 / 5.0 | 19.4 / 4.7 | Download |

Notes

We have conducted additional experiments with EfficientNet.

Top-1 error rates (%) on ImageNet.

| Model | Baseline | AutoAugment | Our Baseline (Batch) | +Fast AA |
|---|---|---|---|---|
| B0 | 23.2 | 22.7 | 22.96 | 22.68 |

SVHN Test

Search : 1.5 GPU Hours

Test error rates (%).

| Model | Baseline | AutoAug / Our | Fast AutoAugment |
|---|---|---|---|
| Wide-ResNet-28-10 | 1.5 | 1.1 | 1.1 |

Run

We conducted experiments under

Search an augmentation policy

Please read Ray's documentation (https://github.com/ray-project/ray) to set up a proper Ray cluster, then run search.py with the Redis address of the master node.

$ python search.py -c confs/wresnet40x2_cifar10_b512.yaml --dataroot ... --redis ...
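To run the search on a single machine, one minimal sketch is to start a local Ray head node and point search.py at its Redis address. This assumes an older Ray CLI in which the head node is addressed via Redis (matching the --redis flag above); the data directory below is only an illustration.

$ ray start --head --redis-port=6379   # starts a local head node; note the Redis address it prints
$ python search.py -c confs/wresnet40x2_cifar10_b512.yaml --dataroot ~/data --redis localhost:6379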

Train a model with found policies

You can train network architectures on CIFAR-10 / 100 and ImageNet with our searched policies.

$ export PYTHONPATH=$PYTHONPATH:$PWD
$ python FastAutoAugment/train.py -c confs/wresnet40x2_cifar10_b512.yaml --aug fa_reduced_cifar10 --dataset cifar10
$ python FastAutoAugment/train.py -c confs/wresnet40x2_cifar10_b512.yaml --aug fa_reduced_cifar10 --dataset cifar100
$ python FastAutoAugment/train.py -c confs/wresnet28x10_cifar10_b512.yaml --aug fa_reduced_cifar10 --dataset cifar10
$ python FastAutoAugment/train.py -c confs/wresnet28x10_cifar10_b512.yaml --aug fa_reduced_cifar10 --dataset cifar100
...
$ python FastAutoAugment/train.py -c confs/resnet50_b512.yaml --aug fa_reduced_imagenet
$ python FastAutoAugment/train.py -c confs/resnet200_b512.yaml --aug fa_reduced_imagenet

By adding the --only-eval and --save arguments, you can evaluate trained models without re-training.
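For example, assuming --save points to the same checkpoint path used during training (the file name here is only an illustration):

$ python FastAutoAugment/train.py -c confs/wresnet40x2_cifar10_b512.yaml --aug fa_reduced_cifar10 --dataset cifar10 --save cifar10_wres40x2.pth --only-eval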

If you want to train with multiple GPUs or nodes, use torch.distributed.launch, for example:

$ python -m torch.distributed.launch --nproc_per_node={num_gpu_per_node} --nnodes={num_node} --master_addr={master} --master_port={master_port} --node_rank={0,1,2,...,num_node} FastAutoAugment/train.py -c confs/efficientnet_b4.yaml --aug fa_reduced_imagenet
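For instance, on the first of two nodes with four GPUs each (the master address, port, and node/GPU counts below are placeholders), this becomes:

$ python -m torch.distributed.launch --nproc_per_node=4 --nnodes=2 --master_addr=10.0.0.1 --master_port=23456 --node_rank=0 FastAutoAugment/train.py -c confs/efficientnet_b4.yaml --aug fa_reduced_imagenet

Run the same command on the second node with --node_rank=1.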

Citation

If you use this code in your research, please cite our paper.

@inproceedings{lim2019fast,
  title={Fast AutoAugment},
  author={Lim, Sungbin and Kim, Ildoo and Kim, Taesup and Kim, Chiheon and Kim, Sungwoong},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2019}
}

Contact for Issues

References & Opensources

We increase the batch size and adapt the learning rate accordingly to speed up training. Otherwise, we set the hyperparameters equal to those of AutoAugment where possible. For unknown hyperparameters, we follow the values from the original references or tune them to match the baseline performances.