AutoAugment - Learning Augmentation Policies from Data
Unofficial implementation of the ImageNet, CIFAR10, and SVHN augmentation policies learned by AutoAugment, as described in the Google AI blog post.
Update July 13th, 2018: wrote a blog post about AutoAugment and Double Transfer Learning.
Tested with Python 3.6. Requires pillow>=5.0.0.
Example
```python
from PIL import Image
from autoaugment import ImageNetPolicy

image = Image.open(path)       # path to an image file
policy = ImageNetPolicy()
transformed = policy(image)    # randomly augmented copy of the image
```
To see examples of all operations and magnitudes applied to images, take a look at AutoAugment_Exploration.ipynb.
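For intuition, each learned policy is a collection of sub-policies, where a sub-policy is a pair of image operations, each applied with its own probability and magnitude; every call to the policy picks one sub-policy at random. The snippet below is a minimal illustrative sketch of that structure only, with made-up operations and numbers rather than the learned values shipped in this repository:

```python
import random

from PIL import ImageEnhance


def rotate(img, degrees):
    # Illustrative operation: rotate the PIL image by a fixed angle.
    return img.rotate(degrees)


def color(img, factor):
    # Illustrative operation: adjust the colour balance of the PIL image.
    return ImageEnhance.Color(img).enhance(factor)


class ToyPolicy:
    """Sketch of the policy structure only; NOT the learned AutoAugment values."""

    def __init__(self):
        # Each sub-policy is two (operation, probability, magnitude) triples.
        self.subpolicies = [
            [(rotate, 0.4, 30), (color, 0.6, 1.5)],
            [(color, 0.8, 0.5), (rotate, 0.2, 15)],
        ]

    def __call__(self, img):
        # Pick one sub-policy at random and apply each of its operations
        # with the associated probability and magnitude.
        for op, prob, magnitude in random.choice(self.subpolicies):
            if random.random() < prob:
                img = op(img, magnitude)
        return img
```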
Example as a PyTorch Transform - ImageNet
```python
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import ImageFolder

from autoaugment import ImageNetPolicy

data = ImageFolder(rootdir, transform=transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    ImageNetPolicy(),
    transforms.ToTensor(),
    transforms.Normalize(...)]))
loader = DataLoader(data, ...)
```
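AutoAugment, like the random crop and flip, is a train-time augmentation, so a matching validation/test pipeline would normally leave it out. A possible sketch, where val_rootdir is a hypothetical path and the normalization values are the commonly used ImageNet statistics:

```python
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import ImageFolder

# val_rootdir is a hypothetical path to the validation images.
val_data = ImageFolder(val_rootdir, transform=transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # commonly used ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
]))
val_loader = DataLoader(val_data, batch_size=64, shuffle=False)
```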
Example as a PyTorch Transform - CIFAR10
```python
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import ImageFolder

from autoaugment import CIFAR10Policy

data = ImageFolder(rootdir, transform=transforms.Compose([
    transforms.RandomCrop(32, padding=4, fill=128),  # fill parameter needs torchvision installed from source
    transforms.RandomHorizontalFlip(),
    CIFAR10Policy(),
    transforms.ToTensor(),
    Cutout(n_holes=1, length=16),  # from https://github.com/uoguelph-mlrg/Cutout/blob/master/util/cutout.py (or see the sketch below)
    transforms.Normalize(...)]))
loader = DataLoader(data, ...)
```
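Cutout itself is not part of this package; the comment above links to the reference implementation. For completeness, here is a minimal self-contained sketch of such a transform (a simplified variant, assuming it runs after ToTensor on a CHW tensor):

```python
import torch


class Cutout:
    """Minimal sketch of a Cutout transform: zero out `n_holes` square
    patches of side `length` at random positions of a CHW image tensor."""

    def __init__(self, n_holes=1, length=16):
        self.n_holes = n_holes
        self.length = length

    def __call__(self, img):
        _, h, w = img.shape
        mask = torch.ones(h, w, dtype=img.dtype)
        for _ in range(self.n_holes):
            # Centre of the hole, clamped to the image borders.
            y = torch.randint(h, (1,)).item()
            x = torch.randint(w, (1,)).item()
            y1, y2 = max(0, y - self.length // 2), min(h, y + self.length // 2)
            x1, x2 = max(0, x - self.length // 2), min(w, x + self.length // 2)
            mask[y1:y2, x1:x2] = 0.0
        # Broadcast the single-channel mask over all channels.
        return img * mask.unsqueeze(0)
```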
Example as a PyTorch Transform - SVHN
```python
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import ImageFolder

from autoaugment import SVHNPolicy

data = ImageFolder(rootdir, transform=transforms.Compose([
    SVHNPolicy(),
    transforms.ToTensor(),
    Cutout(n_holes=1, length=20),  # from https://github.com/uoguelph-mlrg/Cutout/blob/master/util/cutout.py (or see the sketch above)
    transforms.Normalize(...)]))
loader = DataLoader(data, ...)
```
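ImageFolder(rootdir) assumes the SVHN images have already been exported into one folder per class. Alternatively, the same transform can be plugged into torchvision's built-in SVHN dataset; a rough sketch (the normalization values are placeholders, and the Cutout line is optional):

```python
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import SVHN

from autoaugment import SVHNPolicy

transform = transforms.Compose([
    SVHNPolicy(),
    transforms.ToTensor(),
    # Cutout(n_holes=1, length=20),  # optional, see the Cutout sketch above
    transforms.Normalize(mean=(0.5, 0.5, 0.5),   # placeholder statistics;
                         std=(0.5, 0.5, 0.5)),   # replace with your dataset's values
])

train_data = SVHN(root="data/svhn", split="train", download=True, transform=transform)
train_loader = DataLoader(train_data, batch_size=128, shuffle=True)
```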
Results with AutoAugment
Generalizable Data Augmentations
From the AutoAugment paper: "Finally, we show that policies found on one task can generalize well across different models and datasets. For example, the policy found on ImageNet leads to significant improvements on a variety of FGVC datasets. Even on datasets for which fine-tuning weights pre-trained on ImageNet does not help significantly [26], e.g. Stanford Cars [27] and FGVC Aircraft [28], training with the ImageNet policy reduces test set error by 1.16% and 1.76%, respectively. This result suggests that transferring data augmentation policies offers an alternative method for transfer learning."
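In practice, transferring a learned policy is just a matter of dropping the corresponding transform into the other dataset's training pipeline. A hedged sketch for a hypothetical fine-grained dataset stored in an ImageFolder layout (the path and normalization values are placeholders):

```python
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import ImageFolder

from autoaugment import ImageNetPolicy

# Hypothetical fine-grained classification dataset laid out as one folder per
# class; the ImageNet policy is reused unchanged in its training transform.
train_data = ImageFolder("data/stanford_cars/train", transform=transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    ImageNetPolicy(),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5, 0.5, 0.5),   # placeholder statistics
                         std=(0.5, 0.5, 0.5)),
]))
train_loader = DataLoader(train_data, batch_size=64, shuffle=True)
```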