## Masked Autoencoders: A PyTorch Implementation
<p align="center">
  <img src="https://user-images.githubusercontent.com/11435359/146857310-f258c86c-fde6-48e8-9cee-badd2b21bd2c.png" width="480">
</p>

This is a PyTorch/GPU re-implementation of the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377):
```
@Article{MaskedAutoencoders2021,
  author  = {Kaiming He and Xinlei Chen and Saining Xie and Yanghao Li and Piotr Doll{\'a}r and Ross Girshick},
  journal = {arXiv:2111.06377},
  title   = {Masked Autoencoders Are Scalable Vision Learners},
  year    = {2021},
}
```
- The original implementation was in TensorFlow+TPU. This re-implementation is in PyTorch+GPU.
- This repo is a modification of the [DeiT repo](https://github.com/facebookresearch/deit). Installation and preparation follow that repo.
- This repo is based on `timm==0.3.2`, for which a fix is needed to work with PyTorch 1.8.1+ (see the sketch below).
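One commonly used workaround for the `timm==0.3.2` incompatibility (a sketch of our own, not an official patch shipped with this repo) is to edit `timm/models/layers/helpers.py` so that it no longer imports `container_abcs` from `torch._six`, which newer PyTorch releases no longer provide:

```python
# timm/models/layers/helpers.py (timm==0.3.2): sketch of a commonly used patch.
# timm 0.3.2 does `from torch._six import container_abcs`, which fails on
# PyTorch 1.8.1+; fall back to the standard library on newer versions.
import torch

TORCH_MAJOR = int(torch.__version__.split('.')[0])
TORCH_MINOR = int(torch.__version__.split('.')[1])

if TORCH_MAJOR == 1 and TORCH_MINOR < 8:
    from torch._six import container_abcs
else:
    import collections.abc as container_abcs
```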
### Catalog
- Visualization demo
- Pre-trained checkpoints + fine-tuning code
- Pre-training code
### Visualization demo
Run our interactive visualization demo using a Colab notebook (no GPU needed):
<p align="center">
  <img src="https://user-images.githubusercontent.com/11435359/147859292-77341c70-2ed8-4703-b153-f505dcb6f2f8.png" width="600">
</p>

### Fine-tuning with pre-trained checkpoints
The following table provides the pre-trained checkpoints used in the paper, converted from TF/TPU to PT/GPU:
<table><tbody>
<!-- START TABLE -->
<!-- TABLE HEADER -->
<th valign="bottom"></th>
<th valign="bottom">ViT-Base</th>
<th valign="bottom">ViT-Large</th>
<th valign="bottom">ViT-Huge</th>
<!-- TABLE BODY -->
<tr><td align="left">pre-trained checkpoint</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/mae/pretrain/mae_pretrain_vit_base.pth">download</a></td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/mae/pretrain/mae_pretrain_vit_large.pth">download</a></td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/mae/pretrain/mae_pretrain_vit_huge.pth">download</a></td>
</tr>
<tr><td align="left">md5</td>
<td align="center"><tt>8cad7c</tt></td>
<td align="center"><tt>b8b06e</tt></td>
<td align="center"><tt>9bdbb0</tt></td>
</tr>
</tbody></table>

The fine-tuning instruction is in FINETUNE.md.
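Before fine-tuning, a downloaded checkpoint can be sanity-checked against the md5 prefixes in the table above. The snippet below is an illustrative sketch using plain PyTorch utilities; the local file name and the assumption that the encoder weights sit under a `model` key are ours, and the supported workflow remains FINETUNE.md.

```python
import hashlib
import torch

# Hedged sketch (not part of this repo): download the ViT-Base pre-trained
# checkpoint from the table above and verify its md5 prefix before use.
url = "https://dl.fbaipublicfiles.com/mae/pretrain/mae_pretrain_vit_base.pth"
dst = "mae_pretrain_vit_base.pth"
expected_md5_prefix = "8cad7c"  # ViT-Base entry in the md5 row above

torch.hub.download_url_to_file(url, dst, progress=True)

with open(dst, "rb") as f:
    md5 = hashlib.md5(f.read()).hexdigest()
assert md5.startswith(expected_md5_prefix), f"unexpected md5: {md5}"

# The weights are assumed to live under a 'model' key; if that differs in your
# download, inspect ckpt.keys(). Fine-tuning itself follows FINETUNE.md.
ckpt = torch.load(dst, map_location="cpu")
print(list(ckpt.get("model", ckpt).keys())[:5])
```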
By fine-tuning these pre-trained models, we rank #1 in these classification tasks (detailed in the paper):
<table><tbody>
<!-- START TABLE -->
<!-- TABLE HEADER -->
<th valign="bottom"></th>
<th valign="bottom">ViT-B</th>
<th valign="bottom">ViT-L</th>
<th valign="bottom">ViT-H</th>
<th valign="bottom">ViT-H<sub>448</sub></th>
<td valign="bottom" style="color:#C0C0C0">prev best</td>
<!-- TABLE BODY -->
<tr><td align="left">ImageNet-1K (no external data)</td>
<td align="center">83.6</td>
<td align="center">85.9</td>
<td align="center">86.9</td>
<td align="center"><b>87.8</b></td>
<td align="center" style="color:#C0C0C0">87.1</td>
</tr>
<tr><td colspan="5"><font size="1"><em>the following are evaluations of the same model weights (fine-tuned on the original ImageNet-1K):</em></font></td>
</tr>
<tr><td align="left">ImageNet-Corruption (error rate)</td>
<td align="center">51.7</td>
<td align="center">41.8</td>
<td align="center"><b>33.8</b></td>
<td align="center">36.8</td>
<td align="center" style="color:#C0C0C0">42.5</td>
</tr>
<tr><td align="left">ImageNet-Adversarial</td>
<td align="center">35.9</td>
<td align="center">57.1</td>
<td align="center">68.2</td>
<td align="center"><b>76.7</b></td>
<td align="center" style="color:#C0C0C0">35.8</td>
</tr>
<tr><td align="left">ImageNet-Rendition</td>
<td align="center">48.3</td>
<td align="center">59.9</td>
<td align="center">64.4</td>
<td align="center"><b>66.5</b></td>
<td align="center" style="color:#C0C0C0">48.7</td>
</tr>
<tr><td align="left">ImageNet-Sketch</td>
<td align="center">34.5</td>
<td align="center">45.3</td>
<td align="center">49.6</td>
<td align="center"><b>50.9</b></td>
<td align="center" style="color:#C0C0C0">36.0</td>
</tr>
<tr><td colspan="5"><font size="1"><em>the following are transfer learning results, fine-tuning the pre-trained MAE on the target dataset:</em></font></td>
</tr>
<tr><td align="left">iNaturalist 2017</td>
<td align="center">70.5</td>
<td align="center">75.7</td>
<td align="center">79.3</td>
<td align="center"><b>83.4</b></td>
<td align="center" style="color:#C0C0C0">75.4</td>
</tr>
<tr><td align="left">iNaturalist 2018</td>
<td align="center">75.4</td>
<td align="center">80.1</td>
<td align="center">83.0</td>
<td align="center"><b>86.8</b></td>
<td align="center" style="color:#C0C0C0">81.2</td>
</tr>
<tr><td align="left">iNaturalist 2019</td>
<td align="center">80.5</td>
<td align="center">83.4</td>
<td align="center">85.7</td>
<td align="center"><b>88.3</b></td>
<td align="center" style="color:#C0C0C0">84.1</td>
</tr>
<tr><td align="left">Places205</td>
<td align="center">63.9</td>
<td align="center">65.8</td>
<td align="center">65.9</td>
<td align="center"><b>66.8</b></td>
<td align="center" style="color:#C0C0C0">66.0</td>
</tr>
<tr><td align="left">Places365</td>
<td align="center">57.9</td>
<td align="center">59.4</td>
<td align="center">59.8</td>
<td align="center"><b>60.3</b></td>
<td align="center" style="color:#C0C0C0">58.0</td>
</tr>
</tbody></table>

### Pre-training
The pre-training instruction is in PRETRAIN.md.
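For orientation, the core of MAE pre-training is per-sample random masking of patch tokens, with only the visible subset fed to the encoder. The snippet below is a simplified illustration of that idea (names and shapes are ours, not a verbatim copy of this repo's code); the actual training recipe is in PRETRAIN.md.

```python
import torch

def random_masking(x: torch.Tensor, mask_ratio: float):
    """Keep a random subset of patch tokens per sample.

    x: [N, L, D] patch embeddings; mask_ratio: fraction of patches to drop.
    Returns visible tokens, a binary mask (1 = removed), and the indices
    needed to restore the original patch order for the decoder.
    """
    N, L, D = x.shape
    len_keep = int(L * (1 - mask_ratio))

    noise = torch.rand(N, L, device=x.device)        # per-patch random score
    ids_shuffle = torch.argsort(noise, dim=1)        # ascending: low = keep
    ids_restore = torch.argsort(ids_shuffle, dim=1)  # inverse permutation

    ids_keep = ids_shuffle[:, :len_keep]
    x_visible = torch.gather(x, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))

    mask = torch.ones(N, L, device=x.device)
    mask[:, :len_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)        # back to original order
    return x_visible, mask, ids_restore

# Example: 14x14 = 196 patches from a 224x224 image, 75% masked.
tokens = torch.randn(2, 196, 768)
visible, mask, ids_restore = random_masking(tokens, mask_ratio=0.75)
print(visible.shape, mask.sum(dim=1))  # [2, 49, 768], 147 masked patches per sample
```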
### License
This project is under the CC-BY-NC 4.0 license. See LICENSE for details.