A PyTorch Implementation of i-MAE: Linearly Separable Representation in MAE
Project Page | Paper | BibTeX
We provide a PyTorch/GPU-based implementation of our technical report i-MAE: Are Latent Representations in Masked Autoencoders Linearly Separable?
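For intuition, the sketch below illustrates the kind of input construction i-MAE studies: two images are linearly mixed and then randomly masked MAE-style before encoding. This is our own illustrative assumption, not the repository's actual code; the function name `mixed_masked_input`, the shapes, and the `mix_ratio` / `mask_ratio` values are hypothetical placeholders.

```python
import torch

# Illustrative only: build a mixed-and-masked input in the spirit of i-MAE.
# Function name, shapes, patch size, and ratios are hypothetical placeholders.
def mixed_masked_input(img_a, img_b, mix_ratio=0.35, patch_size=16, mask_ratio=0.75):
    # Linearly mix two images (the second image weighted by mix_ratio).
    mixed = (1.0 - mix_ratio) * img_a + mix_ratio * img_b             # (B, C, H, W)

    # Split the mixed image into non-overlapping patches.
    B, C, H, W = mixed.shape
    patches = mixed.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
    patches = patches.reshape(B, C, -1, patch_size, patch_size)       # (B, C, N, p, p)
    patches = patches.permute(0, 2, 1, 3, 4).flatten(2)               # (B, N, C*p*p)

    # Randomly keep a subset of patches (MAE-style random masking).
    N = patches.shape[1]
    num_keep = int(N * (1.0 - mask_ratio))
    noise = torch.rand(B, N)                                          # per-patch random scores
    keep_idx = noise.argsort(dim=1)[:, :num_keep]                     # indices of visible patches
    visible = torch.gather(
        patches, 1, keep_idx.unsqueeze(-1).expand(-1, -1, patches.shape[-1])
    )
    return visible, keep_idx

# Usage with random tensors standing in for a real image batch.
img_a = torch.randn(2, 3, 224, 224)
img_b = torch.randn(2, 3, 224, 224)
visible_patches, keep_idx = mixed_masked_input(img_a, img_b)
print(visible_patches.shape)  # torch.Size([2, 49, 768])
```

The visible patches would then be fed to the encoder; the actual training recipe, masking schedule, and reconstruction targets are defined in the pre-training and fine-tuning instructions below.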
Catalog
- Pretrain demo with Colab
- Pre-training and Fine-tuning code
- Weights Upload
Pre-training
The pre-training instruction is in PRETRAIN.md.
Fine-tuning
The fine-tuning instruction is in FINETUNE.md.
Visualization demo
Please visit the interactive demo on our website, or run our visualization demo in a Colab notebook.
Acknowledgement
This repository builds on the timm library and the MAE codebase.
License
This project is under the CC-BY-NC 4.0 license. See LICENSE for details.
Citation
If you find this repository helpful, please consider citing our work:
@article{zhang2022i-mae,
  title   = {i-MAE: Are Latent Representations in Masked Autoencoders Linearly Separable?},
  author  = {Zhang, Kevin and Shen, Zhiqiang},
  journal = {arXiv preprint arXiv:2210.11470},
  year    = {2022}
}