
PyTorch implementation of MixMAE (CVPR 2023)


This repo is the official implementation of the paper MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers.

@article{MixMAE,
  author  = {Jihao Liu and Xin Huang and Jinliang Zheng and Yu Liu and Hongsheng Li},
  journal = {arXiv:2205.13137},
  title   = {MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers},
  year    = {2022},
}

Available pretrained models

| Models | Params (M) | FLOPs (G) | Pretrain Epochs | Top-1 Acc. (%) | Pretrain ckpt | Finetune ckpt |
| --- | --- | --- | --- | --- | --- | --- |
| Swin-B/W14 | 88 | 16.3 | 600 | 85.1 | base_600ep | base_600ep_ft |
| Swin-B/W16-384x384 | 89.6 | 52.6 | 600 | 86.3 | base_600ep | base_600ep_ft_384x384 |
| Swin-L/W14 | 197 | 35.9 | 600 | 85.9 | large_600ep | large_600ep_ft |
| Swin-L/W16-384x384 | 199 | 112 | 600 | 86.9 | large_600ep | large_600ep_ft_384x384 |
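
A downloaded checkpoint can be inspected with plain torch.load. The sketch below is a minimal example, not the repo's loading code: the filename is a placeholder and the "model" key is an assumption (MAE-style repos commonly save weights under it), so check the actual file if the key differs.

```python
import torch

# Placeholder filename -- substitute the checkpoint you actually downloaded.
ckpt = torch.load("base_600ep_ft.pth", map_location="cpu")

# Assumption: weights sit under a "model" key; fall back to the raw dict otherwise.
state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt

print(f"{len(state_dict)} tensors; first key: {next(iter(state_dict))}")
```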

Training and evaluation

We use Slurm for multi-node distributed pretraining and finetuning. Each launch script takes three positional arguments: the Slurm partition to submit to, the number of GPUs, and the path to the ImageNet-1K dataset.
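
For reference, a Slurm-launched PyTorch job typically derives its process rank and world size from the environment variables that srun sets before initializing the process group. The sketch below only illustrates that general mechanism; the actual launch logic lives in the exp/*/pretrain.sh and finetune.sh scripts and may differ in detail.

```python
import os
import torch
import torch.distributed as dist

def init_distributed_from_slurm():
    # srun starts one process per task and sets these variables for each process.
    rank = int(os.environ["SLURM_PROCID"])
    world_size = int(os.environ["SLURM_NTASKS"])
    local_rank = int(os.environ["SLURM_LOCALID"])

    # MASTER_ADDR and MASTER_PORT are expected to be exported by the launch script.
    dist.init_process_group(
        backend="nccl",
        init_method="env://",
        rank=rank,
        world_size=world_size,
    )
    torch.cuda.set_device(local_rank)
    return rank, world_size, local_rank
```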

Pretrain

sh exp/base_600ep/pretrain.sh partition 16 /path/to/imagenet

Finetune

sh exp/base_600ep/finetune.sh partition 8 /path/to/imagenet
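
The Top-1 Acc. column in the table above reports ImageNet-1K validation accuracy after finetuning. As a generic reference only (not the repo's own evaluation code), top-1 accuracy over a validation loader can be computed like this:

```python
import torch

@torch.no_grad()
def top1_accuracy(model, loader, device="cuda"):
    # Count exact matches between the highest-scoring class and the label.
    model.eval()
    correct, total = 0, 0
    for images, targets in loader:
        images, targets = images.to(device), targets.to(device)
        preds = model(images).argmax(dim=1)
        correct += (preds == targets).sum().item()
        total += targets.numel()
    return 100.0 * correct / total
```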