# Awesome Masked Autoencoders
<img src="https://img.shields.io/badge/Contributions-Welcome-278ea5" alt="Contrib"/> <img src="https://img.shields.io/badge/Number%20of%20Papers-274-FF6F00" alt="PaperNum"/>
<p align="center"> <img width="700" height="382" src="mae.png" /> </p>
<p align="center">Fig. 1. Masked Autoencoders from Kaiming He et al.</p>

Masked Autoencoder (MAE, Kaiming He et al.) has sparked renewed interest in masked modeling thanks to its capacity to learn useful representations from abundant unlabeled data. Since its release, MAE and its follow-up works have advanced the state of the art and provided valuable insights, particularly in vision research. Here I list follow-up works published after, or concurrently with, MAE to inspire future research; a minimal sketch of the shared masking-and-reconstruction recipe follows.
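The sketch below illustrates the core recipe that most of the works in this list build on: embed image patches, randomly mask a large fraction (~75% in MAE), encode only the visible patches, then let a lightweight decoder reconstruct the masked ones from the latent tokens plus learnable mask tokens. It is a toy PyTorch illustration under my own assumptions (the class name `TinyMAE`, the layer sizes, and the omission of positional embeddings and target normalization are mine), not the official implementation of MAE or of any paper listed here.

```python
# Minimal, illustrative masked-autoencoder sketch. Not the implementation of any
# listed paper; names, sizes, and the 75% mask ratio are illustrative assumptions.
import torch
import torch.nn as nn


def random_masking(tokens: torch.Tensor, mask_ratio: float = 0.75):
    """Randomly drop a fraction of patch tokens; return visible tokens, mask, unshuffle index."""
    batch, num_tokens, dim = tokens.shape
    num_keep = int(num_tokens * (1.0 - mask_ratio))
    noise = torch.rand(batch, num_tokens, device=tokens.device)  # one score per token
    shuffle = noise.argsort(dim=1)            # random permutation per sample
    restore = shuffle.argsort(dim=1)          # inverse permutation, to undo the shuffle later
    keep_idx = shuffle[:, :num_keep]
    visible = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, dim))
    mask = torch.ones(batch, num_tokens, device=tokens.device)
    mask[:, :num_keep] = 0                    # 0 = kept, 1 = masked ...
    mask = torch.gather(mask, 1, restore)     # ... reported in the original patch order
    return visible, mask, restore


class TinyMAE(nn.Module):
    """Toy masked autoencoder over pre-extracted image patches (illustrative only).

    Real MAE also adds positional embeddings and often normalizes pixel targets.
    """

    def __init__(self, patch_dim: int = 16 * 16 * 3, embed_dim: int = 128):
        super().__init__()
        self.embed = nn.Linear(patch_dim, embed_dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(embed_dim, nhead=4, batch_first=True), num_layers=2
        )
        self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(embed_dim, nhead=4, batch_first=True), num_layers=1
        )
        self.head = nn.Linear(embed_dim, patch_dim)  # predict raw pixels of every patch

    def forward(self, patches: torch.Tensor, mask_ratio: float = 0.75):
        batch, num_tokens, _ = patches.shape
        visible, mask, restore = random_masking(self.embed(patches), mask_ratio)
        latent = self.encoder(visible)               # the encoder only sees visible patches
        num_masked = num_tokens - latent.shape[1]
        full = torch.cat([latent, self.mask_token.expand(batch, num_masked, -1)], dim=1)
        full = torch.gather(full, 1, restore.unsqueeze(-1).expand(-1, -1, full.size(-1)))
        pred = self.head(self.decoder(full))         # reconstruct pixels for every position
        loss = ((pred - patches) ** 2).mean(dim=-1)  # per-patch MSE
        return (loss * mask).sum() / mask.sum()      # penalize only the masked patches


if __name__ == "__main__":
    # 2 fake images, each already split into 14x14 patches of 16x16x3 pixels.
    patches = torch.randn(2, 196, 16 * 16 * 3)
    loss = TinyMAE()(patches)
    loss.backward()
    print(f"toy reconstruction loss: {loss.item():.4f}")
```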
*:octocat: code link, 🌐 project page
## Vision
- 🔥Masked Autoencoders Are Scalable Vision Learners :octocat: Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick
- 🔥SimMIM: A Simple Framework for Masked Image Modeling :octocat: Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, Han Hu
- 🔥BEIT: BERT Pre-Training of Image Transformers :octocat: Hangbo Bao, Li Dong, Furu Wei
- Student Collaboration Improves Self-Supervised Learning: Dual-Loss Adaptive Masked Autoencoder for Brain Cell Image Analysis :octocat: Son T. Ly, Bai Lin, Hung Q. Vo, Dragan Maric, Badri Roysam, Hien V. Nguyen
- A Mask-Based Adversarial Defense Scheme Weizhen Xu, Chenyi Zhang, Fangzhen Zhao, Liangda Fang
- Adversarial Masking for Self-Supervised Learning :octocat: Yuge Shi, N. Siddharth, Philip H.S. Torr, Adam R. Kosiorek
- Beyond Masking: Demystifying Token-Based Pre-Training for Vision Transformers :octocat: Yunjie Tian, Lingxi Xie, Jiemin Fang, Mengnan Shi, Junran Peng, Xiaopeng Zhang, Jianbin Jiao, Qi Tian, Qixiang Ye
- Context Autoencoder for Self-Supervised Representation Learning :octocat: Xiaokang Chen, Mingyu Ding, Xiaodi Wang, Ying Xin, Shentong Mo, Yunhao Wang, Shumin Han, Ping Luo, Gang Zeng, Jingdong Wang
- Contextual Representation Learning beyond Masked Language Modeling :octocat: Zhiyi Fu, Wangchunshu Zhou, Jingjing Xu, Hao Zhou, Lei Li
- ContrastMask: Contrastive Learning to Segment Every Thing :octocat: Xuehui Wang, Kai Zhao, Ruixin Zhang, Shouhong Ding, Yan Wang, Wei Shen
- ConvMAE: Masked Convolution Meets Masked Autoencoders :octocat: Peng Gao, Teli Ma, Hongsheng Li, Ziyi Lin, Jifeng Dai, Yu Qiao
- Exploring Plain Vision Transformer Backbones for Object Detection Yanghao Li, Hanzi Mao, Ross Girshick, Kaiming He
- Global Contrast Masked Autoencoders Are Powerful Pathological Representation Learners :octocat: Hao Quan, Xingyu Li, Weixing Chen, Qun Bai, Mingchen Zou, Ruijie Yang, Tingting Zheng, Ruiqun Qi, Xinghua Gao, Xiaoyu Cui
- iBOT: Image Bert Pre-Training With Online Tokenizer :octocat: Jinghao Zhou, Chen Wei, Huiyu Wang, Wei Shen, Cihang Xie, Alan Yuille, Tao Kong
- MADE: Masked Autoencoder for Distribution Estimation :octocat: Mathieu Germain, Karol Gregor, Iain Murray, Hugo Larochelle
- Mask Transfiner for High-Quality Instance Segmentation :octocat: Lei Ke, Martin Danelljan, Xia Li, Yu-Wing Tai, Chi-Keung Tang, Fisher Yu
- Masked Autoencoders As Spatiotemporal Learners Christoph Feichtenhofer, Haoqi Fan, Yanghao Li, Kaiming He
- Masked Feature Prediction for Self-Supervised Visual Pre-Training :octocat: Chen Wei, Haoqi Fan, Saining Xie, Chao-Yuan Wu, Alan Yuille, Christoph Feichtenhofer
- Masked Image Modeling Advances 3D Medical Image Analysis Zekai Chen, Devansh Agarwal, Kshitij Aggarwal, Wiem Safta, Mariann Micsinai Balan, Venkat Sethuraman, Kevin Brown
- Masked Siamese Networks for Label-Efficient Learning :octocat: Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas
- MaskGIT: Masked Generative Image Transformer :octocat: Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, William T. Freeman
- MLIM: Vision-and-Language Model Pre-training with Masked Language and Image Modeling Tarik Arici, Mehmet Saygin Seyfioglu, Tal Neiman, Yi Xu, Son Tran, Trishul Chilimbi, Belinda Zeng, Ismail Tutar
- SimMC: Simple Masked Contrastive Learning of Skeleton Representations for Unsupervised Person Re-Identification :octocat: Haocong Rao, Chunyan Miao
- VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training :octocat: Zhan Tong, Yibing Song, Jue Wang, Limin Wang
- What to Hide from Your Students: Attention-Guided Masked Image Modeling Ioannis Kakogeorgiou, Spyros Gidaris, Bill Psomas, Yannis Avrithis, Andrei Bursuc, Konstantinos Karantzalos, Nikos Komodakis :octocat:
- Uniform Masking: Enabling MAE Pre-training for Pyramid-based Vision Transformers with Locality :octocat: Xiang Li, Wenhai Wang, Lingfeng Yang, Jian Yang
- Self-supervised 3D anatomy segmentation using self-distilled masked image transformer (SMIT) Jue Jiang, Neelam Tyagi, Kathryn Tringale, Christopher Crane, Harini Veeraraghavan
- FaceMAE: Privacy-Preserving Face Recognition via Masked Autoencoders :octocat: Kai Wang, Bo Zhao, Xiangyu Peng, Zheng Zhu, Jiankang Deng, Xinchao Wang, Hakan Bilen, Yang You
- Deeper vs Wider: A Revisit of Transformer Configuration Fuzhao Xue, Jianghai Chen, Aixin Sun, Xiaozhe Ren, Zangwei Zheng, Xiaoxin He, Xin Jiang, Yang You
- Green Hierarchical Vision Transformer for Masked Image Modeling :octocat: Lang Huang, Shan You, Mingkai Zheng, Fei Wang, Chen Qian, Toshihiko Yamasaki
- Revealing the Dark Secrets of Masked Image Modeling Zhenda Xie, Zigang Geng, Jingcheng Hu, Zheng Zhang, Han Hu, Yue Cao
- MixMIM: Mixed and Masked Image Modeling for Efficient Visual Representation Learning :octocat: Jihao Liu, Xin Huang, Yu Liu, Hongsheng Li
- Contrastive Learning Rivals Masked Image Modeling in Fine-tuning via Feature Distillation :octocat: Yixuan Wei, Han Hu, Zhenda Xie, Zheng Zhang, Yue Cao, Jianmin Bao, Dong Chen, Baining Guo
- Architecture-Agnostic Masked Image Modeling -- From ViT back to CNN :octocat: Siyuan Li, Di Wu, Fang Wu, Zelin Zang, Kai Wang, Lei Shang, Baigui Sun, Hao Li, Stan Z. Li
- SupMAE: Supervised Masked Autoencoders Are Efficient Vision Learners :octocat: Feng Liang, Yangguang Li, Diana Marculescu
- Object-wise Masked Autoencoders for Fast Pre-training Jiantao Wu, Shentong Mo
- Multimodal Masked Autoencoders Learn Transferable Representations Xinyang Geng, Hao Liu, Lisa Lee, Dale Schuurmans, Sergey Levine, Pieter Abbeel
- MaskOCR: Text Recognition with Masked Encoder-Decoder Pretraining Pengyuan Lyu, Chengquan Zhang, Shanshan Liu, Meina Qiao, Yangliu Xu, Liang Wu, Kun Yao, Junyu Han, Errui Ding, Jingdong Wang
- Mask DINO: Towards A Unified Transformer-based Framework for Object Detection and Segmentation :octocat: Feng Li, Hao Zhang, Huaizhe Xu, Shilong Liu, Lei Zhang, Lionel M. Ni, Heung-Yeung Shum
- Efficient Self-supervised Vision Pretraining with Local Masked Reconstruction Jun Chen, Ming Hu, Boyang Li, Mohamed Elhoseiny
- Masked Unsupervised Self-training for Zero-shot Image Classification :octocat: Junnan Li, Silvio Savarese, Steven C.H. Hoi
- On Data Scaling in Masked Image Modeling Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Yixuan Wei, Qi Dai, Han Hu
- Draft-and-Revise: Effective Image Generation with Contextual RQ-Transformer :octocat: Doyup Lee, Chiheon Kim, Saehoon Kim, Minsu Cho, Wook-Shin Han
- Layered Depth Refinement with Mask Guidance :octocat: Soo Ye Kim, Jianming Zhang, Simon Niklaus, Yifei Fan, Simon Chen, Zhe Lin, Munchurl Kim
- MVP: Multimodality-guided Visual Pre-training Longhui Wei, Lingxi Xie, Wengang Zhou, Houqiang Li, Qi Tian
- Masked Autoencoders are Robust Data Augmentors :octocat: Haohang Xu, Shuangrui Ding, Xiaopeng Zhang, Hongkai Xiong, Qi Tian
- Discovering Object Masks with Transformers for Unsupervised Semantic Segmentation :octocat: Wouter Van Gansbeke, Simon Vandenhende, Luc Van Gool
- Masked Frequency Modeling for Self-Supervised Visual Pre-Training :octocat: Jiahao Xie, Wei Li, Xiaohang Zhan, Ziwei Liu, Yew Soon Ong, Chen Change Loy
- Adapting Self-Supervised Vision Transformers by Probing Attention-Conditioned Masking Consistency :octocat: Viraj Prabhu, Sriram Yenamandra, Aaditya Singh, Judy Hoffman
- OmniMAE: Single Model Masked Pretraining on Images and Videos :octocat: Rohit Girdhar, Alaaeldin El-Nouby, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, Ishan Misra
- A Unified Framework for Masked and Mask-Free Face Recognition via Feature Rectification :octocat: Shaozhe Hao, Chaofeng Chen, Zhenfang Chen, Kwan-Yee K. Wong
- Integral Migrating Pre-trained Transformer Encoder-decoders for Visual Object Detection Xiaosong Zhang, Feng Liu, Zhiliang Peng, Zonghao Guo, Fang Wan, Xiangyang Ji, Qixiang Ye
- SemMAE: Semantic-Guided Masking for Learning Masked Autoencoders Gang Li, Heliang Zheng, Daqing Liu, Bing Su, Changwen Zheng
- MaskViT: Masked Visual Pre-Training for Video Prediction :octocat: Agrim Gupta, Stephen Tian, Yunzhi Zhang, Jiajun Wu, Roberto Martín-Martín, Li Fei-Fei
- Masked World Models for Visual Control :octocat: Younggyo Seo, Danijar Hafner, Hao Liu, Fangchen Liu, Stephen James, Kimin Lee, Pieter Abbeel
- Training Vision-Language Transformers from Captions Alone :octocat: Liangke Gui, Qiuyuan Huang, Alex Hauptmann, Yonatan Bisk, Jianfeng Gao
- Masked Generative Distillation :octocat: Zhendong Yang, Zhe Li, Mingqi Shao, Dachuan Shi, Zehuan Yuan, Chun Yuan
- k-means Mask Transformer :octocat: Qihang Yu, Huiyu Wang, Siyuan Qiao, Maxwell Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen
- Bootstrapped Masked Autoencoders for Vision BERT Pretraining :octocat: Xiaoyi Dong, Jianmin Bao, Ting Zhang, Dongdong Chen, Weiming Zhang, Lu Yuan, Dong Chen, Fang Wen, Nenghai Yu
- SatMAE: Pre-training Transformers for Temporal and Multi-Spectral Satellite Imagery :octocat: Yezhen Cong, Samar Khanna, Chenlin Meng, Patrick Liu, Erik Rozi, Yutong He, Marshall Burke, David B. Lobell, Stefano Ermon
- Contrastive Masked Autoencoders are Stronger Vision Learners Zhicheng Huang, Xiaojie Jin, Chengze Lu, Qibin Hou, Ming-Ming Cheng, Dongmei Fu, Xiaohui Shen, Jiashi Feng
- SdAE: Self-distillated Masked Autoencoder :octocat: Yabo Chen, Yuchen Liu, Dongsheng Jiang, Xiaopeng Zhang, Wenrui Dai, Hongkai Xiong, Qi Tian
- Less is More: Consistent Video Depth Estimation with Masked Frames Modeling Yiran Wang, Zhiyu Pan, Xingyi Li, Zhiguo Cao, Ke Xian, Jianming Zhang
- Masked Vision and Language Modeling for Multi-modal Representation Learning Gukyeong Kwon, Zhaowei Cai, Avinash Ravichandran, Erhan Bas, Rahul Bhotika, Stefano Soatto
- Understanding Masked Image Modeling via Learning Occlusion Invariant Feature Xiangwen Kong, Xiangyu Zhang
- BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers :octocat: Zhiliang Peng, Li Dong, Hangbo Bao, Qixiang Ye, Furu Wei
- MILAN: Masked Image Pretraining on Language Assisted Representation :octocat: Zejiang Hou, Fei Sun, Yen-Kuang Chen, Yuan Xie, Sun-Yuan Kung
- Open-Vocabulary Panoptic Segmentation with MaskCLIP Zheng Ding, Jieke Wang, Zhuowen Tu
- VLMAE: Vision-Language Masked Autoencoder Sunan He, Taian Guo, Tao Dai, Ruizhi Qiao, Chen Wu, Xiujun Shu, Bo Ren
- MaskCLIP: Masked Self-Distillation Advances Contrastive Language-Image Pretraining Xiaoyi Dong, Yinglin Zheng, Jianmin Bao, Ting Zhang, Dongdong Chen, Hao Yang, Ming Zeng, Weiming Zhang, Lu Yuan, Dong Chen, Fang Wen, Nenghai Yu
- Masked Autoencoders Enable Efficient Knowledge Distillers :octocat: Yutong Bai, Zeyu Wang, Junfei Xiao, Chen Wei, Huiyu Wang, Alan Yuille, Yuyin Zhou, Cihang Xie
- Multi-Modal Masked Autoencoders for Medical Vision-and-Language Pre-Training :octocat: Zhihong Chen, Yuhao Du, Jinpeng Hu, Yang Liu, Guanbin Li, Xiang Wan, Tsung-Hui Chang
- MetaMask: Revisiting Dimensional Confounder for Self-Supervised Learning :octocat: Jiangmeng Li, Wenwen Qiang, Yanan Zhang, Wenyi Mo, Changwen Zheng, Bing Su, Hui Xiong
- NamedMask: Distilling Segmenters from Complementary Foundation Models :octocat: Gyungin Shin, Weidi Xie, Samuel Albanie
- Exploring Target Representations for Masked Autoencoders Xingbin Liu, Jinghao Zhou, Tao Kong, Xianming Lin, Rongrong Ji
- Self-Supervised Masked Convolutional Transformer Block for Anomaly Detection :octocat: Neelu Madan, Nicolae-Catalin Ristea, Radu Tudor Ionescu, Kamal Nasrollahi, Fahad Shahbaz Khan, Thomas B. Moeslund, Mubarak Shah
- Exploring The Role of Mean Teachers in Self-supervised Masked Auto-Encoders Youngwan Lee, Jeffrey Willette, Jonghee Kim, Juho Lee, Sung Ju Hwang
- Self-Distillation for Further Pre-training of Transformers Seanie Lee, Minki Kang, Juho Lee, Sung Ju Hwang, Kenji Kawaguchi
- MAMO: Masked Multimodal Modeling for Fine-Grained Vision-Language Representation Learning Zijia Zhao, Longteng Guo, Xingjian He, Shuai Shao, Zehuan Yuan, Jing Liu
- It Takes Two: Masked Appearance-Motion Modeling for Self-supervised Video Transformer Pre-training Yuxin Song, Min Yang, Wenhao Wu, Dongliang He, Fu Li, Jingdong Wang
- Exploring Long-Sequence Masked Autoencoders :octocat: Ronghang Hu, Shoubhik Debnath, Saining Xie, Xinlei Chen
- M3Video: Masked Motion Modeling for Self-Supervised Video Representation Learning Xinyu Sun, Peihao Chen, Liangwei Chen, Thomas H. Li, Mingkui Tan, Chuang Gan
- EVEREST: Efficient Masked Video Autoencoder by Removing Redundant Spatiotemporal Tokens Sunil Hwang, Jaehong Yoon, Youngwan Lee, Sung Ju Hwang
- Ensemble Learning using Transformers and Convolutional Networks for Masked Face Recognition :octocat: Mohammed R. Al-Sinan, Aseel F. Haneef, Hamzah Luqman
- MOVE: Unsupervised Movable Object Segmentation and Detection Adam Bielski, Paolo Favaro
- Denoising Masked AutoEncoders are Certifiable Robust Vision Learners :octocat: Quanlin Wu, Hang Ye, Yuntian Gu, Huishuai Zhang, Liwei Wang, Di He
- How Mask Matters: Towards Theoretical Understandings of Masked Autoencoders :octocat: Qi Zhang, Yifei Wang, Yisen Wang
- MultiMAE: Multi-modal Multi-task Masked Autoencoders :octocat: 🌐 Roman Bachmann, David Mizrahi, Andrei Atanov, Amir Zamir
- A Unified View of Masked Image Modeling :octocat: Zhiliang Peng, Li Dong, Hangbo Bao, Qixiang Ye, Furu Wei
- i-MAE: Are Latent Representations in Masked Autoencoders Linearly Separable? 🌐 :octocat: Kevin Zhang, Zhiqiang Shen
- MixMask: Revisiting Masked Siamese Self-supervised Learning in Asymmetric Distance :octocat: Kirill Vishniakov, Eric Xing, Zhiqiang Shen
- DiffEdit: Diffusion-based semantic image editing with mask guidance Guillaume Couairon, Jakob Verbeek, Holger Schwenk, Matthieu Cord
- Masked Modeling Duo: Learning Representations by Encouraging Both Networks to Model the Input Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, Noboru Harada, Kunio Kashino
- A simple, efficient and scalable contrastive masked autoencoder for learning visual representations :octocat: Shlok Mishra, Joshua Robinson, Huiwen Chang, David Jacobs, Aaron Sarna, Aaron Maschinot, Dilip Krishnan
- Siamese Transition Masked Autoencoders as Uniform Unsupervised Visual Anomaly Detector Haiming Yao, Xue Wang, Wenyong Yu
- MaskTune: Mitigating Spurious Correlations by Forcing to Explore :octocat: Saeid Asgari Taghanaki, Aliasghar Khani, Fereshte Khani, Ali Gholami, Linh Tran, Ali Mahdavi-Amiri, Ghassan Hamarneh
- Exploring the Limits of Masked Visual Representation Learning at Scale :octocat: Yuxin Fang, Wen Wang, Binhui Xie, Quan Sun, Ledell Wu, Xinggang Wang, Tiejun Huang, Xinlong Wang, Yue Cao
- MAGE: MAsked Generative Encoder to Unify Representation Learning and Image Synthesis :octocat: Tianhong Li, Huiwen Chang, Shlok Kumar Mishra, Han Zhang, Dina Katabi, Dilip Krishnan
- Stare at What You See: Masked Image Modeling without Reconstruction :octocat: Hongwei Xue, Peng Gao, Hongyang Li, Yu Qiao, Hao Sun, Houqiang Li, Jiebo Luo
- Mask-based Latent Reconstruction for Reinforcement Learning :octocat: Tao Yu, Zhizheng Zhang, Cuiling Lan, Yan Lu, Zhibo Chen
- AdaMAE: Adaptive Masking for Efficient Spatiotemporal Learning with Masked Autoencoders :octocat: Wele Gedara Chaminda Bandara, Naman Patel, Ali Gholami, Mehdi Nikkhah, Motilal Agrawal, Vishal M. Patel
- Efficient Video Representation Learning via Masked Video Modeling with Motion-centric Token Selection :octocat: Sunil Hwang, Jaehong Yoon, Youngwan Lee, Sung Ju Hwang
- Contrastive Masked Autoencoders for Self-Supervised Video Hashing :octocat: Yuting Wang, Jinpeng Wang, Bin Chen, Ziyun Zeng, Shutao Xia
- MCVD: Masked Conditional Video Diffusion for Prediction, Generation, and Interpolation 🌐 :octocat: Vikram Voleti, Alexia Jolicoeur-Martineau, Christopher Pal
- MAEDAY: MAE for few and zero shot AnomalY-Detection :octocat: Eli Schwartz, Assaf Arbelle, Leonid Karlinsky, Sivan Harary, Florian Scheidegger, Sivan Doveh, Raja Giryes
- What's Behind the Mask: Estimating Uncertainty in Image-to-Image Problems Gilad Kutiel, Regev Cohen, Michael Elad, Daniel Freedman
- Good helper is around you: Attention-driven Masked Image Modeling Jie Gui, Zhengqi Liu, Hao Luo
- Scaling Language-Image Pre-training via Masking Yanghao Li, Haoqi Fan, Ronghang Hu, Christoph Feichtenhofer, Kaiming He
- Learning Imbalanced Data with Vision Transformers :octocat: Zhengzhuo Xu, Ruikang Liu, Shuo Yang, Zenghao Chai, Chun Yuan
- Masked Contrastive Pre-Training for Efficient Video-Text Retrieval Fangxun Shu, Biaolong Chen, Yue Liao, Shuwen Xiao, Wenyu Sun, Xiaobo Li, Yousong Zhu, Jinqiao Wang, Si Liu
- MIC: Masked Image Consistency for Context-Enhanced Domain Adaptation :octocat: Lukas Hoyer, Dengxin Dai, Haoran Wang, Luc Van Gool
- Masked Video Distillation: Rethinking Masked Feature Modeling for Self-supervised Video Representation Learning 🌐 Rui Wang, Dongdong Chen, Zuxuan Wu, Yinpeng Chen, Xiyang Dai, Mengchen Liu, Lu Yuan, Yu-Gang Jiang
- MAGVIT: Masked Generative Video Transformer 🌐 Lijun Yu, Yong Cheng, Kihyuk Sohn, José Lezama, Han Zhang, Huiwen Chang, Alexander G. Hauptmann, Ming-Hsuan Yang, Yuan Hao, Irfan Essa, Lu Jiang
- FastMIM: Expediting Masked Image Modeling Pre-training for Vision :octocat: Jianyuan Guo, Kai Han, Han Wu, Yehui Tang, Yunhe Wang, Chang Xu
- Efficient Self-supervised Learning with Contextualized Target Representations for Vision, Speech and Language :octocat: Alexei Baevski, Arun Babu, Wei-Ning Hsu, Michael Auli
- Swin MAE: Masked Autoencoders for Small Datasets Zi'an Xu, Yin Dai, Fayu Liu, Weibing Chen, Yue Liu, Lifu Shi, Sheng Liu, Yuhang Zhou
- Semi-MAE: Masked Autoencoders for Semi-supervised Vision Transformers Haojie Yu, Kang Zhao, Xiaoming Xu
- MS-DINO: Efficient Distributed Training of Vision Transformer Foundation Model in Medical Domain through Masked Sampling Sangjoon Park, Ik-Jae Lee, Jun Won Kim, Jong Chul Ye
- TinyMIM: An Empirical Study of Distilling MIM Pre-trained Models :octocat: Sucheng Ren, Fangyun Wei, Zheng Zhang, Han Hu
- ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders :octocat: Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie
- Disjoint Masking with Joint Distillation for Efficient Masked Image Modeling :octocat: Xin Ma, Chang Liu, Chunyu Xie, Long Ye, Yafeng Deng, Xiangyang Ji
- Masked Siamese ConvNets: Towards an Effective Masking Strategy for General-purpose Siamese Networks Li Jing, Jiachen Zhu, Yann LeCun
- Efficient Masked Autoencoders with Self-Consistency Zhaowen Li, Yousong Zhu, Zhiyang Chen, Wei Li, Chaoyang Zhao, Liwei Wu, Rui Zhao, Ming Tang, Jinqiao Wang
- PixMIM: Rethinking Pixel Reconstruction in Masked Image Modeling :octocat: Yuan Liu, Songyang Zhang, Jiacheng Chen, Kai Chen, Dahua Lin
- Rethinking Out-of-distribution (OOD) Detection: Masked Image Modeling is All You Need Jingyao Li, Pengguang Chen, Shaozuo Yu, Zexin He, Shu Liu, Jiaya Jia
- Masked Image Modeling with Denoising Contrast :octocat: Kun Yi, Yixiao Ge, Xiaotong Li, Shusheng Yang, Dian Li, Jianping Wu, Ying Shan, Xiaohu Qie
- Layer Grafted Pre-training: Bridging Contrastive Learning And Masked Image Modeling For Label-Efficient Representations :octocat: Ziyu Jiang, Yinpeng Chen, Mengchen Liu, Dongdong Chen, Xiyang Dai, Lu Yuan, Zicheng Liu, Zhangyang Wang
- MaskedKD: Efficient Distillation of Vision Transformers with Masked Images Seungwoo Son, Namhoon Lee, Jaeho Lee
- Generic-to-Specific Distillation of Masked Autoencoders :octocat: Wei Huang, Zhiliang Peng, Li Dong, Furu Wei, Jianbin Jiao, Qixiang Ye
- Masked Image Modeling with Local Multi-Scale Reconstruction :octocat: Haoqing Wang, Yehui Tang, Yunhe Wang, Jianyuan Guo, Zhi-Hong Deng, Kai Han
- StrucTexTv2: Masked Visual-Textual Prediction for Document Image Pre-training :octocat: Yuechen Yu, Yulin Li, Chengquan Zhang, Xiaoqiang Zhang, Zengyuan Guo, Xiameng Qin, Kun Yao, Junyu Han, Errui Ding, Jingdong Wang
- Masked Distillation with Receptive Tokens :octocat: Tao Huang, Yuan Zhang, Shan You, Fei Wang, Chen Qian, Jian Cao, Chang Xu
- DeepMIM: Deep Supervision for Masked Image Modeling :octocat: Sucheng Ren, Fangyun Wei, Samuel Albanie, Zheng Zhang, Han Hu
- 3D Masked Autoencoding and Pseudo-labeling for Domain Adaptive Segmentation of Heterogeneous Infant Brain MRI :octocat: Xuzhe Zhang, Yuhao Wu, Jia Guo, Jerod M. Rasmussen, Thomas G. O'Connor, Hyagriv N. Simhan, Sonja Entringer, Pathik D. Wadhwa, Claudia Buss, Cristiane S. Duarte, Andrea Jackowski, Hai Li, Jonathan Posner, Andrew F. Laine, Yun Wang
- The effectiveness of MAE pre-pretraining for billion-scale pretraining Mannat Singh, Quentin Duval, Kalyan Vasudev Alwala, Haoqi Fan, Vaibhav Aggarwal, Aaron Adcock, Armand Joulin, Piotr Dollár, Christoph Feichtenhofer, Ross Girshick, Rohit Girdhar, Ishan Misra
- VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking Limin Wang, Bingkun Huang, Zhiyu Zhao, Zhan Tong, Yinan He, Yi Wang, Yali Wang, Yu Qiao
- Scale-MAE: A Scale-Aware Masked Autoencoder for Multiscale Geospatial Representation Learning Colorado J. Reed, Ritwik Gupta, Shufan Li, Sarah Brockman, Christopher Funk, Brian Clipp, Kurt Keutzer, Salvatore Candido, Matt Uyttendaele, Trevor Darrell :octocat:
- Siamese Masked Autoencoders Agrim Gupta, Jiajun Wu, Jia Deng, Li Fei-Fei :octocat:
- MaskDiff: Modeling Mask Distribution with Diffusion Probabilistic Model for Few-Shot Instance Segmentation Minh-Quan Le, Tam V. Nguyen, Trung-Nghia Le, Thanh-Toan Do, Minh N. Do, Minh-Triet Tran
- DiffuMask: Synthesizing Images with Pixel-level Annotations for Semantic Segmentation Using Diffusion Models :octocat: Weijia Wu, Yuzhong Zhao, Mike Zheng Shou, Hong Zhou, Chunhua Shen
- Mixed Autoencoder for Self-supervised Visual Representation Learning Kai Chen, Zhili Liu, Lanqing Hong, Hang Xu, Zhenguo Li, Dit-Yan Yeung
- DropMAE: Masked Autoencoders with Spatial-Attention Dropout for Tracking Tasks :octocat: Qiangqiang Wu, Tianyu Yang, Ziquan Liu, Baoyuan Wu, Ying Shan, Antoni B. Chan
- MM-BSN: Self-Supervised Image Denoising for Real-World with Multi-Mask based on Blind-Spot Network :octocat: Dan Zhang, Fangfang Zhou, Yuwen Jiang, Zhengming Fu
- Hard Patches Mining for Masked Image Modeling :octocat: Haochen Wang, Kaiyou Song, Junsong Fan, Yuxi Wang, Jin Xie, Zhaoxiang Zhang
- SMAE: Few-shot Learning for HDR Deghosting with Saturation-Aware Masked Autoencoders Qingsen Yan, Song Zhang, Weiye Chen, Hao Tang, Yu Zhu, Jinqiu Sun, Luc Van Gool, Yanning Zhang
- FreMAE: Fourier Transform Meets Masked Autoencoders for Medical Image Segmentation Wenxuan Wang, Jing Wang, Chen Chen, Jianbo Jiao, Lichao Sun, Yuanxiu Cai, Shanshan Song, Jiangyun Li
- An Empirical Study of End-to-End Video-Language Transformers with Masked Visual Modeling :octocat: Tsu-Jui Fu, Linjie Li, Zhe Gan, Kevin Lin, William Yang Wang, Lijuan Wang, Zicheng Liu
- PMatch: Paired Masked Image Modeling for Dense Geometric Matching :octocat: Shengjie Zhu, Xiaoming Liu
- Medical supervised masked autoencoders: Crafting a better masking strategy and efficient fine-tuning schedule for medical image classification Jiawei Mao, Shujian Guo, Yuanqi Chang, Xuesong Yin, Binling Nie
- Maskomaly: Zero-Shot Mask Anomaly Segmentation Jan Ackermann, Christos Sakaridis, Fisher Yu
- Unsupervised Anomaly Detection in Medical Images Using Masked Diffusion Model :octocat: Hasan Iqbal, Umar Khalid, Jing Hua, Chen Chen
- Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles :octocat: Chaitanya Ryali, Yuan-Ting Hu, Daniel Bolya, Chen Wei, Haoqi Fan, Po-Yao Huang, Vaibhav Aggarwal, Arkabandhu Chowdhury, Omid Poursaeed, Judy Hoffman, Jitendra Malik, Yanghao Li, Christoph Feichtenhofer
- CM-MaskSD: Cross-Modality Masked Self-Distillation for Referring Image Segmentation Wenxuan Wang, Jing Liu, Xingjian He, Yisi Zhang, Chen Chen, Jiachen Shen, Yan Zhang, Jiangyun Li
- R-MAE: Regions Meet Masked Autoencoders :octocat: Duy-Kien Nguyen, Vaibhav Aggarwal, Yanghao Li, Martin R. Oswald, Alexander Kirillov, Cees G. M. Snoek, Xinlei Chen
- Exploring Effective Mask Sampling Modeling for Neural Image Compression Lin Liu, Mingming Zhao, Shanxin Yuan, Wenlong Lyu, Wengang Zhou, Houqiang Li, Yanfeng Wang, Qi Tian
- Automatic Image Blending Algorithm Based on SAM and DINO Haochen Xue, Mingyu Jin, Chong Zhang, Yuxuan Huang, Qian Weng, Xiaobo Jin
- A Survey on Masked Autoencoder for Visual Self-supervised Learning Chaoning Zhang, Chenshuang Zhang, Junha Song, John Seon Keun Yi, In So Kweon
- MGMAE: Motion Guided Masking for Video Masked Autoencoding :octocat: Bingkun Huang, Zhiyu Zhao, Guozhen Zhang, Yu Qiao, Limin Wang
- Masked Autoencoders are Efficient Class Incremental Learners :octocat: Jiang-Tian Zhai, Xialei Liu, Andrew D. Bagdanov, Ke Li, Ming-Ming Cheng
- Motion-Guided Masking for Spatiotemporal Representation Learning David Fan, Jue Wang, Shuai Liao, Yi Zhu, Vimal Bhat, Hector Santos-Villalobos, Rohith MV, Xinyu Li
- CL-MAE: Curriculum-Learned Masked Autoencoders :octocat: Neelu Madan, Nicolae-Catalin Ristea, Kamal Nasrollahi, Thomas B. Moeslund, Radu Tudor Ionescu
- Contrastive Feature Masking Open-Vocabulary Vision Transformer Dahun Kim, Anelia Angelova, Weicheng Kuo
- Masked Autoencoders are Scalable Learners of Cellular Morphology :octocat: Oren Kraus, Kian Kenyon-Dean, Saber Saberian, Maryam Fallah, Peter McLean, Jess Leung, Vasudev Sharma, Ayla Khan, Jia Balakrishnan, Safiye Celik, Maciej Sypetkowski, Chi Vicky Cheng, Kristen Morse, Maureen Makes, Ben Mabey, Berton Earnshaw
- Diffusion Models as Masked Audio-Video Learners Elvis Nunez, Yanzi Jin, Mohammad Rastegari, Sachin Mehta, Maxwell Horton
- Limited Data, Unlimited Potential: A Study on ViTs Augmented by Masked Autoencoders :octocat: Srijan Das, Tanmay Jain, Dominick Reilly, Pranav Balaji, Soumyajit Karmakar, Shyam Marjit, Xiang Li, Abhijit Das, Michael S. Ryoo
- Concatenated Masked Autoencoders as Spatial-Temporal Learner :octocat: Zhouqiang Jiang, Bowen Wang, Tong Xiang, Zhaofeng Niu, Hong Tang, Guangshun Li, Liangzhi Li
- Asymmetric Masked Distillation for Pre-Training Small Foundation Models :octocat: Zhiyu Zhao, Bingkun Huang, Sen Xing, Gangshan Wu, Yu Qiao, Limin Wang
- MIMIR: Masked Image Modeling for Mutual Information-based Adversarial Robustness Xiaoyun Xu, Shujian Yu, Jingzheng Wu, Stjepan Picek
- Continual-MAE: Adaptive Distribution Masked Autoencoders for Continual Test-Time Adaptation Jiaming Liu, Ran Xu, Senqiao Yang, Renrui Zhang, Qizhe Zhang, Zehui Chen, Yandong Guo, Shanghang Zhang
- MaskCRT: Masked Conditional Residual Transformer for Learned Video Compression Yi-Hsin Chen, Hong-Sheng Xie, Cheng-Wei Chen, Zong-Lin Gao, Wen-Hsiao Peng, Martin Benjak, Jörn Ostermann
- Rethinking Patch Dependence for Masked Autoencoders 🌐 Letian Fu, Long Lian, Renhao Wang, Baifeng Shi, Xudong Wang, Adam Yala, Trevor Darrell, Alexei A. Efros, Ken Goldberg
- VideoPrism: A Foundational Visual Encoder for Video Understanding Long Zhao, Nitesh B. Gundavarapu, Liangzhe Yuan, Hao Zhou, Shen Yan, Jennifer J. Sun, Luke Friedman, Rui Qian, Tobias Weyand, Yue Zhao, Rachel Hornung, Florian Schroff, Ming-Hsuan Yang, David A. Ross, Huisheng Wang, Hartwig Adam, Mikhail Sirotenko, Ting Liu, Boqing Gong
- Attention-Guided Masked Autoencoders For Learning Image Representations Leon Sick, Dominik Engel, Pedro Hermosilla, Timo Ropinski
- VideoMAC: Video Masked Autoencoders Meet ConvNets Gensheng Pei, Tao Chen, Xiruo Jiang, Huafeng Liu, Zeren Sun, Yazhou Yao
- Masked Capsule Autoencoders Miles Everett, Mingjun Zhong, Georgios Leontidis
- FocusMAE: Gallbladder Cancer Detection from Ultrasound Videos with Focused Masked Autoencoders Soumen Basu, Mayuna Gupta, Chetan Madan, Pankaj Gupta, Chetan Arora
- DailyMAE: Towards Pretraining Masked Autoencoders in One Day Jiantao Wu, Shentong Mo, Sara Atito, Zhenhua Feng, Josef Kittler, Muhammad Awais
- Label-free Anomaly Detection in Aerial Agricultural Images with Masked Image Modeling Sambal Shikhar, Anupam Sobti
- MiM: Mask in Mask Self-Supervised Pre-Training for 3D Medical Image Analysis Jiaxin Zhuang, Linshan Wu, Qiong Wang, Varut Vardhanabhuti, Lin Luo, Hao Chen
- MaskMatch: Boosting Semi-Supervised Learning Through Mask Autoencoder-Driven Feature Learning Wenjin Zhang, Keyi Li, Sen Yang, Chenyang Gao, Wanzhao Yang, Sifan Yuan, Ivan Marsic
- CorrMAE: Pre-training Correspondence Transformers with Masked Autoencoder Tangfei Liao, Xiaoqin Zhang, Guobao Xiao, Min Li, Tao Wang, Mang Ye
- An Image is Worth More Than 16x16 Patches: Exploring Transformers on Individual Pixels Duy-Kien Nguyen, Mahmoud Assran, Unnat Jain, Martin R. Oswald, Cees G. M. Snoek, Xinlei Chen
- Efficient Image Pre-Training with Siamese Cropped Masked Autoencoders :octocat: Alexandre Eymaël, Renaud Vandeghen, Anthony Cioppa, Silvio Giancola, Bernard Ghanem, Marc Van Droogenbroeck
## Audio
- MAE-AST: Masked Autoencoding Audio Spectrogram Transformer :octocat: Alan Baade, Puyuan Peng, David Harwath
- Group masked autoencoder based density estimator for audio anomaly detection Ritwik Giri, Fangzhou Cheng, Karim Helwani, Srikanth V. Tenneti, Umut Isik, Arvindh Krishnaswamy
- Masked Autoencoders that Listen :octocat: Po-Yao (Bernie) Huang, Hu Xu, Juncheng Li, Alexei Baevski, Michael Auli, Wojciech Galuba, Florian Metze, Christoph Feichtenhofer
- Efficient Vision-Language Pretraining with Visual Concepts and Hierarchical Alignment :octocat: Mustafa Shukor, Guillaume Couairon, Matthieu Cord
- Contrastive Audio-Visual Masked Autoencoder Yuan Gong, Andrew Rouditchenko, Alexander H. Liu, David Harwath, Leonid Karlinsky, Hilde Kuehne, James Glass
- Masked Spectrogram Modeling using Masked Autoencoders for Learning General-purpose Audio Representation :octocat: Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, Noboru Harada, Kunio Kashino
- Masked Lip-Sync Prediction by Audio-Visual Contextual Exploitation in Transformers 🌐 Yasheng Sun, Hang Zhou, Kaisiyuan Wang, Qianyi Wu, Zhibin Hong, Jingtuo Liu, Errui Ding, Jingdong Wang, Ziwei Liu, Hideki Koike
- Audiovisual Masked Autoencoders Mariana-Iuliana Georgescu, Eduardo Fonseca, Radu Tudor Ionescu, Mario Lucic, Cordelia Schmid, Anurag Arnab
- HiCMAE: Hierarchical Contrastive Masked Autoencoder for Self-Supervised Audio-Visual Emotion Recognition Licai Sun, Zheng Lian, Bin Liu, Jianhua Tao
- Scaling up masked audio encoder learning for general audio classification Heinrich Dinkel, Zhiyong Yan, Yongqing Wang, Junbo Zhang, Yujun Wang, Bin Wang
- Genuine-Focused Learning using Mask AutoEncoder for Generalized Fake Audio Detection Xiaopeng Wang, Ruibo Fu, Zhengqi Wen, Zhiyong Wang, Yuankun Xie, Yukun Liu, Jianhua Tao, Xuefei Liu, Yongwei Li, Xin Qi, Yi Lu, Shuchen Shi
- AVFF: Audio-Visual Feature Fusion for Video Deepfake Detection Trevine Oorloff, Surya Koppisetti, Nicolò Bonettini, Divyaraj Solanki, Ben Colman, Yaser Yacoob, Ali Shahriyari, Gaurav Bharaj
## Graph
- MGAE: Masked Autoencoders for Self-Supervised Learning on Graphs :octocat: Qiaoyu Tan, Ninghao Liu, Xiao Huang, Rui Chen, Soo-Hyun Choi, Xia Hu
- Graph Masked Autoencoder with Transformers :octocat: Sixiao Zhang, Hongxu Chen, Haoran Yang, Xiangguo Sun, Philip S. Yu, Guandong Xu
- What's Behind the Mask: Understanding Masked Graph Modeling for Graph Autoencoders :octocat: Jintang Li, Ruofan Wu, Wangbin Sun, Liang Chen, Sheng Tian, Liang Zhu, Changhua Meng, Zibin Zheng, Weiqiang Wang
- GraphMAE: Self-Supervised Masked Graph Autoencoders :octocat: Zhenyu Hou, Xiao Liu, Yukuo Cen, Yuxiao Dong, Hongxia Yang, Chunjie Wang, Jie Tang
- Heterogeneous Graph Masked Autoencoders :octocat: Yijun Tian, Kaiwen Dong, Chunhui Zhang, Chuxu Zhang, Nitesh V. Chawla
- Graph Masked Autoencoder Enhanced Predictor for Neural Architecture Search :octocat: Kun Jing, Jungang Xu, Pengfei Li
- Bi-channel Masked Graph Autoencoders for Spatially Resolved Single-cell Transcriptomics Data Imputation Hongzhi Wen, Wei Jin, Jiayuan Ding, Christopher Xu, Yuying Xie, Jiliang Tang
- Masked Graph Auto-Encoder Constrained Graph Pooling :octocat: Chuang Liu, Yibing Zhan, Xueqi Ma, Dapeng Tao, Bo Du, Wenbin Hu
- BatmanNet: Bi-branch Masked Graph Transformer Autoencoder for Molecular Representation Zhen Wang, Zheng Feng, Yanjun Li, Bowen Li, Yongrui Wang, Chulin Sha, Min He, Xiaolin Li
- Jointly Learning Visual and Auditory Speech Representations from Raw Data Alexandros Haliassos, Pingchuan Ma, Rodrigo Mira, Stavros Petridis, Maja Pantic
- S2GAE: Self-Supervised Graph Autoencoders are Generalizable Learners with Graph Masking :octocat: Qiaoyu Tan, Ninghao Liu, Xiao Huang, Soo-Hyun Choi, Li Li, Rui Chen, Xia Hu
- GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner :octocat: Zhenyu Hou, Yufei He, Yukuo Cen, Xiao Liu, Yuxiao Dong, Evgeny Kharlamov, Jie Tang
- SeeGera: Self-supervised Semi-implicit Graph Variational Auto-encoders with Masking :octocat: Xiang Li, Tiandi Ye, Caihua Shan, Dongsheng Li, Ming Gao
- GiGaMAE: Generalizable Graph Masked Autoencoder via Collaborative Latent Space Reconstruction :octocat: Yucheng Shi, Yushun Dong, Qiaoyu Tan, Jundong Li, Ninghao Liu
- Rethinking Tokenizer and Decoder in Masked Graph Modeling for Molecules :octocat: Zhiyuan Liu, Yaorui Shi, An Zhang, Enzhi Zhang, Kenji Kawaguchi, Xiang Wang, Tat-Seng Chua
- GPT-ST: Generative Pre-Training of Spatio-Temporal Graph Neural Networks :octocat: Zhonghang Li, Lianghao Xia, Yong Xu, Chao Huang
- GAMC: An Unsupervised Method for Fake News Detection using Graph Autoencoder with Masking Shu Yin, Chao Gao, Zhen Wang
- Masked AutoEncoder for Graph Clustering without Pre-defined Cluster Number k Yuanchi Ma, Hui He, Zhongxiang Lei, Zhendong Niu
- Graph Transformer GANs with Graph Masked Modeling for Architectural Layout Generation Hao Tang, Ling Shao, Nicu Sebe, Luc Van Gool
- Masked Graph Autoencoder with Non-discrete Bandwidths Ziwen Zhao, Yuhua Li, Yixiong Zou, Jiliang Tang, Ruixuan Li
- Rethinking Graph Masked Autoencoders through Alignment and Uniformity :octocat: Liang Wang, Xiang Tao, Qiang Liu, Shu Wu, Liang Wang
- UGMAE: A Unified Framework for Graph Masked Autoencoders Yijun Tian, Chuxu Zhang, Ziyi Kou, Zheyuan Liu, Xiangliang Zhang, Nitesh V. Chawla
- SELECTOR: Heterogeneous graph network with convolutional masked autoencoder for multimodal robust prediction of cancer survival Liangrui Pan, Yijun Peng, Yan Li, Xiang Wang, Wenjuan Liu, Liwen Xu, Qingchun Liang, Shaoliang Peng
- Exploring Task Unification in Graph Representation Learning via Generative Approach Yulan Hu, Sheng Ouyang, Zhirui Yang, Ge Chen, Junchen Wan, Xiao Wang, Yong Liu
- Generative-Enhanced Heterogeneous Graph Contrastive Learning Yu Wang, Lei Sang, Yi Zhang, Yiwen Zhang
- Where to Mask: Structure-Guided Masking for Graph Masked Autoencoders Chuang Liu, Yuyao Wang, Yibing Zhan, Xueqi Ma, Dapeng Tao, Jia Wu, Wenbin Hu
## Point Cloud
- Masked Discrimination for Self-Supervised Learning on Point Clouds :octocat: Haotian Liu, Mu Cai, Yong Jae Lee
- Voxel-MAE: Masked Autoencoders for Pre-training Large-scale Point Clouds :octocat: Chen Min, Dawei Zhao, Liang Xiao, Yiming Nie, Bin Dai
- Masked Autoencoders for Self-Supervised Learning on Automotive Point Clouds :octocat: Georg Hess, Johan Jaxing, Elias Svensson, David Hagerman, Christoffer Petersson, Lennart Svensson
- Learning 3D Representations from 2D Pre-trained Models via Image-to-Point Masked Autoencoders :octocat: Renrui Zhang, Liuhui Wang, Yu Qiao, Peng Gao, Hongsheng Li
- BEV-MAE: Bird's Eye View Masked Autoencoders for Outdoor Point Cloud Pre-training Zhiwei Lin, Yongtao Wang
- PiMAE: Point Cloud and Image Interactive Masked Autoencoders for 3D Object Detection :octocat: Anthony Chen, Kevin Zhang, Renrui Zhang, Zihan Wang, Yuheng Lu, Yandong Guo, Shanghang Zhang
- Point-M2AE: Multi-scale Masked Autoencoders for Hierarchical Point Cloud Pre-training :octocat: Renrui Zhang, Ziyu Guo, Peng Gao, Rongyao Fang, Bin Zhao, Dong Wang, Yu Qiao, Hongsheng Li
- GD-MAE: Generative Decoder for MAE Pre-training on LiDAR Point Clouds :octocat: Honghui Yang, Tong He, Jiaheng Liu, Hua Chen, Boxi Wu, Binbin Lin, Xiaofei He, Wanli Ouyang
- MAELi -- Masked Autoencoder for Large-Scale LiDAR Point Clouds Georg Krispel, David Schinagl, Christian Fruhwirth-Reisinger, Horst Possegger, Horst Bischof
- GeoMAE: Masked Geometric Target Prediction for Self-supervised Point Cloud Pre-Training :octocat: Xiaoyu Tian, Haoxi Ran, Yue Wang, Hang Zhao
- Masked Autoencoders for Point Cloud Self-Supervised Learning :octocat: Yatian Pang, Wenxiao Wang, Francis E.H. Tay, Wei Liu, Yonghong Tian, Li Yuan
- Masked Autoencoder for Self-Supervised Pre-Training on Lidar Point Clouds :octocat: Georg Hess, Johan Jaxing, Elias Svensson, David Hagerman, Christoffer Petersson, Lennart Svensson
- Joint-MAE: 2D-3D Joint Masked Autoencoders for 3D Point Cloud Pre-training Ziyu Guo, Renrui Zhang, Longtian Qiu, Xianzhi Li, Pheng-Ann Heng
- Point Cloud Self-supervised Learning via 3D to Multi-view Masked Autoencoder Zhimin Chen, Yingwei Li, Longlong Jing, Liang Yang, Bing Li
- T-MAE: Temporal Masked Autoencoders for Point Cloud Representation Learning Weijie Wei, Fatemeh Karimi Nejadasl, Theo Gevers, Martin R. Oswald
- DiffPMAE: Diffusion Masked Autoencoders for Point Cloud Reconstruction Yanlong Li, Chamara Madarasingha, Kanchana Thilakarathna
- PAME: Self-Supervised Masked Autoencoder for No-Reference Point Cloud Quality Assessment Ziyu Shan, Yujie Zhang, Qi Yang, Haichen Yang, Yiling Xu, Shan Liu
- MaskLRF: Self-supervised Pretraining via Masked Autoencoding of Local Reference Frames for Rotation-invariant 3D Point Set Analysis :octocat: Takahiko Furuya
## Language (Omitted)
There has been a surge of language research built on this masking-and-predicting paradigm (e.g., BERT), so I do not list those works here.
## Miscellaneous
- Self-Guided Masked Autoencoders for Domain-Agnostic Self-Supervised Learning :octocat: Johnathan Xie, Yoonho Lee, Annie S. Chen, Chelsea Finn
- Masked Bayesian Neural Networks : Computation and Optimality Insung Kong, Dongyoon Yang, Jongjin Lee, Ilsang Ohn, Yongdai Kim
- How to Understand Masked Autoencoders Shuhao Cao, Peng Xu, David A. Clifton
- Towards Understanding Why Mask-Reconstruction Pretraining Helps in Downstream Tasks Jiachun Pan, Pan Zhou, Shuicheng Yan
- MET: Masked Encoding for Tabular Data Kushal Majmundar, Sachin Goyal, Praneeth Netrapalli, Prateek Jain
- Masked Self-Supervision for Remaining Useful Lifetime Prediction in Machine Tools Haoren Guo, Haiyue Zhu, Jiahui Wang, Vadakkepat Prahlad, Weng Khuen Ho, Tong Heng Lee
- MAR: Masked Autoencoders for Efficient Action Recognition :octocat: Zhiwu Qing, Shiwei Zhang, Ziyuan Huang, Xiang Wang, Yuehuan Wang, Yiliang Lv, Changxin Gao, Nong Sang
- MeshMAE: Masked Autoencoders for 3D Mesh Data Analysis :octocat: Yaqian Liang, Shanshan Zhao, Baosheng Yu, Jing Zhang, Fazhi He
- A Dual-Masked Auto-Encoder for Robust Motion Capture with Spatial-Temporal Skeletal Token Completion :octocat: Junkun Jiang, Jie Chen, Yike Guo
- [Survey] A Survey on Masked Autoencoder for Self-supervised Learning in Vision and Beyond Chaoning Zhang, Chenshuang Zhang, Junha Song, John Seon Keun Yi, Kang Zhang, In So Kweon
- Masked Imitation Learning: Discovering Environment-Invariant Modalities in Multimodal Demonstrations :octocat: Yilun Hao, Ruinan Wang, Zhangjie Cao, Zihan Wang, Yuchen Cui, Dorsa Sadigh
- Real-World Robot Learning with Masked Visual Pre-training :octocat: Ilija Radosavovic, Tete Xiao, Stephen James, Pieter Abbeel, Jitendra Malik, Trevor Darrell
- Self-supervised Video Representation Learning with Motion-Aware Masked Autoencoders :octocat: Haosen Yang, Deng Huang, Bin Wen, Jiannan Wu, Hongxun Yao, Yi Jiang, Xiatian Zhu, Zehuan Yuan
- MAEEG: Masked Auto-encoder for EEG Representation Learning Hsiang-Yun Sherry Chien, Hanlin Goh, Christopher M. Sandino, Joseph Y. Cheng
- Masked Autoencoding for Scalable and Generalizable Decision Making :octocat: Fangchen Liu, Hao Liu, Aditya Grover, Pieter Abbeel
- MHCCL: Masked Hierarchical Cluster-wise Contrastive Learning for Multivariate Time Series :octocat: Qianwen Meng, Hangwei Qian, Yong Liu, Yonghui Xu, Zhiqi Shen, Lizhen Cui
- Advancing Radiograph Representation Learning with Masked Record Modeling :octocat: Hong-Yu Zhou, Chenyu Lian, Liansheng Wang, Yizhou Yu
- FlowFormer++: Masked Cost Volume Autoencoding for Pretraining Optical Flow Estimation Xiaoyu Shi, Zhaoyang Huang, Dasong Li, Manyuan Zhang, Ka Chun Cheung, Simon See, Hongwei Qin, Jifeng Dai, Hongsheng Li
- Traj-MAE: Masked Autoencoders for Trajectory Prediction Hao Chen, Jiaze Wang, Kun Shao, Furui Liu, Jianye Hao, Chenyong Guan, Guangyong Chen, Pheng-Ann Heng
- Self-supervised Pre-training with Masked Shape Prediction for 3D Scene Understanding Li Jiang, Zetong Yang, Shaoshuai Shi, Vladislav Golyanik, Dengxin Dai, Bernt Schiele
- ReMasker: Imputing Tabular Data with Masked Autoencoding Tianyu Du, Luca Melis, Ting Wang
- CROMA: Remote Sensing Representations with Contrastive Radar-Optical Masked Autoencoders Anthony Fuller, Koreen Millard, James R. Green
- Masked Autoencoders Are Robust Neural Architecture Search Learners Yiming Hu, Xiangxiang Chu, Bo Zhang
- T4P: Test-Time Training of Trajectory Prediction via Masked Autoencoder and Actor-specific Token Memory Daehee Park, Jaeseok Jeong, Sung-Hoon Yoon, Jaewoo Jeong, Kuk-Jin Yoon
- Binary Noise for Binary Tasks: Masked Bernoulli Diffusion for Unsupervised Anomaly Detection Julia Wolleb, Florentin Bieder, Paul Friedrich, Peter Zhang, Alicia Durrer, Philippe C. Cattin
- Technical Report: Masked Skeleton Sequence Modeling for Learning Larval Zebrafish Behavior Latent Embeddings Lanxin Xu, Shuo Wang
- Masked Autoencoders are PDE Learners Anthony Zhou, Amir Barati Farimani
- Detecting Generative Parroting through Overfitting Masked Autoencoders Saeid Asgari Taghanaki, Joseph Lambourne
- NeRF-MAE: Masked AutoEncoders for Self-Supervised 3D Representation Learning for Neural Radiance Fields 🌐 Muhammad Zubair Irshad, Sergey Zakharov, Vitor Guizilini, Adrien Gaidon, Zsolt Kira, Rares Ambrus
- SCE-MAE: Selective Correspondence Enhancement with Masked Autoencoder for Self-Supervised Landmark Estimation Kejia Yin, Varshanth R. Rao, Ruowei Jiang, Xudong Liu, Parham Aarabi, David B. Lindell