# awesome-mixup
This repo is a collection of AWESOME things about mixup, including papers, code, etc. Feel free to star and fork. We borrow a lot from openmixup, awesome-domain-adaptation, and PromptPapers.

Some of these papers are summarized in tables in a Google Sheet. Please find the link here: Summary
<!-- * **[]()** x. x. [[code](x)] *x* -->
## Basics

This section collects papers that explore improvements to the original mixup.
* **mixup'18** mixup: Beyond Empirical Risk Minimization. ICLR 2018. [code] *Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz.*
* **Manifold Mixup'19** Manifold Mixup: Better Representations by Interpolating Hidden States. ICML 2019. [code] *Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, Yoshua Bengio.*
* **AdaMixup'19** MixUp as Locally Linear Out-Of-Manifold Regularization. AAAI 2019. *Hongyu Guo, Yongyi Mao, Richong Zhang.*
* **CutMix'19** CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features. ICCV 2019. [code] *Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, Youngjoon Yoo.*
* **AugMix'20** AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty. ICLR 2020. [code] *Dan Hendrycks, Norman Mu, Ekin D. Cubuk, Barret Zoph, Justin Gilmer, Balaji Lakshminarayanan.*
* **PuzzleMix'20** Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup. ICML 2020. [code] *Jang-Hyun Kim, Wonho Choo, Hyun Oh Song.*
* **SaliencyMix'21** SaliencyMix: A Saliency Guided Data Augmentation Strategy for Better Regularization. ICLR 2021. [code] *A F M Shahab Uddin, Mst. Sirazam Monira, Wheemyung Shin, TaeChoong Chung, Sung-Ho Bae.*
* **CoMixup'21** Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity. ICLR 2021. [code] *Jang-Hyun Kim, Wonho Choo, Hosan Jeong, Hyun Oh Song.*
* **NFM'22** Noisy Feature Mixup. ICLR 2022. [code] *Soon Hoe Lim, N. Benjamin Erichson, Francisco Utrera, Winnie Xu, Michael W. Mahoney.*
* **CsaNMT'22** Learning to Generalize to More: Continuous Semantic Augmentation for Neural Machine Translation. ACL 2022. [code] *Xiangpeng Wei, Heng Yu, Yue Hu, Rongxiang Weng, Weihua Luo, Jun Xie, Rong Jin.*
* **AlignMix'22** AlignMix: Improving Representations by Interpolating Aligned Features. CVPR 2022. [code] *Shashanka Venkataramanan, Ewa Kijak, Laurent Amsaleg, Yannis Avrithis.*
* **TransMix'22** TransMix: Attend to Mix for Vision Transformers. CVPR 2022. [code] *Jie-Neng Chen, Shuyang Sun, Ju He, Philip Torr, Alan Yuille, Song Bai.*
* **GenLabel'22** GenLabel: Mixup Relabeling using Generative Models. ICML 2022. [code] *Jy-yong Sohn, Liang Shang, Hongxu Chen, Jaekyun Moon, Dimitris Papailiopoulos, Kangwook Lee.*
* **VLMixer'22** VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix. ICML 2022. [code] *Teng Wang, Wenhao Jiang, Zhichao Lu, Feng Zheng, Ran Cheng, Chengguo Yin, Ping Luo.*
* **AutoMix'22** AutoMix: Unveiling the Power of Mixup for Stronger Classifiers. ECCV 2022. [code] *Zicheng Liu, Siyuan Li, Di Wu, Zihan Liu, Zhiyuan Chen, Lirong Wu, Stan Z. Li.*
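As a quick illustration of the basic recipe the papers above build on, here is a minimal NumPy sketch of input mixup (mixup'18): sample a mixing weight from a Beta distribution and convex-combine random pairs of examples and their labels. The function name `mixup_batch` is our own shorthand, not taken from any of the listed codebases.

```python
import numpy as np

def mixup_batch(x, y, alpha=1.0, rng=None):
    """Input mixup sketch: convex-combine random pairs within a batch.

    x: (N, ...) inputs; y: (N, C) one-hot labels.
    Returns mixed inputs, mixed (soft) labels, and the mixing weight.
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # mixing weight ~ Beta(alpha, alpha)
    perm = rng.permutation(len(x))      # random pairing within the batch
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix, lam
```

The later entries in this section mostly change *what* gets mixed (hidden states, image patches, saliency-weighted regions) or *how* the mixing policy is chosen, while keeping this label-interpolation idea.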
## Contrastive Learning with Mixup

* **MixCo'20** MixCo: Mix-up Contrastive Learning for Visual Representation. NeurIPSW 2020. [code] *Sungnyun Kim, Gihun Lee, Sangmin Bae, Se-Young Yun.*
* **MoCHi'20** Hard Negative Mixing for Contrastive Learning. NeurIPS 2020. [code] *Yannis Kalantidis, Mert Bulent Sariyildiz, Noe Pion, Philippe Weinzaepfel, Diane Larlus.*
* **i-Mix'21** i-Mix: A Domain-Agnostic Strategy for Contrastive Representation Learning. ICLR 2021. [code] *Kibok Lee, Yian Zhu, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin, Honglak Lee.*
* **FT'21** Improving Contrastive Learning by Visualizing Feature Transformation. ICCV 2021 (Oral). [code] *Rui Zhu, Bingchen Zhao, Jingen Liu, Zhenglong Sun, Chang Wen Chen.*
* **Core-tuning'21** Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning. NeurIPS 2021. [code] *Yifan Zhang, Bryan Hooi, Dapeng Hu, Jian Liang, Jiashi Feng.*
* **MixSiam'22** MixSiam: A Mixture-based Approach to Self-supervised Representation Learning. AAAI 2022. *Xiaoyang Guo, Tianhao Zhao, Yutian Lin, Bo Du.*
* **Un-Mix'22** Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation Learning. AAAI 2022. [code] *Zhiqiang Shen, Zechun Liu, Zhuang Liu, Marios Savvides, Trevor Darrell, Eric Xing.*
* **Metrix'22** It Takes Two to Tango: Mixup for Deep Metric Learning. ICLR 2022. [code] *Shashanka Venkataramanan, Bill Psomas, Ewa Kijak, Laurent Amsaleg, Konstantinos Karantzalos, Yannis Avrithis.*
* **ProGCL'22** ProGCL: Rethinking Hard Negative Mining in Graph Contrastive Learning. ICML 2022. [code] *Jun Xia, Lirong Wu, Ge Wang, Jintao Chen, Stan Z. Li.*
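Several of the entries above (e.g. MoCHi) synthesize harder negatives by mixing existing ones in embedding space. The following NumPy sketch conveys the idea under simplifying assumptions; `mix_hard_negatives` and its parameters are illustrative names only, and real implementations operate on a momentum-encoder queue inside the contrastive loss.

```python
import numpy as np

def mix_hard_negatives(query, negatives, k=4, n_synth=8, rng=None):
    """Hard-negative mixing sketch: convex-combine the k negatives most
    similar to the query to synthesize harder ones.

    Assumes all embeddings are L2-normalized row vectors.
    """
    if rng is None:
        rng = np.random.default_rng()
    sims = negatives @ query                      # similarity of each negative to the query
    hard = negatives[np.argsort(-sims)[:k]]       # the k hardest negatives
    i = rng.integers(0, k, size=n_synth)          # random pairs among the hard set
    j = rng.integers(0, k, size=n_synth)
    lam = rng.random((n_synth, 1))                # per-pair mixing weights
    mixed = lam * hard[i] + (1.0 - lam) * hard[j]
    mixed /= np.linalg.norm(mixed, axis=1, keepdims=True)  # project back to the unit sphere
    return mixed
```

Because the synthesized points lie between already-hard negatives, they tend to sit closer to the query than randomly drawn negatives, which is what makes the contrastive task harder.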
## Semi-supervised Learning with Mixup

* **ICT'19** Interpolation Consistency Training for Semi-Supervised Learning. IJCAI 2019. [code] *Vikas Verma, Kenji Kawaguchi, Alex Lamb, Juho Kannala, Yoshua Bengio, David Lopez-Paz.*
* **MixMatch'19** MixMatch: A Holistic Approach to Semi-Supervised Learning. NeurIPS 2019. [code] *David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, Colin Raffel.*
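To make the interpolation-consistency idea behind ICT concrete, here is a minimal NumPy sketch (function name ours): the loss penalizes the gap between the model's prediction on mixed unlabeled inputs and the mix of its predictions on the originals. For a linear model the two coincide exactly, which is the locally linear behavior the regularizer encourages.

```python
import numpy as np

def interpolation_consistency_loss(model, u1, u2, alpha=1.0, rng=None):
    """ICT-style consistency term (sketch): compare f(mix(u1, u2)) with
    mix(f(u1), f(u2)) on two unlabeled batches u1, u2."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    pred_of_mix = model(lam * u1 + (1.0 - lam) * u2)          # f(mix(u1, u2))
    mix_of_preds = lam * model(u1) + (1.0 - lam) * model(u2)  # mix(f(u1), f(u2))
    return float(np.mean((pred_of_mix - mix_of_preds) ** 2))
```

In ICT proper, the target `mix(f(u1), f(u2))` comes from a mean-teacher (EMA) copy of the model and gradients are not propagated through it; this sketch omits that detail for brevity.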