<div align="center"> <h2><a href="https://arxiv.org/abs/2103.13027">AutoMix: Unveiling the Power of Mixup for Stronger Classifiers</a></h2> (ECCV 2022 Oral)

Zicheng Liu<sup>*,1,2</sup>, Siyuan Li<sup>*,1,2</sup>, Di Wu<sup>1,2</sup>, Zhiyuan Chen<sup>1</sup>, Lirong Wu<sup>1,2</sup>, Stan Z. Li<sup>†,1</sup>

<sup>1</sup>Westlake University, <sup>2</sup>Zhejiang University

</div> <p align="center"> <a href="https://arxiv.org/abs/2103.13027" alt="arXiv"> <img src="https://img.shields.io/badge/arXiv-2103.13027-b31b1b.svg?style=flat" /></a> <a href="https://github.com/Westlake-AI/AutoMix/blob/main/LICENSE" alt="license"> <img src="https://img.shields.io/badge/license-Apache--2.0-%23B7A800" /></a> <a href="https://zhuanlan.zhihu.com/p/550300558" alt="zhihu"> <img src="https://img.shields.io/badge/zhihu-automix-blue" /></a> </p>

We propose a novel automatic mixup (AutoMix) framework in which the mixup policy is parameterized and directly serves the ultimate classification goal. Specifically, AutoMix reformulates mixup classification into two sub-tasks (i.e., mixed sample generation and mixup classification) with corresponding sub-networks, and solves them in a bi-level optimization framework. For generation, a learnable lightweight mixup generator, Mix Block, is designed to produce mixed samples by modeling patch-wise relationships under the direct supervision of the corresponding mixed labels. To prevent the degradation and instability of bi-level optimization, we further introduce a momentum pipeline that trains AutoMix end-to-end. Extensive experiments on nine image benchmarks demonstrate the superiority of AutoMix over state-of-the-art methods across various classification scenarios and downstream tasks.
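To make the two sub-tasks concrete, here is a minimal PyTorch sketch of the idea, not the repository's actual code: a lightweight generator predicts a patch-wise mixing mask from encoder feature maps, the mask blends the two images, and the mixed classification loss supervises both the classifier and the generator. All names here (`ToyMixBlock`, `automix_style_step`) are hypothetical, and details such as mask normalization and loss weighting differ in the real implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMixBlock(nn.Module):
    """Hypothetical, simplified stand-in for AutoMix's Mix Block: predicts a
    patch-wise mixing mask from two feature maps and the mixing ratio lambda."""
    def __init__(self, channels):
        super().__init__()
        # Two feature maps plus a broadcast lambda plane -> one mask logit per patch.
        self.mask_head = nn.Conv2d(2 * channels + 1, 1, kernel_size=1)

    def forward(self, feat_a, feat_b, lam):
        lam_map = feat_a.new_full((feat_a.size(0), 1, *feat_a.shape[2:]), lam)
        logits = self.mask_head(torch.cat([feat_a, feat_b, lam_map], dim=1))
        return torch.sigmoid(logits)  # soft mask in [0, 1], one value per patch

def automix_style_step(encoder, mix_block, classifier, x, y, num_classes, alpha=2.0):
    """One mixup training step in the spirit of AutoMix (sketch only)."""
    lam = float(torch.distributions.Beta(alpha, alpha).sample())
    perm = torch.randperm(x.size(0))
    x_b, y_b = x[perm], y[perm]

    # Feature maps drive the learnable mask; AutoMix feeds a momentum encoder here.
    feat_a, feat_b = encoder(x), encoder(x_b)
    mask = mix_block(feat_a, feat_b, lam)
    mask = F.interpolate(mask, size=x.shape[2:], mode="bilinear", align_corners=False)

    # Pixel-wise blend and the corresponding lambda-weighted soft label.
    x_mix = mask * x + (1.0 - mask) * x_b
    y_mix = lam * F.one_hot(y, num_classes).float() \
        + (1.0 - lam) * F.one_hot(y_b, num_classes).float()

    # The mixed cross-entropy supervises the classifier and, through the mask,
    # the generator: the "direct supervision of mixed labels" described above.
    logits = classifier(x_mix)
    loss = -(y_mix * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    return loss
```

In the actual framework the generator and classifier objectives are decoupled in a bi-level fashion; this sketch collapses them into a single step for brevity.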

<p align="center"> <img src="https://user-images.githubusercontent.com/44519745/174272662-19ce57ad-7b08-4e73-81b1-3bb81fee2fe5.png" width=100% height=100% class="center"> </p>
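The momentum pipeline mentioned above amounts to keeping a slowly updated, exponential-moving-average (EMA) copy of the encoder to feed the Mix Block, so both sub-networks can be trained end-to-end in a single loop. A generic EMA update looks like the sketch below; the function name and momentum value are illustrative, not the repository's exact code.

```python
import torch

@torch.no_grad()
def momentum_update(online_net, momentum_net, m=0.999):
    """Generic EMA update used by momentum pipelines (illustrative sketch)."""
    for p_online, p_momentum in zip(online_net.parameters(), momentum_net.parameters()):
        # Blend each momentum parameter toward its online counterpart.
        p_momentum.mul_(m).add_(p_online, alpha=1.0 - m)
```

Only the online encoder receives gradients; the momentum copy is updated purely by this rule, which is what stabilizes the bi-level optimization.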

Catalog

We plan to update this timm implementation of AutoMix in a few months. Please watch this repository for the latest release, or use our OpenMixup implementations in the meantime.

Installation

Please check INSTALL.md for installation instructions.

Small-scale Image Classification

Please refer to OpenMixup implementations of CIFAR-100 and Tiny-ImageNet.

ImageNet Classification

1. Training and Validation

See TRAINING.md for ImageNet-1K training and validation instructions, or refer to our OpenMixup implementations. Pre-trained models have been released in OpenMixup.

2. ImageNet-1K Trained Models

Please refer to mixup_benchmarks in OpenMixup implementations for results and models.

<p align="right">(<a href="#top">back to top</a>)</p>

License

This project is released under the Apache 2.0 license.

Acknowledgement

Our implementation is mainly based on the following codebases: pytorch-image-models (timm) and OpenMixup. We gratefully thank the authors for their wonderful works.

Citation

If you find this repository helpful, please consider citing:

    @InProceedings{liu2022automix,
      title={AutoMix: Unveiling the Power of Mixup for Stronger Classifiers},
      author={Zicheng Liu and Siyuan Li and Di Wu and Zhiyuan Chen and Lirong Wu and Jianzhu Guo and Stan Z. Li},
      booktitle={European Conference on Computer Vision},
      pages={441--458},
      year={2022},
    }
<p align="right">(<a href="#top">back to top</a>)</p>