S2E

ICML'20: Searching to Exploit Memorization Effect in Learning from Corrupted Labels (PyTorch implementation).

This is the code for the paper: Searching to Exploit Memorization Effect in Learning from Corrupted Labels, by Quanming Yao, Hansi Yang, Bo Han, Gang Niu, and James Kwok.

Requirements

Python 3.7, PyTorch 1.3.1, NumPy 1.18.5, and SciPy 1.4.1. All packages can be installed with Conda.
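
For example, an environment with these versions can be created along the following lines (a minimal sketch; the environment name s2e is arbitrary, and the pytorch channel is assumed for the PyTorch build):

conda create -n s2e -c pytorch python=3.7 pytorch=1.3.1 numpy=1.18.5 scipy=1.4.1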

Running S2E on benchmark datasets

Example usage for MNIST with 50% symmetric noise:

python heng_mnist_main.py --noise_type symmetric --noise_rate 0.5 --num_workers 1 --n_iter 10 --n_samples 6

CIFAR-10 with 50% symmetric noise:

python heng_main.py --noise_type symmetric --noise_rate 0.5 --num_workers 1 --n_iter 10 --n_samples 6

CIFAR-100 with 50% symmetric noise:

python heng_100_main.py --noise_type symmetric --noise_rate 0.5 --num_workers 1 --n_iter 10 --n_samples 6

Alternatively, see the provided scripts (.sh files) for a quick start.

Citation

If you find this work helpful for your research, please cite the following papers:

@inproceedings{s2e2020icml,
  title={Searching to Exploit Memorization Effect in Learning from Corrupted Labels},
  author={Yao, Quanming and Yang, Hansi and Han, Bo and Niu, Gang and Kwok, James},
  booktitle={International Conference on Machine Learning},
  year={2020}
}

@techreport{yao2018taking,
  title={Taking Human out of Learning Applications: A Survey on Automated Machine Learning},
  author={Yao, Quanming and Wang, Mengshuo},
  institution={arXiv preprint},
  year={2018}
}

Relevant resources

Example applications

S2E (an AutoML version of Co-teaching) is based on the small-loss trick and the memorization effect: deep networks tend to fit clean labels before memorizing noisy ones, so small-loss samples are more likely to be correctly labeled. The following examples have applied these principles in various applications; a sketch of the idea is given below.
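
For illustration, here is a minimal sketch of the small-loss trick in the Co-teaching style. The function name, variable names, and the cross-update scheme are generic assumptions for exposition, not the exact S2E implementation:

import torch
import torch.nn.functional as F

def small_loss_selection(logits_1, logits_2, labels, forget_rate):
    # Per-sample losses for the two peer networks.
    loss_1 = F.cross_entropy(logits_1, labels, reduction='none')
    loss_2 = F.cross_entropy(logits_2, labels, reduction='none')

    # Keep the (1 - forget_rate) fraction with the smallest loss: by the
    # memorization effect, these samples are more likely to be clean.
    num_keep = int((1.0 - forget_rate) * labels.size(0))
    idx_1 = torch.argsort(loss_1)[:num_keep]
    idx_2 = torch.argsort(loss_2)[:num_keep]

    # Cross-update: each network is trained on its peer's selection,
    # which reduces the accumulation of each network's own errors.
    update_loss_1 = F.cross_entropy(logits_1[idx_2], labels[idx_2])
    update_loss_2 = F.cross_entropy(logits_2[idx_1], labels[idx_1])
    return update_loss_1, update_loss_2

In Co-teaching the schedule of forget_rate over epochs is hand-designed; roughly speaking, S2E instead searches for how this proportion should evolve during training.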

New Opportunities