Open-set Label Noise Can Improve Robustness Against Inherent Label Noise

This repository is the official implementation of ODNL (NeurIPS 2021).

Requirements

To install requirements:

pip install -r requirements.txt

Training

To train the model(s) in the paper, run this command:

python train.py cifar10 --alg odnl -m wrn --noise_type symmetric --noise_rate 0.4 --exp_name test --gpu 0 --lambda_o 3.0
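This trains a WideResNet on CIFAR-10 with 40% symmetric label noise, with --lambda_o weighting the open-set regularization term. Below is a minimal sketch of the underlying idea, assuming the objective is standard cross-entropy on the (noisy) training batch plus a lambda_o-weighted cross-entropy on open-set auxiliary images whose labels are re-drawn uniformly at random each step; model, optimizer, and the two batches are placeholders, and this is not the exact loop implemented in train.py.

```python
# Sketch only: an ODNL-style objective with placeholder inputs (not code from train.py).
import torch
import torch.nn.functional as F

def odnl_step(model, optimizer, train_batch, open_batch, lambda_o=3.0, num_classes=10):
    x, y = train_batch            # images with (possibly noisy) labels
    x_open = open_batch           # open-set auxiliary images; any original labels are ignored
    # Dynamic open-set labels: re-sampled uniformly at random on every step.
    y_open = torch.randint(0, num_classes, (x_open.size(0),), device=x_open.device)

    loss = F.cross_entropy(model(x), y) \
        + lambda_o * F.cross_entropy(model(x_open), y_open)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```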

Evaluation

To evaluate the model on CIFAR-10, run:

python test.py cifar10 --method_name cifar10_symmetric_04_wrn_test_odnl --num_to_avg 10 --gpu 0 --seed 1 --prefetch 0 --out_as_pos
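The --method_name value appears to be assembled from the training flags (dataset, noise type, noise rate with the decimal point dropped, architecture, experiment name, algorithm). The helper below only illustrates that inferred naming convention; it is not a function from this repository.

```python
# Illustration of the inferred checkpoint naming convention, e.g.
# cifar10 + symmetric + 0.4 + wrn + test + odnl -> cifar10_symmetric_04_wrn_test_odnl.
def method_name(dataset, noise_type, noise_rate, model, exp_name, alg):
    rate_tag = f"{noise_rate:.1f}".replace(".", "")   # 0.4 -> "04"
    return f"{dataset}_{noise_type}_{rate_tag}_{model}_{exp_name}_{alg}"

assert method_name("cifar10", "symmetric", 0.4, "wrn", "test", "odnl") \
    == "cifar10_symmetric_04_wrn_test_odnl"
```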

Hyperparameters

The best test accuracy (%) and the corresponding value of \eta on CIFAR-10/100 with vanilla ODNL are shown below:

| Dataset   | Method | Sym-20% | Sym-50% | Asym  | Dependent | Open  |
|-----------|--------|---------|---------|-------|-----------|-------|
| CIFAR-10  | Ours   | 91.06   | 82.50   | 90.00 | 85.37     | 91.47 |
| -         | \eta   | 2.5     | 2.5     | 3.0   | 3.5       | 2.0   |
| CIFAR-100 | Ours   | 68.82   | 54.08   | 58.61 | 62.45     | 66.95 |
| -         | \eta   | 1.0     | 1.0     | 2.0   | 2.0       | 1.0   |

Datasets

You can download the 300K Random Images dataset (from OE) at the following URL:

300K Random Images
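If the downloaded file is used as the open-set auxiliary set, a minimal PyTorch wrapper might look like the sketch below. It assumes the download is a single uint8 NumPy array of 32x32 RGB images named 300K_random_images.npy; the file name, array layout, and transform handling are assumptions, so adapt them to how train.py actually consumes the data.

```python
# Sketch of a Dataset over the 300K Random Images file, assuming a single
# uint8 array of shape (N, 32, 32, 3). Path and preprocessing are assumptions.
import numpy as np
import torch
from torch.utils.data import Dataset

class RandomImages300K(Dataset):
    def __init__(self, path="300K_random_images.npy", transform=None):
        self.data = np.load(path, mmap_mode="r")   # memory-map to avoid loading all 300K images at once
        self.transform = transform

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        img = np.array(self.data[idx])             # copy the slice out of the memory map
        if self.transform is not None:
            return self.transform(img)
        return torch.from_numpy(img).permute(2, 0, 1).float() / 255.0
```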

What's More?

Below are my other works related to this topic:

  1. Can we use OOD examples to rebalance long-tailed datasets? ICML 2022 | Code
  2. How to handle noisy labels in domain adaptation? AAAI 2022 | Code
  3. How to handle multiple noisy labels? TNNLS
  4. Combating noisy labels by agreement: CVPR 2020 | Code

Citation

If you find this useful in your research, please consider citing:

@inproceedings{wei2021odnl,
  title={Open-set Label Noise Can Improve Robustness Against Inherent Label Noise},
  author={Wei, Hongxin and Tao, Lue and Xie, Renchunzi and An, Bo},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2021}
}