# PENCIL.pytorch
A PyTorch implementation of *Probabilistic End-to-end Noise Correction for Learning with Noisy Labels* (CVPR 2019).
## Requirements
- Python 3.6
- numpy
- torch 0.4.1
- torchvision 0.2.0
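One way to set up the environment, assuming pip (wheel availability for these older torch/torchvision versions varies by platform, so a conda environment may be easier):

```
pip install numpy torch==0.4.1 torchvision==0.2.0
```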
## Usage
- On CIFAR-10, we retain 10% of the training data as a validation set and corrupt the original (correct) labels to obtain datasets with different kinds of label noise. The validation set is therefore part of `data_batch_5`; both contain 5,000 samples.
- Add symmetric noise on CIFAR-10 (a minimal sketch of this step follows the list):

  ```
  python addnoise_SN.py
  ```

- Add asymmetric noise on CIFAR-10:

  ```
  python addnoise_AN.py
  ```
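For intuition, here is a minimal sketch of symmetric noise injection. The actual `addnoise_SN.py` may differ; the function name, the `noise_rate` argument, and the convention of flipping to a uniformly chosen *different* class are our assumptions:

```python
import numpy as np

def add_symmetric_noise(labels, noise_rate, num_classes=10, seed=0):
    """Flip a fraction `noise_rate` of labels to a uniformly chosen
    different class (one common definition of symmetric noise)."""
    rng = np.random.RandomState(seed)
    noisy = np.asarray(labels).copy()
    n = len(noisy)
    # choose which samples to corrupt, without replacement
    flip_idx = rng.choice(n, size=int(noise_rate * n), replace=False)
    for i in flip_idx:
        # draw the new label uniformly from the other num_classes - 1 classes
        candidates = [c for c in range(num_classes) if c != noisy[i]]
        noisy[i] = rng.choice(candidates)
    return noisy

# e.g. a 70% symmetric-noise training set (cf. the SN70 curve below):
# noisy_labels = add_symmetric_noise(clean_labels, noise_rate=0.7)
```

Asymmetric noise differs in that labels are flipped between specific class pairs (e.g. similar classes) rather than uniformly.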
`PENCIL.py` is used both for training a model on a dataset with noisy labels and for validating it.
Options:

- `b`: batch size
- `lr`: initial learning rate of stage 1
- `lr2`: initial learning rate of stage 3
- `alpha`: the coefficient of the Compatibility Loss
- `beta`: the coefficient of the Entropy Loss
- `lambda1`: the value of lambda
- `stage1`: number of epochs until the end of stage 1
- `stage2`: number of epochs until the end of stage 2
- `epoch`: total number of epochs to run
- `datanum`: number of samples in the training set
- `classnum`: number of classes in the training set
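A typical invocation might look like the following, assuming the options are exposed as command-line flags; the values shown are illustrative for CIFAR-10 with a 45,000-sample training split, not the paper's exact settings:

```
python PENCIL.py --b 128 --lr 0.02 --lr2 0.2 --alpha 0.4 --beta 0.1 \
    --lambda1 600 --stage1 70 --stage2 200 --epoch 320 \
    --datanum 45000 --classnum 10
```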
## The framework of PENCIL

<div align="center"><img src="https://github.com/yikun2019/PENCIL/blob/master/framework.png" height="336" width="652"></div>
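As the figure shows, PENCIL jointly updates the network parameters and a learnable label distribution. The sketch below illustrates the three loss terms that `alpha` and `beta` weight. It is a minimal sketch only: it targets a recent PyTorch rather than the pinned 0.4.1, variable names are ours, and the paper's per-term normalizations and λ-scaled label update are omitted.

```python
import torch.nn.functional as F

def pencil_loss(logits, label_logits, noisy_labels, alpha, beta):
    """Sketch of the three PENCIL loss terms.

    logits:       (N, C) network outputs
    label_logits: (N, C) learnable label distribution (updated by backprop)
    noisy_labels: (N,)   original, possibly corrupted labels
    """
    pred = F.softmax(logits, dim=1)

    # classification loss: KL divergence between the network prediction
    # and the learned label distribution
    lc = F.kl_div(F.log_softmax(label_logits, dim=1), pred,
                  reduction='batchmean')

    # compatibility loss: keeps the label distribution close to the
    # original noisy labels (weighted by alpha)
    lo = F.cross_entropy(label_logits, noisy_labels)

    # entropy loss: pushes the network toward confident, low-entropy
    # predictions (weighted by beta)
    le = -(pred * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

    return lc + alpha * lo + beta * le
```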
<div align="center"> <img src="https://github.com/yikun2019/PENCIL/blob/master/framework.png" height="336" width="652" > </div> ## The proportion of correct labels on CIFAR-10 <div align="center"> <img src="https://github.com/yikun2019/PENCIL/blob/master/SN70.png" height="360" width="480" > <img src="https://github.com/yikun2019/PENCIL/blob/master/AN30.png" height="360" width="480" > </div>The results on real-world dataset Clothing1M
| # | Method | Test accuracy (%) |
|---|--------|-------------------|
| 1 | Cross Entropy Loss | 68.94 |
| 2 | Forward [1] | 69.84 |
| 3 | Tanaka et al. [2] | 72.16 |
| 4 | PENCIL | 73.49 |
## Citing this repository
If you find this code useful in your research, please consider citing us:
```
@inproceedings{PENCIL_CVPR_2019,
  author    = {Yi, Kun and Wu, Jianxin},
  title     = {{Probabilistic End-to-end Noise Correction for Learning with Noisy Labels}},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2019}
}
```
## References
[1] Giorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, and Lizhen Qu. Making deep neural networks robust to label noise: A loss correction approach. In CVPR, pages 1944–1952, 2017.

[2] Daiki Tanaka, Daiki Ikami, Toshihiko Yamasaki, and Kiyoharu Aizawa. Joint optimization framework for learning with noisy labels. In CVPR, pages 5552–5560, 2018.