# PENCIL.pytorch

PyTorch implementation of "Probabilistic End-to-end Noise Correction for Learning with Noisy Labels" (CVPR 2019).

## Requirements

## Usage

## Options

## The framework of PENCIL

<div align="center"> <img src="https://github.com/yikun2019/PENCIL/blob/master/framework.png" height="336" width="652" > </div>

## The proportion of correct labels on CIFAR-10

<div align="center"> <img src="https://github.com/yikun2019/PENCIL/blob/master/SN70.png" height="360" width="480" > <img src="https://github.com/yikun2019/PENCIL/blob/master/AN30.png" height="360" width="480" > </div>
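For orientation, below is a minimal sketch of the PENCIL training objective shown in the framework figure above. It assumes a classifier producing `logits`, a learnable tensor of per-sample label logits (called `y_tilde_batch` here, initialised as K times the one-hot noisy labels), and hyper-parameters `alpha` and `beta` as in the paper; the function and variable names are illustrative and are not the identifiers used in this repository.

```python
import torch
import torch.nn.functional as F

def pencil_loss(logits, y_tilde_batch, noisy_onehot, alpha=0.1, beta=0.4):
    """Sketch of the PENCIL objective (stage 2 of training).

    logits:        network outputs f(x; theta) for the mini-batch
    y_tilde_batch: label logits for the same samples (requires_grad=True)
    noisy_onehot:  one-hot encoding of the (possibly wrong) dataset labels
    """
    num_classes = logits.size(1)
    pred = F.softmax(logits, dim=1)                # f(x; theta)
    label_dist = F.softmax(y_tilde_batch, dim=1)   # y^d, the corrected label distribution

    # Classification loss: KL(f(x) || y^d), averaged over the batch
    lc = F.kl_div(torch.log(label_dist + 1e-12), pred, reduction='batchmean')
    # Compatibility loss: cross entropy between the noisy label and y^d
    lo = -(noisy_onehot * torch.log(label_dist + 1e-12)).sum(dim=1).mean()
    # Entropy regulariser on the network prediction
    le = -(pred * torch.log(pred + 1e-12)).sum(dim=1).mean()

    # L = (1/c) * Lc + alpha * Lo + (beta/c) * Le
    return lc / num_classes + alpha * lo + beta / num_classes * le
```

After backpropagation, the rows of the label-logit tensor belonging to the current mini-batch are updated with a separate step size λ that is much larger than the network learning rate, while the network parameters follow the usual SGD schedule.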

## Results on the real-world dataset Clothing1M

| # | Method | Test accuracy (%) |
| --- | --- | --- |
| 1 | Cross Entropy Loss | 68.94 |
| 2 | Forward [1] | 69.84 |
| 3 | Tanaka et al. [2] | 72.16 |
| 4 | PENCIL | 73.49 |

## Citing this repository

If you find this code useful in your research, please consider citing us:

```
@inproceedings{PENCIL_CVPR_2019,
  author    = {Yi, Kun and Wu, Jianxin},
  title     = {{Probabilistic End-to-end Noise Correction for Learning with Noisy Labels}},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2019}
}
```

## References

[1] Giorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, and Lizhen Qu. Making deep neural networks robust to label noise: A loss correction approach. In CVPR, pages 1944–1952, 2017.

[2] Daiki Tanaka, Daiki Ikami, Toshihiko Yamasaki, and Kiyoharu Aizawa. Joint optimization framework for learning with noisy labels. In CVPR, pages 5552–5560, 2018.