Home

Awesome

Co-teaching - Robust training of deep neural networks with extremely noisy labels

Unofficial TensorFlow implementation of Co-teaching on CIFAR-10. The official Torch implementation is available here

Publication <br> Han, B., Yao, Q., Yu, X., Niu, G., Xu, M., Hu, W., Tsang, I., and Sugiyama, M., "Co-teaching: Robust training of deep neural networks with extremely noisy labels," Advances in Neural Information Processing Systems (NIPS), pp. 8536–8546, 2018.

1. Summary

For robust training on noisy labels, Co-teaching maintains two neural networks. Each network selects its small-loss samples as likely-clean samples and feeds that clean subset to its peer network for further training. The figure below illustrates the overall procedure of Co-teaching. In each iteration, the two networks forward-propagate the same mini-batch to identify clean samples; then each selected clean subset is back-propagated through the peer network to update its model parameters.

<p align="center"> <img src="figures/overview.png" width="650"> </p>
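The selection-and-exchange step above can be sketched in a few lines of framework-agnostic Python. This is a minimal illustration, not the repository's actual code: `select_small_loss`, `coteaching_step`, and the `remember_rate` parameter (the fraction of samples kept as clean) are hypothetical names introduced here for clarity.

```python
import numpy as np

def select_small_loss(losses, remember_rate):
    """Return the indices of the `remember_rate` fraction of samples
    with the smallest loss, treated as likely-clean samples."""
    num_keep = int(remember_rate * len(losses))
    return np.argsort(losses)[:num_keep]

def coteaching_step(loss_f, loss_g, remember_rate):
    """One Co-teaching exchange on a mini-batch, given per-sample
    losses from networks f and g. Each network picks its own
    small-loss subset, and the peer is updated on that subset
    (the cross-update in the figure above).

    Returns (indices used to update f, indices used to update g)."""
    clean_f = select_small_loss(loss_f, remember_rate)  # f's selection
    clean_g = select_small_loss(loss_g, remember_rate)  # g's selection
    # Cross-update: f learns from g's clean subset, and vice versa.
    return clean_g, clean_f
```

In a real training loop, each returned index set would be used to compute gradients for the corresponding network on that subset only, so label noise that one network has memorized is filtered out by its peer.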

2. Noise Injection and Network Architecture

3. Environment

4. How to Run

5. Tutorial 1: Comparison of learning curves at a noise rate of 40%.

<p align="center"> <img src="figures/tutorial_1(1).png" width="650"> </p>

6. Tutorial 2: Comparison of the best test error with varying noise rates.

<p align="center"> <img src="figures/tutorial_2(1).png" width="400"> </p>