MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks

This is the code for the paper:

<a href="https://arxiv.org/abs/1712.05055">MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels </a> <br> Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, Li Fei-Fei <br> Presented at ICML 2018

Please note that this is not an officially supported Google product.

If you find this code useful in your research, please cite:

@inproceedings{jiang2018mentornet,
  title={MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels},
  author={Jiang, Lu and Zhou, Zhengyuan and Leung, Thomas and Li, Li-Jia and Fei-Fei, Li},
  booktitle={ICML},
  year={2018}
}

Introduction

We are interested in training a deep network using curriculum learning (Bengio et al., 2009), i.e., focusing training on examples in a meaningful order. Each curriculum is implemented as a network (called MentorNet).
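
As a rough, framework-agnostic sketch (the released code uses TensorFlow; all names below are illustrative only), a curriculum network maps per-example features, such as the current loss, its difference from a moving average, the training progress, and the label, to a weight in [0, 1] that scales that example's loss:

import numpy as np

def mentornet_features(loss, loss_moving_avg, epoch_pct, label):
    # Per-example features a curriculum network might look at. The names
    # here are illustrative, not the ones used in the released code.
    return np.array([loss, loss - loss_moving_avg, epoch_pct, float(label)])

def toy_mentornet(features, w1, b1, w2, b2):
    # A tiny two-layer MLP that maps the features to a weight v in [0, 1].
    hidden = np.tanh(features @ w1 + b1)
    logit = hidden @ w2 + b2
    return 1.0 / (1.0 + np.exp(-logit))   # sigmoid keeps the weight in [0, 1]

# One sample whose loss (2.3) is well above the running average (0.9),
# 30% of the way through training, with class label 5.
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=8), 0.0
v = toy_mentornet(mentornet_features(2.3, 0.9, 0.3, 5), w1, b1, w2, b2)
print(v)   # the weight applied to this example's loss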

Training Overview

Setups

All code was developed and tested on Nvidia V100/P100 GPUs (16GB).

Download the Google Cloud SDK, then download the dataset and the pre-trained MentorNet models. Put them into the same directory as the code directory.

gsutil -m cp -r gs://mentornet_project/data .
gsutil -m cp -r gs://mentornet_project/mentornet_models .

Alternatively, you may download the zip files: data and models.

Running MentorNet on CIFAR

export PYTHONPATH="$PYTHONPATH:$PWD/code/"

python code/cifar_train_mentornet.py \
  --dataset_name=cifar10   \
  --trained_mentornet_dir=mentornet_models/models/mentornet_pd1_g_1/mentornet_pd \
  --loss_p_percentile=0.75  \
  --nofixed_epoch_after_burn_in  \
  --burn_in_epoch=0  \
  --example_dropout_rates="0.5,17,0.05,83" \
  --data_dir=data/cifar10/0.2 \
  --train_log_dir=cifar_models/cifar10/resnet/0.2/mentornet_pd1_g_1/train \
  --studentnet=resnet101 \
  --max_number_of_steps=39000

A full list of commands can be found in this file. The training script has a number of command-line flags that you can use to configure the model architecture, hyperparameters, and input/output settings; the most important ones are shown in the command above and in the variant below.
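
For example (a sketch only: it assumes the data and model directories for other settings follow the same layout as above, so check commands/train_studentnet_resnet.sh for the exact settings used in the experiments), training on CIFAR-100 with 40% noisy labels only changes the dataset, data, and log flags:

python code/cifar_train_mentornet.py \
  --dataset_name=cifar100  \
  --trained_mentornet_dir=mentornet_models/models/mentornet_pd1_g_1/mentornet_pd \
  --loss_p_percentile=0.75  \
  --nofixed_epoch_after_burn_in  \
  --burn_in_epoch=0  \
  --example_dropout_rates="0.5,17,0.05,83" \
  --data_dir=data/cifar100/0.4 \
  --train_log_dir=cifar_models/cifar100/resnet/0.4/mentornet_pd1_g_1/train \
  --studentnet=resnet101 \
  --max_number_of_steps=39000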

To evaluate a model, run the evaluation job in parallel with the training job (on a different GPU).

python code/cifar_eval.py \
 --dataset_name=cifar10 \
 --data_dir=data/cifar10/val/ \
 --checkpoint_dir=cifar_models/cifar10/resnet/0.2/mentornet_pd1_g_1/train \
 --eval_dir=cifar_models/cifar10/resnet/0.2/mentornet_pd1_g_1/eval_val \
 --studentnet=resnet101 \
 --device_id=1

A complete list of commands for running the experiments can be found in commands/train_studentnet_resnet.sh and commands/train_studentnet_inception.sh.
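
If the training and evaluation jobs write standard TensorFlow summaries to the directories above (an assumption about this setup rather than something stated here), progress can be monitored with TensorBoard, for example:

tensorboard --logdir=cifar_models/cifar10/resnet/0.2/mentornet_pd1_g_1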

MentorNet Framework

MentorNet is a general framework for curriculum learning, where various curriculums can be learned by the same MentorNet structure with different parameters.

It is flexible as we can switch curriculums by attaching different MentorNets without modifying the pipeline.

We train a few MentorNets, listed below. A MentorNet can be thought of as a hyper-parameter to be tuned for different problems.

| Curriculum | Visualization | Intuition | Model Name |
|---|---|---|---|
| No curriculum | (image) | Assign a uniform weight to every sample. | baseline_mentornet |
| Self-paced <br/>(Kumar et al. 2010) | (image) | Favor samples of smaller loss. | self_paced_mentornet |
| SPCL linear <br/>(Jiang et al. 2015) | (image) | Discount the weight by loss linearly. | spcl_linear_mentornet |
| Hard example mining <br/>(Felzenszwalb et al., 2008) | (image) | Favor samples of greater loss. | hard_example_mining_mentornet |
| Focal loss <br/>(Lin et al., 2017) | (image) | Increase the weight by loss by the exponential CDF. | focal_loss_mentornet |
| Predefined Mixture | (image) | Mixture of SPL and SPCL changing by epoch. | mentornet_pd |
| MentorNet Data-driven | (image) | Learned on a small subset of the CIFAR data. | mentornet_dd |
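
As a rough illustration of the intuitions in the table above (not the released implementation; the exact functional forms and thresholds used in the released models may differ), several of the fixed curriculums reduce to simple functions that map a per-example loss to a weight:

import numpy as np

def self_paced_weight(losses, lamb):
    # Self-paced: keep only examples whose loss is below the threshold lambda.
    return (losses < lamb).astype(np.float64)

def spcl_linear_weight(losses, lamb):
    # SPCL linear: discount the weight linearly as the loss grows.
    return np.clip(1.0 - losses / lamb, 0.0, 1.0)

def hard_example_mining_weight(losses, lamb):
    # Hard example mining: favor examples whose loss is above the threshold.
    return (losses >= lamb).astype(np.float64)

def focal_style_weight(losses, gamma=1.0):
    # Focal-style: the weight grows with the loss via an exponential CDF.
    return 1.0 - np.exp(-gamma * losses)

losses = np.array([0.1, 0.5, 1.0, 2.0, 4.0])
print(self_paced_weight(losses, lamb=1.0))    # [1. 1. 0. 0. 0.]
print(spcl_linear_weight(losses, lamb=2.0))   # [0.95 0.75 0.5  0.   0.  ]
print(focal_style_weight(losses))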

Note that many more curriculums can be trained with MentorNet, for example, prediction variance (Chang et al., 2017), implicit regularizer (Fan et al. 2017), self-paced with diversity (Jiang et al. 2014), sample re-weighting (Dehghani et al., 2018; Ren et al., 2018), etc.

Performance

The numbers are slightly different from the ones reported in the paper due to the re-implementation on a third-party library.

CIFAR-10 ResNet

| noise_fraction | baseline | self_paced | focal_loss | mentornet_pd | mentornet_dd |
|---|---|---|---|---|---|
| 0.2 | 0.796 | 0.822 | 0.797 | 0.910 | 0.914 |
| 0.4 | 0.568 | 0.802 | 0.634 | 0.776 | 0.887 |
| 0.8 | 0.238 | 0.297 | 0.25 | 0.283 | 0.463 |

CIFAR-100 ResNet

| noise_fraction | baseline | self_paced | focal_loss | mentornet_pd | mentornet_dd |
|---|---|---|---|---|---|
| 0.2 | 0.624 | 0.652 | 0.613 | 0.733 | 0.726 |
| 0.4 | 0.448 | 0.509 | 0.467 | 0.567 | 0.675 |
| 0.8 | 0.084 | 0.089 | 0.079 | 0.193 | 0.301 |

CIFAR-10 Inception

| noise_fraction | baseline | self_paced | focal_loss | mentornet_pd | mentornet_dd |
|---|---|---|---|---|---|
| 0.2 | 0.775 | 0.784 | 0.747 | 0.798 | 0.800 |
| 0.4 | 0.72 | 0.733 | 0.695 | 0.731 | 0.763 |
| 0.8 | 0.29 | 0.272 | 0.309 | 0.312 | 0.461 |

CIFAR-100 Inception

| noise_fraction | baseline | self_paced | focal_loss | mentornet_pd | mentornet_dd |
|---|---|---|---|---|---|
| 0.2 | 0.42 | 0.408 | 0.391 | 0.451 | 0.466 |
| 0.4 | 0.346 | 0.32 | 0.313 | 0.386 | 0.411 |
| 0.8 | 0.108 | 0.091 | 0.107 | 0.125 | 0.203 |

Algorithm

We propose an algorithm to optimize the StudentNet model parameter w jointly with a given MentorNet. Unlike alternating minimization, it minimizes w (StudentNet parameters) and v (example weights) stochastically over mini-batches.

The curriculum can change during training, and MentorNet is updated a few times in the algorithm.
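
A minimal sketch of this loop, using a toy NumPy logistic model as the StudentNet and the self-paced rule as a stand-in for a learned MentorNet (an illustration of the mini-batch scheme only, not the released implementation):

import numpy as np

rng = np.random.default_rng(0)

# Toy data: 512 examples, with 20% of the labels flipped to mimic corruption.
x = rng.normal(size=(512, 10))
y = (x @ rng.normal(size=10) > 0).astype(np.float64)
flip = rng.random(512) < 0.2
y[flip] = 1.0 - y[flip]

w = np.zeros(10)      # StudentNet parameters (a toy logistic model here)
lamb, lr = 0.7, 0.1   # self-paced threshold and learning rate

def per_example_loss(w, xb, yb):
    p = 1.0 / (1.0 + np.exp(-(xb @ w)))
    return -(yb * np.log(p + 1e-8) + (1 - yb) * np.log(1 - p + 1e-8)), p

for epoch in range(20):
    for start in range(0, 512, 64):                       # stochastic mini-batches
        xb, yb = x[start:start + 64], y[start:start + 64]
        losses, p = per_example_loss(w, xb, yb)
        v = (losses < lamb).astype(np.float64)             # MentorNet stand-in: per-example weights
        grad = xb.T @ (v * (p - yb)) / max(v.sum(), 1.0)   # gradient of the weighted loss
        w -= lr * grad                                      # update w on this mini-batch
    lamb *= 1.05                                            # the curriculum changes as training proceeds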


To learn new curriculums (Step 6), see this page.

We found that the specific MentorNet architecture does not matter that much.

References