
Generative vs Discriminative: Rethinking The Meta-Continual Learning (NeurIPS 2021)

In this repository we provide PyTorch implementations for GeMCL, a generative approach to meta-continual learning. The directory outline is as follows:

root
 ├── code                    # All PyTorch implementations
 │    ├── datasets           # Dataset classes and train/test parameters for each dataset
 │    │    ├── omniglot
 │    │    │    ├── TrainParams.py  # omniglot training parameters configuration
 │    │    │    ├── TestParams.py   # omniglot testing parameters configuration
 │    │    ├── mini-imagenet
 │    │    │    ├── TrainParams.py  # mini-imagenet training parameters configuration
 │    │    │    ├── TestParams.py   # mini-imagenet testing parameters configuration
 │    │    ├── cifar
 │    │         ├── TrainParams.py  # cifar-100 training parameters configuration
 │    │         ├── TestParams.py   # cifar-100 testing parameters configuration
 │    ├── model              # The proposed models
 │    ├── train.py           # Main script for training
 │    ├── test.py            # Main script for testing
 │    ├── pretrain.py        # Main script for pre-training
 ├── datasets                # Location where datasets are placed
 │    ├── omniglot
 │    ├── miniimagenet
 │    ├── cifar
 ├── experiments             # Location where completed experiments are stored
      ├── omniglot
      ├── miniimagenet
      ├── cifar

In the following sections we first explain how to set up the datasets; then instructions for installing package dependencies, training, and testing are provided.

Configuring the Dataset

In this paper we use the Omniglot, CIFAR-100 and Mini-ImageNet datasets. Omniglot and CIFAR-100 are lightweight and are downloaded automatically into datasets/omniglot/ or datasets/cifar/ whenever needed. However, Mini-ImageNet must be downloaded manually and placed in datasets/miniimagenet/. The following instructions show how to set up this dataset properly:

Reading the images directly from disk every time the dataset is needed is extremely slow. To avoid this, we add a preprocessing step in which each image is first shrunk to 100 pixels along its smaller dimension (without changing the aspect ratio) and then converted to NumPy .npy format. The code for this preprocessing is provided in the code directory and should be executed as follows:

cd code
python genrate_img.py ../datasets/miniimagenet ../datasets/miniimagenet

Wait until the success messages for the test, train and validation splits appear; then we are ready to go.
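Conceptually, the preprocessing resizes each image so its smaller side becomes 100 pixels while keeping the aspect ratio, and stores the result as a NumPy array. A minimal sketch of that resizing step, using Pillow and NumPy (the actual options and file handling in the provided script may differ):

```python
import numpy as np
from PIL import Image

def shrink_to_npy(img: Image.Image, min_side: int = 100) -> np.ndarray:
    """Resize so the smaller dimension equals min_side, keeping the aspect ratio."""
    w, h = img.size
    scale = min_side / min(w, h)
    resized = img.resize((round(w * scale), round(h * scale)), Image.BILINEAR)
    return np.asarray(resized)

# Example: a 400x300 RGB image becomes 133x100 (width x height).
demo = Image.new("RGB", (400, 300))
arr = shrink_to_npy(demo)
print(arr.shape)  # (100, 133, 3) -- NumPy arrays are (height, width, channels)
# np.save("img.npy", arr) would then write the preprocessed array to disk
```

Storing the shrunken arrays once in .npy format turns the per-episode disk reads into fast memory-mapped loads.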

Installing Prerequisites

The following packages are required:

Training and Testing

The first step for training or testing is to configure the desired parameters. We have separated the training/testing parameters for each dataset and placed them under code/datasets/omniglot and code/datasets/miniimagenet. For example, to change the number of meta-training episodes on the omniglot dataset, one may do the following:
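As an illustration, each TrainParams file is essentially a class of attribute assignments, so changing a setting means editing the corresponding assignment. In the sketch below, all attribute names other than self.experiment_name and self.modelClass (which the text mentions) are assumptions; check the actual TrainParams.py for the real names and defaults:

```python
# Illustrative sketch of a TrainParams-style configuration class.
# Attribute names other than experiment_name/modelClass are assumptions.
class TrainParams:
    def __init__(self):
        self.experiment_name = "omniglot_bayesian"  # log-folder name (a date prefix is added)
        self.modelClass = None          # set to one of the model classes under code/model/
        self.meta_train_episodes = 60000  # hypothetical name for the episode-count setting

params = TrainParams()
params.meta_train_episodes = 30000  # e.g. halve the meta-training episode budget
```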

Setting the training model is done in the same way by changing self.modelClass value. We have provided the following models in the code/model/ path:

| File path | Model name in the paper |
| --- | --- |
| code/model/Bayesian.py | GeMCL predictive |
| code/model/MAP.py | GeMCL MAP |
| code/model/LR.py | MTLR |
| code/model/PGLR.py | PGLR |
| code/model/ProtoNet.py | Prototypical |

Training Instructions

To perform training, first configure the training parameters in code/datasets/omniglot/TrainParams.py or code/datasets/miniimagenet/TrainParams.py for the omniglot and mini-imagenet datasets respectively. In these files, the self.experiment_name variable, together with a date prefix, determines the folder name in which training logs are stored.

Now, to start training, run the following command for omniglot (in all our code, the M and O flags stand for the mini-imagenet and omniglot datasets respectively):

cd code
python train.py O

and the following for mini-imagenet:

cd code
python train.py M

The training logs and checkpoints are stored in a folder under experiments/omniglot/ or experiments/miniimagenet/ with the name specified in self.experiment_name. We have also attached some trained models with the same settings reported in the paper. The paths and details of these models are as follows:

| Model path | Details |
| --- | --- |
| experiments/miniimagenet/imagenet_bayesian_final | GeMCL predictive trained on mini-imagenet |
| experiments/miniimagenet/imagenet_map_final | GeMCL MAP trained on mini-imagenet |
| experiments/miniimagenet/imagenet_PGLR_final | PGLR trained on mini-imagenet |
| experiments/miniimagenet/imagenet_MTLR_final | MTLR trained on mini-imagenet |
| experiments/miniimagenet/imagenet_protonet_final | Prototypical trained on mini-imagenet |
| experiments/miniimagenet/imagenet_pretrain_final | Pretrained model on mini-imagenet |
| experiments/miniimagenet/imagenet_Bayesian_OMLBackbone | GeMCL predictive trained on mini-imagenet with OML backbone |
| experiments/miniimagenet/imagenet_random | Random model compatible with mini-imagenet, not previously trained |
| experiments/omniglot/omniglot_Bayesian_final | GeMCL predictive trained on omniglot |
| experiments/omniglot/omniglot_MAP_final | GeMCL MAP trained on omniglot |
| experiments/omniglot/omniglot_PGLR_final | PGLR trained on omniglot |
| experiments/omniglot/omniglot_MTLR_final | MTLR trained on omniglot |
| experiments/omniglot/omniglot_Protonet_final | Prototypical trained on omniglot |
| experiments/omniglot/omniglot_Pretrain_final | Pretrained model on omniglot |
| experiments/omniglot/Omniglot_Bayesian_OMLBackbone | GeMCL predictive trained on omniglot with OML backbone |
| experiments/omniglot/omniglot_random | Random model compatible with omniglot, not previously trained |
| experiments/omniglot/omniglot_bayesian_28 | GeMCL predictive trained on omniglot with 28x28 input |

Testing Instructions

To evaluate a previously trained model, use test.py and specify the path where the model is stored. As an example, consider the following structure for omniglot experiments:

root
 ├── experiments
       ├── omniglot
            ├── omniglot_Bayesian_final

Now to test this model run:

cd code
python test.py O ../experiments/omniglot/omniglot_Bayesian_final/

At the end of testing, the mean accuracy and standard deviation over the test episodes are printed.
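These reported numbers are simply per-episode statistics: each test episode yields an accuracy, and the script aggregates them. With hypothetical accuracies, the computation amounts to:

```python
import numpy as np

# Hypothetical per-episode accuracies collected during testing
episode_acc = np.array([0.91, 0.88, 0.93, 0.90])
print(f"mean = {episode_acc.mean():.4f}, std = {episode_acc.std():.4f}")
# -> mean = 0.9050, std = 0.0180
```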

Note: Both test.py and train.py use TrainParams.py to configure the model class. Therefore, before executing test.py, make sure that TrainParams.py is configured correctly.

Pre-training Instructions

To perform pre-training, run:

cd code
python pretrain.py O

The pre-training configurations are also available in TrainParams.py.
