# Rethinking Experience Replay: a Bag of Tricks for Continual Learning
This code is based on our framework: Mammoth - An Extendible Continual Learning Framework for Pytorch.
To run experiments with the default arguments, use `python ./utils/main.py --model=<MODEL> --dataset=<DATASET> --buffer_size=<MEM_BUFFER_SIZE> --load_best_args`.
Available models:

- `sgd`: SGD with no countermeasure to catastrophic forgetting (lower bound)
- `joint`: joint training on the whole dataset (upper bound, not continual)
- `agem`: A-GEM
- `gem`: Gradient Episodic Memory for Continual Learning
- `hal`: Hindsight Anchor Learning
- `iCaRL`: Incremental Classifier and Representation Learning
- `er`: naive Experience Replay with no tricks
- `er_tricks`: Experience Replay equipped with our proposed tricks
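The `er` baseline above keeps a bounded memory of past examples and replays them alongside the current batch. A common way to fill such a fixed-size buffer from a stream is reservoir sampling, which gives every example seen so far an equal chance of being stored. The sketch below is illustrative only: the class name and API are my own, not the repository's actual buffer implementation.

```python
import random

class ReservoirBuffer:
    """Fixed-size replay memory filled by reservoir sampling, so every
    example observed so far is stored with equal probability."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []        # stored (example, label) pairs
        self.num_seen = 0     # total examples observed so far

    def add(self, example, label):
        if len(self.data) < self.capacity:
            # Buffer not full yet: always store the new example
            self.data.append((example, label))
        else:
            # Otherwise keep it with probability capacity / (num_seen + 1),
            # overwriting a uniformly chosen slot
            idx = random.randrange(self.num_seen + 1)
            if idx < self.capacity:
                self.data[idx] = (example, label)
        self.num_seen += 1

    def sample(self, batch_size):
        # Draw a random replay minibatch (without replacement)
        return random.sample(self.data, min(batch_size, len(self.data)))
```

During training, each incoming minibatch would be interleaved with a batch drawn from the buffer, and the loss computed on both.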
Available datasets:

- `seq-fmnist`: Split Fashion-MNIST (5 tasks, 2 classes per task)
- `seq-cifar10`: Split CIFAR-10 (5 tasks, 2 classes per task)
- `seq-cifar100`: Split CIFAR-100 (10 tasks, 10 classes per task)
- `seq-core50`: CORe50 dataset according to the SIT-NC protocol described here
Best args are provided for the following memory buffer sizes:
- 200 exemplars
- 500 exemplars
- 1000 exemplars
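As a concrete instance of the command template above, a hypothetical run of the tricked ER model on Split CIFAR-10 with a 500-exemplar buffer (the specific flag values here are just one illustrative combination of the documented options):

```shell
# ER with the proposed tricks on Split CIFAR-10, 500-exemplar buffer,
# loading the best hyperparameters provided for that buffer size
python ./utils/main.py --model=er_tricks --dataset=seq-cifar10 --buffer_size=500 --load_best_args
```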