CLearning is a general continual learning framework.
1. Features
- Supported incremental learning (IL) scenarios: Task-IL, Domain-IL, and Class-IL
- Supported datasets: MNIST, PMNIST, RMNIST, FIVE, SVHN, CIFAR10, CIFAR100, SUPER, TinyImageNet, miniImageNet, DTD, ImageNet100, CUB, ImageNet1000
- Supported training modes: single GPU and Distributed Parallel (DP)
- Flexible model specification: a model named in the experiment's '*.yaml' takes precedence over the default model (see the sketch after this list)
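The precedence rule in the last item can be pictured as a small loader: if the experiment's YAML names a model, it wins; otherwise the framework's default is used. A minimal sketch, assuming a 'model' key in the config (the function name, key, and default below are illustrative, not the repo's actual API):

import yaml  # PyYAML

DEFAULT_MODEL = "resnet18"  # illustrative placeholder, not the framework's real default

def resolve_model(cfg_path: str) -> str:
    """Return the model named in the YAML config, falling back to the default."""
    with open(cfg_path) as f:
        cfg = yaml.safe_load(f) or {}
    # a 'model' entry in the YAML overrides the framework default
    return cfg.get("model", DEFAULT_MODEL)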
2. Methods
This repository implements the following continual learning methods:
- Baseline
- FineTune
- Regularization
- Replay
- AFC: Class-incremental learning by knowledge distillation with adaptive feature consolidation
- ANCL: Achieving a better stability-plasticity trade-off via auxiliary networks in continual learning
- CSCCT: Class-incremental learning with cross-space clustering and controlled transfer
- iCaRL: iCaRL: Incremental classifier and representation learning
- LUCIR: Learning a unified classifier incrementally via rebalancing
- MTD: Class Incremental Learning with Multi-Teacher Distillation
- OPC: Optimizing mode connectivity for class incremental learning
- PODNet: PODNet: Pooled outputs distillation for small-tasks incremental learning
- SSIL: SS-IL: Separated softmax for incremental learning
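Several of the methods above (iCaRL, LUCIR, PODNet, AFC, MTD, ...) build on knowledge distillation from the model trained on previous tasks. As a rough illustration of that shared ingredient, not any single method's exact loss, a temperature-scaled distillation term can be written as:

import torch.nn.functional as F

def distillation_loss(new_logits, old_logits, T=2.0):
    """KL divergence between the current model's predictions and the frozen
    previous model's predictions, softened by temperature T."""
    log_p = F.log_softmax(new_logits / T, dim=1)
    q = F.softmax(old_logits / T, dim=1)
    # scale by T^2 so the gradient magnitude does not vanish at large T
    return F.kl_div(log_p, q, reduction="batchmean") * T * T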
3. Environment
3.1. Code Environment
- python==3.9
- pytorch==1.11.0
- continuum==1.2.4
- ...
The required packages are listed in 'config/requirements.txt' and 'config/env.yml'. We recommend using Anaconda to manage the environment, for example:
conda create -n torch1.11.0 python=3.9
conda activate torch1.11.0
conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch
cd config
pip install -r requirements.txt
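After installation, a quick sanity check that the environment matches the pinned versions (a sketch; the expected version strings are those listed above):

import torch, torchvision
from importlib.metadata import version

print(torch.__version__)          # expected: 1.11.0
print(torchvision.__version__)    # expected: 0.12.0
print(version("continuum"))       # expected: 1.2.4
print(torch.cuda.is_available())  # True if the CUDA 11.3 toolkit is visible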
3.2. Dataset
Folder structure:
├── data
│   ├── cifar-100-python
│   ├── ImageNet
│   ├── ImageNet100
The CIFAR-100 dataset is downloaded automatically the first time any CIFAR-100 experiment script runs. The ImageNet dataset must be downloaded in advance. The ImageNet-100 dataset can be generated from ImageNet with the script 'tool/gen_imagenet100.py'.
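Since continuum is a pinned dependency, the automatic CIFAR-100 download and a class-incremental split can be sketched as follows (the 50-base + 10-per-task split is illustrative, not the repo's fixed setting):

from continuum import ClassIncremental
from continuum.datasets import CIFAR100

# downloads CIFAR-100 into ./data/cifar-100-python on first use
dataset = CIFAR100("data", train=True, download=True)

# e.g. 50 base classes, then 10 new classes per incremental task
scenario = ClassIncremental(dataset, initial_increment=50, increment=10)
for task_id, taskset in enumerate(scenario):
    print(f"task {task_id}: {len(taskset)} samples")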
4. Command
python main.py --cfg config/baseline/finetune/finetune_cifar100.yaml --device 0 --note test
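For reference, the three flags could be handled by a minimal argparse front end like this hypothetical sketch (not the repo's actual main.py):

import argparse

parser = argparse.ArgumentParser(description="CLearning entry point (sketch)")
parser.add_argument("--cfg", required=True, help="path to the experiment YAML")
parser.add_argument("--device", type=int, default=0, help="GPU index")
parser.add_argument("--note", default="", help="free-form tag for the run")
args = parser.parse_args()
print(args.cfg, args.device, args.note)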
5. Acknowledgements
We are grateful to the following GitHub repositories for their valuable code:
- https://github.com/arthurdouillard/incremental_learning.pytorch
- https://github.com/yaoyao-liu/POD-AANets
6. Citation
We hope this research helps you and advances the development of continual learning. If you find this work useful, please consider citing the corresponding papers:
@inproceedings{wen2024class,
  title     = {Class Incremental Learning with Multi-Teacher Distillation},
  author    = {Wen, Haitao and Pan, Lili and Dai, Yu and Qiu, Heqian and Wang, Lanxiao and Wu, Qingbo and Li, Hongliang},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages     = {28443--28452},
  year      = {2024}
}
@inproceedings{pmlr-v202-wen23b,
  title     = {Optimizing Mode Connectivity for Class Incremental Learning},
  author    = {Wen, Haitao and Cheng, Haoyang and Qiu, Heqian and Wang, Lanxiao and Pan, Lili and Li, Hongliang},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {36940--36957},
  year      = {2023},
  volume    = {202},
  publisher = {PMLR}
}