# MOSE

Official implementation of MOSE for online continual learning (CVPR 2024).
## Introduction
Multi-level Online Sequential Experts (MOSE) trains the model as a stack of sub-experts, integrating multi-level supervision and reverse self-distillation. Supervision signals at multiple network stages help the model converge properly on each new task, while gathering the strengths of the different experts through knowledge distillation mitigates the performance decline on old tasks.
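For a quick intuition, below is a minimal, self-contained sketch of the idea rather than the repository's actual code: each backbone stage acts as a sub-expert with its own classifier trained on the incoming data (multi-level supervision), and the deepest expert is additionally distilled toward the shallower experts' predictions (reverse self-distillation). The tiny backbone, function names, loss weights, and temperature are illustrative assumptions; see the paper and the source for the real design.

```python
# Illustrative sketch of a MOSE-style objective, NOT the repository's implementation.
# The tiny backbone, names, and weights below are assumptions for readability only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyMultiExpertNet(nn.Module):
    """Backbone split into stages; every stage feeds its own classifier head."""

    def __init__(self, num_classes: int = 100):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU()),
        ])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.heads = nn.ModuleList([nn.Linear(c, num_classes) for c in (32, 64, 128)])

    def forward(self, x):
        logits_per_stage = []
        for stage, head in zip(self.stages, self.heads):
            x = stage(x)
            logits_per_stage.append(head(self.pool(x).flatten(1)))
        return logits_per_stage  # one set of logits per sub-expert


def mose_style_loss(logits_per_stage, targets, distill_w: float = 1.0, tau: float = 2.0):
    """Multi-level supervision + reverse self-distillation (illustrative weights)."""
    # 1) Multi-level supervision: every sub-expert is trained on the current batch.
    ce = sum(F.cross_entropy(logits, targets) for logits in logits_per_stage)

    # 2) Reverse self-distillation: the final expert gathers knowledge from the
    #    shallower experts (teachers are detached so gradients reach only the student).
    student = F.log_softmax(logits_per_stage[-1] / tau, dim=1)
    kd = sum(
        F.kl_div(student, F.softmax(t.detach() / tau, dim=1), reduction="batchmean")
        for t in logits_per_stage[:-1]
    ) * (tau ** 2)

    return ce + distill_w * kd


if __name__ == "__main__":
    net = TinyMultiExpertNet(num_classes=100)
    x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 100, (8,))
    loss = mose_style_loss(net(x), y)
    loss.backward()
    print(f"loss: {loss.item():.4f}")
```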
## Usage
### Requirements
- python==3.8
- pytorch==1.12.1

```bash
pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html
pip install -r requirements.txt
```
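As a quick sanity check (not part of the repository's scripts), you can confirm that the expected CUDA build was picked up:

```python
# Sanity check: print the installed torch/torchvision builds and CUDA visibility.
import torch
import torchvision

print(torch.__version__, torchvision.__version__)  # expect the +cu111 builds
print("CUDA available:", torch.cuda.is_available())
```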
### Training and Testing
#### Split CIFAR-100
```bash
python main.py \
    --dataset cifar100 \
    --buffer_size 5000 \
    --method mose \
    --seed 0 \
    --run_nums 5 \
    --gpu_id 0
```
#### Split TinyImageNet
```bash
python main.py \
    --dataset tiny_imagenet \
    --buffer_size 10000 \
    --method mose \
    --seed 0 \
    --run_nums 5 \
    --gpu_id 0
```
## Acknowledgement

Thanks to the following code bases for their frameworks and ideas:
## Citation
If you found this code or our work useful, please cite us:
```bibtex
@inproceedings{yan2024orchestrate,
  title={Orchestrate Latent Expertise: Advancing Online Continual Learning with Multi-Level Supervision and Reverse Self-Distillation},
  author={Yan, Hongwei and Wang, Liyuan and Ma, Kaisheng and Zhong, Yi},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={23670--23680},
  year={2024}
}
```
## Contact

If you have any questions or concerns, please feel free to contact us or open an issue:
- Hongwei Yan: yanhw22@mails.tsinghua.edu.cn