# InfluenceCL

Code for the CVPR 2023 paper *Regularizing Second-Order Influences for Continual Learning*.
<p align="center">
  <img src="assets/intro.png" alt="Coreset selection process in continual learning" width=60%>
</p>

In continual learning, earlier coreset selections exert a profound influence on subsequent steps through the data flow. Our proposed scheme regularizes the future influence of each selection; in its absence, a greedy selection strategy would degrade over time.
## Dependencies

```bash
pip install -r requirements.txt
```
Please specify the CUDA and cuDNN versions for JAX explicitly. If you are using CUDA 10.2 or older, you will also need to manually select older versions of tf2jax and neural-tangents.
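For instance, installing a CUDA-enabled JAX build might look like the sketch below; the `cuda11_cudnn82` tag is only an assumption for illustration, so pick the tag that matches your local toolkit and the JAX version pinned in `requirements.txt`:

```bash
# Example only: adjust the cuda/cudnn tag to your installed CUDA and cuDNN versions
pip install "jax[cuda11_cudnn82]" -f https://storage.googleapis.com/jax-releases/jax_releases.html
```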
## Quick Start

Train and evaluate models via `utils/main.py`. For example, to train our model on Split CIFAR-10 with a fixed-size buffer of 500 samples, run:

```bash
python utils/main.py --model soif --load_best_args --dataset seq-cifar10 --buffer_size 500
```
To compare with vanilla influence functions, simply run:

```bash
python utils/main.py --model soif --load_best_args --dataset seq-cifar10 --buffer_size 500 --nu 0
```
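Since `--nu 0` recovers vanilla influence functions, `--nu` can be read as the strength of the proposed regularization. A small sweep is one way to compare settings; the values below are purely illustrative, not tuned:

```bash
# Hypothetical sweep over the regularization strength nu (values chosen for illustration only)
for nu in 0 0.1 0.5 1.0; do
  python utils/main.py --model soif --load_best_args --dataset seq-cifar10 --buffer_size 500 --nu $nu
done
```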
More datasets and methods are supported. You can find the available options by running:

```bash
python utils/main.py --help
```
## Results

The following results on Split CIFAR-10 were obtained with a single NVIDIA RTX 2080 Ti GPU:
## Citation
If you find this code useful, please consider citing:
```bibtex
@inproceedings{sun2023regularizing,
  title={Regularizing Second-Order Influences for Continual Learning},
  author={Sun, Zhicheng and Mu, Yadong and Hua, Gang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={20166--20175},
  year={2023}
}
```
## Acknowledgement

Our implementation is based on Mammoth. We also refer to `bilevel_coresets` and `Example_Influence_CL`.