Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need

<div align="center"> <div> <a href='http://www.lamda.nju.edu.cn/zhoudw' target='_blank'>Da-Wei Zhou</a><sup>1</sup>&emsp; <a href='http://www.lamda.nju.edu.cn/caizw' target='_blank'>Zi-Wen Cai</a><sup>1</sup>&emsp; <a href='http://www.lamda.nju.edu.cn/yehj' target='_blank'>Han-Jia Ye</a><sup>1</sup>&emsp; <a href='http://www.lamda.nju.edu.cn/zhandc' target='_blank'>De-Chuan Zhan</a><sup>1</sup>&emsp; <a href='https://liuziwei7.github.io/' target='_blank'>Ziwei Liu </a><sup>2</sup> </div> <div> <sup>1</sup>School of Artificial Intelligence, State Key Laboratory for Novel Software Technology, Nanjing University&emsp;

<sup>2</sup>S-Lab, Nanyang Technological University 

</div> </div>

The code repository for "Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need" (IJCV 2024) in PyTorch. If you use any content of this repo for your work, please cite the following bib entry:

@article{zhou2024revisiting,
    author = {Zhou, Da-Wei and Cai, Zi-Wen and Ye, Han-Jia and Zhan, De-Chuan and Liu, Ziwei},
    title = {Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need},
    journal = {International Journal of Computer Vision},
    year = {2024}
}

📢 Updates

[08/2024] Update all training JSON configs and logs.

[08/2024] Accepted to IJCV.

[03/2023] arXiv paper has been released.

[03/2023] Code has been released.

📝 Introduction

Class-incremental learning (CIL) aims to adapt to emerging new classes without forgetting old ones. Traditional CIL models are trained from scratch to continually acquire knowledge as data evolves. Recently, pre-training has achieved substantial progress, making vast pre-trained models (PTMs) accessible for CIL. Contrary to traditional methods, PTMs possess generalizable embeddings, which can be easily transferred for CIL. In this work, we revisit CIL with PTMs and argue that the core factors in CIL are adaptivity for model updating and generalizability for knowledge transferring. (1) We first reveal that a frozen PTM can already provide generalizable embeddings for CIL. Surprisingly, a simple baseline (SimpleCIL), which continually sets the classifiers of the PTM to prototype features, can beat state-of-the-art methods even without training on the downstream task. (2) Due to the distribution gap between pre-trained and downstream datasets, the PTM can be further cultivated with adaptivity via model adaptation. We propose AdaPt and mERge (Aper), which aggregates the embeddings of the PTM and the adapted model for classifier construction. Aper is a general framework that can be orthogonally combined with any parameter-efficient tuning method, and it holds the advantages of the PTM's generalizability and the adapted model's adaptivity. (3) Additionally, since previous ImageNet-based benchmarks are unsuitable in the era of PTMs due to data overlap, we propose four new benchmarks for assessment, namely ImageNet-A, ObjectNet, OmniBenchmark, and VTAB. Extensive experiments validate the effectiveness of Aper with a unified and concise framework.

<div align="center"> <img src="imgs/aper.png" width="95%"> <h3>TL;DR</h3>

A simple baseline (SimpleCIL) beats SOTA even without training on the downstream task. AdaPt and mERge (Aper) extends SimpleCIL with better adaptivity and generalizability. Four new benchmarks are proposed for assessment.

</div>
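
As a concrete illustration of the SimpleCIL recipe above, the sketch below replaces each class's classifier weight with the average frozen-backbone embedding (prototype) of that class. This is a minimal sketch assuming a timm ViT backbone; `build_prototype_classifier` and the data loader are illustrative names, not this repo's actual API.

```python
import torch
import timm


@torch.no_grad()
def build_prototype_classifier(backbone, loader, num_classes, feat_dim):
    """Average the frozen embeddings of each class into a prototype weight."""
    backbone.eval()
    prototypes = torch.zeros(num_classes, feat_dim)
    counts = torch.zeros(num_classes)
    for images, labels in loader:
        tokens = backbone.forward_features(images)  # [B, num_tokens, D] for ViT in timm 0.6
        feats = tokens[:, 0]                        # [CLS] token embedding
        prototypes.index_add_(0, labels, feats)
        counts.index_add_(0, labels, torch.ones_like(labels, dtype=torch.float))
    return prototypes / counts.clamp(min=1).unsqueeze(1)


backbone = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
# After each incremental task, recompute the prototypes for all classes seen so far:
# W = build_prototype_classifier(backbone, loader, num_classes=seen_classes, feat_dim=768)
# Prediction then scores by cosine similarity: F.normalize(feats) @ F.normalize(W).T
```

Aper builds its classifier the same way, but concatenates the frozen PTM's embedding with the adapted model's embedding before averaging, so the prototypes carry both generalizable and task-specific features.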

🔧 Requirements

Environment

  1. torch 1.11.0
  2. torchvision 0.12.0
  3. timm 0.6.12
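
If you are setting up the environment from scratch, a pip command along these lines installs the matching versions (assuming Python and a compatible CUDA toolkit are already in place):

pip install torch==1.11.0 torchvision==0.12.0 timm==0.6.12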

Dataset

We provide the processed datasets as follows:

These subsets are sampled from the original datasets. Please note that I do not have the right to distribute these datasets. If the distribution violates the license, I shall provide the filenames instead.

The md5sum information can be found in this issue.

You need to modify the path of the datasets in ./utils/data.py according to your own path. Ensure that the "shuffle" entry in the JSON file is set to false for the VTAB benchmark.
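
For reference, the relevant fragment of a VTAB config might look like the excerpt below (an illustrative snippet; apart from "shuffle", the key names and values should follow the files shipped in the exps folder):

```json
{
    "dataset": "vtab",
    "shuffle": false
}
```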

💡 Running scripts

To prepare your JSON config, refer to the settings in the exps folder and run the command below. All main experiments from the paper (see Table 1) are already provided in the exps folder; simply execute them to reproduce the results found in the logs folder.

python main.py --config ./exps/[configname].json

🎈 Acknowledgement

This repo is based on CIL_Survey and PyCIL.

The implementations of parameter-efficient tuning methods are based on VPT, AdaptFormer, and SSF.

💭 Correspondence

If you have any questions, please contact me via email or open an issue.