# Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning

<p align="center"> <a href=""><img src="https://img.shields.io/badge/Ease-v1.0-darkcyan"></a> <a href='https://arxiv.org/abs/2403.12030'><img src='https://img.shields.io/badge/Arxiv-2403.12030-b31b1b.svg?logo=arXiv'></a> <a href=""><img src="https://img.shields.io/github/stars/sun-hailong/CVPR24-Ease?color=4fb5ee"></a> <a href=""><img src="https://hits.seeyoufarm.com/api/count/incr/badge.svg?url=https%3A%2F%2Fgithub.com%2Fsun-hailong%2FCVPR24-Ease&count_bg=%23FFA500&title_bg=%23555555&icon=&icon_color=%23E7E7E7&title=visitors&edge_flat=false"></a> </p>

🎉 The code repository for "Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning" ([paper](https://arxiv.org/abs/2403.12030), CVPR 2024) in PyTorch. If you use any content of this repo for your work, please cite the following bib entry:

    @inproceedings{zhou2024expandable,
        title={Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning},
        author={Zhou, Da-Wei and Sun, Hai-Long and Ye, Han-Jia and Zhan, De-Chuan},
        booktitle={CVPR},
        pages={23554--23564},
        year={2024}
    }

Class-Incremental Learning (CIL) requires a learning system to continually learn new classes without forgetting. Despite the strong performance of Pre-Trained Models (PTMs) in CIL, a critical issue persists: learning new classes often overwrites old ones. Excessively modifying the network causes forgetting, while minimal adjustment fits new classes inadequately. Hence, we need a way to update the model efficiently without harming former knowledge.

In this paper, we propose ExpAndable Subspace Ensemble (EASE) for PTM-based CIL. To enable model updating without conflict, we train a distinct lightweight adapter module for each new task, aiming to create task-specific subspaces. These adapters span a high-dimensional feature space, enabling joint decision-making across multiple subspaces. As data evolves, the expanding subspaces render the old class classifiers incompatible with new-stage spaces. Correspondingly, we design a semantic-guided prototype complement strategy that synthesizes old classes’ new features without using any old class instance. Extensive experiments on seven benchmark datasets verify EASE’s state-of-the-art performance.

<img src='resources/teaser.png' width='900'>
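
To make the mechanism concrete, here is a minimal conceptual sketch of EASE-style inference (illustrative only, not the implementation in this repo; dimensions and module choices are assumptions): a frozen backbone feeds one lightweight adapter per task, and the per-task features are concatenated into an expanded joint feature space.

    import torch
    from torch import nn

    # Conceptual illustration only -- not the authors' implementation.
    # A frozen backbone feeds one lightweight adapter per task; per-task
    # features are concatenated into one expanded space for a joint decision.
    backbone = nn.Identity()  # stands in for a frozen pre-trained ViT encoder
    adapters = [nn.Linear(768, 768) for _ in range(3)]  # one adapter per task

    def ensemble_features(x):
        h = backbone(x)  # shared frozen representation
        return torch.cat([a(h) for a in adapters], dim=-1)  # expanded subspace ensemble

    print(ensemble_features(torch.randn(2, 768)).shape)  # torch.Size([2, 2304])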

## 🎊 Results

We conducted experiments on seven benchmark datasets to verify the competitive performance of EASE.

<img src='resources/result-img.png' width='900'> <img src='resources/result-table.png' width='900'>

## Requirements

### 🗂️ Environment

  1. torch 2.0.1
  2. torchvision 0.15.2
  3. timm 0.6.12
  4. easydict
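
These can be installed with pip in one line (a minimal sketch; the version pins are taken from the list above, and `easydict` is left unpinned since no version is given):

    pip install torch==2.0.1 torchvision==0.15.2 timm==0.6.12 easydict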

### 🔎 Dataset

We provide the processed datasets for download.

You need to modify the dataset paths in ./utils/data.py to match your own environment, as sketched below.
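
As a purely hypothetical sketch of the kind of edit this involves (the class and attribute names below are illustrative, not the actual contents of ./utils/data.py; check the file itself for the real names):

    # Illustrative only -- the real ./utils/data.py defines its own dataset
    # classes and attribute names. Point whatever path fields it exposes at
    # your local copies of the data.
    class iCIFAR224:  # hypothetical dataset wrapper
        train_dir = "/your/path/to/cifar224/train/"
        test_dir = "/your/path/to/cifar224/test/"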

These datasets are referenced in the ADAM paper.

## 🔑 Running scripts

Please follow the settings in the exps folder to prepare the JSON config files, and then run:

    python main.py --config ./exps/[filename].json
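
The sample files in exps/ define the exact schema. As a rough sketch of generating such a config programmatically (the field names here are assumptions based on similar PyCIL-style repositories, not EASE's verified schema):

    import json

    # Field names are assumptions borrowed from PyCIL-style repos; treat the
    # provided files in ./exps/ as the ground truth for the real schema.
    config = {
        "dataset": "cifar224",   # dataset identifier
        "model_name": "ease",    # selects the EASE learner
        "init_cls": 10,          # number of classes in the first stage
        "increment": 10,         # classes added at each later stage
        "device": ["0"],         # GPU ids
    }
    with open("./exps/my_ease.json", "w") as f:
        json.dump(config, f, indent=4)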

Here are two examples of how to run the code.

To run on CIFAR using ViT-B/16 pre-trained on ImageNet-1K (ViT-B/16-IN1K):

    python main.py --config ./exps/ease_cifar.json

To run on CIFAR using ViT-B/16 pre-trained on ImageNet-21K (ViT-B/16-IN21K):

    python main.py --config ./exps/ease_cifar_in21k.json

After running the code, you will get a log file in the logs/ease/cifar224/ folder.

## 👨‍🏫 Acknowledgment

We would like to express our gratitude to the following repositories for offering valuable components and functions that contributed to our work.