Rethinking Few-shot Class-incremental Learning: Learning from Yourself (ECCV 2024)

Official PyTorch implementation of our ECCV 2024 paper "Rethinking Few-shot Class-incremental Learning: Learning from Yourself". [Paper]

<div align=center> <img src="img.png" style="zoom: 60%;"></div> <div align=center> <img src="img_1.png" style="zoom: 72%;"></div> <div align=center> <img src="img_2.png" style="zoom: 57%;"></div>

Introduction

TL;DR

We propose a novel metric for a more balanced evaluation of few-shot class-incremental learning (FSCIL) methods. We also analyze Vision Transformers (ViTs) in the FSCIL setting and design a feature rectification module that learns from intermediate features.

Environments

Data Preparation

We follow prior works and conduct experiments on three standard datasets: CIFAR-100, miniImageNet, and CUB-200.

Download Datasets

After downloading, please put all datasets into the ./data directory.

Training

Evaluation

We propose a novel evaluation metric, generalized average accuracy (gAcc), which provides a more balanced assessment of FSCIL methods. The code for gAcc is the generalised_avg_acc() function in models/metric.py, which takes as input the range of the parameter $\alpha$ and the accuracy of each task. By default, we report both gAcc and aAcc of our method after training on each task; feel free to use this metric for any other method!
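As a rough illustration of the idea only (the authoritative definition is generalised_avg_acc() in models/metric.py), the sketch below assumes that a gAcc-style score averages, over a range of $\alpha$ values, a class-count-weighted accuracy in which the base-class contribution is scaled by $\alpha$. The function name, signature, and exact weighting here are hypothetical, not the repo's implementation.

```python
def gacc_sketch(alphas, base_acc, novel_acc, n_base, n_novel):
    """Hypothetical sketch of a gAcc-style metric.

    For each weight alpha in (0, 1], compute a class-count-weighted
    accuracy that scales the base-class contribution by alpha (so a
    small alpha emphasises novel-class performance), then average the
    resulting scores over all alpha values.
    """
    vals = []
    for alpha in alphas:
        w_base = alpha * n_base       # base classes, down-weighted by alpha
        w_novel = float(n_novel)      # novel classes at full weight
        vals.append((w_base * base_acc + w_novel * novel_acc)
                    / (w_base + w_novel))
    return sum(vals) / len(vals)

# At alpha = 1 this reduces to the ordinary class-count-weighted accuracy,
# which heavily favours the 60 base classes in the standard FSCIL setup.
```

Sweeping $\alpha$ from small values up to 1 is what makes the metric balanced: methods that neglect novel classes score well only at the high-$\alpha$ end.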

Base Task Training (optional)

We also provide code for training our ViT backbone on the base task (task 1, with 60 classes). To train, please cd baseline_inc and follow the README.md in ./baseline_inc.

Acknowledgement

```
@inproceedings{tang2024rethinking,
  title={Rethinking Few-shot Class-incremental Learning: Learning from Yourself},
  author={Tang, Yu-Ming and Peng, Yi-Xing and Meng, Jingke and Zheng, Wei-Shi},
  booktitle={European Conference on Computer Vision},
  year={2024}
}
```