Neural Collapse Terminus: A Unified Solution for Class Incremental Learning and Its Variants

Yibo Yang*, Haobo Yuan*, Xiangtai Li, Jianlong Wu, Lefei Zhang, Zhouchen Lin, Philip H.S. Torr, Dacheng Tao, Bernard Ghanem.

[pdf] [arxiv] [code]

Environment

[Optional] To start, you can build the environment image with Docker (in the docker_env directory):

docker build -t ftc --network=host .

Note that we have published a pre-built image, so there is no need to run the above command if your network connection is good.
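
If you prefer the pre-built image, pull it and tag it with the name the build command above produces. The registry path below is a placeholder, not a published location; substitute the actual image name from the project page:

docker pull {PUBLISHED IMAGE NAME}
docker tag {PUBLISHED IMAGE NAME} ftc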

Then, you can start a new container to run our code:

DATALOC={YOUR DATA LOCATION} LOGLOC={YOUR LOG LOCATION} bash tools/docker.sh
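
For reference, here is a minimal sketch of what tools/docker.sh is assumed to do: start a GPU container from the ftc image with your data and log directories mounted. The mount point /opt/logger matches the --work-dir paths used below, while /opt/data is an assumption; check tools/docker.sh for the actual flags:

# Hypothetical equivalent of tools/docker.sh; mount points and flags are assumptions.
docker run --gpus all -it --network=host \
    -v "$DATALOC":/opt/data \
    -v "$LOGLOC":/opt/logger \
    ftc bash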

Preparing Data

You do not need to prepare the CIFAR datasets.

For the ImageNet datasets, please prepare and organize them as follows:

imagenet
├── train
│   ├── n01440764
│   │   ├── n01440764_18.JPEG
│   │   ├── ...
├── val
│   ├── n01440764
│   │   ├── ILSVRC2012_val_00000293.JPEG
│   │   ├── ...
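
A quick sanity check of the layout (assuming the standard 1000-class ImageNet-1k organization, with one sub-directory per class in each split):

# Both splits should list one directory per class, e.g. n01440764.
ls imagenet/train | wc -l    # expect 1000
ls imagenet/val | wc -l      # expect 1000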

Please create a docker container and enter it (DATALOC and LOGLOC have default values, but they may not match your environment):

DATALOC=/path/to/data LOGLOC=/path/to/logger bash tools/docker.sh

Now let's 🏃‍♀️ run the code.

For CIL (class incremental learning) and LTCIL (long-tailed CIL)

CIFAR-100

25 steps

bash tools/dist_train.sh configs/cifar/resnet12_cifar_dist_25.py 8 --seed 0 --deterministic --work-dir /opt/logger/cifar100_25t
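The positional 8 is the number of GPUs for the distributed launcher (the usual dist_train.sh convention, assumed here); scale it to your machine, e.g. for 4 GPUs:

bash tools/dist_train.sh configs/cifar/resnet12_cifar_dist_25.py 4 --seed 0 --deterministic --work-dir /opt/logger/cifar100_25t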

CIFAR100-LT

10 steps (shuffled)

bash tools/dist_train.sh configs/cifar_lt/resnet_cifar_shuffle_10.py 8 --seed 0 --deterministic --work-dir /opt/logger/cifar100_lt_10t_shuffle

ImageNet-100

25 steps

bash tools/dist_train.sh configs/imagenet/resnet18_imagenet100_25t.py 8 --seed 0 --deterministic --work-dir /opt/logger/i100_25t

ImageNet100-LT

10 steps (shuffled)

bash tools/dist_train.sh configs/imagenet_lt/resnet18_imagenet100_shuffle_10t.py 8 --seed 0 --deterministic --work-dir /opt/logger/i100_lt_10t_shuffle

For FSCIL (few-shot CIL)

Please refer to our other repository.

For UniCIL (the generalized case)

To conduct UniCIL, you need to run the base session first and then run the incremental sessions from the base-session checkpoint.

Base Session:

bash tools_general/dist.sh train_base configs_general/cifar_general/resnet18_cifar_10.py 8 --seed 0 --deterministic --work-dir /opt/logger/general_cifar_10

Incremental Sessions:

bash tools_general/dist.sh train_inc configs_general/cifar_general/resnet18_cifar_10.py 8 --seed 0 --deterministic --work-dir /opt/logger/general_cifar_10
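
As a convenience, the two stages can be chained in one script. This sketch assumes train_inc locates the base-session checkpoint through the shared --work-dir; verify that against tools_general/dist.sh before relying on it:

# Hypothetical wrapper; WORK is just a shell variable for the shared work dir.
WORK=/opt/logger/general_cifar_10
bash tools_general/dist.sh train_base configs_general/cifar_general/resnet18_cifar_10.py 8 --seed 0 --deterministic --work-dir $WORK
bash tools_general/dist.sh train_inc configs_general/cifar_general/resnet18_cifar_10.py 8 --seed 0 --deterministic --work-dir $WORK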

Results

You can average the "[ACC_MEAN]" values over all sessions to obtain the average incremental accuracy. Note that "[ACC_MEAN]" itself is the accuracy after a specific session, not the average incremental accuracy reported in the tables of our paper.
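
For example, here is a small sketch that averages the per-session values from a training log. The log path and the exact format of the "[ACC_MEAN]" lines are assumptions; adjust the pattern to your log:

# Pull every number that follows "[ACC_MEAN]" and average them.
grep -oE '\[ACC_MEAN\][^0-9]*[0-9.]+' /opt/logger/cifar100_25t/*.log \
  | grep -oE '[0-9.]+$' \
  | awk '{ sum += $1; n += 1 } END { if (n) printf "average incremental accuracy: %.2f\n", sum / n }'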

Citation

If you find this work helpful in your research, please consider citing:

@article{UniCIL,
  author={Yibo Yang and Haobo Yuan and Xiangtai Li and Jianlong Wu and Lefei Zhang and Zhouchen Lin and Philip H.S. Torr and Bernard Ghanem and Dacheng Tao},
  title={Neural Collapse Terminus: A Unified Solution for Class Incremental Learning and Its Variants},
  journal={arXiv preprint},
  year={2023}
}