DKT

The official PyTorch implementation of our CVPR 2023 poster paper:

DKT: Diverse Knowledge Transfer Transformer for Class Incremental Learning

GitHub maintainer: Xinyuan Gao

Requirements

We use the following environment:
python == 3.9
torch == 1.11.0
torchvision == 0.12.0
timm == 0.5.4
continuum == 1.2.3
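
A minimal way to install these packages (a sketch; it assumes you already have a Python 3.9 environment, and you may need the CUDA-specific PyTorch wheel from the official PyTorch instructions instead of the default one):

pip install torch==1.11.0 torchvision==0.12.0 timm==0.5.4 continuum==1.2.3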

Accuracy

We provide the accuracy of every phase for the different settings in the tables below. You can also find these numbers in the logs. (We re-ran the official code, so the results may differ slightly from the paper.)

CIFAR 20-20

| Phase   | 1    | 2    | 3     | 4     | 5     | AVG   |
|---------|------|------|-------|-------|-------|-------|
| Acc (%) | 88.3 | 80.2 | 76.92 | 71.95 | 67.17 | 76.91 |

CIFAR 10-10

| Phase   | 1    | 2     | 3    | 4     | 5     | 6     | 7     | 8    | 9     | 10    | AVG   |
|---------|------|-------|------|-------|-------|-------|-------|------|-------|-------|-------|
| Acc (%) | 94.2 | 86.95 | 83.0 | 77.53 | 74.12 | 74.05 | 70.53 | 67.9 | 65.12 | 63.45 | 75.69 |

CIFAR 5-5

| Phase   | 1    | 2    | 3     | 4    | 5     | 6     | 7     | 8     | 9     | 10    | 11    | 12    | 13    | 14    | 15    | 16   | 17    | 18    | 19    | 20   | AVG   |
|---------|------|------|-------|------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|------|-------|-------|-------|------|-------|
| Acc (%) | 97.8 | 94.0 | 90.27 | 87.3 | 84.16 | 81.67 | 78.54 | 75.38 | 73.91 | 72.42 | 70.36 | 70.42 | 67.82 | 66.46 | 65.45 | 64.8 | 63.96 | 62.48 | 61.03 | 59.2 | 74.37 |

ImageNet100 10-10

| Phase   | 1    | 2    | 3     | 4     | 5     | 6     | 7     | 8    | 9    | 10    | AVG   |
|---------|------|------|-------|-------|-------|-------|-------|------|------|-------|-------|
| Acc (%) | 91.6 | 85.8 | 81.53 | 79.35 | 77.28 | 76.57 | 73.49 | 71.6 | 70.2 | 68.74 | 77.62 |

ImageNet1000 100-100

| Phase   | 1     | 2     | 3    | 4    | 5     | 6     | 7     | 8    | 9     | 10    | AVG   |
|---------|-------|-------|------|------|-------|-------|-------|------|-------|-------|-------|
| Acc (%) | 85.02 | 80.12 | 76.5 | 73.7 | 70.26 | 68.36 | 66.35 | 64.1 | 61.81 | 58.93 | 70.52 |
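
As a quick sanity check (a sketch; nothing repo-specific is assumed), the AVG column is the mean of the per-phase accuracies. For example, for the 20-20 setting:

python -c "print(sum([88.3, 80.2, 76.92, 71.95, 67.17]) / 5)"   # prints ~76.91, matching the AVG column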

Notice

If you want to run our experiments on a different number of GPUs, you should keep Batch_size * GPUs == 512. For example, use one GPU with batch size 512, or two GPUs with batch size 256 (CIFAR-100 and ImageNet100). If you want to change this, please adjust the other hyperparameters accordingly (see the sketch after the list below).

For CIFAR-100, you can use a single GPU with batch size 512 or two GPUs with batch size 256 (the accuracy is in the logs).
For ImageNet-100, we use two GPUs with batch size 256.
For ImageNet-1000, we use four GPUs with batch size 256.
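
As a sketch of the two CIFAR-100 launch variants (assuming, as in the Trainer section below, that train.sh takes the GPU IDs as its first argument, and that the per-GPU batch size is set in the option/yaml files rather than on the command line):

bash train.sh 0 ...     # one GPU:  per-GPU batch size 512 (1 x 512 = 512)
bash train.sh 0,1 ...   # two GPUs: per-GPU batch size 256 (2 x 256 = 512)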

The code was organized in a rush, so if you run into any problems, please contact me at [gxy010317@stu.edu.cn]. Thanks!

Acknowledgement

Our code is heavily based on the great codebase of Dytox; thanks for the wonderful code framework.

A part of our code is also inspired by CSCCT; thanks for their code.

Trainer

You can use the following command to run the code, as in Dytox:

bash train.sh 0,1 \
    --options options/data/cifar100_10-10.yaml options/data/cifar100_order1.yaml options/model/cifar_DKT.yaml \
    --name DKT \
    --data-path MY_PATH_TO_DATASET \
    --output-basedir PATH_TO_SAVE_CHECKPOINTS \
    --memory-size 2000
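
For ImageNet-100 the command should look similar; the option file names below are hypothetical (we only confirm the CIFAR-100 ones above), so please check the options/ directory for the actual names:

# NOTE: the imagenet yaml file names here are assumptions; check options/ for the real ones
bash train.sh 0,1 \
    --options options/data/imagenet100_10-10.yaml options/model/imagenet_DKT.yaml \
    --name DKT_imagenet100 \
    --data-path MY_PATH_TO_DATASET \
    --output-basedir PATH_TO_SAVE_CHECKPOINTS \
    --memory-size 2000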

Citation

If any part of our paper or code helps your research, please consider citing us and giving our repository a star.

@InProceedings{Gao_2023_CVPR, 
    author    = {Gao, Xinyuan and He, Yuhang and Dong, Songlin and Cheng, Jie and Wei, Xing and Gong, Yihong}, 
    title     = {DKT: Diverse Knowledge Transfer Transformer for Class Incremental Learning}, 
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, 
    month     = {June}, 
    year      = {2023}, 
    pages     = {24236-24245} 
}