LTRL: Boosting Long-tail Recognition via Reflective Learning

Official implementation of our ECCV 2024 (Oral) paper šŸ”„

Qihao Zhao*<sup>1,4</sup>, Yalun Dai*<sup>2</sup>, Shen Lin<sup>3</sup>, Wei Hu<sup>1</sup>, Fan Zhang<sup>1</sup>, Jun Liu<sup>4,5</sup>

1 Beijing University of Chemical Technology

2 Nanyang Technological University

3 Xidian University

4 Singapore University of Technology and Design

5 Lancaster University

(* Equal contribution)

The framework of LTRL

1. Requirements

pip install -r requirements.txt

2. Datasets

(1) Five benchmark datasets

data
ā”œā”€ā”€ ImageNet_LT
│   ā”œā”€ā”€ test
│   ā”œā”€ā”€ train
│   ā””ā”€ā”€ val
ā”œā”€ā”€ CIFAR100
│   ā””ā”€ā”€ cifar-100-python
ā”œā”€ā”€ CIFAR10
│   ā””ā”€ā”€ cifar-10-python
ā”œā”€ā”€ Place365
│   ā”œā”€ā”€ data_256
│   ā”œā”€ā”€ test_256
│   ā””ā”€ā”€ val_256
ā””ā”€ā”€ iNaturalist
    ā”œā”€ā”€ test2018
    ā””ā”€ā”€ train_val2018
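Before launching training, it can save time to confirm the data root matches the tree above. A minimal checker (directory names taken from this README; adjust if your local copy differs):

```python
import os

# Expected sub-directories per dataset, copied from the tree in this README.
EXPECTED = {
    "ImageNet_LT": ["train", "val", "test"],
    "CIFAR100": ["cifar-100-python"],
    "CIFAR10": ["cifar-10-python"],
    "Place365": ["data_256", "val_256", "test_256"],
    "iNaturalist": ["train_val2018", "test2018"],
}

def missing_dirs(root="data"):
    """Return the expected sub-directories that are absent under root."""
    missing = []
    for dataset, subdirs in EXPECTED.items():
        for sub in subdirs:
            path = os.path.join(root, dataset, sub)
            if not os.path.isdir(path):
                missing.append(path)
    return missing

if __name__ == "__main__":
    absent = missing_dirs()
    print("layout OK" if not absent else "missing: " + ", ".join(absent))
```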

(2) Txt files

data_txt
ā”œā”€ā”€ ImageNet_LT
│   ā”œā”€ā”€ ImageNet_LT_test.txt
│   ā”œā”€ā”€ ImageNet_LT_train.txt
│   ā””ā”€ā”€ ImageNet_LT_val.txt
ā”œā”€ā”€ Places_LT_v2
│   ā”œā”€ā”€ Places_LT_test.txt
│   ā”œā”€ā”€ Places_LT_train.txt
│   ā””ā”€ā”€ Places_LT_val.txt
ā””ā”€ā”€ iNaturalist18
    ā”œā”€ā”€ iNaturalist18_train.txt
    ā””ā”€ā”€ iNaturalist18_val.txt
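These split files conventionally hold one sample per line: a relative image path and an integer class label separated by a space (the format used by the common ImageNet-LT / Places-LT txt files; verify against your copies). A minimal loader under that assumption:

```python
def load_split(txt_path):
    """Return (image_paths, labels) parsed from one split txt file.
    Assumed line format: "<relative/image/path> <int_label>",
    e.g. "train/n01440764/n01440764_10026.JPEG 0"."""
    paths, labels = [], []
    with open(txt_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            path, label = line.rsplit(" ", 1)
            paths.append(path)
            labels.append(int(label))
    return paths, labels
```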

3. Pretrained models

4. Train

Train SADE_RL/BSCE_RL

(1) CIFAR100-LT

nohup python train.py -c configs/{sade or bsce}/config_cifar100_ir10_{sade or ce}_rl.json &>{sade or ce}_rl_10.out&
nohup python train.py -c configs/{sade or bsce}/config_cifar100_ir50_{sade or ce}_rl.json &>{sade or ce}_rl_50.out&
nohup python train.py -c configs/{sade or bsce}/config_cifar100_ir100_{sade or ce}_rl.json &>{sade or ce}_rl_100.out&

Example:
nohup python train.py -c configs/sade/config_cifar100_ir100_sade_rl.json &>sade_rl_100.out&
# test
python test.py -r {$PATH}
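In the config names, ir10/ir50/ir100 denote the imbalance ratio: the size of the most frequent class divided by the least frequent one. As a sketch, CIFAR100-LT is conventionally built with an exponential per-class profile (an assumed formula, following common long-tail practice, not verified against this codebase's sampler):

```python
def long_tail_counts(n_max=500, num_classes=100, imbalance_ratio=100):
    """Per-class training-set sizes under the usual exponential profile:
    n_i = n_max * (1/ir)^(i / (num_classes - 1)), so class 0 keeps all
    n_max samples and the last class keeps n_max / ir."""
    return [int(n_max * (1 / imbalance_ratio) ** (i / (num_classes - 1)))
            for i in range(num_classes)]

counts = long_tail_counts()
print(counts[0], counts[-1])  # head-class vs. tail-class size: 500 5
```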

(2) ImageNet-LT

python train.py -c configs/{sade or bsce}/config_imagenet_lt_resnext50_{sade or ce}_rl.json

(3) Places-LT

python train.py -c configs/{sade or bsce}/config_imagenet_lt_resnext50_{sade or ce}_rl.json

(4) iNaturalist2018

python train.py -c configs/{sade or bsce}/config_iNaturalist_resnet50_{sade or ce}_rl.json

Train baseline: SADE/BSCE

(1) CIFAR100-LT

nohup python train.py -c configs/{sade or bsce}/config_cifar100_ir10_{sade or ce}.json &>{sade or ce}_10.out&
nohup python train.py -c configs/{sade or bsce}/config_cifar100_ir50_{sade or ce}.json &>{sade or ce}_50.out&
nohup python train.py -c configs/{sade or bsce}/config_cifar100_ir100_{sade or ce}.json &>{sade or ce}_100.out&

(2) ImageNet-LT

python train.py -c configs/{sade or bsce}/config_imagenet_lt_resnext50_{sade or ce}.json

(3) Places-LT

python train.py -c configs/{sade or bsce}/config_imagenet_lt_resnext50_{sade or ce}.json

(4) iNaturalist2018

python train.py -c configs/{sade or bsce}/config_iNaturalist_resnet50_{sade or ce}.json

5. Test

python test.py -r {$PATH}

Citation

If you find our work inspiring or use our codebase in your research, please consider giving a star ā­ and a citation.

@article{zhao2024ltrl,
  title={LTRL: Boosting Long-tail Recognition via Reflective Learning},
  author={Zhao, Qihao and Dai, Yalun and Lin, Shen and Hu, Wei and Zhang, Fan and Liu, Jun},
  journal={arXiv preprint arXiv:2407.12568},
  year={2024}
}

Acknowledgements

The framework is based on SADE and RIDE.