Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification

Implementation of
"Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification"
Liuyu Xiang, Guiguang Ding, Jungong Han
European Conference on Computer Vision (ECCV), 2020 (Spotlight)

<img src='./assets/LFME.PNG' width=800>

Requirements

The code is built on PyTorch, on top of the OLTR codebase; an environment that runs OLTR should also work here.

Data Preparation

Follow OLTR for data preparation.

Getting Started (Training & Testing)

Train the three expert models (many-shot, medium-shot, and low-shot):

CUDA_VISIBLE_DEVICES=0 python main.py --config=./config/many_shot.py
CUDA_VISIBLE_DEVICES=0 python main.py --config=./config/median_shot.py
CUDA_VISIBLE_DEVICES=0 python main.py --config=./config/low_shot.py

Train the unified student model with LFME:

CUDA_VISIBLE_DEVICES=0 python main_LFME.py --config=./config/ImageNet_LT/LFME.py

Test the trained student model:

CUDA_VISIBLE_DEVICES=0 python main_LFME.py --config=./config/ImageNet_LT/LFME.py --test
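
For readers new to the pipeline, the sketch below illustrates the general idea of distilling the three expert models into a single student: a cross-entropy term on the ground-truth labels combined with temperature-softened KL terms against each expert's outputs. This is a minimal, simplified illustration only; the function name, the fixed `T` and `alpha` values, and the static `expert_weights` are placeholders, whereas the paper adapts the expert weights and the instance selection in a self-paced manner (see main_LFME.py and the LFME config for the actual implementation).

import torch
import torch.nn.functional as F

def multi_expert_distillation_loss(student_logits, expert_logits_list, labels,
                                   expert_weights, T=2.0, alpha=0.5):
    """Hypothetical sketch of a multi-expert knowledge distillation loss.

    student_logits:     (B, C) logits from the unified student model
    expert_logits_list: list of (B, C) logit tensors, one per expert
    labels:             (B,) ground-truth class indices
    expert_weights:     per-expert scalars weighting each KD term
                        (the paper adapts these in a self-paced manner)
    """
    # Standard cross-entropy on the hard labels.
    ce = F.cross_entropy(student_logits, labels)

    # KL divergence between the student's and each expert's softened outputs.
    kd = 0.0
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    for w, expert_logits in zip(expert_weights, expert_logits_list):
        p_expert = F.softmax(expert_logits.detach() / T, dim=1)
        kd = kd + w * F.kl_div(log_p_student, p_expert, reduction="batchmean") * (T * T)

    return (1 - alpha) * ce + alpha * kd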

Citation

If you find our work useful for your research, please consider citing the following paper:

@inproceedings{xiang2020learning,
  title={Learning from multiple experts: Self-paced knowledge distillation for long-tailed classification},
  author={Xiang, Liuyu and Ding, Guiguang and Han, Jungong},
  booktitle={European Conference on Computer Vision},
  pages={247--263},
  year={2020},
  organization={Springer}
}

Contact

If you have any questions, please feel free to contact xiangly17@mails.tsinghua.edu.cn.

Acknowledgement

The code is partly based on OLTR.