# Awesome Knowledge Distillation for Image Classification
This repository includes the official implementations of the following papers:

- **NKD and USKD** (ICCV 2023): [From Knowledge Distillation to Self-Knowledge Distillation: A Unified Approach with Normalized Loss and Customized Soft Labels](https://arxiv.org/abs/2303.13005)
- **ViTKD**: [ViTKD: Practical Guidelines for ViT feature knowledge distillation](https://arxiv.org/abs/2209.02432)
It also provides unofficial implementations of several other knowledge distillation methods.
If this repository is helpful, please give us a star ⭐ and cite relevant papers.
## Install
- Prepare the ImageNet dataset in `data/imagenet`.
- Set up the environment:

  ```bash
  pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
  pip install -r requirements.txt
  ```

- This repo uses mmcls == 0.23.2. If you want to use a higher mmcls version for distillation, please refer to the 1.0 branch and adapt the code accordingly. A quick version check is sketched after this list.
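
As a minimal sanity check (assuming the packages were installed with the pinned versions above), you can confirm that the expected torch and mmcls versions are importable:

```bash
# Optional sanity check: confirm the pinned versions installed above.
python -c "import torch, mmcls; print(torch.__version__, mmcls.__version__)"
# Expected output: 1.8.1+cu111 0.23.2
```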
## Run
- Please refer to nkd.md and vitkd.md to train the student and obtain its weights; a typical launch command is sketched after this list.
- You can modify the configs to choose different distillation methods and teacher-student pairs.
- The implementation details of the different methods can be found in the `distillation` folder.
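
Since this repo is built on MMClassification, training with a distillation config typically follows the standard MMClassification launch pattern. This is only a rough sketch: the config path and GPU count below are placeholders, and the real config names are listed in nkd.md and vitkd.md.

```bash
# Hypothetical launch using the standard MMClassification distributed-training
# entry point; replace the config path and GPU count with values from
# nkd.md / vitkd.md for the method and teacher-student pair you want.
bash tools/dist_train.sh configs/distillers/your_distill_config.py 8
```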
## Citing NKD and USKD
```
@article{yang2023knowledge,
  title={From Knowledge Distillation to Self-Knowledge Distillation: A Unified Approach with Normalized Loss and Customized Soft Labels},
  author={Yang, Zhendong and Zeng, Ailing and Li, Zhe and Zhang, Tianke and Yuan, Chun and Li, Yu},
  journal={arXiv preprint arXiv:2303.13005},
  year={2023}
}
```
## Citing ViTKD
```
@article{yang2022vitkd,
  title={ViTKD: Practical Guidelines for ViT feature knowledge distillation},
  author={Yang, Zhendong and Li, Zhe and Zeng, Ailing and Li, Zexian and Yuan, Chun and Li, Yu},
  journal={arXiv preprint arXiv:2209.02432},
  year={2022}
}
```
## Acknowledgement
Our code is based on the MMClassification project.