# [CVPR 2024 Highlight] Logit Standardization in Knowledge Distillation
[Project Page] [arXiv] [Supplementary Materials] [Zhihu (in Chinese)]
| Vanilla KD | KD w/ our logit standardization |
| --- | --- |
| <img src=".github/1_1-1.png" width="50%" /> | <img src=".github/2_2-1.png" width="50%" /> |
## Abstract
Knowledge distillation involves transferring soft labels from a teacher to a student using a shared temperature-based softmax function. However, the assumption of a shared temperature between teacher and student implies a mandatory exact match between their logits in terms of logit range and variance. This side effect limits the performance of the student, considering the capacity discrepancy between the two models and the finding that the teacher's innate logit relations are sufficient for the student to learn. To address this issue, we propose setting the temperature as the weighted standard deviation of the logits and performing a plug-and-play Z-score pre-processing of logit standardization before applying the softmax and the Kullback-Leibler divergence. Our pre-processing enables the student to focus on the essential logit relations from the teacher rather than requiring a magnitude match, and it can improve the performance of existing logit-based distillation methods. We also show a typical case where the conventional setting of a shared temperature between teacher and student cannot reliably yield an authentic distillation evaluation; this challenge is successfully alleviated by our Z-score. We extensively evaluate our method for various student and teacher models on CIFAR-100 and ImageNet, showing its significant superiority. Vanilla knowledge distillation powered by our pre-processing achieves favorable performance against state-of-the-art methods, and other distillation variants obtain considerable gains with the assistance of our pre-processing.
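
In code, the pre-processing amounts to a per-sample Z-score of the logits before the temperature-scaled softmax. Below is a minimal PyTorch sketch of that step; it illustrates the idea only, and the function name, epsilon, and tensor shapes are illustrative rather than taken from this repository.

```python
import torch

def zscore_logits(logits: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Standardize each sample's logits (shape: batch x num_classes) to zero mean
    and unit variance, so only the relative logit relations reach the softmax."""
    mean = logits.mean(dim=-1, keepdim=True)
    std = logits.std(dim=-1, keepdim=True)
    return (logits - mean) / (std + eps)
```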
## :tada: News
- 2024.4: Selected as a Highlight paper at CVPR 2024.
- 2024.3: Released the code and the arXiv preprint.
- 2024.2: Accepted by CVPR 2024.
- 2023.7: Rejected by ICCV 2023.
## Usage
The code is built on mdistiller, Multi-Level-Logit-Distillation, CTKD and tiny-transformers.
### Installation
Environments:
- Python 3.8
- PyTorch 1.7.0
Install the package:

```bash
sudo pip3 install -r requirements.txt
sudo python setup.py develop
```
### Distilling CNNs

#### CIFAR-100
- Download `cifar_teachers.tar` and untar it into `./download_ckpts` via `tar xvf cifar_teachers.tar`.
- For KD (a sketch of how these flags enter the loss follows this list):

  ```bash
  # KD
  python tools/train.py --cfg configs/cifar100/kd/resnet32x4_resnet8x4.yaml
  # KD+Ours
  python tools/train.py --cfg configs/cifar100/kd/resnet32x4_resnet8x4.yaml --logit-stand --base-temp 2 --kd-weight 9
  ```
- For DKD:

  ```bash
  # DKD
  python tools/train.py --cfg configs/cifar100/dkd/resnet32x4_resnet8x4.yaml
  # DKD+Ours
  python tools/train.py --cfg configs/cifar100/dkd/resnet32x4_resnet8x4.yaml --logit-stand --base-temp 2 --kd-weight 9
  ```
- For MLKD:

  ```bash
  # MLKD
  python tools/train.py --cfg configs/cifar100/mlkd/resnet32x4_resnet8x4.yaml
  # MLKD+Ours
  python tools/train.py --cfg configs/cifar100/mlkd/resnet32x4_resnet8x4.yaml --logit-stand --base-temp 2 --kd-weight 9
  ```
- For CTKD, please refer to CTKD.
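
For orientation, the sketch below shows roughly how `--logit-stand`, `--base-temp`, and `--kd-weight` map onto a vanilla KD objective computed on standardized logits. It is a minimal illustration under the assumptions stated in the comments (including the assumed flag-to-parameter correspondence), not the exact loss composition used by the configs in this repository.

```python
import torch
import torch.nn.functional as F

def kd_loss_with_logit_standardization(
    logits_student: torch.Tensor,   # (batch, num_classes)
    logits_teacher: torch.Tensor,   # (batch, num_classes)
    base_temp: float = 2.0,         # assumed to correspond to --base-temp
    kd_weight: float = 9.0,         # assumed to correspond to --kd-weight
    eps: float = 1e-7,
) -> torch.Tensor:
    # Z-score each sample's logits (the effect of --logit-stand).
    def zscore(x: torch.Tensor) -> torch.Tensor:
        return (x - x.mean(dim=-1, keepdim=True)) / (x.std(dim=-1, keepdim=True) + eps)

    log_p_student = F.log_softmax(zscore(logits_student) / base_temp, dim=-1)
    p_teacher = F.softmax(zscore(logits_teacher) / base_temp, dim=-1)
    # The usual T^2 factor keeps gradient magnitudes comparable across temperatures.
    return kd_weight * (base_temp ** 2) * F.kl_div(log_p_student, p_teacher, reduction="batchmean")

# Example usage with random logits:
# loss = kd_loss_with_logit_standardization(torch.randn(8, 100), torch.randn(8, 100))
```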
#### Results and Logs
We put the training logs in `./logs` and hyperlink them below. Each log file is named `KD_TYPE,TEACHER,STUDENT,BASE_TEMPERATURE,KD_WEIGHT.txt`; for DKD, the possible third value is the value of BETA. Due to the averaging operation and randomness, the reported results may differ slightly from the logged results.
- Teacher and student have identical structures:
Teacher <br> Student | ResNet32x4 <br> ResNet8x4 | VGG13 <br> VGG8 | Wrn_40_2 <br> Wrn_40_1 | Wrn_40_2 <br> Wrn_16_2 | ResNet56 <br> ResNet20 | ResNet110 <br> ResNet32 | ResNet110 <br> ResNet20 |
---|---|---|---|---|---|---|---|
KD | 73.33 | 72.98 | 73.54 | 74.92 | 70.66 | 73.08 | 70.67 |
KD+Ours | 76.62 | 74.36 | 74.37 | 76.11 | 71.43 | 74.17 | 71.48 |
CTKD | 73.39 | 73.52 | 73.93 | 75.45 | 71.19 | 73.52 | 70.99 |
CTKD+Ours | 76.67 | 74.47 | 74.58 | 76.08 | 71.34 | 74.01 | 71.39 |
DKD | 76.32 | 74.68 | 74.81 | 76.24 | 71.97 | 74.11 | 71.06 |
DKD+Ours | 77.01 | 74.81 | 74.89 | 76.39 | 72.32 | 74.29 | 71.85 |
MLKD | 77.08 | 75.18 | 75.35 | 76.63 | 72.19 | 74.11 | 71.89 |
MLKD+Ours | 78.28 | 75.22 | 75.56 | 76.95 | 72.33 | 74.32 | 72.27 |
- Teacher and student have distinct structures:
Teacher <br> Student | ResNet32x4 <br> SHN-V2 | ResNet32x4 <br> Wrn_16_2 | ResNet32x4 <br> Wrn_40_2 | Wrn_40_2 <br> ResNet8x4 | Wrn_40_2 <br> MN-V2 | VGG13 <br> MN-V2 | ResNet50 <br> MN-V2 |
---|---|---|---|---|---|---|---|
KD | 74.45 | 74.90 | 77.70 | 73.97 | 68.36 | 67.37 | 67.35 |
KD+Ours | 75.56 | 75.26 | 77.92 | 77.11 | 69.23 | 68.61 | 69.02 |
CTKD | 75.37 | 74.57 | 77.66 | 74.61 | 68.34 | 68.50 | 68.67 |
CTKD+Ours | 76.18 | 75.16 | 77.99 | 77.03 | 69.53 | 68.98 | 69.36 |
DKD | 77.07 | 75.70 | 78.46 | 75.56 | 69.28 | 69.71 | 70.35 |
DKD+Ours | 77.37 | 76.19 | 78.95 | 76.75 | 70.01 | 69.98 | 70.45 |
MLKD | 78.44 | 76.52 | 79.26 | 77.33 | 70.78 | 70.57 | 71.04 |
MLKD+Ours | 78.76 | 77.53 | 79.66 | 77.68 | 71.61 | 70.94 | 71.19 |
#### Training on ImageNet
- Download the dataset at https://image-net.org/ and put it into `./data/imagenet`.

```bash
# KD
python tools/train.py --cfg configs/imagenet/r34_r18/kd.yaml
# KD+Ours
python tools/train.py --cfg configs/imagenet/r34_r18/kd.yaml --logit-stand --base-temp 2 --kd-weight 9
```
### Distilling ViTs
Please refer to tiny-transformers.
#### Results
Model | Top-1 Acc. (Base) | Top-1 Acc. (ECCV 2022) | Top-1 Acc. (KD+Ours) |
---|---|---|---|
DeiT-Tiny | 65.08 (weights \| log) | 78.15 (weights \| log) | 78.55 (weights \| log) |
T2T-ViT-7 | 69.37 (weights \| log) | 78.35 (weights \| log) | 78.43 (weights \| log) |
PiT-Tiny | 73.58 (weights \| log) | 78.48 (weights \| log) | 78.76 (weights \| log) |
PVT-Tiny | 69.22 (weights \| log) | 77.07 (weights \| log) | 78.43 (weights \| log) |
### Reproduce Figures 3, 4, and 5
Please refer to visualizations.
## Acknowledgement
Sincere gratitude to the contributors of mdistiller, CTKD, Multi-Level-Logit-Distillation and tiny-transformers for their distinguished efforts.
## :mailbox_with_mail: Contact
Shangquan Sun: shangquansun@gmail.com
## :mega: Citation
If you find this project helpful for your research, please consider citing the following papers:
```bibtex
@inproceedings{sun2024logit,
  title={Logit standardization in knowledge distillation},
  author={Sun, Shangquan and Ren, Wenqi and Li, Jingzhi and Wang, Rui and Cao, Xiaochun},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={15731--15740},
  year={2024}
}

@article{sun2024logit,
  title={Logit Standardization in Knowledge Distillation},
  author={Sun, Shangquan and Ren, Wenqi and Li, Jingzhi and Wang, Rui and Cao, Xiaochun},
  journal={arXiv preprint arXiv:2403.01427},
  year={2024}
}
```