PointDistiller
PointDistiller: Structured Knowledge Distillation Towards Efficient and Compact 3D Detection, CVPR'23
Linfeng Zhang*, Runpei Dong*, Hung-Shuo Tai, and Kaisheng Ma
OpenAccess | arXiv | Logs
This repository contains the implementation of the paper PointDistiller: Structured Knowledge Distillation Towards Efficient and Compact 3D Detection (CVPR 2023).
Environment
This codebase was tested with the following environment configurations. It may work with other versions.
- Ubuntu 18.04/20.04
- CUDA 10.2/11.3
- GCC 7.5.0/9.4.0
- Python 3.7.11/3.8.8
- PyTorch 1.9.0/1.10.0
- MMCV v1.4.8
- MMDetection3D v1.0.0rc0+
- MMDetection v2.22.0
- MMSegmentation v0.22.1
1. Installation
Please refer to getting_started.md for installation.
2. Datasets
We use the KITTI and nuScenes datasets. Please follow the official instructions to set them up.
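Since this codebase builds on MMDetection3D, dataset preparation likely follows its standard pipeline. Below is a minimal sketch for KITTI, assuming the stock `tools/create_data.py` script and default paths (verify against the official MMDetection3D data-preparation docs for the pinned version):

```shell
# Expected layout after downloading KITTI from the official site:
# data/kitti/
#   ImageSets/
#   training/   (calib, image_2, label_2, velodyne)
#   testing/    (calib, image_2, velodyne)

# Generate the .pkl info files and ground-truth database used for training
python tools/create_data.py kitti \
    --root-path ./data/kitti \
    --out-dir ./data/kitti \
    --extra-tag kitti
```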
3. How to Run
Once the environment and datasets are set up, you can start knowledge distillation by running
DEVICE_ID=<gpu_id>
CUDA_VISIBLE_DEVICES=$DEVICE_ID python tools/train.py <student_cfg> --use-kd # for single gpu
bash ./tools/dist_train.sh <student_cfg> 8 --use-kd # for multiple gpus
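For concreteness, a single-GPU invocation might look like the following; the config path is hypothetical and only illustrates where a student config would go — substitute an actual config shipped with this repository:

```shell
# Hypothetical student config path, for illustration only
DEVICE_ID=0
CUDA_VISIBLE_DEVICES=$DEVICE_ID python tools/train.py \
    configs/pointdistiller/pointpillars_kitti_student.py --use-kd
```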
License
PointDistiller is released under the MIT License. See the LICENSE file for more details.
Acknowledgements
Many thanks to the following codebases, which helped us a lot in building this one:
Citation
If you find our work useful in your research, please consider citing:
@inproceedings{pointdistiller23,
title={PointDistiller: Structured Knowledge Distillation Towards Efficient and Compact 3D Detection},
author={Linfeng Zhang and Runpei Dong and Hung-Shuo Tai and Kaisheng Ma},
booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2023},
}