PlanKD: Compressing End-to-End Motion Planner for Autonomous Driving (CVPR 2024)
Introduction
This repository contains the code for the paper: On the Road to Portability: Compressing End-to-End Motion Planner for Autonomous Driving, accepted at CVPR 2024.
TL;DR: We propose PlanKD, the first knowledge distillation framework tailored for compressing end-to-end motion planning models in autonomous driving.
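As background for what "knowledge distillation" means here, below is an illustrative sketch of classic logit distillation in plain Python. This is NOT PlanKD's actual objective, which distills planning-relevant knowledge; see interfuser/plankd.py for the real implementation.

```python
import math

def softmax(logits, temperature=1.0):
    # soften the distribution with a temperature before normalizing
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(student_logits, teacher_logits, temperature=4.0):
    # cross-entropy between the softened teacher and student distributions;
    # minimized when the student matches the teacher exactly
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student))

# a student that matches the teacher incurs a lower loss than one that diverges
same = distill_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
diff = distill_loss([0.1, 1.0, 2.0], [2.0, 1.0, 0.1])
assert same < diff
```

The temperature controls how much of the teacher's "dark knowledge" (relative preferences among non-top outputs) is exposed to the student.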
<img src="./assets/framework.png" style="zoom:60%;" />

Setup
Clone the repository and create the Python environment.
git clone https://github.com/tulerfeng/PlanKD.git
conda env create -f environment.yml
conda activate plankd
Download and set up the CARLA 0.9.10.1 environment following the related instructions in InterFuser or TCP.
Dataset Generation
We provide the scripts for dataset generation in the dataset and data_collection folders. Please refer to InterFuser or TCP for dataset generation instructions. Note that it is unnecessary to run all the data collection scripts: you can run them selectively, for example to collect data uniformly across different towns and weather conditions.
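The selective collection idea can be sketched as sampling (town, weather) combinations uniformly instead of running every script. The town/weather values and the collect_*.sh naming below are made up for illustration; match them to the actual scripts in the dataset and data_collection folders.

```python
import itertools
import random

# Hypothetical towns and weather presets -- substitute the ones your
# collection scripts actually cover.
towns = ["Town01", "Town02", "Town03", "Town05"]
weathers = ["ClearNoon", "WetSunset", "HardRainNoon"]

random.seed(0)  # reproducible selection
pairs = list(itertools.product(towns, weathers))
subset = random.sample(pairs, k=6)  # collect only half of the 12 combinations

# Emit one (hypothetical) collection command per sampled combination.
commands = [f"bash data_collection/collect_{t}_{w}.sh" for t, w in subset]
for cmd in commands:
    print(cmd)
```

Sampling without replacement keeps the subset balanced across the product of towns and weathers, which is the point of collecting "uniformly".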
We also provide a tiny dataset for demonstration, which can be downloaded here. On this tiny demo dataset, InterFuser (26.3M) obtains a driving score of 36.52 / 25.54 with / without PlanKD on Town05 Short.
Training
Train the teacher InterFuser (52.9M) model.
cd ./interfuser
bash scripts/train.sh interfuser_baseline
We also provide an example teacher model for direct use, which can be downloaded here. Put the teacher model checkpoint under the interfuser/output folder.
Train the student InterFuser (26.3M) model without PlanKD.
bash scripts/train.sh interfuser_baseline2
Train the student InterFuser (26.3M) model with PlanKD.
bash scripts/train_plankd.sh interfuser_baseline2
The InterFuser student models with 26.3M, 11.7M, and 3.8M parameters are named interfuser_baseline2, interfuser_baseline4, and interfuser_baseline5, respectively. The core code of PlanKD is in interfuser/plankd.py.
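The size-to-config mapping above can be captured in a small lookup, so the right training command is easy to assemble. The mapping comes from this README; the helper function itself is just a convenience sketch.

```python
# Student sizes (parameter counts) mapped to their config names, per the README.
STUDENT_CONFIGS = {
    "26.3M": "interfuser_baseline2",
    "11.7M": "interfuser_baseline4",
    "3.8M": "interfuser_baseline5",
}

def train_command(size: str, plankd: bool = True) -> str:
    # choose the PlanKD or plain training script, matching the commands above
    script = "scripts/train_plankd.sh" if plankd else "scripts/train.sh"
    return f"bash {script} {STUDENT_CONFIGS[size]}"

print(train_command("11.7M"))  # bash scripts/train_plankd.sh interfuser_baseline4
print(train_command("3.8M", plankd=False))  # bash scripts/train.sh interfuser_baseline5
```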
Evaluation
Launch the CARLA server.
SDL_VIDEODRIVER=offscreen ./CarlaUE4.sh -carla-world-port=2000 -opengl
Modify the configuration in the /leaderboard/scripts/run_evaluation.sh file.
Run the evaluation.
SDL_VIDEODRIVER="dummy" ./leaderboard/scripts/run_evaluation.sh
For the evaluation of TCP models, please refer to the related code in TCP, since its agent config differs from InterFuser's. For the architecture of the small TCP models, please refer to our code in the /TCP folder and integrate it into the original codebase.
Trained Weights
We also provide the student model weights trained with PlanKD for direct evaluation, which can be downloaded here.
Acknowledgement
This implementation is based on code from several repositories.
Citation
If you find our repository or paper useful, please cite us as:
@article{feng2024road,
  title={On the Road to Portability: Compressing End-to-End Motion Planner for Autonomous Driving},
  author={Kaituo Feng and Changsheng Li and Dongchun Ren and Ye Yuan and Guoren Wang},
  journal={arXiv preprint arXiv:2403.01238},
  year={2024}
}
License
All code within this repository is under Apache License 2.0.