<div align="center"> <h1> <b> Language Prompt for Autonomous Driving </b> </h1> </div>

Dongming Wu*, Wencheng Han*, Tiancai Wang, Yingfei Liu, Xiangyu Zhang, Jianbing Shen

:fire: Introduction

<p align="center"><img src="./figs/example.jpg" width="800"/></p>

This is the official implementation of Language Prompt for Autonomous Driving.

:boom: News

:star: Benchmark

We expand the nuScenes dataset by annotating language prompts, naming the result NuPrompt. It is a large-scale language-prompt dataset for driving scenes, containing 40,147 language prompts for 3D objects. Thanks to nuScenes, our descriptions are close to the nature and complexity of real driving, covering a 3D, multi-view, and multi-frame space.

The data can be downloaded from NuPrompt.
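As a rough illustration of how prompt annotations of this kind are typically consumed, here is a minimal sketch. The field names (`prompt`, `scene_token`, `instance_tokens`) and the per-prompt structure are assumptions for illustration only; consult the released NuPrompt files for the actual schema.

```python
# Hypothetical NuPrompt-style annotation record: one language prompt
# paired with the 3D object instances it refers to across frames.
# The actual released schema may differ; this is an assumed layout.
sample_annotation = {
    "prompt": "the cars that are turning left",
    "scene_token": "scene-0001",
    "instance_tokens": ["inst_a", "inst_b"],  # objects matched by the prompt
}

def instances_for_prompt(annotation):
    """Return the set of object instance tokens a prompt refers to."""
    return set(annotation["instance_tokens"])

print(sorted(instances_for_prompt(sample_annotation)))
```

In a tracking pipeline, these tokens would then be used to select the ground-truth boxes for the prompt-referred objects in each frame.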

:hammer: Model

Our model is built upon PF-Track.

Please refer to data.md for preparing data and pre-trained models.

Please refer to environment.md for environment installation.

Please refer to training_inference.md for training and evaluation.

:rocket: Results

| Method | AMOTA | AMOTP | RECALL | Model | Config |
| :---: | :---: | :---: | :---: | :---: | :---: |
| PromptTrack | 0.200 | 1.572 | 32.5% | model | config |

:point_right: Citation

If you find our work useful in your research, please consider citing:

```bibtex
@article{wu2023language,
  title={Language Prompt for Autonomous Driving},
  author={Wu, Dongming and Han, Wencheng and Wang, Tiancai and Liu, Yingfei and Zhang, Xiangyu and Shen, Jianbing},
  journal={arXiv preprint},
  year={2023}
}

@inproceedings{wu2023referring,
  title={Referring multi-object tracking},
  author={Wu, Dongming and Han, Wencheng and Wang, Tiancai and Dong, Xingping and Zhang, Xiangyu and Shen, Jianbing},
  booktitle={CVPR},
  year={2023}
}
```

:heart: Acknowledgements

We thank the authors of the following open-source projects.