ST-P3: End-to-end Vision-based Autonomous Driving via Spatial-Temporal Feature Learning
Shengchao Hu, Li Chen, Penghao Wu, Hongyang Li, Junchi Yan, Dacheng Tao.

Introduction

This repository is the official PyTorch Lightning implementation of ST-P3.

TL;DR: We propose a spatial-temporal feature learning scheme that yields more representative features for the perception, prediction, and planning tasks simultaneously in autonomous driving, and on top of it devise an explicit pipeline that generates planning trajectories directly from raw sensor inputs.

Get Started

Setup

git clone https://github.com/OpenDriveLab/ST-P3.git
cd ST-P3
conda env create -f environment.yml
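
Then activate the new environment before running any of the scripts below; the environment name is whatever environment.yml declares (stp3 below is only a placeholder):

conda activate stp3   # replace stp3 with the name declared in environment.yml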

Pre-trained models

Evaluation

To evaluate the model on nuScenes:

bash scripts/eval_plan.sh ${checkpoint} ${dataroot}
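
For example, assuming a checkpoint downloaded to checkpoints/ and a local nuScenes root (both paths below are placeholders, not fixed by this repo):

bash scripts/eval_plan.sh checkpoints/stp3_plan.ckpt /data/nuscenes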

To evaluate the model on CARLA:
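
Closed-loop evaluation runs against a live CARLA simulator. As a sketch only, assuming a standard CARLA 0.9.x Linux binary release (the exact version and flags are assumptions, not taken from this repo), a server can be launched with:

./CarlaUE4.sh --world-port=2000 -opengl   # start a CARLA server on port 2000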

Training

# (recommended) pretrain the perception module first
bash scripts/train_perceive.sh ${configs} ${dataroot}

# (optional) train the prediction module; not required for end-to-end training
bash scripts/train_prediction.sh ${configs} ${dataroot} ${pretrained}

# train the entire model end-to-end
bash scripts/train_plan.sh ${configs} ${dataroot} ${pretrained}
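
As a sketch of the staged workflow, with purely hypothetical config and checkpoint paths (substitute the config files shipped in this repo and your own output locations):

# stage 1: pretrain the perception module (config path is hypothetical)
bash scripts/train_perceive.sh path/to/perception_config.yml /data/nuscenes

# stage 2: end-to-end training, initialised from the stage-1 checkpoint
bash scripts/train_plan.sh path/to/planning_config.yml /data/nuscenes path/to/perception.ckpt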

Benchmark

Open-loop planning results on nuScenes:

| Method | L2 (m) 1s | L2 (m) 2s | L2 (m) 3s | Collision (%) 1s | Collision (%) 2s | Collision (%) 3s |
|---|---|---|---|---|---|---|
| Vanilla | 0.50 | 1.25 | 2.80 | 0.68 | 0.98 | 2.76 |
| NMP | 0.61 | 1.44 | 3.18 | 0.66 | 0.90 | 2.34 |
| Freespace | 0.56 | 1.27 | 3.08 | 0.65 | 0.86 | 1.64 |
| ST-P3 | 1.33 | 2.11 | 2.90 | 0.23 | 0.62 | 1.27 |

Closed-loop results on CARLA (DS: driving score, RC: route completion):

| Method | Town05 Short DS | Town05 Short RC | Town05 Long DS | Town05 Long RC |
|---|---|---|---|---|
| CILRS | 7.47 | 13.40 | 3.68 | 7.19 |
| LBC | 30.97 | 55.01 | 7.05 | 32.09 |
| Transfuser | 54.52 | 78.41 | 33.15 | 56.36 |
| ST-P3 | 55.14 | 86.74 | 11.45 | 83.15 |

Visualization

<img src=imgs/nuScenes.png width="720" height="360" alt="nuscenes_vis"/><br/>

<img src=imgs/CARLA.png width="720" height="240" alt="CARLA_vis"/><br/>

Citation

If you find our repo or our paper useful, please use the following citation:

@inproceedings{hu2022stp3,
 title={ST-P3: End-to-end Vision-based Autonomous Driving via Spatial-Temporal Feature Learning}, 
 author={Shengchao Hu and Li Chen and Penghao Wu and Hongyang Li and Junchi Yan and Dacheng Tao},
 booktitle={European Conference on Computer Vision (ECCV)},
 year={2022}
}

License

All code within this repository is released under the Apache License 2.0.

Acknowledgement

We thank Xiangwei Geng for his support on the depth map generation, and Xiaosong Jia for fruitful discussions. Many thanks to the FIERY team for their excellent open-source project.