# HPTR
<p align="center">
  <img src="docs/hptr_banner.png" alt="HPTR overview" width="750px"/>
  <br/>HPTR realizes real-time and on-board motion prediction without sacrificing performance.
  <br/>To efficiently predict the multi-modal future of numerous agents (a), HPTR minimizes the computational overhead by: (b) sharing contexts among target agents, (c) reusing static contexts during online inference, and (d) avoiding expensive post-processing and ensembling.
</p>

**Real-Time Motion Prediction via Heterogeneous Polyline Transformer with Relative Pose Encoding**
Zhejun Zhang, Alexander Liniger, Christos Sakaridis, Fisher Yu and Luc Van Gool.<br/>
NeurIPS 2023<br/>
Project Website<br/>
arXiv Paper
```bibtex
@inproceedings{zhang2023hptr,
  title = {Real-Time Motion Prediction via Heterogeneous Polyline Transformer with Relative Pose Encoding},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  author = {Zhang, Zhejun and Liniger, Alexander and Sakaridis, Christos and Yu, Fisher and Van Gool, Luc},
  year = {2023},
}
```
## Updates
- The model checkpoint for Argoverse 2 is available as the WandB artifact `zhejun/hptr_av2_ckpt/av2_ckpt:v0` (see the WandB project); a download sketch follows this list.
- HPTR ranks 1st in minADE and 2nd in minFDE on the WOMD Motion Prediction Leaderboard 2023.
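As a minimal sketch (assuming you are logged in via `wandb login`), the checkpoint artifact can be fetched with the public WandB API; the file layout inside the artifact is not documented here, so list the downloaded directory to find the checkpoint:

```python
# Minimal sketch: fetch the AV2 checkpoint artifact via the WandB public API.
# Assumes `wandb login` has been run. The checkpoint filename inside the
# artifact is not documented here, so we simply list the directory.
import os
import wandb

api = wandb.Api()
artifact = api.artifact("zhejun/hptr_av2_ckpt/av2_ckpt:v0")
ckpt_dir = artifact.download()  # returns the local download directory
print("Downloaded to:", ckpt_dir)
print("Contents:", os.listdir(ckpt_dir))
```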
## Setup Environment
- Create the main conda environment by running:

  ```
  conda env create -f environment.yml
  ```

- Install the Waymo Open Dataset API manually, because the pip installation of version 1.5.2 is not supported on some Linux distributions, e.g. CentOS (see the import check after this list). Run:

  ```
  conda activate hptr
  wget https://files.pythonhosted.org/packages/85/1d/4cdd31fc8e88c3d689a67978c41b28b6e242bd4fe6b080cf8c99663b77e4/waymo_open_dataset_tf_2_11_0-1.5.2-py3-none-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl
  mv waymo_open_dataset_tf_2_11_0-1.5.2-py3-none-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl waymo_open_dataset_tf_2_11_0-1.5.2-py3-none-any.whl
  pip install --no-deps waymo_open_dataset_tf_2_11_0-1.5.2-py3-none-any.whl
  rm waymo_open_dataset_tf_2_11_0-1.5.2-py3-none-any.whl
  ```
- Create the conda environment for packing the Argoverse 2 Motion Forecasting Dataset by running:

  ```
  conda env create -f env_av2.yml
  ```

- We use WandB for logging. You can register an account for free.
- Be aware that:
  - We use 4 NVIDIA RTX 2080Ti GPUs for training and a single 2080Ti for evaluation. The training takes at least 5 days to converge.
  - This repo contains the experiments for the Waymo Motion Prediction Challenge and the Argoverse 2: Motion Forecasting Competition.
  - We cannot share pre-trained models according to the terms of the Waymo Open Motion Dataset.
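As a quick sanity check (not part of the official setup), the manually installed wheel should import cleanly inside the `hptr` environment:

```python
# Sanity check for the manually installed Waymo Open Dataset API.
# Run inside the `hptr` conda environment; importing should not raise.
import waymo_open_dataset
print(waymo_open_dataset)  # prints the module and its install location
```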
## Prepare Datasets
- Waymo Open Motion Dataset (WOMD):
  - Download the Waymo Open Motion Dataset. We use v1.2.
  - Run `python src/pack_h5_womd.py` or use `bash/pack_h5.sh` to pack the dataset into h5 files, which accelerates data loading during training and evaluation.
  - You should pack three splits: `training`, `validation` and `testing`. Packing the `training` split takes around 2 days; `validation` and `testing` should each take a few hours.
- Argoverse 2 Motion Forecasting Dataset (AV2):
  - Download the Argoverse 2 Motion Forecasting Dataset.
  - Run `python src/pack_h5_av2.py` or use `bash/pack_h5.sh` to pack the dataset into h5 files, which accelerates data loading during training and evaluation.
  - You should pack three splits: `training`, `validation` and `testing`. Each split should take a few hours. Once packed, you can sanity-check an h5 file with the sketch below.
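To verify that packing succeeded, a short walk over a packed file lists its datasets. This is a minimal sketch: the file path below is a placeholder, and the actual group/dataset names are defined by src/pack_h5_womd.py and src/pack_h5_av2.py.

```python
# Minimal sketch: inspect a packed h5 file. The path is a placeholder;
# the group/dataset names depend on the packing scripts.
import h5py

def show(name, obj):
    # Print every dataset with its shape and dtype.
    if isinstance(obj, h5py.Dataset):
        print(name, obj.shape, obj.dtype)

with h5py.File("h5_womd/validation.h5", "r") as f:
    f.visititems(show)
```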
## Training, Validation, Testing and Submission
Please refer to bash/train.sh for the training.
Once the training converges, you can use the saved checkpoints (WandB artifacts) to run validation and testing; please refer to bash/submission.sh for more details.
Once the validation/testing is finished, download the file `womd_K6.tar.gz` from WandB and submit it to the Waymo Motion Prediction Leaderboard. For AV2, download the file `av2_K6.parquet` from WandB and submit it to the Argoverse 2 Motion Forecasting Competition. A quick sanity check of the parquet file is sketched below.
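Before uploading, a quick peek at the AV2 submission file can catch obvious problems. Here is a minimal sketch using pandas (requires a parquet engine such as pyarrow; the expected columns are defined by the AV2 submission format and are not documented here):

```python
# Minimal sketch: peek at the AV2 submission parquet before uploading.
import pandas as pd

df = pd.read_parquet("av2_K6.parquet")
print(df.shape)             # number of rows and columns
print(df.columns.tolist())  # column names per the AV2 submission format
print(df.head())            # first few rows
```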
## Performance
Our submission to the WOMD leaderboard can be found here.
Our submission to the AV2 leaderboard can be found here.
## Ablation Models
Please refer to docs/ablation_models.md for the configurations of ablation models.
Specifically, you can find the Wayformer and SceneTransformer models built on top of our backbone. You can also try out different hierarchical architectures.
## License
This software is made available for non-commercial use under a Creative Commons license. You can find a summary of the license here.
## Acknowledgement
This work is funded by Toyota Motor Europe via the research project TRACE-Zurich (Toyota Research on Automated Cars Europe).