<div align="center"> <h1>MapTR <img src="assets/map.png" width="30"></h1> <h3>An End-to-End Framework for Online Vectorized HD Map Construction</h3>

Bencheng Liao<sup>1,2,3</sup> *, Shaoyu Chen<sup>1,3</sup> *, Yunchi Zhang<sup>1,3</sup>, Bo Jiang<sup>1,3</sup>, Tianheng Cheng<sup>1,3</sup>, Qian Zhang<sup>3</sup>, Wenyu Liu<sup>1</sup>, Chang Huang<sup>3</sup>, Xinggang Wang<sup>1 :email:</sup>

<sup>1</sup> School of EIC, HUST, <sup>2</sup> Institute of Artificial Intelligence, HUST, <sup>3</sup> Horizon Robotics

(*) equal contribution, (<sup>:email:</sup>) corresponding author.

arXiv preprint (arXiv 2208.14437)

Accepted to ICLR 2023 as a Spotlight (OpenReview)

Extended arXiv preprint: MapTRv2 (arXiv 2308.05736)

</div>

News

Introduction

<div align="center"><h4>MapTR/MapTRv2 is a simple, fast and strong online vectorized HD map construction framework.</h4></div>

*(framework figure)*

High-definition (HD) maps provide abundant and precise static environmental information about the driving scene, serving as a fundamental and indispensable component for planning in autonomous driving systems. In this paper, we present Map TRansformer (MapTR), an end-to-end framework for online vectorized HD map construction. We propose a unified permutation-equivalent modeling approach, i.e., modeling each map element as a point set with a group of equivalent permutations, which accurately describes the shape of the map element and stabilizes the learning process. We design a hierarchical query embedding scheme to flexibly encode structured map information and perform hierarchical bipartite matching for map element learning. To speed up convergence, we further introduce auxiliary one-to-many matching and dense supervision. The proposed method copes well with map elements of various, arbitrary shapes. It runs at real-time inference speed and achieves state-of-the-art performance on both the nuScenes and Argoverse2 datasets. Abundant qualitative results show stable and robust map construction quality in complex and varied driving scenes.
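The permutation-equivalent point-set formulation is concrete enough to sketch in code. The snippet below is a simplified illustration rather than code from this repo (the helper names `equivalent_permutations` and `point2point_cost` are made up for the example): it enumerates the geometry-preserving orderings of a map element's point set (2 for an open polyline, 2n for a closed polygon) and takes the minimum point-wise L1 cost over them, which is the idea behind treating all equivalent permutations as valid targets during matching.

```python
# Minimal sketch of permutation-equivalent point-set modeling (illustrative only,
# not from the MapTR codebase). A map element is a fixed-length (n, 2) point set;
# every ordering that preserves its geometry is an equally valid ground truth.
import numpy as np

def equivalent_permutations(points: np.ndarray, is_closed: bool) -> np.ndarray:
    """Return all geometry-preserving orderings of an (n, 2) point set."""
    n = points.shape[0]
    if not is_closed:
        # Open polyline (e.g., lane divider): forward and reversed traversal.
        return np.stack([points, points[::-1]])
    # Closed polygon (e.g., pedestrian crossing): n cyclic shifts x 2 directions.
    perms = []
    for start in range(n):
        rolled = np.roll(points, -start, axis=0)
        perms.append(rolled)
        perms.append(rolled[::-1])
    return np.stack(perms)

def point2point_cost(pred: np.ndarray, gt: np.ndarray, is_closed: bool) -> float:
    """Permutation-equivalent matching cost: min L1 distance over equivalent GTs."""
    candidates = equivalent_permutations(gt, is_closed)
    costs = np.abs(candidates - pred[None]).sum(axis=(1, 2))
    return float(costs.min())

# Toy usage: a prediction that traverses the ground truth in the opposite
# direction still incurs only a small cost.
gt = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
pred = gt[::-1] + 0.05
print(point2point_cost(pred, gt, is_closed=False))  # ~0.3 instead of a large penalty
```

In the full framework this point-level cost is combined with an instance-level bipartite assignment between predicted and ground-truth map elements; the sketch only covers the point-level part.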

Models

Results from the MapTRv2 paper

*(comparison figure)*

| Method | Backbone | Lr Schd | mAP | FPS |
|---|---|---|---|---|
| MapTR | R18 | 110ep | 45.9 | 35.0 |
| MapTR | R50 | 24ep | 50.3 | 15.1 |
| MapTR | R50 | 110ep | 58.7 | 15.1 |
| MapTRv2 | R18 | 110ep | 52.3 | 33.7 |
| MapTRv2 | R50 | 24ep | 61.5 | 14.1 |
| MapTRv2 | R50 | 110ep | 68.7 | 14.1 |
| MapTRv2 | V2-99 | 110ep | 73.4 | 9.9 |

Notes:

Results from this repo.

MapTR

<div align="center"><h4> nuScenes dataset</h4></div>
| Method | Backbone | BEV Encoder | Lr Schd | mAP | FPS | Memory | Config | Download |
|---|---|---|---|---|---|---|---|---|
| MapTR-nano | R18 | GKT | 110ep | 46.3 | 35.0 | 11907M (bs 24) | config | model / log |
| MapTR-tiny | R50 | GKT | 24ep | 50.0 | 15.1 | 10287M (bs 4) | config | model / log |
| MapTR-tiny | R50 | GKT | 110ep | 59.3 | 15.1 | 10287M (bs 4) | config | model / log |
| MapTR-tiny | Camera & LiDAR | GKT | 24ep | 62.7 | 6.0 | 11858M (bs 4) | config | model / log |
| MapTR-tiny | R50 | bevpool | 24ep | 50.1 | 14.7 | 9817M (bs 4) | config | model / log |
| MapTR-tiny | R50 | bevformer | 24ep | 48.7 | 15.0 | 10219M (bs 4) | config | model / log |
| MapTR-tiny<sup>+</sup> | R50 | GKT | 24ep | 51.3 | 15.1 | 15158M (bs 4) | config | model / log |
| MapTR-tiny<sup>+</sup> | R50 | bevformer | 24ep | 53.3 | 15.0 | 15087M (bs 4) | config | model / log |

Notes:

MapTRv2

Please run `git checkout maptrv2` and follow the installation instructions on that branch to use the following checkpoints.

<div align="center"><h4> nuScenes dataset</h4></div>
| Method | Backbone | BEV Encoder | Lr Schd | mAP | FPS | Memory | Config | Download |
|---|---|---|---|---|---|---|---|---|
| MapTRv2 | R50 | bevpool | 24ep | 61.4 | 14.1 | 19426M (bs 24) | config | model / log |
| MapTRv2* | R50 | bevpool | 24ep | 54.3 | WIP | 20363M (bs 24) | config | model / log |
<div align="center"><h4> Argoverse2 dataset</h4></div>
| Method | Backbone | BEV Encoder | Lr Schd | mAP | FPS | Memory | Config | Download |
|---|---|---|---|---|---|---|---|---|
| MapTRv2 | R50 | bevpool | 6ep | 64.3 | 14.1 | 20580M (bs 24) | config | model / log |
| MapTRv2* | R50 | bevpool | 6ep | 61.3 | WIP | 21515M (bs 24) | config | model / log |

Notes:

Qualitative results on nuScenes val split and Argoverse2 val split

<div align="center"><h4> MapTR/MapTRv2 maintains stable and robust map construction quality in various driving scenes.</h4></div>

*(visualization figure)*

MapTRv2 on the whole nuScenes val split

YouTube

MapTRv2 on the whole Argoverse2 val split

YouTube

<!-- ### *Sunny&Cloudy* https://user-images.githubusercontent.com/31960625/187059686-11e4dd4b-46db-4411-b680-17ed6deebda2.mp4 ### *Rainy* https://user-images.githubusercontent.com/31960625/187059697-94622ddb-e76a-4fa7-9c44-a688d2e439c0.mp4 ### *Night* https://user-images.githubusercontent.com/31960625/187059706-f7f5a7d8-1d1d-46e0-8be3-c770cf96d694.mp4 -->

End-to-end Planning based on MapTR

https://user-images.githubusercontent.com/26790424/229679664-0e9ba5e8-bf2c-45e0-abbc-36d840ee5cc9.mp4

Getting Started

Catalog

Acknowledgements

MapTR is based on mmdetection3d. It is also greatly inspired by the following outstanding contributions to the open-source community: BEVFusion, BEVFormer, HDMapNet, GKT, VectorMapNet.

Citation

If you find MapTR useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entries.

@inproceedings{MapTR,
  title={MapTR: Structured Modeling and Learning for Online Vectorized HD Map Construction},
  author={Liao, Bencheng and Chen, Shaoyu and Wang, Xinggang and Cheng, Tianheng and Zhang, Qian and Liu, Wenyu and Huang, Chang},
  booktitle={International Conference on Learning Representations},
  year={2023}
}
@article{maptrv2,
  title={MapTRv2: An End-to-End Framework for Online Vectorized HD Map Construction},
  author={Liao, Bencheng and Chen, Shaoyu and Zhang, Yunchi and Jiang, Bo and Zhang, Qian and Liu, Wenyu and Huang, Chang and Wang, Xinggang},
  journal={arXiv preprint arXiv:2308.05736},
  year={2023}
}
@article{lanegap,
  title={Lane Graph as Path: Continuity-preserving Path-wise Modeling for Online Lane Graph Construction},
  author={Liao, Bencheng and Chen, Shaoyu and Jiang, Bo and Cheng, Tianheng and Zhang, Qian and Liu, Wenyu and Huang, Chang and Wang, Xinggang},
  journal={arXiv preprint arXiv:2303.08815},
  year={2023}
}