
<div id="top" align="center">

OpenLane-V2

The World's First Perception and Reasoning Benchmark for Scene Structure in Autonomous Driving.


<!-- **English | [中文](./README-zh-hans.md)** _In terms of ambiguity, the English version shall prevail._ --> </div>

Leaderboard

Mapless Driving at CVPR 2024 AGC (Server remains active)

We maintain a leaderboard and test server for the task of Driving Scene Topology. If you wish to add new results or modify existing ones on the leaderboard, please drop us an email.


OpenLane Topology Challenge at CVPR 2023 (Server remains active)

We maintain a leaderboard and test server for the task of OpenLane Topology. If you wish to add new results or modify existing ones on the leaderboard, please drop us an email following the instructions here.


Table of Contents

News

Note

The difference between v1.x and v2.x is that v2.x updates the APIs and adds materials on lane segments and the SD map.

❗️ The update on evaluation metrics leads to differences in TOP scores between vx.1 (v1.1, v2.1) and vx.0 (v1.0, v2.0). We encourage the use of the vx.1 metrics. For more details, please see issue #76.

<p align="right">(<a href="#top">back to top</a>)</p>

Introducing OpenLane-V2 Update

We are happy to announce an important update to the OpenLane family, featuring two additional sets of data and annotations: lane segments and SD maps.

<p align="center"> <img src="https://github.com/OpenDriveLab/OpenLane-V2/assets/29263416/77846f69-fe77-45aa-b769-e85fd98a0596" width="696px"> </p> <p align="center"> <img src="https://github.com/OpenDriveLab/OpenLane-V2/assets/29263416/0b3f4678-fa57-4187-afd6-e55db12a76a6" width="696px"> </p> <p align="right">(<a href="#top">back to top</a>)</p>

Task and Evaluation

Driving Scene Topology

Given sensor inputs, participants are required to perceive lane segments, rather than the lane centerlines targeted in the task of OpenLane Topology. In addition, pedestrian crossings and road boundaries are required for a comprehensive understanding of the driving scene. The OpenLane-V2 UniScore (OLUS) summarizes model performance across all of these aspects.
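
For intuition, here is a minimal sketch of the OLUS aggregation. It assumes an unweighted mean of the three detection scores and square-root-scaled topology scores, and the component names (DET_ls, DET_a, DET_t, TOP_lsls, TOP_lst) follow the challenge description; check the official devkit for the authoritative formula.

```python
from math import sqrt

def olus(det_ls: float, det_a: float, det_t: float,
         top_lsls: float, top_lst: float) -> float:
    """Sketch of the OpenLane-V2 UniScore (OLUS).

    ASSUMPTION: an unweighted mean with square-root scaling of the topology
    terms, mirroring the OLS defined for OpenLane Topology below; verify
    against the official devkit.
    """
    # det_ls: mAP on lane segments
    # det_a:  mAP on areas (pedestrian crossings, road boundaries)
    # det_t:  mAP on traffic elements
    # top_lsls, top_lst: topology scores in [0, 1]
    return (det_ls + det_a + det_t + sqrt(top_lsls) + sqrt(top_lst)) / 5
```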

OpenLane Topology

Given sensor inputs, participants are required to deliver perception results for lanes and traffic elements, as well as the topology relationships among lanes and between lanes and traffic elements. In this task, we use the OpenLane-V2 Score (OLS) to evaluate model performance.
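
A minimal sketch of OLS as we understand it from the OpenLane-V2 paper: the unweighted mean of the two detection mAPs and square-root-scaled topology scores. Verify against the devkit's implementation before relying on it.

```python
from math import sqrt

def ols(det_l: float, det_t: float, top_ll: float, top_lt: float) -> float:
    """Sketch of the OpenLane-V2 Score (OLS), per our reading of the paper."""
    # det_l: mAP on lane centerlines       det_t: mAP on traffic elements
    # top_ll: lane-lane topology score     top_lt: lane-traffic-element score
    return (det_l + det_t + sqrt(top_ll) + sqrt(top_lt)) / 4
```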

<p align="right">(<a href="#top">back to top</a>)</p>

Highlights of OpenLane-V2

Unifying Map Representations

One standout formulation among them is Lane Segment. It serves as a unifying and versatile representation of lanes, paving the way for multiple downstream applications. With the introduction of the SD map, an autonomous driving system can exploit these informative priors to achieve satisfactory perception and reasoning performance.

The following table summarizes the functionalities supported by different lane formulations.

<table>
  <tr align="center">
    <td rowspan="2">Lane Formulation</td>
    <td colspan="8">Functionality</td>
  </tr>
  <tr align="center">
    <td>3D Space</td>
    <td>Laneline Category</td>
    <td>Lane Direction</td>
    <td>Drivable Area</td>
    <td>Lane-level Drivable Area</td>
    <td>Lane-lane Topology</td>
    <td>Bind to Traffic Element</td>
    <td>Laneline-less</td>
  </tr>
  <tr align="center">
    <td>2D Laneline</td>
    <td></td> <td>✅</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td>
  </tr>
  <tr align="center">
    <td>3D Laneline</td>
    <td>✅</td> <td>✅</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td>
  </tr>
  <tr align="center">
    <td>Online (pseudo) HD Map</td>
    <td>✅</td> <td></td> <td></td> <td>✅</td> <td></td> <td></td> <td></td> <td></td>
  </tr>
  <tr align="center">
    <td>Lane Centerline</td>
    <td>✅</td> <td></td> <td>✅</td> <td></td> <td></td> <td>✅</td> <td>✅</td> <td>✅</td>
  </tr>
  <tr align="center">
    <td><b>Lane Segment</b> (newly released)</td>
    <td>✅</td> <td>✅</td> <td>✅</td> <td>✅</td> <td>✅</td> <td>✅</td> <td>✅</td> <td>✅</td>
  </tr>
</table>
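
To make the comparison concrete, here is an illustrative sketch of what a lane-segment record could look like while covering every functionality ticked above. All field names are assumptions for exposition, not the devkit's exact schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class LaneSegment:
    """Illustrative lane-segment record; field names are assumptions."""
    id: int
    centerline: List[Point3D]      # ordered 3D points; ordering encodes lane direction
    left_laneline: List[Point3D]   # 3D boundary polylines; together they
    right_laneline: List[Point3D]  # bound the lane-level drivable area
    left_laneline_type: str        # laneline category, e.g. 'solid' or 'dashed'
    right_laneline_type: str
    successor_ids: List[int] = field(default_factory=list)        # lane-lane topology
    traffic_element_ids: List[int] = field(default_factory=list)  # bound traffic elements
```

The ordered centerline encodes lane direction, while the left and right lanelines carry category labels and delimit the lane-level drivable area; unioning these areas across segments yields the full drivable area.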

Introducing 3D Laneline

Previous datasets annotate lanes on images in the perspective view. Such 2D annotations are insufficient for real-world requirements. Following the practice of OpenLane-V1, we annotate lanes in 3D space to capture their geometric properties in the real 3D world.

Recognizing Extremely Small Traffic Elements

Preventing collisions is essential, but so is facilitating efficient traffic. Vehicles follow predefined traffic rules to discipline themselves and cooperate with others, ensuring a safe and efficient traffic system. Traffic elements on the road, such as traffic lights and road signs, provide practical, real-time information, yet they typically occupy only a tiny fraction of a front-view image, making them challenging to recognize.

Topology Reasoning between Lane and Road Elements

A traffic element is only valid for its corresponding lanes, and following the wrong signal would be catastrophic. Lanes also have predecessors and successors that stitch the map together. Autonomous vehicles are required to reason about these topology relationships to drive correctly.
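
Concretely, both relations can be encoded as 0/1 adjacency matrices: one among lanes, and one between lanes and traffic elements. The sketch below is illustrative; the function name and pair-list inputs are our own, though, as far as we can tell, the devkit's annotations expose comparable matrices (topology_lclc and topology_lcte).

```python
import numpy as np

def build_topology(num_lanes: int, num_tes: int,
                   lane_lane_pairs: list[tuple[int, int]],
                   lane_te_pairs: list[tuple[int, int]]):
    """Encode topology relations as 0/1 adjacency matrices (illustrative sketch)."""
    top_ll = np.zeros((num_lanes, num_lanes), dtype=np.int8)
    for pred, succ in lane_lane_pairs:   # lane `pred` flows into lane `succ`
        top_ll[pred, succ] = 1
    top_lt = np.zeros((num_lanes, num_tes), dtype=np.int8)
    for lane, te in lane_te_pairs:       # traffic element `te` governs lane `lane`
        top_lt[lane, te] = 1
    return top_ll, top_lt
```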

<!-- ### Data scale and diversity matters - building on top of renowned Benchmarks Experience from the sunny day does not apply to the dancing snowflakes. For machine learning, data is the must-have food. We provide annotations on data collected in various cities, from Austin to Singapore and from Boston to Miami. The **diversity** of data enables models to generalize in different atmospheres and landscapes. --> <p align="right">(<a href="#top">back to top</a>)</p>

Getting Started

<p align="right">(<a href="#top">back to top</a>)</p>

License & Citation

Prior to using the OpenLane-V2 dataset, you should agree to the terms of use of the nuScenes and Argoverse 2 datasets. OpenLane-V2 is distributed under the CC BY-NC-SA 4.0 license. All code within this repository is under the Apache License 2.0.

Please use the following citation when referencing OpenLane-V2:

```bibtex
@inproceedings{wang2023openlanev2,
  title={OpenLane-V2: A Topology Reasoning Benchmark for Unified 3D HD Mapping},
  author={Wang, Huijie and Li, Tianyu and Li, Yang and Chen, Li and Sima, Chonghao and Liu, Zhenbo and Wang, Bangjun and Jia, Peijin and Wang, Yuting and Jiang, Shengyin and Wen, Feng and Xu, Hang and Luo, Ping and Yan, Junchi and Zhang, Wei and Li, Hongyang},
  booktitle={NeurIPS},
  year={2023}
}

@article{li2023toponet,
  title={Graph-based Topology Reasoning for Driving Scenes},
  author={Li, Tianyu and Chen, Li and Wang, Huijie and Li, Yang and Yang, Jiazhi and Geng, Xiangwei and Jiang, Shengyin and Wang, Yuting and Xu, Hang and Xu, Chunjing and Yan, Junchi and Luo, Ping and Li, Hongyang},
  journal={arXiv preprint arXiv:2304.05277},
  year={2023}
}

@inproceedings{li2023lanesegnet,
  title={LaneSegNet: Map Learning with Lane Segment Perception for Autonomous Driving},
  author={Li, Tianyu and Jia, Peijin and Wang, Bangjun and Chen, Li and Jiang, Kun and Yan, Junchi and Li, Hongyang},
  booktitle={ICLR},
  year={2024}
}
```
<p align="right">(<a href="#top">back to top</a>)</p>

Related Resources


<p align="right">(<a href="#top">back to top</a>)</p>