OpenLane-V1

OpenLane is the first real-world and, to date, the largest-scale 3D lane dataset. It collects valuable content from public perception datasets, providing lane and closest-in-path object (CIPO) annotations for 1,000 segments. In short, OpenLane contains 200K frames and over 880K carefully annotated lanes. We have released the OpenLane dataset publicly to help the research community advance 3D perception and autonomous driving technology. See details in the Paper.

<img src="imgs/overview.jpg" height = "300" /><img src="imgs/overview.gif" height = "300" />

This repository is organized as follows.

Note that our OpenLane is an autonomous driving dataset; there is another repository with the same name, The-OpenROAD-Project/OpenLane.

News

Get Started

Please follow these steps to familiarize yourself with the OpenLane dataset. Create an issue if you need any further information.

Download

You can download the entire OpenLane dataset here. Note that before using the OpenLane dataset, you should register at the Waymo Open Dataset website and agree to its Terms, since OpenLane is built on top of the Waymo Open Dataset.

Evaluation Kit

We provide evaluation tools for both lane and CIPO tasks, following the same data format as Waymo and the common evaluation pipeline in 2D/3D lane detection. Please refer to the Evaluation Kit Instruction.

Data

The OpenLane dataset is constructed on mainstream datasets in the field of autonomous driving. In v1.0, we release annotations on the Waymo Open Dataset; annotations on nuScenes will follow in a future release. OpenLane focuses on lane detection as well as CIPO. We annotate all lanes in each frame, including those in the opposite direction if no curbside exists in the middle. In addition to the lane detection task, we also annotate: (a) scene tags, such as weather and location; (b) the CIPO, defined as the most critical target with respect to the ego vehicle. Such a tag is quite pragmatic for downstream modules such as planning/control, beyond the whole set of objects from perception. An introduction to the coordinate system can be found here.
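To make the camera geometry concrete, here is a hedged sketch of projecting 3D lane points into the image with a per-frame intrinsic matrix. This is the standard pinhole model, not code from the dataset toolkit; the exact frame in which a given annotation's 3D points are expressed is defined in the coordinate-system documentation linked above.

```python
import numpy as np

def project_to_image(xyz_cam, intrinsic):
    """Project 3xN points in camera coordinates to 2xN pixel coordinates.

    Standard pinhole projection: uvw = K @ xyz, then divide by depth.
    Assumes all points have positive depth (lie in front of the camera).
    """
    uvw = intrinsic @ xyz_cam
    return uvw[:2] / uvw[2:3]

# Example with an illustrative (not dataset-specific) intrinsic matrix:
K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0,   0.0,  1.0]])
points = np.array([[1.0], [1.0], [10.0]])  # one point, 10 m ahead
uv = project_to_image(points, K)
```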

Lane Annotation

We annotate lanes in the following format.

For more annotation criteria, please refer to Lane Anno Criterion.
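As a hedged sketch of working with the per-frame JSON files, the loader below assumes field names such as `lane_lines`, `xyz`, `uv`, `category`, and `visibility`; verify them against Lane Anno Criterion before relying on them.

```python
import json

def load_lane_annotation(path):
    """Read one frame's lane annotation JSON.

    Field names are assumptions based on the documented schema; check
    Lane Anno Criterion for the authoritative definition.
    """
    with open(path) as f:
        frame = json.load(f)
    lanes = []
    for lane in frame.get("lane_lines", []):
        lanes.append({
            "category": lane["category"],      # lane-type id (e.g. white solid)
            "xyz": lane["xyz"],                # 3xN 3D points
            "uv": lane["uv"],                  # 2xN projected image points
            "visibility": lane["visibility"],  # per-point visibility flags
        })
    return frame.get("file_path"), lanes
```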

CIPO/Scenes Annotation

We annotate CIPO and scene tags in the following format.

For more annotation criteria, please refer to CIPO Anno Criterion.
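Similarly, a minimal sketch of reading one frame's CIPO file. The field names here (`results`, `type`, `track_id`, and the 2D box keys) are assumptions about the schema and should be checked against CIPO Anno Criterion.

```python
import json

def load_cipo_annotation(path):
    """Read one frame's CIPO annotation JSON.

    Assumed schema: a 'results' list of 2D boxes, each with an importance
    level ('type', where level 1 marks the CIPO itself), a 'track_id',
    and box geometry. Verify against CIPO Anno Criterion.
    """
    with open(path) as f:
        frame = json.load(f)
    boxes = []
    for obj in frame.get("results", []):
        boxes.append({
            "importance": obj["type"],  # 1 = CIPO, larger = less relevant
            "track_id": obj.get("track_id"),
            "box": (obj["x"], obj["y"], obj["width"], obj["height"]),
        })
    return boxes
```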

Benchmark and Leaderboard

Benchmark

We provide an initial benchmark on OpenLane 2D/3D lane detection, and you are welcome to open a pull request and add your work here! To evaluate models thoroughly, we provide different case splits from the entire validation set: Up&Down, Curve, Extreme Weather, Night, Intersection, and Merge&Split. More details can be found in Lane Anno Criterion. Based on the Lane Eval Metric, results (F-Score) of different 2D/3D methods on these cases are shown below.

| Method | All | Up&Down | Curve | Extreme Weather | Night | Intersection | Merge&Split |
|---|---|---|---|---|---|---|---|
| LaneATT-S | 28.3 | 25.3 | 25.8 | 32.0 | 27.6 | 14.0 | 24.3 |
| LaneATT-M | 31.0 | 28.3 | 27.4 | 34.7 | 30.2 | 17.0 | 26.5 |
| PersFormer | 42.0 | 40.7 | 46.3 | 43.7 | 36.1 | 28.9 | 41.2 |
| CondLaneNet-S | 52.3 | 55.3 | 57.5 | 45.8 | 46.6 | 48.4 | 45.5 |
| CondLaneNet-M | 55.0 | 58.5 | 59.4 | 49.2 | 48.6 | 50.7 | 47.8 |
| CondLaneNet-L | 59.1 | 62.1 | 62.9 | 54.7 | 51.0 | 55.7 | 52.3 |
| Method | Version | All | Up&Down | Curve | Extreme Weather | Night | Intersection | Merge&Split | Best model | x-c | x-f | z-c | z-f | Category Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GenLaneNet | 1.1 | 32.3 | 25.4 | 33.5 | 28.1 | 18.7 | 21.4 | 31.0 | model | 0.593 | 0.494 | 0.140 | 0.195 | / |
| 3DLaneNet | 1.1 | 44.1 | 40.8 | 46.5 | 47.5 | 41.5 | 32.1 | 41.7 | - | - | - | - | - | - |
| PersFormer | 1.1 | 50.5 | 45.6 | 58.7 | 54.0 | 50.0 | 41.6 | 53.1 | model | 0.319 | 0.325 | 0.112 | 0.141 | 89.51 |
| PersFormer | 1.2 | 52.9 | 47.5 | 58.4 | 51.8 | 47.4 | 42.1 | 50.9 | model | 0.291 | 0.294 | 0.080 | 0.116 | 89.24 |

The implementation of PersFormer can be found here.
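The F-Score reported in both tables is the usual harmonic mean of precision and recall over matched lanes. A minimal sketch of the aggregation step, given lane-level true positives, false positives, and false negatives from the evaluation kit's matching (the matching thresholds themselves live in the Evaluation Kit):

```python
def f_score(tp, fp, fn):
    """F1 = 2PR / (P + R), with P = tp/(tp+fp) and R = tp/(tp+fn).

    tp/fp/fn are lane-level counts produced by matching predicted lanes
    to ground-truth lanes; returns 0.0 when undefined.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```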

Leaderboard

For comparison, we provide a leaderboard on Papers with Code.

Citation

Please use the following citation when referencing OpenLane:

    @inproceedings{chen2022persformer,
      title={PersFormer: 3D Lane Detection via Perspective Transformer and the OpenLane Benchmark},
      author={Chen, Li and Sima, Chonghao and Li, Yang and Zheng, Zehan and Xu, Jiajie and Geng, Xiangwei and Li, Hongyang and He, Conghui and Shi, Jianping and Qiao, Yu and Yan, Junchi},
      booktitle={European Conference on Computer Vision (ECCV)},
      year={2022}
    }

License

Our dataset is based on the Waymo Open Dataset; we therefore distribute the data under the Creative Commons Attribution-NonCommercial-ShareAlike license and the Waymo Dataset License Agreement for Non-Commercial Use (August 2019). You are free to share and adapt the data, but you must give appropriate credit and may not use the work for commercial purposes. All code within this repository is under the Apache License 2.0.