<div align="center">

LaneSegNet: Map Learning with Lane Segment Perception for Autonomous Driving



</div>

Highlights

:fire: We advocate Lane Segment as a map learning paradigm that seamlessly incorporates both map :motorway: geometry and :spider_web: topology information.

:checkered_flag: Lane Segment and OpenLane-V2 Map Element Bucket serve as a track in the CVPR 2024 Autonomous Grand Challenge.

This repository can serve as a starting point for the Mapless Driving track.

News



<div align="center"> <b>Overall pipeline of LaneSegNet</b> </div>

Table of Contents

Model Zoo

> [!NOTE]
> The evaluation results below are based on OpenLane-V2 devkit v2.1.0. This version fixes a loophole in the TOP metric that caused the TOP<sub>lsls</sub> value to be significantly higher than what was reported in the paper.
> For more details, please see issue #76 of OpenLane-V2.

Performance in LaneSegNet paper

| Model | Epoch | mAP | TOP<sub>lsls</sub> | Memory | Config | Download |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| LaneSegNet | 24 | 33.5 | 25.4 | 9.4G | config | ckpt / log |

The mAP is averaged over the lane segment and pedestrian crossing classes.
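
As a hedged illustration of this averaging (the class names and AP values below are made up for demonstration, not taken from the paper):

```python
def mean_ap(ap_per_class):
    """Average the per-class APs into a single mAP value."""
    return sum(ap_per_class.values()) / len(ap_per_class)

# Hypothetical per-class APs, for illustration only.
aps = {"lane_segment": 0.50, "ped_crossing": 0.25}
print(mean_ap(aps))  # 0.375
```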

Performance on OpenLane-V2 Map Element Bucket

| Model | Epoch | DET<sub>ls</sub> | DET<sub>a</sub> | DET<sub>t</sub> | TOP<sub>lsls</sub> | TOP<sub>lste</sub> | Config |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| LaneSegNet-meb | 24 | 27.8 | 23.8 | 36.9 | 24.1 | 21.3 | config |

This is a naive multi-branch model for the Map Element Bucket.
The pedestrian crossing and road boundary are detected by an additional MapTR head, and traffic elements are detected by a Deformable DETR head. The hyper-parameters are only roughly tuned.

Prerequisites

Installation

We recommend using conda to run the code.

```bash
conda create -n lanesegnet python=3.8 -y
conda activate lanesegnet

# (optional) Skip this step if CUDA is already installed on your machine.
conda install cudatoolkit=11.1.1 -c conda-forge

pip install torch==1.9.1+cu111 torchvision==0.10.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
```

Install mm-series packages.

```bash
pip install mmcv-full==1.5.2 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.0/index.html
pip install mmdet==2.26.0
pip install mmsegmentation==0.29.1
pip install mmdet3d==1.0.0rc6
```

Install other required packages.

```bash
pip install -r requirements.txt
```

Prepare Dataset

Follow the OpenLane-V2 repo to download the Image and Map Element Bucket data. Then run the following script to collect data for this repo.

> [!IMPORTANT]
>
> :exclamation: Please note that the script for generating LaneSegNet data differs from that of the OpenLane-V2 Map Element Bucket: the *_lanesegnet.pkl files are not the same as the *_ls.pkl files.
>
> :bell: The Map Element Bucket was updated in October 2023. Please make sure you download the most recent data.

```bash
cd LaneSegNet
mkdir data

ln -s {Path to OpenLane-V2 repo}/data/OpenLane-V2 ./data/
python ./tools/data_process.py
```

After setup, the hierarchy of the data folder is as follows:

```
data/OpenLane-V2
├── train
|   └── ...
├── val
|   └── ...
├── test
|   └── ...
├── data_dict_subset_A_train_lanesegnet.pkl
├── data_dict_subset_A_val_lanesegnet.pkl
├── ...
```

Train and Evaluate

Train

We recommend training with 8 GPUs. If you use a different number of GPUs, you can preserve performance by scaling the learning rate with the --autoscale-lr option. Training logs will be saved to work_dirs/lanesegnet.

```bash
cd LaneSegNet
mkdir -p work_dirs/lanesegnet

./tools/dist_train.sh 8 [--autoscale-lr]
```
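
For reference, --autoscale-lr in mm-series codebases applies the linear scaling rule, which can be sketched as follows (the base learning rate and batch sizes here are assumptions for illustration; the real values come from the config):

```python
def autoscale_lr(base_lr, base_total_batch, actual_total_batch):
    """Linear scaling rule: the LR scales proportionally with total batch size."""
    return base_lr * actual_total_batch / base_total_batch

# Assumed base: LR 2e-4 tuned for 8 GPUs x 1 sample each (total batch 8).
# Training on 4 GPUs halves the total batch, so the LR is halved too.
print(autoscale_lr(2e-4, 8, 4))  # 0.0001
```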

Evaluate

You can set --show to visualize the results.

```bash
./tools/dist_test.sh 8 [--show]
```

License and Citation

All assets and code are under the Apache 2.0 license unless specified otherwise.

If this work is helpful for your research, please consider citing the following BibTeX entries.

```bibtex
@inproceedings{li2023lanesegnet,
  title={LaneSegNet: Map Learning with Lane Segment Perception for Autonomous Driving},
  author={Li, Tianyu and Jia, Peijin and Wang, Bangjun and Chen, Li and Jiang, Kun and Yan, Junchi and Li, Hongyang},
  booktitle={ICLR},
  year={2024}
}

@inproceedings{wang2023openlanev2,
  title={OpenLane-V2: A Topology Reasoning Benchmark for Unified 3D HD Mapping},
  author={Wang, Huijie and Li, Tianyu and Li, Yang and Chen, Li and Sima, Chonghao and Liu, Zhenbo and Wang, Bangjun and Jia, Peijin and Wang, Yuting and Jiang, Shengyin and Wen, Feng and Xu, Hang and Luo, Ping and Yan, Junchi and Zhang, Wei and Li, Hongyang},
  booktitle={NeurIPS},
  year={2023}
}
```

Related resources

We acknowledge the open-source contributors of the following projects, which make this work possible: