<br /> <p align="center"> <h3 align="center"><strong>DV-3DLane: End-to-end Multi-modal 3D Lane Detection with Dual-view Representation</strong></h3> <p align="center"> <a href="https://openreview.net/forum?id=l1U6sEgYkb" target='_blank'> <img src="https://img.shields.io/badge/ICLR2024-lightblue.svg"> </a> <a href="" target='_blank'> <img src="https://visitor-badge.laobi.icu/badge?page_id=JMoonr.dv-3dlane&left_color=gray&right_color=lightpink"> </a> <a href="https://github.com/JMoonr/dv-3dlane" target='_blank'> <img src="https://img.shields.io/github/stars/JMoonr/dv-3dlane?style=social"> </a> </p>

## News
- 2024-01-15 :confetti_ball: Our new work, DV-3DLane: End-to-end Multi-modal 3D Lane Detection with Dual-view Representation, has been accepted at ICLR 2024. Code is coming soon.
## Acknowledgment
This library is inspired by LATR, OpenLane, GenLaneNet, mmdetection3d, SparseInst, and many other related works. We thank the authors for sharing their code and datasets.
## Citation
If you find DV-3DLane useful for your research, please consider citing our paper:
```bibtex
@inproceedings{luo2024dvdlane,
  title={{DV}-3{DL}ane: End-to-end Multi-modal 3D Lane Detection with Dual-view Representation},
  author={Yueru Luo and Shuguang Cui and Zhen Li},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=l1U6sEgYkb}
}
```