
<div align="center"> <h1> YOLOPv2:rocket:: Better, Faster, Stronger for Panoptic driving Perception </h1> <!-- <--!span><font size="5", > Efficient and Robust 2D-to-BEV Representation Learning via Geometry-guided Kernel Transformer </font></span> -->

Cheng Han, Qichao Zhao, Shuyi Zhang, Yinzi Chen, Zhenlin Zhang, Jinwei Yuan

<div><a href="https://arxiv.org/abs/2208.11434">[YOLOPv2 arXiv Preprint]</a></div> </div>

News

* `August 26, 2022`: We've uploaded the model for **YOLOPv2**. This version supports model training, validation, and prediction.

Introduction

:grin: We present an excellent multi-task network based on YOLOP :blue_heart:, called **YOLOPv2: Better, Faster, Stronger for Panoptic Driving Perception**. Its advantages are summarized in the results below.


Results

We use BDD100K as our dataset, and all experiments are run on an NVIDIA Tesla V100.

Web Demo

Visualization

Model: trained on the BDD100K dataset and tested on the T3CAIC camera.

<img src="data/demo/together_video.gif" />

Model parameters and inference speed

| Model      | Size | Params | Speed (fps)                    |
|:----------:|:----:|:------:|:------------------------------:|
| YOLOP      | 640  | 7.9M   | 49                             |
| HybridNets | 640  | 12.8M  | 28                             |
| YOLOPv2    | 640  | 38.9M  | 91 (+42) :arrow_double_up:     |
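For reference, a frames-per-second figure like those above can be obtained with a warm-up-then-time loop. The sketch below is an assumption about methodology, not the authors' exact benchmark script; the `yolopv2.pt` TorchScript checkpoint and the 640x640 input are illustrative.

```python
# Hedged sketch of an fps measurement: warm up, then time repeated forward passes.
# Assumes yolopv2.pt is a TorchScript checkpoint; input size matches the table above.
import time
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = torch.jit.load('yolopv2.pt', map_location=device).eval()
img = torch.zeros((1, 3, 640, 640), device=device)

with torch.no_grad():
    for _ in range(10):                # warm-up iterations, excluded from timing
        model(img)
    if device.type == 'cuda':
        torch.cuda.synchronize()       # flush queued GPU work before starting the clock
    start, runs = time.time(), 100
    for _ in range(runs):
        model(img)
    if device.type == 'cuda':
        torch.cuda.synchronize()
    print(f'{runs / (time.time() - start):.1f} fps')
```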

Traffic Object Detection Result

<table> <tr><th>Result </th><th>Visualization</th></tr> <tr><td>

| Model        | mAP@0.5 (%)                       | Recall (%)            |
|:------------:|:---------------------------------:|:---------------------:|
| MultiNet     | 60.2                              | 81.3                  |
| DLT-Net      | 68.4                              | 89.4                  |
| Faster R-CNN | 55.6                              | 77.2                  |
| YOLOv5s      | 77.2                              | 86.8                  |
| YOLOP        | 76.5                              | 89.2                  |
| HybridNets   | 77.3                              | 92.8                  |
| YOLOPv2      | 83.4 (+6.1) :arrow_double_up:     | 91.1 (-1.7) :arrow_down: |

</td><td> <img src="data/demo/veh3.jpg" width="100%" align='right'/> </td></tr> </table>

Drivable Area Segmentation

<table> <tr><th>Result </th><th>Visualization</th></tr> <tr><td>

| Model      | Drivable mIoU (%)          |
|:----------:|:--------------------------:|
| MultiNet   | 71.6                       |
| DLT-Net    | 71.3                       |
| PSPNet     | 89.6                       |
| YOLOP      | 91.5                       |
| HybridNets | 90.5                       |
| YOLOPv2    | 93.2 (+1.7) :arrow_up:     |

</td><td> <img src="data/demo/fs3.jpg" width="100%" align='right'/> </td></tr> </table>

Lane Line Detection

<table> <tr><th>Result </th><th>Visualization</th></tr> <tr><td>

| Model      | Accuracy (%)            | Lane Line IoU (%)        |
|:----------:|:-----------------------:|:------------------------:|
| Enet       | 34.12                   | 14.64                    |
| SCNN       | 35.79                   | 15.84                    |
| Enet-SAD   | 36.56                   | 16.02                    |
| YOLOP      | 70.5                    | 26.2                     |
| HybridNets | 85.4                    | 31.6                     |
| YOLOPv2    | 87.3 (+1.9) :arrow_up:  | 27.2 (-4.4) :arrow_down: |

</td><td> <img src="data/demo/lane3.jpg" width="100%" align='right' /> </td></tr> </table>

Day-time and Night-time visualization results

<div align="center"> <a href="./"> <img src="data/demo/all3.jpg" width="45%" /> <img src="data/demo/all2.jpg" width="45%" /> <img src="data/demo/night1.jpg" width="45%" /> <img src="data/demo/night2.jpg" width="45%" /> </a> </div>

Models

You can get the model from <a href="https://github.com/CAIC-AD/YOLOPv2/releases/download/V0.0.1/yolopv2.pt">here</a>.
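A minimal loading sketch, assuming the released `yolopv2.pt` is an end-to-end TorchScript checkpoint (if it is a plain `state_dict` instead, swap `torch.jit.load` for `torch.load` plus the model class); the file path and input shape below are illustrative:

```python
# Hedged sketch: load the released weights and run a forward pass.
# Assumes yolopv2.pt is a TorchScript checkpoint downloaded from the link above.
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = torch.jit.load('yolopv2.pt', map_location=device)
model.eval()

# Dummy 1x3x640x640 RGB tensor, normalized to [0, 1] (640 matches the speed table).
img = torch.zeros((1, 3, 640, 640), device=device)
with torch.no_grad():
    outputs = model(img)  # expected heads: detection, drivable area, lane lines
```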

Demo Test

We provide two testing methods: you can save the output as an image or a video.

```shell
python demo.py --source data/example.jpg
```
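For video input, the same `--source` flag should accept a video file in the same way; this is an assumption based on the image command above, and the path below is illustrative.

```shell
# Assumed: --source also accepts a video file (illustrative path)
python demo.py --source data/example.mp4
```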

Third-Party Resources

License

YOLOPv2 is released under the MIT License.

Citation

If you find YOLOPv2 useful in your research or applications, please consider giving us a star :star: and citing it with the following BibTeX entry.

```bibtex
@article{yolopv2,
  title={YOLOPv2: Better, Faster, Stronger for Panoptic Driving Perception},
  author={Han, Cheng and Zhao, Qichao and Zhang, Shuyi and Chen, Yinzi and Zhang, Zhenlin and Yuan, Jinwei},
  journal={arXiv preprint arXiv:2208.11434},
  year={2022}
}
```