
You Only :eyes: Once for Panoptic :car: Perception

You Only Look Once for Panoptic Driving Perception

by Dong Wu, Manwen Liao, Weitian Zhang, Xinggang Wang<sup> :email:</sup>, Xiang Bai, Wenqing Cheng, Wenyu Liu, School of EIC, HUST

(<sup>:email:</sup>) corresponding author.

arXiv technical report (Machine Intelligence Research 2022)


Chinese documentation (中文文档)

The Illustration of YOLOP

(Figure: overall YOLOP network architecture)

Contributions

Results


Traffic Object Detection Result

| Model | Recall (%) | mAP50 (%) | Speed (fps) |
| --- | --- | --- | --- |
| Multinet | 81.3 | 60.2 | 8.6 |
| DLT-Net | 89.4 | 68.4 | 9.3 |
| Faster R-CNN | 81.2 | 64.9 | 8.8 |
| YOLOv5s | 86.8 | 77.2 | 82 |
| YOLOP (ours) | 89.2 | 76.5 | 41 |

Drivable Area Segmentation Result

| Model | mIoU (%) | Speed (fps) |
| --- | --- | --- |
| Multinet | 71.6 | 8.6 |
| DLT-Net | 71.3 | 9.3 |
| PSPNet | 89.6 | 11.1 |
| YOLOP (ours) | 91.5 | 41 |

Lane Detection Result:

| Model | mIoU (%) | IoU (%) |
| --- | --- | --- |
| ENet | 34.12 | 14.64 |
| SCNN | 35.79 | 15.84 |
| ENet-SAD | 36.56 | 16.02 |
| YOLOP (ours) | 70.50 | 26.20 |

Ablation Studies 1: End-to-end vs. Step-by-step

| Training method | Recall (%) | AP (%) | mIoU (%) | Accuracy (%) | IoU (%) |
| --- | --- | --- | --- | --- | --- |
| ES-W | 87.0 | 75.3 | 90.4 | 66.8 | 26.2 |
| ED-W | 87.3 | 76.0 | 91.6 | 71.2 | 26.1 |
| ES-D-W | 87.0 | 75.1 | 91.7 | 68.6 | 27.0 |
| ED-S-W | 87.5 | 76.1 | 91.6 | 68.0 | 26.8 |
| End-to-end | 89.2 | 76.5 | 91.5 | 70.5 | 26.2 |

Ablation Studies 2: Multi-task vs. Single task

| Training method | Recall (%) | AP (%) | mIoU (%) | Accuracy (%) | IoU (%) | Speed (ms/frame) |
| --- | --- | --- | --- | --- | --- | --- |
| Det (only) | 88.2 | 76.9 | - | - | - | 15.7 |
| Da-Seg (only) | - | - | 92.0 | - | - | 14.8 |
| Ll-Seg (only) | - | - | - | 79.6 | 27.9 | 14.8 |
| Multitask | 89.2 | 76.5 | 91.5 | 70.5 | 26.2 | 24.4 |

Ablation Studies 3: Grid-based vs. Region-based

| Training method | Recall (%) | AP (%) | mIoU (%) | Accuracy (%) | IoU (%) | Speed (ms/frame) |
| --- | --- | --- | --- | --- | --- | --- |
| R-CNNP Det (only) | 79.0 | 67.3 | - | - | - | - |
| R-CNNP Seg (only) | - | - | 90.2 | 59.5 | 24.0 | - |
| R-CNNP Multitask | 77.2 (-1.8) | 62.6 (-4.7) | 86.8 (-3.4) | 49.8 (-9.7) | 21.5 (-2.5) | 103.3 |
| YOLOP Det (only) | 88.2 | 76.9 | - | - | - | - |
| YOLOP Seg (only) | - | - | 91.6 | 69.9 | 26.5 | - |
| YOLOP Multitask | 89.2 (+1.0) | 76.5 (-0.4) | 91.5 (-0.1) | 70.5 (+0.6) | 26.2 (-0.3) | 24.4 |

Notes:

- In Ablation Study 1, E, D, S and W denote the Encoder, the Detect head, the two Segment heads and the Whole network. For example, ED-S-W means first training the Encoder and Detect head, then training the two Segment heads, and finally fine-tuning the Whole network jointly; End-to-end trains all three tasks together from the start.

Visualization

Traffic Object Detection Result


Drivable Area Segmentation Result

Lane Detection Result


Project Structure

```
├─inference
│ ├─images   # inference images
│ ├─output   # inference results
├─lib
│ ├─config/default   # configuration of training and validation
│ ├─core
│ │ ├─activations.py   # activation functions
│ │ ├─evaluate.py   # metric calculation
│ │ ├─function.py   # training and validation of the model
│ │ ├─general.py   # metric calculation, NMS, data-format conversion, visualization
│ │ ├─loss.py   # loss functions
│ │ ├─postprocess.py   # postprocessing (refines da-seg and ll-seg, unrelated to the paper)
│ ├─dataset
│ │ ├─AutoDriveDataset.py   # dataset superclass, general functions
│ │ ├─bdd.py   # dataset subclass, BDD-specific functions
│ │ ├─hust.py   # dataset subclass (campus scene, unrelated to the paper)
│ │ ├─convect.py
│ │ ├─DemoDataset.py   # demo dataset (image, video and stream)
│ ├─models
│ │ ├─YOLOP.py   # model setup and configuration
│ │ ├─light.py   # model lightweighting (unrelated to the paper, zwt)
│ │ ├─commom.py   # common computation modules
│ ├─utils
│ │ ├─augmentations.py   # data augmentation
│ │ ├─autoanchor.py   # auto-anchor (k-means)
│ │ ├─split_dataset.py   # (campus scene, unrelated to the paper)
│ │ ├─utils.py   # logging, device selection, time measurement, optimizer selection, model saving & initialization, distributed training
│ ├─run
│ │ ├─dataset/training time   # visualization, logging and saved models
├─tools
│ │ ├─demo.py   # demo (image folder or camera)
│ │ ├─test.py
│ │ ├─train.py
├─toolkits
│ │ ├─deploy   # model deployment
│ │ ├─datapre   # generation of ground-truth masks for the drivable area segmentation task
├─weights   # pretrained model weights
```

Requirements

This codebase has been developed with Python 3.7, PyTorch 1.7+ and torchvision 0.8+:

```shell
conda install pytorch==1.7.0 torchvision==0.8.0 cudatoolkit=10.2 -c pytorch
```

See requirements.txt for additional dependencies and version requirements.

```shell
pip install -r requirements.txt
```

Data preparation

Download

We recommend organizing the dataset directory as follows:

```
# Image and annotation files are matched by their shared id
├─dataset root
│ ├─images
│ │ ├─train
│ │ ├─val
│ ├─det_annotations
│ │ ├─train
│ │ ├─val
│ ├─da_seg_annotations
│ │ ├─train
│ │ ├─val
│ ├─ll_seg_annotations
│ │ ├─train
│ │ ├─val
```

Update your dataset paths in ./lib/config/default.py, for example along the lines of the sketch below.
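The exact option names are defined in ./lib/config/default.py; the snippet below only illustrates how the four directories above would typically be wired up, and the DATAROOT/LABELROOT/MASKROOT/LANEROOT keys are assumptions to be checked against the file.

```python
# Illustrative only -- verify the exact option names in ./lib/config/default.py
_C.DATASET.DATAROOT  = '/data/bdd/images'               # images/train, images/val
_C.DATASET.LABELROOT = '/data/bdd/det_annotations'      # detection labels
_C.DATASET.MASKROOT  = '/data/bdd/da_seg_annotations'   # drivable-area masks
_C.DATASET.LANEROOT  = '/data/bdd/ll_seg_annotations'   # lane-line masks
```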

Training

You can set the training configuration in ./lib/config/default.py, including loading of a pretrained model, loss functions, data augmentation, optimizer, warm-up and cosine annealing, auto-anchor, training epochs, and batch size.

If you want to try alternating optimization or to train the model for a single task, set the corresponding option in ./lib/config/default.py to True. (In the default configuration below, all options are False, which means the multiple tasks are trained end to end.)

```python
# Alternating optimization
_C.TRAIN.SEG_ONLY = False           # Only train the two segmentation branches
_C.TRAIN.DET_ONLY = False           # Only train the detection branch
_C.TRAIN.ENC_SEG_ONLY = False       # Only train the encoder and the two segmentation branches
_C.TRAIN.ENC_DET_ONLY = False       # Only train the encoder and the detection branch

# Single task
_C.TRAIN.DRIVABLE_ONLY = False      # Only train the da_segmentation task
_C.TRAIN.LANE_ONLY = False          # Only train the ll_segmentation task
_C.TRAIN.DET_ONLY = False           # Only train the detection task
```

Start training:

```shell
python tools/train.py
```

Multi-GPU mode:

```shell
python -m torch.distributed.launch --nproc_per_node=N tools/train.py  # N: the number of GPUs
```

Evaluation

You can set the evaluation configuration in ./lib/config/default.py, including the batch size and the threshold values for NMS; see the sketch below.
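The exact keys also live in ./lib/config/default.py; the names below are assumptions meant only to show the kind of options to look for.

```python
# Illustrative only -- verify the exact option names in ./lib/config/default.py
_C.TEST.BATCH_SIZE_PER_GPU = 24      # evaluation batch size per GPU
_C.TEST.NMS_CONF_THRESHOLD = 0.001   # confidence threshold applied before NMS
_C.TEST.NMS_IOU_THRESHOLD  = 0.6     # IoU threshold used by NMS
```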

Start evaluating:

```shell
python tools/test.py --weights weights/End-to-end.pth
```

Demo Test

We provide two testing methods.

Folder

Store the images or videos in the --source directory; the inference results are saved to --save-dir.

```shell
python tools/demo.py --source inference/images
```

Camera

If a camera is connected to your computer, you can set the source to the camera number (the default is 0).

```shell
python tools/demo.py --source 0
```
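Besides tools/demo.py, a quick sanity check can be done from Python. The sketch below assumes the PyTorch Hub entry hustvl/yolop is available and that the model returns its three outputs in the order detection, drivable-area segmentation, lane-line segmentation; treat both as assumptions and fall back to tools/demo.py if they do not hold.

```python
import torch

# Assumption: YOLOP is exposed through PyTorch Hub as 'hustvl/yolop'.
model = torch.hub.load('hustvl/yolop', 'yolop', pretrained=True)
model.eval()

img = torch.randn(1, 3, 640, 640)  # dummy input; use a real, normalized image in practice
with torch.no_grad():
    # Assumption: outputs are (detections, drivable-area seg, lane-line seg).
    det_out, da_seg_out, ll_seg_out = model(img)
print(da_seg_out.shape, ll_seg_out.shape)
```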

Demonstration

<table> <tr> <th>input</th> <th>output</th> </tr> <tr> <td><img src=pictures/input1.gif /></td> <td><img src=pictures/output1.gif/></td> </tr> <tr> <td><img src=pictures/input2.gif /></td> <td><img src=pictures/output2.gif/></td> </tr> </table>

Deployment

Our model can run inference in real time on a Jetson TX2, using a ZED camera to capture images. We use TensorRT for acceleration. Code for model deployment and inference is provided in ./toolkits/deploy.

Segmentation Label (Mask) Generation

You can generate the labels (masks) for the drivable area segmentation task by running:

```shell
python toolkits/datasetpre/gen_bdd_seglabel.py
```
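The script above is what the repository actually uses. Purely as a rough illustration of the idea, the sketch below rasterizes the "drivable area" polygons of a single BDD100K label file into a binary mask; the JSON field names follow the public BDD100K label format and are assumptions about what gen_bdd_seglabel.py does internally.

```python
import json

import cv2
import numpy as np

def drivable_mask_from_bdd_label(label_json, width=1280, height=720):
    """Rasterize the 'drivable area' polygons of one BDD100K label file into a mask."""
    mask = np.zeros((height, width), dtype=np.uint8)
    with open(label_json) as f:
        frame = json.load(f)
    for obj in frame.get("labels", []):
        if obj.get("category") != "drivable area":   # assumed category name
            continue
        for poly in obj.get("poly2d", []):           # assumed polygon field
            pts = np.array(poly["vertices"], dtype=np.int32)
            cv2.fillPoly(mask, [pts], 255)           # mark drivable pixels as 255
    return mask
```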

Model Transfer

Before running inference with the TensorRT C++ API, you need to convert the .pth weights file into a binary file that can be read by C++:

```shell
python toolkits/deploy/gen_wts.py
```

After running the above command, you will obtain a binary file named yolop.wts.
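For reference, a .wts file in this style is just a plain-text dump of the checkpoint: a header line with the number of tensors, then one line per tensor with its name, element count, and hex-encoded float32 values. The sketch below illustrates that format; the actual toolkits/deploy/gen_wts.py may differ in details, and the checkpoint layout (a possible 'state_dict' key) is an assumption.

```python
import struct

import torch

ckpt = torch.load("weights/End-to-end.pth", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)  # assumption: weights may be nested under 'state_dict'

with open("yolop.wts", "w") as f:
    f.write(f"{len(state_dict)}\n")
    for name, tensor in state_dict.items():
        values = tensor.reshape(-1).float().cpu().numpy()
        # one line per tensor: <name> <count> <hex-encoded float32 values>
        f.write(f"{name} {len(values)}")
        for v in values:
            f.write(" " + struct.pack(">f", float(v)).hex())
        f.write("\n")
```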

Running Inference

TensorRT needs an engine file for inference, and building an engine is time-consuming, so it is convenient to save the engine file and reuse it on every subsequent run. This logic is integrated in main.cpp, which decides whether to build a new engine based on whether your engine file already exists; the sketch below illustrates the same pattern.
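The repository implements this in C++ (main.cpp) on top of the .wts weights. Purely to illustrate the build-once-then-reuse pattern, here is a sketch using the TensorRT 8-style Python API with an ONNX export instead; the yolop.onnx and yolop.engine file names are assumptions, not artifacts produced by this repo.

```python
import os

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
ENGINE_PATH = "yolop.engine"   # assumed cache location for the serialized engine

def load_or_build_engine(onnx_path="yolop.onnx"):
    runtime = trt.Runtime(TRT_LOGGER)
    if os.path.exists(ENGINE_PATH):
        # Cached engine found: deserializing is far cheaper than rebuilding.
        with open(ENGINE_PATH, "rb") as f:
            return runtime.deserialize_cuda_engine(f.read())
    # No cached engine yet: build one (slow) and save it for subsequent runs.
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        parser.parse(f.read())
    config = builder.create_builder_config()
    plan = builder.build_serialized_network(network, config)
    with open(ENGINE_PATH, "wb") as f:
        f.write(plan)
    return runtime.deserialize_cuda_engine(plan)
```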

Third-Party Resources

Citation

If you find our paper and code useful for your research, please consider giving it a star :star: and a citation :pencil::

```bibtex
@article{wu2022yolop,
  title={Yolop: You only look once for panoptic driving perception},
  author={Wu, Dong and Liao, Man-Wen and Zhang, Wei-Tian and Wang, Xing-Gang and Bai, Xiang and Cheng, Wen-Qing and Liu, Wen-Yu},
  journal={Machine Intelligence Research},
  pages={1--13},
  year={2022},
  publisher={Springer}
}
```