V2V4Real: A large-scale real-world dataset for Vehicle-to-Vehicle Cooperative Perception

Website | Paper | Supplement | Video

This is the official implementation of the CVPR 2023 Highlight paper "V2V4Real: A large-scale real-world dataset for Vehicle-to-Vehicle Cooperative Perception" by Runsheng Xu, Xin Xia, Jinlong Li, Hanzhao Li, Shuo Zhang, Zhengzhong Tu, Zonglin Meng, Hao Xiang, Xiaoyu Dong, Rui Song, Hongkai Yu, Bolei Zhou, and Jiaqi Ma.

Supported by the UCLA Mobility Lab.

<p align="center"> <img src="imgs/scene1.png" width="600" alt="" class="img-responsive"> </p>

Overview

CodeBase Features

Data Download

Please check our website to download the data (OPV2V format).

After downloading the data, please put the data in the following structure:

├── v2v4real
│   ├── train
│   │   ├── testoutput_CAV_data_2022-03-15-09-54-40_1
│   ├── validate
│   ├── test
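
The configuration files then reference these folders by path. As a minimal sketch, assuming an OpenCOOD-style training config and data stored under /path/to/v2v4real (the paths below are illustrative, adjust them to your local setup):

# Data-path entries in an OpenCOOD-style training config (illustrative paths)
root_dir: '/path/to/v2v4real/train'          # training split
validate_dir: '/path/to/v2v4real/validate'   # validation split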

Changelog

Devkit setup

V2V4Real's codebase is built upon OpenCOOD. Compared to OpenCOOD, this codebase supports both simulation and real-world data as well as more perception tasks. Furthermore, this repo provides augmentations that OpenCOOD does not support. We highly recommend using this codebase to train your models on the V2V4Real dataset.

To set up the codebase environment, follow these steps:

1. Create a conda environment (Python >= 3.7)

conda create -n v2v4real python=3.7
conda activate v2v4real

2. PyTorch installation (>= 1.12.0 required)

Take PyTorch 1.12.0 as an example:

conda install pytorch==1.12.0 torchvision==0.13.0 cudatoolkit=11.3 -c pytorch -c conda-forge

3. spconv 2.x Installation

pip install spconv-cu113

4. Install other dependencies

pip install -r requirements.txt
python setup.py develop

5. Install the CUDA version of the bbx NMS calculation

python opencood/utils/setup.py build_ext --inplace

Quick Start

Data sequence visualization

To quickly visualize the LiDAR stream in the V2V4Real dataset, first modify the validate_dir in your opencood/hypes_yaml/visualization.yaml to the data path on your local machine, e.g. v2v4real/validate, and then run the following command:

cd ~/OpenCOOD
python opencood/visualization/vis_data_sequence.py [--color_mode ${COLOR_RENDERING_MODE} --isSim]

Arguments Explanation:

- `color_mode`: the color rendering mode used for the LiDAR points.
- `isSim`: add this flag when visualizing simulation data.
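
For reference, the entry to edit in opencood/hypes_yaml/visualization.yaml is the validate_dir path; a minimal sketch with an illustrative path:

# opencood/hypes_yaml/visualization.yaml (illustrative path)
validate_dir: '/path/to/v2v4real/validate'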

Train your model

OpenCOOD uses YAML files to configure all training parameters. To train your own model from scratch or to continue from a checkpoint, run the following command:

python opencood/tools/train.py --hypes_yaml ${CONFIG_FILE} [--model_dir  ${CHECKPOINT_FOLDER} --half]

Arguments Explanation:

- `hypes_yaml`: path to the training configuration (YAML) file.
- `model_dir` (optional): path to a checkpoint folder; when provided, training continues from the saved checkpoint and its config.
- `half` (optional): if set, train with half precision to reduce memory usage.

To train on multiple GPUs, run the following command:

CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4  --use_env opencood/tools/train.py --hypes_yaml ${CONFIG_FILE} [--model_dir  ${CHECKPOINT_FOLDER}]

Train Sim2Real

We provide train_da.py to train the sim2real models reported in the paper. The model takes the simulation data and the V2V4Real data (without ground-truth labels) as input and computes a domain adaptation loss. To train a sim2real model, run the following command:

python opencood/tools/train_da.py --hypes_yaml hypes_yaml/domain_adaptions/xxx.yaml [--model_dir  ${CHECKPOINT_FOLDER} --half]
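
The exact schema comes from the configs shipped in hypes_yaml/domain_adaptions; as a rough, hypothetical sketch of what such a config has to specify (the key names below are illustrative, not the repo's actual schema), it points at a labeled simulation split and an unlabeled real-world split:

# Hypothetical sketch only -- see hypes_yaml/domain_adaptions/*.yaml for the actual key names.
root_dir: '/path/to/opv2v/train'             # labeled source domain (simulation)
target_dir: '/path/to/v2v4real/train'        # unlabeled target domain (real-world; gt labels are not used)
validate_dir: '/path/to/v2v4real/validate'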

Test the model

Before you run the following command, first make sure the validation_dir in config.yaml under your checkpoint folder refers to the testing dataset path, e.g. v2v4real/test.

python opencood/tools/inference.py --model_dir ${CHECKPOINT_FOLDER} --fusion_method ${FUSION_STRATEGY} [--show_vis] [--show_sequence]

Arguments Explanation:

- `model_dir`: path to the checkpoint folder of the trained model.
- `fusion_method`: the fusion strategy matching the trained model, e.g. early, late, or intermediate.
- `show_vis` (optional): visualize the detection results.
- `show_sequence` (optional): visualize the detection results as a continuous sequence.

The evaluation results will be dumped in the model directory.

Important notes for testing:

  1. Remember to change the validation_dir in config.yaml under your checkpoint folder to the testing dataset path, e.g. v2v4real/test.
  2. To test under async mode, set async_mode in config.yaml to True and set async_overhead to the desired delay time (default 100 ms); see the sketch after this list.
  3. The testing script is the same for cooperative 3D object detection and sim2real.
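
A minimal sketch of the relevant entries in config.yaml under the checkpoint folder (the path and values are illustrative; use the key names that already exist in your config.yaml):

# config.yaml under the checkpoint folder (illustrative values)
validation_dir: '/path/to/v2v4real/test'   # point evaluation at the test split
async_mode: True
async_overhead: 100                        # delay in ms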

Benchmark

Results of Cooperative 3D object detection

| Method | Backbone | Sync AP@0.5 | Sync AP@0.7 | Async AP@0.5 | Async AP@0.7 | Bandwidth | Download Link |
|---|---|---|---|---|---|---|---|
| No Fusion | PointPillar | 39.8 | 22.0 | 39.8 | 22.0 | 0.0 | url |
| Late Fusion | PointPillar | 55.0 | 26.7 | 50.2 | 22.4 | 0.003 | url |
| Early Fusion | PointPillar | 59.7 | 32.1 | 52.1 | 25.8 | 0.96 | url |
| F-Cooper | PointPillar | 60.7 | 31.8 | 53.6 | 26.7 | 0.20 | url |
| Attentive Fusion | PointPillar | 64.5 | 34.3 | 56.4 | 28.5 | 0.20 | url |
| V2VNet | PointPillar | 64.7 | 33.6 | 57.7 | 27.5 | 0.20 | url |
| V2X-ViT | PointPillar | 64.9 | 36.9 | 55.9 | 29.3 | 0.20 | url |
| CoBEVT | PointPillar | 66.5 | 36.0 | 58.6 | 29.7 | 0.20 | url |

Results of Cooperative tracking

| Method | AMOTA (↑) | AMOTP (↑) | sAMOTA (↑) | MOTA (↑) | MT (↑) | ML (↓) |
|---|---|---|---|---|---|---|
| No Fusion | 16.08 | 41.60 | 53.84 | 43.46 | 29.41 | 60.18 |
| Late Fusion | 29.28 | 51.08 | 71.05 | 59.89 | 45.25 | 31.22 |
| Early Fusion | 26.19 | 48.15 | 67.34 | 60.87 | 40.95 | 32.13 |
| F-Cooper | 23.29 | 43.11 | 65.63 | 58.34 | 35.75 | 38.91 |
| AttFuse | 28.64 | 50.48 | 73.21 | 63.03 | 46.38 | 28.05 |
| V2VNet | 30.48 | 54.28 | 75.53 | 64.85 | 48.19 | 27.83 |
| V2X-ViT | 30.85 | 54.32 | 74.01 | 64.82 | 45.93 | 26.47 |
| CoBEVT | 32.12 | 55.61 | 77.65 | 63.75 | 47.29 | 30.32 |

Results of Domain Adaptation

| Method | Domain Adaptation | AP@0.5 | Download Link |
|---|---|---|---|
| F-Cooper | [1] | 37.3 | Download Link |
| AttFuse | [1] | 23.4 | Download Link |
| V2VNet | [1] | 26.3 | Download Link |
| V2X-ViT | [1] | 39.5 | Download Link |
| CoBEVT | [1] | 40.2 | Download Link |

[1]: Yuhua Chen, Wen Li, Christos Sakaridis, Dengxin Dai, and Luc Van Gool. Domain Adaptive Faster R-CNN for Object Detection in the Wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3339–3348, 2018.

Citation

@inproceedings{xu2023v2v4real,
  title={V2V4Real: A Real-world Large-scale Dataset for Vehicle-to-Vehicle Cooperative Perception},
  author={Xu, Runsheng and Xia, Xin and Li, Jinlong and Li, Hanzhao and Zhang, Shuo and Tu, Zhengzhong and Meng, Zonglin and Xiang, Hao and Dong, Xiaoyu and Song, Rui and Yu, Hongkai and Zhou, Bolei and Ma, Jiaqi},
  booktitle={The IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR)},
  year={2023}
}

Acknowledgment

This dataset belongs to the OpenCDA ecosystem family. The codebase is built upon OpenCOOD, which is the first open cooperative detection framework for autonomous driving.