
RCooper: A Real-world Large-scale Dataset for Roadside Cooperative Perception

This is the official implementation of the CVPR 2024 paper "RCooper: A Real-world Large-scale Dataset for Roadside Cooperative Perception", by Ruiyang Hao<sup>*</sup>, Siqi Fan<sup>*</sup>, Yingru Dai, Zhenlin Zhang, Chenxi Li, Yuntian Wang, Haibao Yu, Wenxian Yang, Jirui Yuan, and Zaiqing Nie.

<div style="text-align:center"> <img src="assets/RCooper.jpg" width="800" alt="" class="img-responsive"> </div>

Overview

Data Download

Please check the bottom of the dataset website page to download the data, as shown in the figure below.

<div style="text-align:center"> <img src="assets/dataset_page_instruction.jpg" width="700" alt="" class="img-responsive"> </div>

After downloading the data, please put the data in the following structure:

```
├── RCooper
│   ├── calib
│   │   ├── lidar2cam
│   │   └── lidar2world
│   ├── data
│   │   └── folders named by scene index
│   ├── labels
│   │   └── folders named by scene index
│   └── original_label
│       └── folders named by scene index
```
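As a quick sanity check after extraction, a minimal sketch (assuming the root directory is named `RCooper`, as in the tree above) can verify the expected layout:

```python
from pathlib import Path

# Subdirectories expected under the dataset root, per the tree above.
EXPECTED = ["calib/lidar2cam", "calib/lidar2world", "data", "labels", "original_label"]

def missing_dirs(root):
    """Return the expected subdirectories that are absent under `root`."""
    root = Path(root)
    return [rel for rel in EXPECTED if not (root / rel).is_dir()]

if __name__ == "__main__":
    for rel in missing_dirs("RCooper"):  # adjust to your extraction path
        print(f"missing: {rel}")
```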

Data Conversion

To facilitate research on cooperative perception methods with RCooper, we provide format converters from RCooper to other popular public cooperative perception datasets. After conversion, researchers can directly apply the methods implemented in several open-sourced frameworks.

We now support the following conversions:

RCooper to V2V4Real

Set up the dataset path in codes/dataset_convertor/converter_config.py, then run the conversion:

```shell
cd codes/dataset_converter
python rcooper2vvreal.py
```
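The exact fields of converter_config.py are defined in the repository; as a hypothetical sketch (the variable names below are assumptions, not the actual ones), the paths to set look like:

```python
# codes/dataset_convertor/converter_config.py -- hypothetical field names;
# check the actual file for the real ones.
rcooper_root = "/path/to/RCooper"   # root of the downloaded dataset
output_root = "/path/to/output"     # where the converted dataset is written
```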

RCooper to OPV2V

Set up the dataset path in codes/dataset_convertor/converter_config.py, then run the conversion:

```shell
cd codes/dataset_converter
python rcooper2opv2v.py
```

RCooper to DAIR-V2X

Set up the dataset path in codes/dataset_convertor/converter_config.py, then run the conversion:

```shell
cd codes/dataset_converter
python rcooper2dair.py
```

Quick Start

For detection training & inference, you can find detailed instructions in docs/corridor_scene and docs/intersection_scene. (<b>Note</b>: you may need to set PYTHONPATH so that the modified code is used instead of the pip-installed versions.)
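For example, a minimal sketch of setting PYTHONPATH from the repository root (assuming the modified code lives under codes/, per this repo's layout):

```shell
# Prepend the repo's code directory so Python imports the modified modules
# before any pip-installed packages with the same name.
export PYTHONPATH="$(pwd)/codes:$PYTHONPATH"
```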

For tracking, you can find detailed instructions in docs/tracking.md.

All checkpoints are released via the links in the tables below; you can save them in codes/ckpts/.

Benchmark

Results of Cooperative 3D object detection for corridor scenes

| Method | AP@0.3 | AP@0.5 | AP@0.7 | Download Link |
|--------------|------|------|------|-----|
| No Fusion    | 40.0 | 29.2 | 11.1 | url |
| Late Fusion  | 44.5 | 29.9 | 10.8 | url |
| Early Fusion | 69.8 | 54.7 | 30.3 | url |
| AttFuse      | 62.7 | 51.6 | 32.1 | url |
| F-Cooper     | 65.9 | 55.8 | 36.1 | url |
| Where2Comm   | 67.1 | 55.6 | 34.3 | url |
| CoBEVT       | 67.6 | 57.2 | 36.2 | url |

Results of Cooperative 3D object detection for intersection scenes

| Method | AP@0.3 | AP@0.5 | AP@0.7 | Download Link |
|--------------|------|------|------|-----|
| No Fusion    | 58.1 | 44.1 | 23.8 | url |
| Late Fusion  | 65.1 | 47.6 | 24.4 | url |
| Early Fusion | 50.0 | 33.9 | 18.3 | url |
| AttFuse      | 45.5 | 40.9 | 27.9 | url |
| F-Cooper     | 49.5 | 32.0 | 12.9 | url |
| Where2Comm   | 50.5 | 42.2 | 29.9 | url |
| CoBEVT       | 53.5 | 45.6 | 32.6 | url |

Results of Cooperative tracking for corridor scenes

| Method | AMOTA(↑) | AMOTP(↑) | sAMOTA(↑) | MOTA(↑) | MT(↑) | ML(↓) |
|--------------|-------|-------|-------|-------|-------|-------|
| No Fusion    |  8.28 | 22.74 | 34.05 | 23.89 | 17.34 | 42.71 |
| Late Fusion  |  9.60 | 25.77 | 35.64 | 24.75 | 24.37 | 42.96 |
| Early Fusion | 23.78 | 38.18 | 59.16 | 44.30 | 53.02 | 12.81 |
| AttFuse      | 21.75 | 35.31 | 57.43 | 44.50 | 45.73 | 22.86 |
| F-Cooper     | 22.47 | 35.54 | 58.49 | 45.94 | 47.74 | 22.11 |
| Where2Comm   | 22.55 | 36.21 | 59.60 | 46.11 | 50.00 | 19.60 |
| CoBEVT       | 21.54 | 35.69 | 53.85 | 47.32 | 47.24 | 18.09 |

Results of Cooperative tracking for intersection scenes

| Method | AMOTA(↑) | AMOTP(↑) | sAMOTA(↑) | MOTA(↑) | MT(↑) | ML(↓) |
|--------------|-------|-------|-------|--------|-------|-------|
| No Fusion    | 18.11 | 39.71 | 58.29 |  49.16 | 35.32 | 41.64 |
| Late Fusion  | 21.57 | 43.40 | 63.02 |  50.58 | 42.75 | 34.20 |
| Early Fusion | 21.38 | 47.71 | 62.93 |  50.15 | 36.80 | 42.75 |
| AttFuse      | 11.84 | 36.63 | 46.92 |  39.32 | 29.00 | 53.90 |
| F-Cooper     | -4.86 | 14.71 |  0.00 | -45.66 | 11.52 | 50.56 |
| Where2Comm   | 14.21 | 38.48 | 50.97 |  42.27 | 29.00 | 45.72 |
| CoBEVT       | 14.82 | 38.71 | 49.04 |  44.67 | 33.83 | 35.69 |

Citation

If you find RCooper useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry:

@inproceedings{hao2024rcooper,
  title={RCooper: A Real-world Large-scale Dataset for Roadside Cooperative Perception},
  author={Hao, Ruiyang and Fan, Siqi and Dai, Yingru and Zhang, Zhenlin and Li, Chenxi and Wang, Yuntian and Yu, Haibao and Yang, Wenxian and Yuan, Jirui and Nie, Zaiqing},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024},
  pages={22347-22357}
}

Acknowledgment

Sincere appreciation to the related open-source projects for their great contributions.