
<div align="center">

DAIR-V2X and OpenDAIRV2X: Towards General and Real-World Cooperative Autonomous Driving

</div>

<h3 align="center">
<a href="https://thudair.baai.ac.cn/index">Project Page</a> |
<a href="#dataset">Dataset Download</a> |
<a href="https://arxiv.org/abs/2204.05575">arXiv</a> |
<a href="https://github.com/AIR-THU/DAIR-V2X/">OpenDAIRV2X</a>
</h3>


Table of Contents:

  1. Highlights
  2. News
  3. Dataset Download
  4. Getting Started
  5. Major Features
  6. Benchmark
  7. TODO List
  8. Citation
  9. Contact

Highlights <a name="high"></a>

News <a name="news"></a>

Dataset Download <a name="dataset"></a>

Getting Started <a name="start"></a>

Please refer to getting_started.md for instructions on using the DAIR-V2X dataset and reproducing its benchmarks.

Please refer to get_started_spd.md for instructions on using the V2X-Seq-SPD dataset and reproducing its benchmarks.
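Before diving into those guides, the sketch below shows one minimal way to inspect a download. It assumes the published dataset layout, in which each side of the cooperative dataset ships a `data_info.json` index of its frames; the `root` path and the field names printed at the end are assumptions to verify against your own copy.

```python
import json
from pathlib import Path

# Assumed local path to the extracted DAIR-V2X-C vehicle-side data;
# adjust this to wherever you unpacked the download.
root = Path("data/DAIR-V2X/cooperative-vehicle-infrastructure/vehicle-side")

# data_info.json indexes every frame on this side of the dataset
# (paths to images, point clouds, calibration, and labels).
with open(root / "data_info.json") as f:
    frames = json.load(f)

print(f"vehicle-side frames: {len(frames)}")

# Field names here follow the dataset's documented annotation format,
# but verify them against your copy of data_info.json.
for frame in frames[:3]:
    print(frame.get("image_path"), frame.get("pointcloud_path"))
```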

Benchmark <a name="benchmark"></a>

You can find more benchmarks in SV3D-Veh, SV3D-Inf, VIC3D, and VIC3D-SPD.

Part of the VIC3D detection benchmarks based on the DAIR-V2X-C dataset:

All AP values below are reported at IoU=0.5.

| Modality | Fusion | Model | Dataset | AP-3D Overall | AP-3D 0-30m | AP-3D 30-50m | AP-3D 50-100m | AP-BEV Overall | AP-BEV 0-30m | AP-BEV 30-50m | AP-BEV 50-100m | AB (Byte) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Image | Veh Only | ImvoxelNet | VIC-Sync | 9.13 | 19.06 | 5.23 | 0.41 | 10.96 | 21.93 | 7.28 | 0.78 | 0 |
| Image | Late Fusion | ImvoxelNet | VIC-Sync | 18.77 | 33.47 | 9.43 | 8.62 | 24.85 | 39.49 | 14.68 | 14.96 | 309.38 |
| Pointcloud | Veh Only | PointPillars | VIC-Sync | 48.06 | 47.62 | 63.51 | 44.37 | 52.24 | 30.55 | 66.03 | 48.36 | 0 |
| Pointcloud | Early Fusion | PointPillars | VIC-Sync | 62.61 | 64.82 | 68.68 | 56.57 | 68.91 | 68.92 | 73.64 | 65.66 | 1382275.75 |
| Pointcloud | Late Fusion | PointPillars | VIC-Sync | 56.06 | 55.69 | 68.44 | 53.60 | 62.06 | 61.52 | 72.53 | 60.57 | 478.61 |
| Pointcloud | Late Fusion | PointPillars | VIC-Async-2 | 52.43 | 51.13 | 67.09 | 49.86 | 58.10 | 57.23 | 70.86 | 55.78 | 478.01 |
| Pointcloud | TCLF | PointPillars | VIC-Async-2 | 53.37 | 52.41 | 67.33 | 50.87 | 59.17 | 58.25 | 71.20 | 57.43 | 897.91 |

Part of the VIC3D detection and tracking benchmarks based on the V2X-Seq-SPD dataset:

| Modality | Fusion | Model | Dataset | AP-3D (IoU=0.5) | AP-BEV (IoU=0.5) | MOTA | MOTP | AMOTA | AMOTP | IDs | AB (Byte) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Image | Veh Only | ImvoxelNet | VIC-Sync-SPD | 8.55 | 10.32 | 10.19 | 57.83 | 1.36 | 14.75 | 4 | 0 |
| Image | Late Fusion | ImvoxelNet | VIC-Sync-SPD | 17.31 | 22.53 | 21.81 | 56.67 | 6.22 | 25.24 | 47 | 3300 |

TODO List <a name="todo"></a>

Citation <a name="citation"></a>

If this project helps your research, please consider citing our papers with the following BibTeX:

@inproceedings{v2x-seq,
  title={V2X-Seq: A large-scale sequential dataset for vehicle-infrastructure cooperative perception and forecasting},
  author={Yu, Haibao and Yang, Wenxian and Ruan, Hongzhi and Yang, Zhenwei and Tang, Yingjuan and Gao, Xu and Hao, Xin and Shi, Yifeng and Pan, Yifeng and Sun, Ning and Song, Juan and Yuan, Jirui and Luo, Ping and Nie, Zaiqing},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2023},
}
@inproceedings{dair-v2x,
  title={DAIR-V2X: A large-scale dataset for vehicle-infrastructure cooperative 3D object detection},
  author={Yu, Haibao and Luo, Yizhen and Shu, Mao and Huo, Yiyi and Yang, Zebang and Shi, Yifeng and Guo, Zhenglong and Li, Hanyu and Hu, Xing and Yuan, Jirui and Nie, Zaiqing},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={21361--21370},
  year={2022}
}

Contact <a name="contact"></a>

For any questions or suggestions, please email dair@air.tsinghua.edu.cn.

Related Resources
