CoAlign (ICRA2023)
Robust Collaborative 3D Object Detection in Presence of Pose Errors
Paper | Video | Readme in Feishu
Update 2024.1.24
HEAL is accepted to ICLR 2024. We implement a unified and integrated multi-agent collaborative perception framework for LiDAR-based, camera-based, and heterogeneous settings! See the HEAL GitHub.
Update 2023.7.11
Camera-based collaborative perception support!
We release multi-agent camera-based detection code, based on Lift-Splat-Shoot. It supports the OPV2V, V2XSet, and DAIR-V2X-C datasets.
The LiDAR feature-map fusion methods adapt seamlessly to camera BEV features, including CoAlign's multiscale fusion, V2XViT, V2VNet, Self-Att, FCooper, and DiscoNet (w/o KD). Please feel free to browse our repo; example yamls are listed in this folder: CoAlign/opencood/hypes_yaml/opv2v/camera_no_noise
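To get a quick look at one of these configs, the sketch below loads an example yaml and prints its top-level sections. It only assumes PyYAML and the folder quoted above; the glob and the printed keys are not part of the repo's API.

```python
import yaml
from pathlib import Path

# Folder of example camera configs mentioned above; pick any yaml inside it.
cfg_dir = Path("opencood/hypes_yaml/opv2v/camera_no_noise")
example = sorted(cfg_dir.glob("*.yaml"))[0]

with open(example) as f:
    cfg = yaml.safe_load(f)

print(example.name)
print(list(cfg.keys()))  # top-level sections of the example config
```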
New features (compared with OpenCOOD):
- Modality support
  - LiDAR
  - Camera
- Dataset support
  - OPV2V
  - V2X-Sim 2.0
  - DAIR-V2X
  - V2XSet
- SOTA collaborative perception method support
- Visualization support
  - BEV visualization
  - 3D visualization
- 1-round/2-round communication support
  - transform point cloud first (2-round communication)
  - warp feature map (1-round communication, the default in this repo; see the sketch after this list)
- Pose error simulation support
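In the default 1-round scheme, a collaborator broadcasts its BEV feature map once and the ego warps it into its own frame using the (possibly noisy) relative pose. Below is a minimal PyTorch sketch of that idea, not the repo's implementation: the function name, the square ego-centered grid convention, and the Gaussian noise model are illustrative assumptions.

```python
import math
import torch
import torch.nn.functional as F

def warp_bev_feature(feat_nb, rel_pose, bev_range_m, noise_std=(0.0, 0.0)):
    """Warp a neighbor's BEV feature map into the ego frame (1-round communication).

    feat_nb:     (1, C, H, W) BEV features in the neighbor's frame.
    rel_pose:    (tx, ty, yaw) of the neighbor expressed in the ego frame [m, m, rad].
    bev_range_m: half-size R of the square BEV grid, i.e. features cover [-R, R].
    noise_std:   (trans_std_m, rot_std_rad) Gaussian pose noise, to mimic pose errors.

    Assumes both agents use the same square, ego-centered BEV grid with the width
    axis mapped to +x and the height axis to +y; the repo's conventions may differ.
    """
    tx, ty, yaw = rel_pose
    # Optional pose-error simulation: perturb the relative pose before warping.
    tx += noise_std[0] * torch.randn(1).item()
    ty += noise_std[0] * torch.randn(1).item()
    yaw += noise_std[1] * torch.randn(1).item()

    c, s = math.cos(yaw), math.sin(yaw)
    # Map ego-grid coordinates into the neighbor's grid (inverse of the relative
    # pose), expressed in grid_sample's normalized [-1, 1] coordinates.
    theta = torch.tensor([
        [c,  s, -(c * tx + s * ty) / bev_range_m],
        [-s, c, -(-s * tx + c * ty) / bev_range_m],
    ], dtype=feat_nb.dtype).unsqueeze(0)

    grid = F.affine_grid(theta, feat_nb.shape, align_corners=False)
    return F.grid_sample(feat_nb, grid, align_corners=False)

# Toy usage: a 64-channel BEV map covering 100 m x 100 m, with the neighbor
# 10 m ahead and rotated 5 degrees relative to the ego.
feat = torch.randn(1, 64, 200, 200)
warped = warp_bev_feature(feat, rel_pose=(10.0, 0.0, math.radians(5)),
                          bev_range_m=50.0, noise_std=(0.2, math.radians(1)))
print(warped.shape)  # torch.Size([1, 64, 200, 200])
```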
Installation
Please visit the Feishu doc CoAlign Installation Guide for details!
Alternatively, you can refer to the OpenCOOD data introduction and the OpenCOOD installation guide to prepare the data and install CoAlign. The installation is the same as OpenCOOD's, except for a few additional packages required by CoAlign.
Data Preparation
Create a dataset folder under CoAlign and put your OPV2V, V2X-Sim, V2XSet, and DAIR-V2X data in it. You only need to add the datasets you plan to use.
CoAlign/dataset:

```
.
├── my_dair_v2x
│   ├── v2x_c
│   ├── v2x_i
│   └── v2x_v
├── OPV2V
│   ├── additional
│   ├── test
│   ├── train
│   └── validate
├── V2XSET
│   ├── test
│   ├── train
│   └── validate
├── v2xsim2-complete
│   ├── lidarseg
│   ├── maps
│   ├── sweeps
│   └── v1.0-mini
└── v2xsim2_info
    ├── v2xsim_infos_test.pkl
    ├── v2xsim_infos_train.pkl
    └── v2xsim_infos_val.pkl
```
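If the datasets already live elsewhere on disk, one option is to symlink them into this layout rather than copying. A minimal sketch (all source paths are illustrative; link only the datasets you actually use):

```python
from pathlib import Path

dataset_dir = Path("CoAlign/dataset")
dataset_dir.mkdir(parents=True, exist_ok=True)

# Map the folder names expected above to wherever the data actually lives (illustrative).
sources = {
    "OPV2V": "/data/OPV2V",
    "V2XSET": "/data/V2XSET",
    "my_dair_v2x": "/data/DAIR-V2X-C",
    "v2xsim2-complete": "/data/v2xsim2-complete",
}
for name, src in sources.items():
    link = dataset_dir / name
    if not link.exists():
        link.symlink_to(Path(src), target_is_directory=True)
```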
Note that:
- the *.pkl files in v2xsim2_info can be found in Google Drive
- use our complemented annotations for DAIR-V2X in my_dair_v2x
Complemented Annotations for DAIR-V2X-C
Originally, DAIR-V2X only annotates 3D boxes within the range of the vehicle-side camera's view. We supplement the missing 3D box annotations to enable 360-degree detection. With the fully complemented vehicle-side labels, we also regenerate the cooperative labels, which follow the original cooperative label format.
Original Annotations vs. Complemented Annotations (visualization comparison)
Download: Google Drive
Website: Website
Checkpoints
Single detection with uncertainty
Download coalign_precalc and save it to opencood/logs
CoAlign Checkpoints
Download them and save them to opencood/logs
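To sanity-check a downloaded checkpoint before running anything, here is a small sketch assuming a standard PyTorch .pth file; the path below is a placeholder for whatever file you placed under opencood/logs.

```python
import torch

# Placeholder path; point this at the checkpoint you downloaded into opencood/logs.
ckpt = torch.load("opencood/logs/coalign_precalc/checkpoint.pth", map_location="cpu")
print(type(ckpt))

# If it is a dict (e.g. a state_dict), peek at a few entries and their shapes.
if isinstance(ckpt, dict):
    for name, value in list(ckpt.items())[:5]:
        shape = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
        print(name, shape)
```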
Citation
```
@inproceedings{lu2023robust,
  title={Robust collaborative 3d object detection in presence of pose errors},
  author={Lu, Yifan and Li, Quanhao and Liu, Baoan and Dianati, Mehrdad and Feng, Chen and Chen, Siheng and Wang, Yanfeng},
  booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
  pages={4812--4818},
  year={2023},
  organization={IEEE}
}
```
Acknowledgement
This project would be impossible without the code of OpenCOOD, g2opy, and d3d!
Thanks again to @DerrickXuNu for the great code framework.