# LoGoNet
This repository is the official implementation of **LoGoNet: Towards Accurate 3D Object Detection with Local-to-Global Cross-Modal Fusion**.
## News
- :dart: [2023.04] Code released.
- :saxophone: [2023.03] LoGoNet is released on arXiv.
- :computer: [2023.03] Want to test the robustness of your LiDAR semantic segmentation and detection models? Check out our recent work, :robot: Robo3D, a comprehensive suite that enables OoD robustness evaluation of 3D segmentors and detectors on our newly established datasets: SemanticKITTI-C, nuScenes-C, and WOD-C.
- :fire: [2023.02.28] LoGoNet has been accepted by CVPR 2023!
- :fire: [2023.03] Our improved version, LoGoNet_Ens v2, ranks 1st among all submissions. For all submissions, please refer to the 3D object detection leaderboard of the Waymo Open Dataset for more details.
- [2022.10] Our LoGoNet_Ens ranks 1st in terms of mAPH (L2) on the Waymo leaderboard among all methods, with 81.02 mAPH (L2). It is the first time that detection performance has surpassed 80 APH (L2) on all three classes simultaneously.
- [2022.10] Our LoGoNet ranks 1st in terms of mAPH (L2) on the Waymo leaderboard among all methods that do not use TTA or ensembling.
## Algorithm Modules
```
detection
├── al3d_det
│   ├── datasets
│   │   ├── DatasetTemplate: the basic class for constructing datasets
│   │   ├── augmentor: different augmentations during training or inference
│   │   ├── processor: processing points into voxel space
│   │   └── the specific dataset modules
│   ├── models: detection-model-related modules
│   │   ├── fusion: point cloud and image fusion modules
│   │   ├── image_modules: processing images
│   │   ├── modules: point cloud detectors
│   │   └── ops
│   └── utils: the exclusive utils used in the detection module
├── tools
│   ├── cfgs
│   │   ├── det_dataset_cfgs
│   │   └── det_model_cfgs
│   └── train/test/visualize scripts
├── data: the path of raw data of different datasets
└── output: the path of trained models
al3d_utils: the shared utils used in every algorithm module
docs: the readme docs for LoGoNet
```
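To give a feel for what the `processor` stage in `datasets` does (grouping raw points into voxel space), here is a minimal, hypothetical point-to-voxel assignment. The function name, voxel size, and point-cloud range are illustrative assumptions, not this repo's actual API:

```python
import numpy as np

def points_to_voxels(points, voxel_size=(0.1, 0.1, 0.15),
                     pc_range=(0.0, -40.0, -3.0, 70.4, 40.0, 1.0)):
    """Assign (N, 3) points to integer voxel grid coordinates.

    Returns the unique occupied voxels, the voxel index of each kept
    point, and the points that fall inside the detection range.
    """
    pts = np.asarray(points, dtype=np.float32)
    low = np.array(pc_range[:3], dtype=np.float32)
    high = np.array(pc_range[3:], dtype=np.float32)
    size = np.array(voxel_size, dtype=np.float32)
    # keep only points inside the detection range
    mask = np.all((pts[:, :3] >= low) & (pts[:, :3] < high), axis=1)
    pts = pts[mask]
    # integer voxel coordinate of each remaining point
    coords = np.floor((pts[:, :3] - low) / size).astype(np.int64)
    # deduplicate occupied voxels; `inverse` maps each point to its voxel
    voxels, inverse = np.unique(coords, axis=0, return_inverse=True)
    return voxels, inverse, pts

points = np.array([[1.0, 0.0, 0.0], [1.05, 0.0, 0.0], [5.0, 2.0, 0.5]])
voxels, inverse, kept = points_to_voxels(points)
# the first two points are 5 cm apart, so they share one 0.1 m voxel
```

Real pipelines additionally cap the number of points per voxel and compute per-voxel features, but the grouping step above is the core idea.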
## Running
:fire: This project relies heavily on Ceph storage. Please adapt the file paths to your own file storage system.
- Please `cd` into the specific module and read the corresponding README for details.
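The `fusion` modules in the overview above combine point cloud and image features; the basic operation behind any such local-to-global cross-modal fusion is projecting LiDAR points into the camera image so each point can sample an image feature. A minimal sketch of that projection, assuming a pinhole camera with a known LiDAR-to-camera transform (the names and matrices here are illustrative, not this repo's API):

```python
import numpy as np

def project_points_to_image(points_lidar, lidar_to_cam, intrinsics):
    """Project (N, 3) LiDAR points into pixel coordinates.

    lidar_to_cam: 4x4 homogeneous transform from the LiDAR to the camera frame.
    intrinsics:   3x3 camera matrix K.
    Returns (N, 2) pixel coordinates and a mask of points in front of the camera.
    """
    pts = np.asarray(points_lidar, dtype=np.float64)
    homo = np.hstack([pts, np.ones((pts.shape[0], 1))])   # (N, 4) homogeneous
    cam = (lidar_to_cam @ homo.T).T[:, :3]                # points in camera frame
    in_front = cam[:, 2] > 1e-6                           # positive depth only
    uvw = (intrinsics @ cam.T).T                          # apply K
    uv = uvw[:, :2] / np.clip(uvw[:, 2:3], 1e-6, None)    # perspective divide
    return uv, in_front

# Toy example: identity extrinsics, focal length 500, principal point (320, 240).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
uv, mask = project_points_to_image(np.array([[0.0, 0.0, 10.0]]), T, K)
# a point on the optical axis lands at the principal point (320, 240)
```

With pixel coordinates in hand, per-point image features are typically gathered by bilinear sampling from the image feature map at `uv`.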
## Main results
### Performances on Waymo with AP/APH (L2)
*We report averaged metrics across all results. We provide training/validation configurations and pretrained models for all models in the paper. To access these pretrained models, please send us an email with your name, institute, a screenshot of the Waymo dataset registration confirmation email, and your intended usage. Please note that the Waymo Open Dataset is under a strict non-commercial license, so we are not allowed to share the models if they will be used for any profit-oriented activities. However, we can provide the logs.
| Model | mAPH_L2 | VEH_L2 (AP/APH) | PED_L2 (AP/APH) | CYC_L2 (AP/APH) | Log |
|---|---|---|---|---|---|
| LoGoNet-1frame (val) | 71.38 | 71.21/70.71 | 75.49/69.94 | 74.53/73.48 | log |
| LoGoNet-3frames (val) | 74.86 | 74.60/74.17 | 78.62/75.79 | 75.44/74.61 | log |
| LoGoNet-5frames (val) | 75.54 | 75.84/75.38 | 78.97/76.33 | 75.67/74.91 | log |
| LoGoNet-5frames (test) | 77.10 | 79.69/79.30 | 81.55/78.91 | 73.89/73.10 | Record |
| LoGoNet_Ens (test) | 81.02 | 82.17/81.72 | 84.27/81.28 | 80.93/80.06 | Record |
| LoGoNet_Ens_v2 (test) | 81.96 | 82.75/82.32 | 84.96/82.10 | 82.36/81.46 | Record |
### Performances on KITTI with mAP
*We report averaged metrics across all results. We provide training/validation configurations and pretrained models for all models in the paper.

| Model | Car@40 | Ped@40 | Cyc@40 | Log | Weights |
|---|---|---|---|---|---|
| LoGoNet (val) | 87.13 | 64.46 | 79.84 | log | weights |
| LoGoNet (test) | 85.87 | 48.57 | 73.61 | Record | - |
## Acknowledgement
We sincerely appreciate the following open-source projects for providing valuable, high-quality code:
- OpenPCDet
- mmdetection3d
- FocalsConv
- CenterPoint
- BEVFusion(ADLab-AutoDrive)
- BEVFusion(mit-han-lab)
- mmdetection
- PDV
## Reference
If you find our paper useful, please kindly cite us via:
```
@inproceedings{logonet,
  title     = {LoGoNet: Towards Accurate 3D Object Detection with Local-to-Global Cross-Modal Fusion},
  author    = {Xin Li and Tao Ma and Yuenan Hou and Botian Shi and Yuchen Yang and Youquan Liu and Xingjiao Wu and Qin Chen and Yikang Li and Yu Qiao and Liang He},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2023}
}
```
## Contact
- If you have any questions about this repo, please contact lixin@pjlab.org.cn and shibotian@pjlab.org.cn.