# Object DGCNN & DETR3D
This repo contains the implementations of [Object DGCNN](https://arxiv.org/abs/2110.06923) and [DETR3D](https://arxiv.org/abs/2110.06922). Our implementations are built on top of [MMDetection3D](https://github.com/open-mmlab/mmdetection3d).
## Prerequisites
## Data
- Follow the [mmdet3d](https://github.com/open-mmlab/mmdetection3d) data preparation instructions to process the data (a reference command is sketched below).
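
For reference, a minimal sketch of the usual mmdet3d nuScenes preprocessing, assuming the raw nuScenes dataset lives under `data/nuscenes` (flags can differ across mmdet3d versions, so check the mmdet3d data preparation docs for the authoritative command):

```bash
# Generate the nuScenes annotation .pkl files expected by the configs.
# Paths here are illustrative; adjust them to your dataset location.
python tools/create_data.py nuscenes \
    --root-path ./data/nuscenes \
    --out-dir ./data/nuscenes \
    --extra-tag nuscenes
```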
## Train
- Download the pretrained backbone weights to `pretrained/`.

- For example, to train Object-DGCNN with the pillar setting on 8 GPUs, run:

  ```bash
  tools/dist_train.sh projects/configs/obj_dgcnn/pillar.py 8
  ```
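
For quick debugging without a distributed launch, the single-GPU entry point inherited from mmdet3d should also work; a sketch, assuming `tools/train.py` follows the upstream interface (the `--work-dir` value is only an example):

```bash
# Single-GPU training run; useful for sanity checks before the 8-GPU job.
python tools/train.py projects/configs/obj_dgcnn/pillar.py --work-dir work_dirs/obj_dgcnn_pillar
```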
## Evaluation using pretrained models
- Download the corresponding weights from the tables below.
| Backbone | mAP | NDS | Download |
|---|---|---|---|
| DETR3D, ResNet101 w/ DCN | 34.7 | 42.2 | model \| log |
| above, + CBGS | 34.9 | 43.4 | model \| log |
| DETR3D, VoVNet on trainval, evaluation on test set | 41.2 | 47.9 | model \| log |
| Backbone | mAP | NDS | Download |
|---|---|---|---|
| Object DGCNN, pillar | 53.2 | 62.8 | model \| log |
| Object DGCNN, voxel | 58.6 | 66.0 | model \| log |
- To test, run:

  ```bash
  tools/dist_test.sh projects/configs/obj_dgcnn/pillar_cosine.py /path/to/ckpt 8 --eval=bbox
  ```
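
Similarly, a single-GPU evaluation sketch, assuming `tools/test.py` from mmdet3d is available with its standard arguments:

```bash
# Single-GPU evaluation of a trained checkpoint with the bbox (nuScenes detection) metric.
python tools/test.py projects/configs/obj_dgcnn/pillar_cosine.py /path/to/ckpt --eval bbox
```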
If you find this repo useful for your research, please consider citing the papers:
```bibtex
@inproceedings{
obj-dgcnn,
title={Object DGCNN: 3D Object Detection using Dynamic Graphs},
author={Wang, Yue and Solomon, Justin M.},
booktitle={2021 Conference on Neural Information Processing Systems ({NeurIPS})},
year={2021}
}
```
```bibtex
@inproceedings{
detr3d,
title={DETR3D: 3D Object Detection from Multi-view Images via 3D-to-2D Queries},
author={Wang, Yue and Guizilini, Vitor and Zhang, Tianyuan and Wang, Yilun and Zhao, Hang and Solomon, Justin M.},
booktitle={The Conference on Robot Learning ({CoRL})},
year={2021}
}
```