# [ICCV 2023] Cross Modal Transformer: Towards Fast and Robust 3D Object Detection

[arXiv](https://arxiv.org/abs/2301.01283)

<!-- ## Introduction -->

https://user-images.githubusercontent.com/18145538/210828888-a944817a-858f-45ef-8abc-068adeda413f.mp4

<div align="center"> <img src="figs/cmt_eva.png" width="900" />

<em> Performance comparison and robustness under sensor failure. All statistics are measured on a single Tesla A100 GPU using the best model from each official repository. All models use the spconv voxelization module. </em>

</div><br/>

CMT is a robust end-to-end detector for 3D multi-modal detection. A DETR-like framework is designed for multi-modal detection (CMT) and LiDAR-only detection (CMT-L), which achieve 74.1% NDS (state of the art without TTA or model ensembling) and 70.1% NDS, respectively, on the nuScenes benchmark. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. CMT can serve as a strong baseline for further research.

## Preparation

The info PKLs and pretrained image weights are available on Google Drive.
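
To sanity-check the downloaded PKLs before training, the sketch below loads one and prints its size. It assumes an mmdetection3d-style nuScenes info file; the path `data/nuscenes/nuscenes_infos_train.pkl` is only an example matching the file name used in the training configs.

```python
# Sketch: quick inspection of a downloaded nuScenes info PKL.
# The path below is an example; point it at wherever you placed the file.
import pickle

with open('data/nuscenes/nuscenes_infos_train.pkl', 'rb') as f:
    infos = pickle.load(f)

# Depending on the mmdetection3d version, the file is either a dict with
# 'infos'/'metadata' keys or a plain list of per-sample dicts.
samples = infos['infos'] if isinstance(infos, dict) else infos
print(f'loaded {len(samples)} sample records')
print(sorted(samples[0].keys()))
```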

## Train & inference

```bash
# train
bash tools/dist_train.sh /path_to_your_config 8
# inference
bash tools/dist_test.sh /path_to_your_config /path_to_your_pth 8 --eval bbox
```

## Main Results

Results on the nuScenes val set (C: camera, L: LiDAR). The default batch size is 2 per GPU. FPS is evaluated on a single Tesla A100 GPU. (15e + 5e means the last 5 epochs are trained without GT-sample augmentation; see the sketch after the table below.)

| Config | Modality | mAP | NDS | Schedule | Inference FPS |
|--------|----------|-----|-----|----------|---------------|
| vov_1600x640 | C | 40.6% | 46.0% | 20e | 8.4 |
| voxel0075 | L | 62.14% | 68.6% | 15e+5e | 18.1 |
| voxel0100_r50_800x320 | C+L | 67.9% | 70.8% | 15e+5e | 14.2 |
| voxel0075_vov_1600x640 | C+L | 70.3% | 72.9% | 15e+5e | 6.4 |
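
For clarity, the 15e + 5e schedule means 15 epochs with GT-sample (ground-truth paste) augmentation followed by 5 epochs without it. The sketch below shows one way to strip that step from an mmdetection3d-style training pipeline; the transform name `ObjectSample` and the toy pipeline are assumptions for illustration, not this repo's actual config.

```python
# Sketch only (not this repo's actual config): drop the GT-sample
# (ObjectSample) augmentation from an mmdetection3d-style train_pipeline
# for the final 5 epochs of the 15e + 5e schedule.
def drop_gtsample(train_pipeline):
    """Return a copy of train_pipeline without the ObjectSample transform."""
    return [t for t in train_pipeline if t.get('type') != 'ObjectSample']

# Toy pipeline with GT-sample enabled (transform arguments are illustrative).
train_pipeline = [
    dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=5, use_dim=5),
    dict(type='ObjectSample', db_sampler=dict(type='DataBaseSampler')),
    dict(type='PointsRangeFilter', point_cloud_range=[-54, -54, -5, 54, 54, 3]),
]
print(drop_gtsample(train_pipeline))  # ObjectSample step removed
```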

Results on the nuScenes test set. To reproduce our results, replace `ann_file=data_root + '/nuscenes_infos_train.pkl'` in the training config with `ann_file=[data_root + '/nuscenes_infos_train.pkl', data_root + '/nuscenes_infos_val.pkl']` (a sketch of this change is shown after the table below):

| Config | Modality | mAP | NDS | Schedule | Inference FPS |
|--------|----------|-----|-----|----------|---------------|
| vov_1600x640 | C | 42.9% | 48.1% | 20e | 8.4 |
| voxel0075 | L | 65.3% | 70.1% | 15e+5e | 18.1 |
| voxel0075_vov_1600x640 | C+L | 72.0% | 74.1% | 15e+5e | 6.4 |
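
A minimal sketch of that `ann_file` change is below. The two annotation paths are exactly those given above; the surrounding `data = dict(train=dict(...))` structure is only an assumption about how the mmdetection3d-style configs are organized.

```python
# Sketch of the training-config edit for nuScenes test-set submission:
# train on the train + val annotations instead of the train split alone.
data_root = 'data/nuscenes/'  # adjust to your data root

data = dict(
    train=dict(
        # original: ann_file=data_root + '/nuscenes_infos_train.pkl',
        ann_file=[
            data_root + '/nuscenes_infos_train.pkl',
            data_root + '/nuscenes_infos_val.pkl',
        ],
    ),
)
```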

## Citation

If you find CMT helpful in your research, please consider citing:

```bibtex
@article{yan2023cross,
  title={Cross Modal Transformer via Coordinates Encoding for 3D Object Detection},
  author={Yan, Junjie and Liu, Yingfei and Sun, Jianjian and Jia, Fan and Li, Shuailin and Wang, Tiancai and Zhang, Xiangyu},
  journal={arXiv preprint arXiv:2301.01283},
  year={2023}
}
```

## Contact

If you have any questions, feel free to open an issue or contact us at yanjunjie@megvii.com, liuyingfei@megvii.com, sunjianjian@megvii.com or wangtiancai@megvii.com.