CMD: A Cross Mechanism Domain Adaptation Dataset for 3D Object Detection (ECCV 2024)

A multi-mechanism, multi-modal, real-world 3D object detection dataset that includes low-resolution (32-beam) mechanical LiDAR, high-resolution (128-beam) mechanical LiDAR, solid-state LiDAR, 4D millimeter-wave radar, and cameras. All sensors are precisely time-synchronized and calibrated, making the dataset suitable for 3D object detection research involving multi-mechanism LiDAR data, particularly cross-mechanism domain adaptation.

Download

Log in here using the username "Guest" and the password "guest_CMD" to download the dataset.

Data Sample

[Figure: data sample]
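
To inspect a frame yourself, here is a minimal sketch. It assumes KITTI-style float32 .bin point-cloud files with (x, y, z, intensity) channels and uses a hypothetical file path; both are assumptions, so adjust them to the actual layout of the downloaded data.

import numpy as np

# hypothetical path; adjust to the real sequence/sensor layout (see the seq** dirs)
points = np.fromfile("data/xmu/seq00/ouster/000000.bin", dtype=np.float32)
points = points.reshape(-1, 4)  # assumed channels: x, y, z, intensity
print(points.shape)
print(points[:3])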

Get Started

1. Installation and Data Preparation

A. Clone this repository.

git clone https://github.com/im-djh/CMD.git

B. Create a virtual environment.

conda create -n xmuda python=3.8

C. Install the requirements (tested with CUDA 11.4, 11.6, and 11.7).

conda activate xmuda
pip install torch==1.13.0+cu116 torchvision==0.14.0+cu116 torchaudio==0.13.0 --extra-index-url https://download.pytorch.org/whl/cu116
pip install spconv-cu116
pip install -r requirements.txt
python setup.py develop
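
Optionally, a quick sanity check (not part of the original steps) that PyTorch sees the GPU and that spconv imports cleanly:

import torch
import spconv.pytorch  # spconv 2.x exposes its PyTorch API under this module

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("spconv.pytorch imported OK")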

D. Download the dataset and create the dataset infos.

ln -s <path-to-the-downloaded-dataset> /xmuda/data/xmu

All files will be organized as follows:

CMD
├── data
│   └── xmu
│       ├── ImageSets
│       ├── label
│       └── seq**
├── pcdet
└── tools

Then create the dataset infos and the ground-truth database:

python -m pcdet.datasets.xmu.xmu_dataset --func create_xmu_infos --cfg_file tools/cfgs/dataset_configs/xmu/xmuda_dataset.yaml
python -m pcdet.datasets.xmu.xmu_dataset --func create_groundtruth_database --cfg_file tools/cfgs/dataset_configs/xmu/xmu_dataset.yaml
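
Optionally, a hedged sanity check that the info files were produced. It only assumes the steps above write pickle files somewhere under data/xmu; adjust the glob pattern if the output lands elsewhere.

import glob
import pickle

# filenames are assumptions; list whatever .pkl files the steps above produced
for path in sorted(glob.glob("data/xmu/**/*.pkl", recursive=True)):
    with open(path, "rb") as f:
        infos = pickle.load(f)
    count = len(infos) if hasattr(infos, "__len__") else "?"
    print(path, "->", count, "entries")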

E. For further steps, please refer to OpenPCDet.

Experimental Results

All LiDAR-based models are trained with 4 RTX 3090 GPUs. Due to slight differences in annotation and calculation rules, there may be minor discrepancies between these results and those reported in the paper.

Model Zoo

3D Object Detection Baselines

Selected supported methods are shown in the tables below. The results are the 3D detection performance on the val set of our CMD dataset.

Ouster

| AP@50       | Car   | Truck | Pedestrian | Cyclist | mAP   |
|-------------|-------|-------|------------|---------|-------|
| PointPillar | 41.70 | 18.13 | 3.80       | 37.77   | 25.35 |
| CenterPoint | 40.43 | 18.77 | 11.47      | 45.76   | 29.11 |
| Voxel-RCNN  | 43.20 | 21.70 | 13.70      | 41.32   | 29.98 |
| VoxelNeXt   | 41.40 | 20.98 | 10.25      | 46.14   | 29.70 |

Robosense

| AP@50       | Car   | Truck | Pedestrian | Cyclist | mAP   |
|-------------|-------|-------|------------|---------|-------|
| PointPillar | 47.63 | 18.83 | 6.82       | 36.98   | 27.56 |
| CenterPoint | 49.16 | 21.21 | 2.79       | 44.82   | 29.50 |
| Voxel-RCNN  | 50.61 | 23.97 | 12.86      | 43.17   | 32.65 |
| VoxelNeXt   | 49.56 | 21.66 | 5.64       | 44.45   | 30.33 |

Hesai

| AP@50       | Car   | Truck | Pedestrian | Cyclist | mAP   |
|-------------|-------|-------|------------|---------|-------|
| PointPillar | 42.11 | 18.85 | 6.89       | 33.27   | 25.28 |
| CenterPoint | 42.39 | 19.15 | 4.02       | 37.88   | 25.86 |
| Voxel-RCNN  | 44.85 | 21.84 | 11.63      | 34.81   | 28.28 |
| VoxelNeXt   | 44.19 | 21.57 | 3.66       | 39.47   | 27.22 |
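
mAP in these tables is the arithmetic mean of the four per-class APs. As a quick check against the Ouster PointPillar row:

# per-class AP@50 values copied from the Ouster / PointPillar row above
aps = [41.70, 18.13, 3.80, 37.77]  # Car, Truck, Pedestrian, Cyclist
print(f"mAP = {sum(aps) / len(aps):.2f}")  # prints 25.35, matching the table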

Training

cd ../../tools
# single-GPU training
python train.py --cfg_file cfgs/xmu_ouster_models/centerpoint.yaml
# multi-GPU training (here with 8 GPUs)
bash scripts/dist_train.sh 8 --cfg_file cfgs/xmu_ouster_models/centerpoint.yaml

Evaluation

# single-GPU evaluation
python test.py --cfg_file cfgs/xmu_ouster_models/centerpoint.yaml --ckpt /path/to/your/checkpoint
# multi-GPU evaluation (here with 8 GPUs)
bash scripts/dist_test.sh 8 --cfg_file cfgs/xmu_ouster_models/centerpoint.yaml --ckpt /path/to/your/checkpoint

Todo List

Notes

Citation

If you find our Cross Mechanism Dataset useful in your research, please consider citing:

@inproceedings{dengcmd,
  title={CMD: A Cross Mechanism Domain Adaptation Dataset for 3D Object Detection},
  author={Deng, Jinhao and Ye, Wei and Wu, Hai and Huang, Xun and Xia, Qiming and Li, Xin and Fang, Jin and Li, Wei and Wen, Chenglu and Wang, Cheng},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2024}
}