UniM<sup>2</sup>AE: Multi-modal Masked Autoencoders with Unified 3D Representation for 3D Perception in Autonomous Driving

Paper | BibTeX

This is the official PyTorch implementation of the paper - UniM<sup>2</sup>AE: Multi-modal Masked Autoencoders with Unified 3D Representation for 3D Perception in Autonomous Driving.

*(Figure: overall UniM<sup>2</sup>AE pipeline)*

Results

Pre-training

We provide our pretrained weights. You can load the pretrained UniM<sup>2</sup>AE (UniM<sup>2</sup>AE for BEVFusion and UniM<sup>2</sup>AE-sst-pre for SST) to train the multi-modal detector (BEVFusion) or the LiDAR-only detector (SST).

| Model | Modality | Checkpoint |
| :---: | :---: | :---: |
| UniM<sup>2</sup>AE | C+L | Link |
| UniM<sup>2</sup>AE-sst-pre | L | Link |
| swint-nuImages | C | Link |

Note: The checkpoint (denoted as swint-nuImages) pretrained on nuImages is provided by BEVFusion.

3D Object Detection (on nuScenes validation)

| Model | Modality | mAP | NDS | Checkpoint |
| :---: | :---: | :---: | :---: | :---: |
| TransFusion-L-SST | L | 65.0 | 69.9 | Link |
| UniM<sup>2</sup>AE-L | L | 65.7 | 70.4 | Link |
| BEVFusion-SST | C+L | 68.2 | 71.5 | Link |
| UniM<sup>2</sup>AE | C+L | 68.4 | 71.9 | Link |
| UniM<sup>2</sup>AE w/MMIM | C+L | 69.7 | 72.7 | Link |

3D Object Detection (on nuScenes test)

| Model | Modality | mAP | NDS |
| :---: | :---: | :---: | :---: |
| UniM<sup>2</sup>AE-L | L | 67.9 | 72.2 |
| UniM<sup>2</sup>AE | C+L | 70.3 | 73.3 |

Here, we train UniM<sup>2</sup>AE-L and UniM<sup>2</sup>AE on the trainval split of the nuScenes dataset and test them without any test-time augmentation.

BEV Map Segmentation (on nuScenes validation)

| Model | Modality | mIoU | Checkpoint |
| :---: | :---: | :---: | :---: |
| BEVFusion | C | 51.2 | Link |
| UniM<sup>2</sup>AE | C | 52.9 | Link |
| BEVFusion-SST | C+L | 61.3 | Link |
| UniM<sup>2</sup>AE | C+L | 61.4 | Link |
| UniM<sup>2</sup>AE w/MMIM | C+L | 67.8 | Link |

Prerequisites

Pre-training

After installing the required dependencies, run the following to install the codebase:

```bash
cd Pretrain
python setup.py develop
```

Fine-tuning

The fine-tuning code is built on different libraries. Please refer to BEVFusion and Voxel-MAE for the corresponding setup instructions.

Data Preparation

We follow the instructions from here to download the nuScenes dataset. Please remember to download both the detection dataset and the map extension for BEV map segmentation.

After downloading the nuScenes dataset, preprocess it by running:

```bash
cd Finetune/bevfusion/
python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes
```

and create the soft links in Pretrain/data and Finetune/sst/data with ln -s.
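For example, the soft links can be created as in the following sketch. The exact layout here is an assumption (it mirrors the directory structure shown below, with the preprocessed data living under Finetune/bevfusion/data); adjust the paths to where your data actually resides:

```shell
# Hypothetical layout: run from the repository root (UniM2AE/).
# Both Pretrain and Finetune/sst reuse the nuScenes data prepared
# under Finetune/bevfusion/data.
mkdir -p Pretrain Finetune/sst                            # make sure the link parents exist
ln -sfn "$(pwd)/Finetune/bevfusion/data" Pretrain/data     # Pretrain/data -> shared data dir
ln -sfn "$(pwd)/Finetune/bevfusion/data" Finetune/sst/data # Finetune/sst/data -> shared data dir
```

Using absolute paths (via `$(pwd)`) keeps the links valid regardless of the directory you later run training from.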

After data preparation, the directory structure is as follows:

```
UniM2AE
├── Finetune
│   ├── bevfusion
│   │   ├── tools
│   │   ├── configs
│   │   ├── data
│   │   │   ├── can_bus
│   │   │   │   ├── ...
│   │   │   ├── nuscenes
│   │   │   │   ├── maps
│   │   │   │   ├── samples
│   │   │   │   ├── sweeps
│   │   │   │   ├── v1.0-test
│   │   │   │   ├── v1.0-trainval
│   │   │   │   ├── nuscenes_database
│   │   │   │   ├── nuscenes_infos_train.pkl
│   │   │   │   ├── nuscenes_infos_val.pkl
│   │   │   │   ├── nuscenes_infos_test.pkl
│   │   │   │   ├── nuscenes_dbinfos_train.pkl
│   ├── sst
│   │   ├── data
│   │   │   ├── nuscenes
│   │   │   │   ├── ...
├── Pretrain
│   ├── mmdet3d
│   ├── tools
│   ├── configs
│   ├── data
│   │   ├── can_bus
│   │   │   ├── ...
│   │   ├── nuscenes
│   │   │   ├── ...
```

Pre-training

Training

Please run:

```bash
cd Pretrain
bash tools/dist_train.sh configs/unim2ae_mmim.py 8
```

and then convert the pre-trained checkpoint into the format expected for fine-tuning:

```bash
cd Pretrain
python tools/convert.py --source work_dirs/unim2ae_mmim/epoch_200.pth --target ../Finetune/bevfusion/pretrained/unim2ae-pre.pth
```

Visualization

To get the reconstruction results of the images and the LiDAR point cloud, please run:

```bash
cd Pretrain
python tools/test.py configs/unim2ae_mmim.py --checkpoint [pretrain checkpoint path] --show-pretrain --show-dir viz
```

Fine-tuning

We provide instructions to finetune BEVFusion and Voxel-MAE.

BEVFusion

Training

If you want to train the LiDAR-only UniM<sup>2</sup>AE-L for object detection, please run:

```bash
cd Finetune/bevfusion
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/det/transfusion/secfpn/lidar/sstv2.yaml --load_from pretrained/unim2ae-lidar-only-pre.pth
```

For the UniM<sup>2</sup>AE w/MMIM detection model, please run:

```bash
cd Finetune/bevfusion

python tools/convert.py --source [lidar-only UniM2AE-L checkpoint file path] --fuser pretrained/unim2ae-pre.pth --target pretrained/unim2ae-stage1.pth --stage2

torchpack dist-run -np 8 python tools/train.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/unim2ae_MMIM.yaml --load_from pretrained/unim2ae-stage1.pth
```

If you want to initialize the camera backbone with weights pretrained on nuImages, please run:

```bash
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/unim2ae_MMIM.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth --load_from pretrained/unim2ae-stage1-L.pth
```

For the UniM<sup>2</sup>AE detection model, please run:

```bash
cd Finetune/bevfusion

torchpack dist-run -np 8 python tools/train.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/bevfusion_sst.yaml --load_from pretrained/unim2ae-stage1.pth
```

If you want to initialize the camera backbone with weights pretrained on nuImages, please run:

```bash
cd Finetune/bevfusion

torchpack dist-run -np 8 python tools/train.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/bevfusion_sst.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth --load_from pretrained/unim2ae-L-det.pth
```

Note: The unim2ae-L.pth checkpoint is the result of training the LiDAR-only UniM<sup>2</sup>AE-L for object detection.


For the camera-only UniM<sup>2</sup>AE segmentation model, please run:

```bash
cd Finetune/bevfusion
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/seg/camera-bev256d2.yaml --load_from pretrained/unim2ae-seg-c-pre.pth
```

For the UniM<sup>2</sup>AE segmentation model, please run:

```bash
cd Finetune/bevfusion
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/seg/fusion-sst.yaml --load_from pretrained/unim2ae-pre.pth
```

If you want to initialize the camera backbone with weights pretrained on nuImages, please run:

```bash
cd Finetune/bevfusion
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/seg/fusion-sst.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth --load_from pretrained/unim2ae-seg-pre.pth
```

For the UniM<sup>2</sup>AE w/MMIM segmentation model, please run:

```bash
cd Finetune/bevfusion
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/seg/unim2ae_MMIM.yaml --load_from pretrained/unim2ae-pre.pth
```

If you want to initialize the camera backbone with weights pretrained on nuImages, please run:

```bash
cd Finetune/bevfusion
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/seg/unim2ae_MMIM.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth --load_from pretrained/unim2ae-seg-pre.pth
```

Evaluation

Please run:

```bash
cd Finetune/bevfusion
torchpack dist-run -np 8 python tools/test.py [config file path] pretrained/[checkpoint name].pth --eval [evaluation type]
```

For example, if you want to evaluate the detection model, please run:

```bash
cd Finetune/bevfusion
torchpack dist-run -np 8 python tools/test.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/unim2ae_MMIM.yaml pretrained/unim2ae-mmim-det.pth --eval bbox
```

If you want to evaluate the segmentation model, please run:

```bash
cd Finetune/bevfusion
torchpack dist-run -np 8 python tools/test.py configs/nuscenes/seg/unim2ae_MMIM.yaml pretrained/unim2ae-mmim-seg.pth --eval map
```

SST

Training

To train the LiDAR-only anchor-based detector, please run:

```bash
cd Finetune/sst
bash tools/dist_train.sh configs/sst_refactor/sst_10sweeps_VS0.5_WS16_ED8_epochs288_intensity.py 8 --cfg-options 'load_from=pretrained/unim2ae-sst-pre.pth'
```

Evaluation

To evaluate the LiDAR-only anchor-based detector, please run:

```bash
cd Finetune/sst
bash tools/dist_test.sh configs/sst_refactor/sst_10sweeps_VS0.5_WS16_ED8_epochs288_intensity.py [checkpoint file path] 8
```

Acknowledgement

UniM<sup>2</sup>AE is based on mmdetection3d. This repository is also inspired by the following outstanding contributions to the open-source community: 3DETR, BEVFormer, DETR, BEVFusion, MAE, Voxel-MAE, GreenMIM, SST, TransFusion.

Citation

If you find UniM<sup>2</sup>AE helpful to your research, please consider citing our work:

```bibtex
@article{zou2023unim,
  title={UniM$^2$AE: Multi-modal Masked Autoencoders with Unified 3D Representation for 3D Perception in Autonomous Driving},
  author={Zou, Jian and Huang, Tianyu and Yang, Guanglei and Guo, Zhenhua and Zuo, Wangmeng},
  journal={arXiv preprint arXiv:2308.10421},
  year={2023}
}
```