SimIPU

SimIPU: Simple 2D Image and 3D Point Cloud Unsupervised Pre-Training for Spatial-Aware Visual Representations

Zhenyu Li, Zehui Chen, Ang Li, Liangji Fang, Qinhong Jiang, Xianming Liu, Junjun Jiang, Bolei Zhou, Hang Zhao

AAAI 2022 (arXiv pdf)


Usage

Installation

This repo is tested on python=3.7, cuda=10.1, pytorch=1.6.0, mmcv-full=1.3.4, mmdetection=2.11.0, mmsegmentation=0.13.0 and mmdetection3D=0.13.0.

Note: mmdetection and mmdetection3D introduced breaking compatibility changes in their recent releases, so their latest versions do not work with this repo. Make sure you install the exact versions listed above.

Follow the instructions below to install:

conda create -n simipu python=3.7
conda activate simipu
git clone https://github.com/zhyever/SimIPU.git
cd SimIPU
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.1 -c pytorch
pip install mmcv-full==1.3.4 -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.6.0/index.html
git clone https://github.com/open-mmlab/mmdetection.git
cd ./mmdetection
git checkout v2.11.0
pip install -r requirements/build.txt
pip install -v -e .
cd ..
pip install mmsegmentation==0.13.0
# make sure you are back in the SimIPU root directory
pip install -v -e .
conda install future
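
If you want to double-check the environment, a quick sanity check like the following (just a convenience, not part of the official setup) prints the installed versions:

```python
# Optional sanity check: print installed versions and compare them
# against the tested versions listed above.
import torch, mmcv, mmdet, mmseg
print(torch.__version__)  # expect 1.6.0
print(mmcv.__version__)   # expect 1.3.4
print(mmdet.__version__)  # expect 2.11.0
print(mmseg.__version__)  # expect 0.13.0
```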

Data Preparation

Download the KITTI dataset and organize it following the official mmdetection3D instructions. Then generate the training data by running:

python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti
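
After the script finishes, data/kitti should roughly follow the standard mmdetection3D layout sketched below (the generated .pkl info files and velodyne_reduced folder are produced by the script):

```
data/kitti
├── ImageSets
├── training
│   ├── calib
│   ├── image_2
│   ├── label_2
│   └── velodyne
├── testing
│   ├── calib
│   ├── image_2
│   └── velodyne
└── kitti_infos_*.pkl   # generated by create_data.py
```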

If you would like to run the monocular 3D detection experiments on nuScenes, follow the official mmdetection3D instructions to prepare the nuScenes dataset.
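
For example (paths assume the default ./data/nuscenes layout of mmdetection3D):

python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes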

For Waymo pre-training, we have no plan to release the corresponding data-preparation scripts in the near future. Some of them are available in project_cl/tools/, but I do not have the resources to reproduce the full Waymo pre-training process. Since the paper describes how to prepare the Waymo dataset, feel free to contact me if you run into problems and I will be glad to help.

Pre-training on KITTI

bash tools/dist_train.sh project_cl/configs/simipu/simipu_kitti.py 8 --work-dir work_dir/your/work/dir
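
If distributed training is not an option, the standard single-GPU entry point inherited from mmdetection3D should also work (a sketch, assuming this repo keeps the stock tools/train.py; you may need to adjust the learning rate for the smaller total batch size):

python tools/train.py project_cl/configs/simipu/simipu_kitti.py --work-dir work_dir/your/work/dir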

Downstream Evaluation

1. Camera-LiDAR fusion-based 3D object detection on the KITTI dataset.

Remember to point the config at your pre-trained model by setting the load_from key in the config.
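
For example, at the bottom of the config (the checkpoint path below is a placeholder for your own pre-trained weights):

```python
# project_cl/configs/kitti_det3d/moca_r50_kitti.py
load_from = 'work_dir/simipu_kitti/latest.pth'  # placeholder: your SimIPU checkpoint
```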

bash tools/dist_train.sh project_cl/configs/kitti_det3d/moca_r50_kitti.py 8 --work-dir work_dir/your/work/dir

2. Monocular 3D object detection on the nuScenes dataset.

Remember to point the config at your pre-trained model by setting the load_from key. Before training, you also need to align the key names in checkpoint['state_dict']; see project_cl/tools/convert_pretrain_imgbackbone.py for details.
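
A minimal sketch of that key alignment, assuming the pre-trained weights store the image branch under an 'img_backbone.' prefix while the detector expects plain 'backbone.' keys (the exact prefixes and paths are placeholders; the actual mapping lives in the script above):

```python
import torch

# Placeholder paths: adapt to your own checkpoint locations.
ckpt = torch.load('simipu_pretrain.pth', map_location='cpu')
new_state_dict = {
    # Keep only image-backbone weights and rename their prefix.
    k.replace('img_backbone.', 'backbone.', 1): v
    for k, v in ckpt['state_dict'].items()
    if k.startswith('img_backbone.')
}
torch.save({'state_dict': new_state_dict}, 'simipu_imgbackbone.pth')
```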

bash tools/dist_train.sh project_cl/configs/fcos3d_mono3d/fcos3d_r50_nus.py 8 --work-dir work_dir/your/work/dir

3. Monocular Depth Estimation on KITTI/NYU dataset.

See Depth-Estimation-Toolbox.

Pre-trained Model and Results

We provide pre-trained models. By default, "Waymo" (or "Full Waymo") denotes the Waymo dataset subsampled with load_interval=5; we use discrete frames to increase training diversity, since earlier experiments showed only a slight improvement with load_interval=1. Hence "1/10 Waymo" actually means 1/5 (load_interval=5) × 1/10 (the first 1/10 of the scenes) = 1/50 of the full Waymo data.
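
In mmdetection3D-style configs, this subsampling is controlled by the dataset's load_interval field; the sketch below illustrates only that one key, with everything else omitted:

```python
# Dataset config sketch: keep every 5th frame of the Waymo training split.
# Key names follow mmdetection3D conventions; other settings are omitted.
data = dict(
    train=dict(
        type='WaymoDataset',
        load_interval=5,  # "Full Waymo" in the tables below
        # ...
    ))
```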

| Dataset | Model |
| ------- | ----- |
| SimIPU KITTI | link |
| SimIPU Waymo | link |
| SimIPU ImageNet Sup + Waymo SimIPU | link |

Fusion-based 3D object detection results.

| Method | AP40@Easy | AP40@Mod. | AP40@Hard | Link |
| ------ | --------- | --------- | --------- | ---- |
| MoCa | 81.32 | 70.88 | 66.19 | Log |

Monocular 3D object detection results.

| Method | Pre-train | mAP | Link |
| ------ | --------- | --- | ---- |
| FCOS3D | Scratch | 17.9 | Log |
| FCOS3D | 1/10 Waymo SimIPU | 20.3 | Log |
| FCOS3D | 1/5 Waymo SimIPU | 22.5 | Log |
| FCOS3D | 1/2 Waymo SimIPU | 24.7 | Log |
| FCOS3D | Full Waymo SimIPU | 26.2 | Log |
| FCOS3D | ImageNet Sup | 27.7 | Log |
| FCOS3D | ImageNet Sup + Full Waymo SimIPU | 28.4 | Log |

Citation

If you find our work useful for your research, please consider citing the paper:

@article{li2021simipu,
  title={SimIPU: Simple 2D Image and 3D Point Cloud Unsupervised Pre-Training for Spatial-Aware Visual Representations},
  author={Li, Zhenyu and Chen, Zehui and Li, Ang and Fang, Liangji and Jiang, Qinhong and Liu, Xianming and Jiang, Junjun and Zhou, Bolei and Zhao, Hang},
  journal={arXiv preprint arXiv:2112.04680},
  year={2021}
}