ProposalContrast: Unsupervised Pre-training for LiDAR-based 3D Object Detection

This repository contains the PyTorch implementation of the ECCV 2022 paper, ProposalContrast: Unsupervised Pre-training for LiDAR-based 3D Object Detection. The work addresses unsupervised pre-training of 3D backbones via proposal-wise contrastive learning in the context of autonomous driving.

Updates

Abstract

Existing approaches to unsupervised point cloud pre-training are constrained to either scene-level or point/voxel-level instance discrimination. Scene-level methods tend to lose the local details that are crucial for recognizing road objects, while point/voxel-level methods inherently suffer from a limited receptive field that is incapable of perceiving large objects or contextual environments. Considering that region-level representations are more suitable for 3D object detection, we devise a new unsupervised point cloud pre-training framework, called ProposalContrast, that learns robust 3D representations by contrasting region proposals. Specifically, with an exhaustive set of region proposals sampled from each point cloud, geometric point relations within each proposal are modeled to create expressive proposal representations. To better accommodate the properties of 3D detection, ProposalContrast jointly optimizes inter-cluster and inter-proposal separation, i.e., it sharpens the discriminativeness of proposal representations across semantic classes and object instances. The generalizability and transferability of ProposalContrast are verified on various 3D detectors (i.e., PV-RCNN, CenterPoint, PointPillars and PointRCNN) and datasets (i.e., KITTI, Waymo and ONCE).
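To make the contrastive objective concrete, below is a minimal PyTorch sketch of the inter-proposal term: an InfoNCE loss over embeddings of the same proposals under two augmented views. It is an illustration only, not this repository's implementation; the function name, shapes and temperature are assumptions, and the paper's inter-cluster (class-separation) term is omitted.

```python
import torch
import torch.nn.functional as F

def proposal_info_nce(z1, z2, temperature=0.1):
    """Hypothetical inter-proposal InfoNCE loss.

    z1, z2: (P, D) embeddings of the same P proposals under two augmented
    views. The matching proposal across views is the positive; all other
    proposals in the batch act as negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature              # (P, P) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Toy usage: 64 proposals with 128-D embeddings per view.
z1, z2 = torch.randn(64, 128), torch.randn(64, 128)
loss = proposal_info_nce(z1, z2)
```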

Citation

If you find our project helpful, please cite:

@inproceedings{yin2022proposal,
  title={ProposalContrast: Unsupervised Pre-training for LiDAR-based 3D Object Detection},
  author={Yin, Junbo and Zhou, Dingfu and Zhang, Liangjun and Fang, Jin and Xu, Cheng-Zhong and Shen, Jianbing and Wang, Wenguan},
  booktitle={ECCV},
  year={2022}
}

Main Results

3D Detection on Waymo validation set.

| Model | Paradigm | Veh_L2 | Ped_L2 | Cyc_L2 | Overall |
|---|---|---|---|---|---|
| CenterPoint (PillarNet) | Scratch | 60.67 | 51.55 | 55.28 | 55.83 |
| ProposalContrast (PillarNet) | Fine-tuning | 63.03 | 53.16 | 57.31 | 57.83 |

| Model | Paradigm | Veh_L2 | Ped_L2 | Cyc_L2 | mAPH |
|---|---|---|---|---|---|
| CenterPoint (VoxelNet) | Scratch | 63.10 | 58.66 | 66.54 | 62.77 |
| ProposalContrast (VoxelNet) | Fine-tuning | 64.14 | 60.07 | 67.31 | 63.84 |

Data-efficient 3D Detection on Waymo.

| Model (VoxelNet) | Paradigm | Veh_L2 | Ped_L2 | Cyc_L2 | Overall |
|---|---|---|---|---|---|
| CenterPoint | 5%, Scratch | 41.56 | 34.34 | 44.46 | 40.12 |
| ProposalContrast | 5%, Fine-tuning | 50.66 | 43.69 | 54.46 | 49.60 |
| CenterPoint | 10%, Scratch | 52.59 | 44.28 | 55.97 | 50.95 |
| ProposalContrast | 10%, Fine-tuning | 57.43 | 51.26 | 59.77 | 56.15 |
| CenterPoint | 20%, Scratch | 58.43 | 51.02 | 62.29 | 57.25 |
| ProposalContrast | 20%, Fine-tuning | 61.54 | 56.05 | 64.14 | 60.58 |
| CenterPoint | 50%, Scratch | 62.87 | 58.20 | 66.60 | 62.56 |
| ProposalContrast | 50%, Fine-tuning | 63.96 | 60.16 | 67.49 | 63.87 |

Use ProposalContrast

Installation

Please refer to INSTALL to build the required libraries. Our project supports both SpConv v1 and SpConv v2.
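SpConv v2 moved its PyTorch modules into the spconv.pytorch namespace, while v1 exposes them at the package top level, so codebases that support both typically guard the import. A minimal sketch of such a shim (assuming one of the two versions is installed):

```python
# Import shim for SpConv: v2.x ships its PyTorch ops under spconv.pytorch,
# v1.x at the package top level. Raises ImportError if neither is installed.
try:
    import spconv.pytorch as spconv  # SpConv v2.x
    SPCONV_VERSION = 2
except ImportError:
    import spconv                    # SpConv v1.x
    SPCONV_VERSION = 1
```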

Data Preparation

Currently, this repo supports pre-training and fine-tuning on the Waymo 3D object detection dataset. Please prepare the dataset according to WAYMO.

Training and Evaluation

We evaluate the unsupervised pre-training performance of the 3D models in the context of LiDAR-based 3D object detection. The scripts for pre-training, fine-tuning and evaluation can be found in RUN_MODEL. We currently support 3D backbones such as VoxelNet and PillarNet.
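Fine-tuning from a pre-trained checkpoint typically amounts to initializing the detector's 3D backbone with the pre-trained weights and training the detection heads from scratch. A hedged sketch of that initialization follows; the checkpoint path, the "backbone." key prefix and the stand-in detector are all assumptions, not this repository's actual names.

```python
import torch
import torch.nn as nn

# Stand-in detector; in practice this would be e.g. CenterPoint built from a config.
class TinyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(16, 16)  # part initialized from pre-training
        self.head = nn.Linear(16, 3)       # part trained from scratch

model = TinyDetector()

# "proposal_contrast_pretrain.pth" is an illustrative path, not a file this
# repo ships; likewise the "backbone." key prefix is an assumption.
ckpt = torch.load("proposal_contrast_pretrain.pth", map_location="cpu")
state = ckpt.get("state_dict", ckpt)
backbone_state = {k: v for k, v in state.items() if k.startswith("backbone.")}
missing, unexpected = model.load_state_dict(backbone_state, strict=False)
print(f"loaded {len(backbone_state)} backbone tensors; {len(missing)} params kept at init")
```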

License

This project is released under the MIT license; see LICENSE.

Acknowledgement

Our project builds in part on the following codebases. We would like to thank them for their contributions.