ONCE Benchmark

This is a reproduced benchmark for 3D object detection on the ONCE (One Million Scenes) dataset.

The code is mainly based on OpenPCDet.

Introduction

We provide the dataset API and some reproduced models on the ONCE dataset.

Installation

The repo is based on OpenPCDet. If you have already installed OpenPCDet (version >= v0.3.0), you can skip this part and use the existing environment, but remember to recompile the CUDA operators:

python setup.py develop
cd pcdet/ops/dcn
python setup.py develop

If you haven't installed OpenPCDet, please refer to INSTALL.md for the installation.

Getting Started

Please refer to GETTING_STARTED.md to learn more about how to use this project.

Benchmark

Please refer to this page for detailed benchmark results. We cannot release the training checkpoints, but the results are easy to reproduce with the provided configurations.

Detection Models

We provide one fusion-based and five point-cloud-based 3D detectors. The training configurations are at tools/cfgs/once_models/sup_models/*.yaml.

For PointPainting, you first have to generate the segmentation results yourself. We used an HRNet trained on Cityscapes to generate the segmentation masks.
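For intuition, the "painting" step can be sketched as follows: each LiDAR point is projected into the image, and the segmentation scores of the pixel it lands on are appended to the point's features. This is a simplified illustration, not this repo's implementation; the pinhole projection model and all names here are assumptions.

```python
# Minimal sketch of the PointPainting "painting" idea (NOT this repo's code).
# Assumes points are already in the camera frame and a simple pinhole model.

def paint_points(points, seg_scores, fx, fy, cx, cy):
    """Append per-pixel segmentation scores to each 3D point.

    points:     list of (x, y, z) in the camera frame, z > 0 (forward)
    seg_scores: seg_scores[v][u] is a list of class scores for pixel (u, v)
    fx, fy, cx, cy: pinhole camera intrinsics (hypothetical values)
    """
    h, w = len(seg_scores), len(seg_scores[0])
    num_classes = len(seg_scores[0][0])
    painted = []
    for x, y, z in points:
        # Pinhole projection into pixel coordinates
        u = int(fx * x / z + cx)
        v = int(fy * y / z + cy)
        if 0 <= u < w and 0 <= v < h:
            painted.append((x, y, z, *seg_scores[v][u]))
        else:
            # Points that fall outside the image keep zero scores
            painted.append((x, y, z, *([0.0] * num_classes)))
    return painted
```

In practice the real pipeline also handles the LiDAR-to-camera extrinsics and multiple cameras; this sketch only shows why per-point semantic scores end up as extra input channels for the detector.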

Reproduced results on the validation split (trained on the training split):

| Method | Vehicle | Pedestrian | Cyclist | mAP |
|---|---|---|---|---|
| PointRCNN | 52.09 | 4.28 | 29.84 | 28.74 |
| PointPillars | 68.57 | 17.63 | 46.81 | 44.34 |
| SECOND | 71.19 | 26.44 | 58.04 | 51.89 |
| PV-RCNN | 77.77 | 23.50 | 59.37 | 53.55 |
| CenterPoints | 66.79 | 49.90 | 63.45 | 60.05 |
| PointPainting | 66.17 | 44.84 | 62.34 | 57.78 |
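The mAP column is the unweighted mean of the three per-category APs, rounded to two decimals, which can be checked directly against the table:

```python
# Sanity check: mAP = mean of the three per-category APs (two-decimal rounding).

def mean_ap(vehicle, pedestrian, cyclist):
    return round((vehicle + pedestrian + cyclist) / 3, 2)

print(mean_ap(71.19, 26.44, 58.04))  # SECOND  -> 51.89
print(mean_ap(77.77, 23.50, 59.37))  # PV-RCNN -> 53.55
```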

Semi-supervised Learning

We provide five semi-supervised methods based on the SECOND detector. The training configurations are at tools/cfgs/once_models/semi_learning_models/*.yaml.

Note that all the methods are implemented by ourselves, and some are modified to attain better performance, so our implementations may differ considerably from the original versions.

Reproduced results on the validation split (semi-supervised learning on the 100k raw_small subset):

| Method | Vehicle | Pedestrian | Cyclist | mAP |
|---|---|---|---|---|
| baseline (SECOND) | 71.19 | 26.44 | 58.04 | 51.89 |
| Pseudo Label | 72.80 | 25.50 | 55.37 | 51.22 |
| Noisy Student | 73.69 | 28.81 | 54.67 | 52.39 |
| Mean Teacher | 74.46 | 30.54 | 61.02 | 55.34 |
| SESS | 73.33 | 27.31 | 59.52 | 53.39 |
| 3DIoUMatch | 73.81 | 30.86 | 56.77 | 53.81 |
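Several of these methods share a teacher-student structure. In Mean Teacher, for example, the teacher's weights are an exponential moving average (EMA) of the student's weights. A generic sketch of the EMA update, not this repo's code, with an illustrative decay value:

```python
# Generic Mean Teacher EMA update (illustrative; not this repo's implementation).
# After each student optimizer step, each teacher parameter is moved toward
# the corresponding student parameter by a small fraction (1 - decay).

def ema_update(teacher_params, student_params, decay=0.999):
    """Return decay * teacher + (1 - decay) * student, element-wise."""
    return [decay * t + (1.0 - decay) * s
            for t, s in zip(teacher_params, student_params)]

teacher = [0.0, 1.0]
student = [1.0, 1.0]
teacher = ema_update(teacher, student, decay=0.9)
print(teacher)  # approximately [0.1, 1.0]
```

The teacher then produces the (pseudo-)targets on unlabeled scenes while only the student is trained by gradient descent, which is the common trait of Mean Teacher, SESS, and related methods.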

Unsupervised Domain Adaptation

This part of the code is based on ST3D. Please copy the configurations at tools/cfgs/once_models/uda_models/* and tools/cfgs/dataset_configs/da_once_dataset.yaml, as well as the dataset file pcdet/datasets/once/once_target_dataset.py, to the ST3D repo. The results can then be reproduced by following their instructions.

| Task | Waymo_to_ONCE | nuScenes_to_ONCE | ONCE_to_KITTI |
|---|---|---|---|
| Method | AP_BEV/AP_3D | AP_BEV/AP_3D | AP_BEV/AP_3D |
| Source Only | 65.55/32.88 | 46.85/23.74 | 42.01/12.11 |
| SN | 67.97/38.25 | 62.47/29.53 | 48.12/21.12 |
| ST3D | 68.05/48.34 | 42.53/17.52 | 86.89/41.42 |
| Oracle | 89.00/77.50 | 89.00/77.50 | 83.29/73.45 |
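For context, SN (statistical normalization) compensates for the object-size bias between domains by shifting each source-domain box's dimensions by the gap between the source and target mean object sizes. A toy sketch of that resizing idea, with made-up numbers, not ST3D's actual code:

```python
# Toy sketch of the SN box-resizing idea (hypothetical values; not ST3D's code).
# Each source box's (length, width, height) is shifted by the per-dimension
# difference between the target-domain and source-domain mean sizes.

def normalize_box_size(box_lwh, source_mean_lwh, target_mean_lwh):
    """Shift a source box's dimensions by (target mean - source mean)."""
    return [dim + (t - s)
            for dim, s, t in zip(box_lwh, source_mean_lwh, target_mean_lwh)]

# Example: source cars are larger on average than target cars,
# so a source box is shrunk toward the target statistics.
print(normalize_box_size([4.5, 1.9, 1.6], [4.7, 1.9, 1.7], [4.0, 1.7, 1.5]))
# approximately [3.8, 1.7, 1.4]
```

The full method also rescales the points inside each box accordingly; this sketch only conveys why size statistics alone can close much of the domain gap, as the SN row in the table suggests.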

Citation

If you find this project useful in your research, please consider citing:

@article{mao2021one,
  title={One Million Scenes for Autonomous Driving: ONCE Dataset},
  author={Mao, Jiageng and Niu, Minzhe and Jiang, Chenhan and Liang, Hanxue and Liang, Xiaodan and Li, Yamin and Ye, Chaoqiang and Zhang, Wei and Li, Zhenguo and Yu, Jie and others},
  journal={NeurIPS},
  year={2021}
}