<img src="docs/open_mmlab.png" align="right" width="30%">

# OpenPCDet

OpenPCDet is a clear, simple, self-contained open source project for LiDAR-based 3D object detection.

It is also the official code release of [PointRCNN], [Part-A2-Net], [PV-RCNN], [Voxel R-CNN], [PV-RCNN++] and [MPPNet].

## Overview
- [Changelog](#changelog)
- [Introduction](#introduction)
- [Model Zoo](#model-zoo)
- [Installation](#installation)
- [Quick Demo](#quick-demo)
- [Getting Started](#getting-started)
- [License](#license)
- [Acknowledgement](#acknowledgement)
- [Citation](#citation)
- [Contribution](#contribution)

## Changelog

[2023-06-30] NEW: Added support for DSVT, which achieves state-of-the-art performance on the large-scale Waymo Open Dataset with real-time inference speed (27 Hz with TensorRT).

[2023-05-13] NEW: Added support for multi-modal 3D object detection models on the NuScenes dataset.

[2023-04-02] Added support for VoxelNeXt on the NuScenes, Waymo, and Argoverse2 datasets. VoxelNeXt is a fully sparse 3D object detection network built from clean sparse CNNs that predicts 3D objects directly from voxels.

[2022-09-02] NEW: Updated OpenPCDet to v0.6.0.

[2022-08-22] Added a custom dataset tutorial and template.

[2022-07-05] Added support for Focals Conv, a 3D object detection backbone network.

[2022-02-12] Added Docker support. Please refer to the guidance in `./docker`.

[2022-02-07] Added support for CenterPoint models on the NuScenes dataset.

[2022-01-14] Added support for dynamic pillar voxelization, following the implementation proposed in H^23D R-CNN, using a unique operation together with the `torch_scatter` package (sketched below).
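For illustration, dynamic voxelization boils down to grouping points by their integer pillar coordinates instead of allocating fixed-size voxel buffers. Below is a minimal, hedged sketch of that idea with `torch.unique` and `torch_scatter`; the function name and parameter values are hypothetical, not OpenPCDet's actual code:

```python
# Hypothetical sketch of dynamic pillar voxelization, not the repo's code.
import torch
from torch_scatter import scatter_mean

def dynamic_pillar_features(points, voxel_size=(0.16, 0.16), pc_min=(0.0, -39.68)):
    """points: (N, C) float tensor whose first two channels are x, y in meters."""
    # Integer pillar coordinate of every point; no fixed-capacity voxel buffers.
    coords = ((points[:, :2] - points.new_tensor(pc_min)) /
              points.new_tensor(voxel_size)).long()
    # One row per occupied pillar; `inverse` maps each point to its pillar id.
    pillar_coords, inverse = torch.unique(coords, dim=0, return_inverse=True)
    # Mean-pool each pillar's point features with a single scatter op.
    return scatter_mean(points, inverse, dim=0), pillar_coords
```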

[2022-01-05] NEW: Updated OpenPCDet to v0.5.2.

[2021-12-09] NEW: Updated OpenPCDet to v0.5.1.

[2021-12-01] NEW: OpenPCDet v0.5.0 is released.

[2021-06-08] Added support for the voxel-based 3D object detection model Voxel R-CNN.

[2021-05-14] Added support for the monocular 3D object detection model CaDDN.

[2020-11-27] Bug fixed: please re-prepare the validation infos of the Waymo dataset (version 1.2) if you would like to use our provided Waymo evaluation tool (see PR). Note that you do not need to re-prepare the training data or the ground-truth database.

[2020-11-10] The Waymo Open Dataset has been supported with state-of-the-art results. We currently provide the configs and results of SECOND, PartA2, and PV-RCNN on the Waymo Open Dataset, and more models can easily be supported by modifying their dataset configs.

[2020-08-10] Bug fixed: the provided NuScenes models have been updated to fix loading bugs. Please re-download them if you need the pretrained NuScenes models.

[2020-07-30] OpenPCDet v0.3.0 is released.

[2020-07-17] Added simple visualization code and a quick demo to test with custom data.

[2020-06-24] OpenPCDet v0.2.0 is released with a new code structure to support more models and datasets.

[2020-03-16] OpenPCDet v0.1.0 is released.

## Introduction

### What does the OpenPCDet toolbox do?

Note that we have upgraded PCDet from v0.1 to v0.2 with a new code structure to support various datasets and models.

OpenPCDet is a general PyTorch-based codebase for 3D object detection from point clouds. It currently supports multiple state-of-the-art 3D object detection methods, with highly refactored code for both one-stage and two-stage 3D detection frameworks.

Based on the OpenPCDet toolbox, we won the Waymo Open Dataset challenge in the 3D Detection, 3D Tracking, and Domain Adaptation tracks among all LiDAR-only methods; the Waymo-related models will be released to OpenPCDet soon.

We are actively updating this repo, and more datasets and models will be supported soon. Contributions are also welcome.

### OpenPCDet design pattern

<p align="center"><img src="docs/dataset_vs_model.png" width="95%" height="320"></p>
<p align="center"><img src="docs/model_framework.png" width="95%"></p>
<p align="center"><img src="docs/multiple_models_demo.png" width="95%"></p>

### Currently Supported Features

## Model Zoo

### KITTI 3D Object Detection Baselines

Selected supported methods are shown in the table below. The results are the 3D detection performance at moderate difficulty on the *val* set of the KITTI dataset.

| | training time | Car@R11 | Pedestrian@R11 | Cyclist@R11 | download |
|---|---|---|---|---|---|
| PointPillar | ~1.2 hours | 77.28 | 52.29 | 62.68 | model-18M |
| SECOND | ~1.7 hours | 78.62 | 52.98 | 67.15 | model-20M |
| SECOND-IoU | - | 79.09 | 55.74 | 71.31 | model-46M |
| PointRCNN | ~3 hours | 78.70 | 54.41 | 72.11 | model-16M |
| PointRCNN-IoU | ~3 hours | 78.75 | 58.32 | 71.34 | model-16M |
| Part-A2-Free | ~3.8 hours | 78.72 | 65.99 | 74.29 | model-226M |
| Part-A2-Anchor | ~4.3 hours | 79.40 | 60.05 | 69.90 | model-244M |
| PV-RCNN | ~5 hours | 83.61 | 57.90 | 70.47 | model-50M |
| Voxel R-CNN (Car) | ~2.2 hours | 84.54 | - | - | model-28M |
| Focals Conv - F | ~4 hours | 85.66 | - | - | model-30M |
| CaDDN (Mono) | ~15 hours | 21.38 | 13.02 | 9.76 | model-774M |

### Waymo Open Dataset Baselines

We provide the `DATA_CONFIG.SAMPLED_INTERVAL` setting on the Waymo Open Dataset (WOD) to subsample a fraction of the samples for training and evaluation, so you can still experiment with WOD by setting a larger `DATA_CONFIG.SAMPLED_INTERVAL` even if you only have limited GPU resources.
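For example, here is a hedged sketch of overriding this setting in Python before building the dataloaders; the `pcdet.config` helpers come from this codebase, but the config path and the interval values are assumptions:

```python
# Hedged example: enlarge SAMPLED_INTERVAL to train/evaluate on fewer frames.
from pcdet.config import cfg, cfg_from_yaml_file

# The config path is an assumption; point it at your local Waymo model config.
cfg_from_yaml_file('tools/cfgs/waymo_models/pv_rcnn.yaml', cfg)

# Keep every 10th frame instead of the config default, cutting GPU cost.
cfg.DATA_CONFIG.SAMPLED_INTERVAL.train = 10
cfg.DATA_CONFIG.SAMPLED_INTERVAL.test = 10
```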

By default, all models are trained single-frame on 20% (~32k frames) of all the training samples with 8 GTX 1080Ti GPUs, and each cell below reports mAP/mAPH calculated by the official Waymo evaluation metrics on the whole validation set (version 1.2).

| Performance@(train with 20% Data) | Vec_L1 | Vec_L2 | Ped_L1 | Ped_L2 | Cyc_L1 | Cyc_L2 |
|---|---|---|---|---|---|---|
| SECOND | 70.96/70.34 | 62.58/62.02 | 65.23/54.24 | 57.22/47.49 | 57.13/55.62 | 54.97/53.53 |
| PointPillar | 70.43/69.83 | 62.18/61.64 | 66.21/46.32 | 58.18/40.64 | 55.26/51.75 | 53.18/49.80 |
| CenterPoint-Pillar | 70.50/69.96 | 62.18/61.69 | 73.11/61.97 | 65.06/55.00 | 65.44/63.85 | 62.98/61.46 |
| CenterPoint-Dynamic-Pillar | 70.46/69.93 | 62.06/61.58 | 73.92/63.35 | 65.91/56.33 | 66.24/64.69 | 63.73/62.24 |
| CenterPoint | 71.33/70.76 | 63.16/62.65 | 72.09/65.49 | 64.27/58.23 | 68.68/67.39 | 66.11/64.87 |
| CenterPoint (ResNet) | 72.76/72.23 | 64.91/64.42 | 74.19/67.96 | 66.03/60.34 | 71.04/69.79 | 68.49/67.28 |
| Part-A2-Anchor | 74.66/74.12 | 65.82/65.32 | 71.71/62.24 | 62.46/54.06 | 66.53/65.18 | 64.05/62.75 |
| PV-RCNN (AnchorHead) | 75.41/74.74 | 67.44/66.80 | 71.98/61.24 | 63.70/53.95 | 65.88/64.25 | 63.39/61.82 |
| PV-RCNN (CenterHead) | 75.95/75.43 | 68.02/67.54 | 75.94/69.40 | 67.66/61.62 | 70.18/68.98 | 67.73/66.57 |
| Voxel R-CNN (CenterHead)-Dynamic-Voxel | 76.13/75.66 | 68.18/67.74 | 78.20/71.98 | 69.29/63.59 | 70.75/69.68 | 68.25/67.21 |
| PV-RCNN++ | 77.82/77.32 | 69.07/68.62 | 77.99/71.36 | 69.92/63.74 | 71.80/70.71 | 69.31/68.26 |
| PV-RCNN++ (ResNet) | 77.61/77.14 | 69.18/68.75 | 79.42/73.31 | 70.88/65.21 | 72.50/71.39 | 69.84/68.77 |

Here we also provide the performance of several models trained on the full training set (see the PV-RCNN++ paper):

| Performance@(train with 100% Data) | Vec_L1 | Vec_L2 | Ped_L1 | Ped_L2 | Cyc_L1 | Cyc_L2 |
|---|---|---|---|---|---|---|
| SECOND | 72.27/71.69 | 63.85/63.33 | 68.70/58.18 | 60.72/51.31 | 60.62/59.28 | 58.34/57.05 |
| CenterPoint-Pillar | 73.37/72.86 | 65.09/64.62 | 75.35/65.11 | 67.61/58.25 | 67.76/66.22 | 65.25/63.77 |
| Part-A2-Anchor | 77.05/76.51 | 68.47/67.97 | 75.24/66.87 | 66.18/58.62 | 68.60/67.36 | 66.13/64.93 |
| VoxelNeXt-2D | 77.94/77.47 | 69.68/69.25 | 80.24/73.47 | 72.23/65.88 | 73.33/72.20 | 70.66/69.56 |
| VoxelNeXt | 78.16/77.70 | 69.86/69.42 | 81.47/76.30 | 73.48/68.63 | 76.06/74.90 | 73.29/72.18 |
| PV-RCNN (CenterHead) | 78.00/77.50 | 69.43/68.98 | 79.21/73.03 | 70.42/64.72 | 71.46/70.27 | 68.95/67.79 |
| PV-RCNN++ | 79.10/78.63 | 70.34/69.91 | 80.62/74.62 | 71.86/66.30 | 73.49/72.38 | 70.70/69.62 |
| PV-RCNN++ (ResNet) | 79.25/78.78 | 70.61/70.18 | 81.83/76.28 | 73.17/68.00 | 73.72/72.66 | 71.21/70.19 |
| DSVT-Pillar | 79.44/78.97 | 71.24/70.81 | 83.00/77.22 | 75.45/69.95 | 76.70/75.70 | 73.83/72.86 |
| DSVT-Voxel | 79.77/79.31 | 71.67/71.25 | 83.75/78.92 | 76.21/71.57 | 77.57/76.58 | 74.70/73.73 |
| PV-RCNN++ (ResNet, 2 frames) | 80.17/79.70 | 72.14/71.70 | 83.48/80.42 | 75.54/72.61 | 74.63/73.75 | 72.35/71.50 |
| MPPNet (4 frames) | 81.54/81.06 | 74.07/73.61 | 84.56/81.94 | 77.20/74.67 | 77.15/76.50 | 75.01/74.38 |
| MPPNet (16 frames) | 82.74/82.28 | 75.41/74.96 | 84.69/82.25 | 77.43/75.06 | 77.28/76.66 | 75.13/74.52 |

We cannot provide the above pretrained models due to the Waymo Dataset License Agreement, but you can easily achieve similar performance by training with the default configs.

### NuScenes 3D Object Detection Baselines

All models are trained with 8 GPUs and are available for download. For training BEVFusion, please refer to the guideline.

| | mATE | mASE | mAOE | mAVE | mAAE | mAP | NDS | download |
|---|---|---|---|---|---|---|---|---|
| PointPillar-MultiHead | 33.87 | 26.00 | 32.07 | 28.74 | 20.15 | 44.63 | 58.23 | model-23M |
| SECOND-MultiHead (CBGS) | 31.15 | 25.51 | 26.64 | 26.26 | 20.46 | 50.59 | 62.29 | model-35M |
| CenterPoint-PointPillar | 31.13 | 26.04 | 42.92 | 23.90 | 19.14 | 50.03 | 60.70 | model-23M |
| CenterPoint (voxel_size=0.1) | 30.11 | 25.55 | 38.28 | 21.94 | 18.87 | 56.03 | 64.54 | model-34M |
| CenterPoint (voxel_size=0.075) | 28.80 | 25.43 | 37.27 | 21.55 | 18.24 | 59.22 | 66.48 | model-34M |
| VoxelNeXt (voxel_size=0.075) | 30.11 | 25.23 | 40.57 | 21.69 | 18.56 | 60.53 | 66.65 | model-31M |
| TransFusion-L* | 27.96 | 25.37 | 29.35 | 27.31 | 18.55 | 64.58 | 69.43 | model-32M |
| BEVFusion | 28.03 | 25.43 | 30.19 | 26.76 | 18.48 | 67.75 | 70.98 | model-157M |

*: Uses the fade strategy, which disables data augmentations in the last several epochs of training (sketched below).
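Conceptually, the fade strategy just switches to an augmentation-free dataloader for the final epochs. The sketch below is a schematic illustration under that assumption, with hypothetical names, not the trainer used in this repo:

```python
# Schematic sketch of the fade strategy; all names here are hypothetical.
def train_with_fade(model, loader_aug, loader_no_aug, train_one_epoch,
                    total_epochs, fade_epochs=5):
    """Train with augmentations, then fade them out for the last epochs."""
    for epoch in range(total_epochs):
        # Use the augmentation-free dataloader for the final `fade_epochs`.
        fading = epoch >= total_epochs - fade_epochs
        train_one_epoch(model, loader_no_aug if fading else loader_aug, epoch)
```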

### ONCE 3D Object Detection Baselines

All models are trained with 8 GPUs.

| | Vehicle | Pedestrian | Cyclist | mAP |
|---|---|---|---|---|
| PointRCNN | 52.09 | 4.28 | 29.84 | 28.74 |
| PointPillar | 68.57 | 17.63 | 46.81 | 44.34 |
| SECOND | 71.19 | 26.44 | 58.04 | 51.89 |
| PV-RCNN | 77.77 | 23.50 | 59.37 | 53.55 |
| CenterPoint | 78.02 | 49.74 | 67.22 | 64.99 |

### Argoverse2 3D Object Detection Baselines

All models are trained with 4 GPUs.

| | mAP | download |
|---|---|---|
| VoxelNeXt | 30.5 | model-32M |

### Other datasets

We welcome support for other datasets via pull requests.

## Installation

Please refer to INSTALL.md for the installation of OpenPCDet.

## Quick Demo

Please refer to DEMO.md for a quick demo to test with a pretrained model and visualize the predicted results on your custom data or the original KITTI data.

## Getting Started

Please refer to GETTING_STARTED.md to learn more about how to use this project.

## License

OpenPCDet is released under the Apache 2.0 license.

## Acknowledgement

OpenPCDet is an open source project for LiDAR-based 3D scene perception that supports multiple LiDAR-based perception models, as shown above. Some parts of PCDet are adapted from the officially released code of the supported methods above, and we thank the authors for their methods and official implementations.

We hope that this repo could serve as a strong and flexible codebase to benefit the research community by speeding up the process of reimplementing previous works and/or developing new methods.

## Citation

If you find this project useful in your research, please consider citing:

    @misc{openpcdet2020,
        title={OpenPCDet: An Open-source Toolbox for 3D Object Detection from Point Clouds},
        author={OpenPCDet Development Team},
        howpublished = {\url{https://github.com/open-mmlab/OpenPCDet}},
        year={2020}
    }

## Contribution

You are welcome to join the OpenPCDet development team by contributing to this repo, and feel free to contact us about any potential contributions.