CenterPoint-PointPillars PyTorch Model Conversion to ONNX and TensorRT

Welcome to CenterPoint! This project is a fork of tianweiy/CenterPoint. It adds code to export the CenterPoint-PointPillars model to ONNX and to deploy the ONNX model with TensorRT.

Center-based 3D Object Detection and Tracking

3D object detection and tracking using center points in the bird's-eye view.

<p align="center"> <img src='docs/teaser.png' align="center" height="230px"> </p>

Center-based 3D Object Detection and Tracking,
Tianwei Yin, Xingyi Zhou, Philipp Krähenbühl,
arXiv technical report (arXiv 2006.11275)

@article{yin2020center,
  title={Center-based 3D Object Detection and Tracking},
  author={Yin, Tianwei and Zhou, Xingyi and Kr{\"a}henb{\"u}hl, Philipp},
  journal={arXiv:2006.11275},
  year={2020},
}

NEWS

[2021-01-06] CenterPoint v1.0 is released. Without bells and whistles, we rank first among all Lidar-only methods on Waymo Open Dataset with a single model that runs at 11 FPS. Check out CenterPoint's model zoo for Waymo and nuScenes.

[2020-12-11] 3 out of the top 4 entries in the recent NeurIPS 2020 nuScenes 3D Detection challenge used CenterPoint. Congratulations to the other participants, and please stay tuned for more updates on nuScenes and Waymo soon.

Contact

Any questions or suggestions are welcome!

Tianwei Yin yintianwei@utexas.edu Xingyi Zhou zhouxy@cs.utexas.edu

Abstract

Three-dimensional objects are commonly represented as 3D boxes in a point-cloud. This representation mimics the well-studied image-based 2D bounding-box detection but comes with additional challenges. Objects in a 3D world do not follow any particular orientation, and box-based detectors have difficulties enumerating all orientations or fitting an axis-aligned bounding box to rotated objects. In this paper, we instead propose to represent, detect, and track 3D objects as points. Our framework, CenterPoint, first detects centers of objects using a keypoint detector and regresses to other attributes, including 3D size, 3D orientation, and velocity. In a second stage, it refines these estimates using additional point features on the object. In CenterPoint, 3D object tracking simplifies to greedy closest-point matching. The resulting detection and tracking algorithm is simple, efficient, and effective. CenterPoint achieved state-of-the-art performance on the nuScenes benchmark for both 3D detection and tracking, with 65.5 NDS and 63.8 AMOTA for a single model. On the Waymo Open Dataset, CenterPoint outperforms all previous single-model methods by a large margin and ranks first among all Lidar-only submissions.
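To make "greedy closest-point matching" concrete, here is a minimal sketch of the idea. The function name, the velocity back-projection, and the distance threshold are illustrative assumptions, not the repo's actual tracker:

```python
import numpy as np

def greedy_match(prev_centers, centers, velocities, dt=0.5, max_dist=2.0):
    """Greedily match detections to last-frame track centers.

    prev_centers: (M, 2) BEV centers of existing tracks
    centers:      (N, 2) BEV centers of current detections
    velocities:   (N, 2) predicted BEV velocities of current detections
    Returns a list of (detection_index, track_index or -1) pairs.
    """
    # Back-project each detection by its predicted velocity so it lines up
    # with where the object was in the previous frame.
    backed_up = centers - velocities * dt
    unmatched = set(range(len(prev_centers)))
    matches = []
    for i, c in enumerate(backed_up):
        if not unmatched:
            matches.append((i, -1))          # no tracks left: new track
            continue
        j = min(unmatched, key=lambda t: np.linalg.norm(c - prev_centers[t]))
        if np.linalg.norm(c - prev_centers[j]) < max_dist:
            matches.append((i, j))           # closest track is near enough
            unmatched.remove(j)
        else:
            matches.append((i, -1))          # too far: start a new track
    return matches
```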

Highlights

Main results

3D detection on Waymo test set

| | #Frame | Veh_L2 | Ped_L2 | Cyc_L2 | MAPH | FPS |
| --- | --- | --- | --- | --- | --- | --- |
| VoxelNet | 1 | 71.9 | 67.0 | 68.2 | 69.0 | 13 |
| VoxelNet | 2 | 73.0 | 71.5 | 71.3 | 71.9 | 11 |

3D detection on Waymo domain adaptation test set

| | #Frame | Veh_L2 | Ped_L2 | Cyc_L2 | MAPH | FPS |
| --- | --- | --- | --- | --- | --- | --- |
| VoxelNet | 2 | 56.1 | 47.8 | 65.2 | 56.3 | 11 |

3D detection on nuScenes test set

| | MAP ↑ | NDS ↑ | PKL ↓ | FPS ↑ |
| --- | --- | --- | --- | --- |
| VoxelNet | 58.0 | 65.5 | 0.69 | 11 |

3D tracking on Waymo test set

| | #Frame | Veh_L2 | Ped_L2 | Cyc_L2 | MOTA | FPS |
| --- | --- | --- | --- | --- | --- | --- |
| VoxelNet | 2 | 59.4 | 56.6 | 60.0 | 58.7 | 11 |

3D Tracking on nuScenes test set

| | AMOTA ↑ | AMOTP ↓ |
| --- | --- | --- |
| VoxelNet (flip test) | 63.8 | 0.555 |

All results are tested on a Titan RTX GPU with batch size 1.

Third-party resources

Use CenterPoint

Installation

Please refer to INSTALL to set up libraries needed for distributed training and sparse convolution.

First, download the model (by default, centerpoint_pillar_512) and put it in work_dirs/centerpoint_pillar_512_demo.

We provide a driving sequence clip from the nuScenes dataset. Download the folder and put it in the main directory.
Then run the demo with python tools/demo.py. If everything is set up correctly, you will see an output video like the following (red: ground-truth objects, blue: predictions):

<p align="center"> <img src='docs/demo.gif' align="center" height="350px"> </p>

Benchmark Evaluation and Training

Please refer to GETTING_START to prepare the data, then follow the instructions there to reproduce our detection and tracking results. All detection configurations are included in configs, and we provide the scripts for all tracking experiments in tracking_scripts.

Export ONNX

The PointPillars model is divided into two parts: pfe (which includes the PillarFeatureNet) and rpn (which includes the RPN and CenterHead). The PointPillarsScatter step itself is not exported; a ScatterND node is used in its place. A sketch of this split appears below.
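The following is a minimal, hedged sketch of that split. `TinyPFE` and `TinyRPN` are toy stand-ins for the repo's PillarFeatureNet and RPN+CenterHead (load the real trained modules instead); the shapes, pillar limits, and opset are assumptions for illustration:

```python
import torch
import torch.nn as nn

class TinyPFE(nn.Module):
    """Toy stand-in for PillarFeatureNet: per-point linear layer,
    max-pooled over the points in each pillar."""
    def __init__(self, in_ch=10, out_ch=64):
        super().__init__()
        self.fc = nn.Linear(in_ch, out_ch)

    def forward(self, pillars):                    # (B, P, N, C_in)
        return self.fc(pillars).max(dim=2).values  # (B, P, C_out)

class TinyRPN(nn.Module):
    """Toy stand-in for RPN + CenterHead: one conv over the BEV canvas."""
    def __init__(self, in_ch=64, num_out=10):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, num_out, 3, padding=1)

    def forward(self, bev):                        # (B, C, H, W)
        return self.conv(bev)

pfe, rpn = TinyPFE().eval(), TinyRPN().eval()

# Part 1: pfe.onnx consumes padded pillars, emits one feature per pillar.
torch.onnx.export(pfe, torch.randn(1, 30000, 20, 10), "pfe.onnx",
                  opset_version=11, input_names=["pillars"],
                  output_names=["pillar_features"])

# Part 2: rpn.onnx consumes the scattered BEV pseudo-image.
torch.onnx.export(rpn, torch.randn(1, 64, 512, 512), "rpn.onnx",
                  opset_version=11, input_names=["spatial_features"],
                  output_names=["heads"])

# What the ScatterND node does between the two graphs (the
# PointPillarsScatter equivalent): write each pillar's feature vector
# into its flattened BEV cell.
features = torch.randn(30000, 64)                  # pfe output
coords = torch.randint(0, 512 * 512, (30000,))     # y * W + x per pillar
canvas = torch.zeros(64, 512 * 512)
canvas[:, coords] = features.t()
bev = canvas.view(1, 64, 512, 512)                 # rpn input
```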

CenterPoint-PointPillars for TensorRT

See the README for details.

Comparison of the TensorRT results with the PyTorch results:

| TensorRT | PyTorch |
| --- | --- |
| (detection visualization) | (detection visualization) |

3D detection on nuScenes Mini dataset

The TensorRT post-processing uses axis-aligned NMS in the bird's-eye view (BEV) rather than rotated NMS, so there is some precision loss; a sketch of the aligned variant follows the table below.

| | mAP | mATE | mASE | mAOE | mAVE | mAAE | NDS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PyTorch | 0.4163 | 0.4438 | 0.4516 | 0.5674 | 0.4429 | 0.3288 | 0.4847 |
| TensorRT | 0.4007 | 0.4433 | 0.4537 | 0.5665 | 0.4416 | 0.3191 | 0.4779 |
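For reference, here is a minimal sketch of axis-aligned BEV NMS. This is a hypothetical helper, not the repo's actual post-processing; rotated-box NMS would compute IoU on the rotated footprints instead:

```python
import numpy as np

def aligned_bev_nms(boxes, scores, iou_thr=0.5):
    """boxes: (N, 4) axis-aligned BEV boxes as (x1, y1, x2, y2);
    returns indices of the kept boxes in descending-score order."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Axis-aligned IoU of the top-scoring box against the remainder.
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_rest = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                    (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_rest - inter + 1e-9)
        order = order[1:][iou <= iou_thr]
    return keep
```

Because the axis-aligned hull of a rotated box overestimates its footprint, aligned IoU tends to exceed rotated IoU and can suppress boxes that rotated NMS would keep, which is consistent with the small metric gap above.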

License

CenterPoint is released under the MIT license (see LICENSE). It is developed based on a forked version of det3d. We also incorporate a large amount of code from CenterNet and CenterTrack. See the NOTICE for details. Note that both the nuScenes and Waymo datasets are under non-commercial licenses.

Acknowledgement

This project would not be possible without multiple great open-source codebases. We list some notable examples below.