LiDAR R-CNN: An Efficient and Universal 3D Object Detector

Introduction

This is the official code of LiDAR R-CNN: An Efficient and Universal 3D Object Detector. In this work, we present LiDAR R-CNN, a second-stage detector that can generally improve any existing 3D detector. We identify a common problem in point-based R-CNN approaches: the learned features ignore the size of the proposals. We propose several methods to remedy it. Evaluated on the Waymo Open Dataset (WOD) benchmarks, our method significantly outperforms the previous state of the art.
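To make the size-ambiguity problem concrete, here is a minimal sketch of one possible remedy: appending each point's offsets to the proposal's box boundaries as extra per-point features, so a size-agnostic point feature extractor can still see the proposal's extent. The function name and feature layout are illustrative assumptions, not the repository's actual API.

```python
import numpy as np

def boundary_offsets(points, box_size):
    """Append per-point distances to the six proposal box faces.

    points:   (N, 3) array in the proposal's canonical frame
              (origin at the box center, axes aligned with the box).
    box_size: (l, w, h) of the proposal.
    Returns a (N, 9) array: [xyz, dist to +faces, dist to -faces].
    Hypothetical sketch -- not the repo's actual implementation.
    """
    half = np.asarray(box_size, dtype=np.float32) / 2.0
    pos = half - points   # distance to the +x, +y, +z faces
    neg = points + half   # distance to the -x, -y, -z faces
    return np.concatenate([points, pos, neg], axis=1)

# Example: one point inside a 4 x 2 x 1.5 proposal box.
pts = np.array([[0.5, 0.0, -0.25]], dtype=np.float32)
feat = boundary_offsets(pts, (4.0, 2.0, 1.5))
```

Without such features, two proposals of different sizes enclosing the same points yield identical inputs to the second stage; the boundary offsets break that tie.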

Chinese introduction: https://zhuanlan.zhihu.com/p/359800738

News

Requirements

All the code is tested in the following environment:

To install pybind11:

git clone git@github.com:pybind/pybind11.git
cd pybind11
mkdir build && cd build
cmake .. && make -j 
sudo make install

To install requirements:

pip install -r requirements.txt
apt-get install ninja-build libeigen3-dev

Install the LiDAR_RCNN library:

python setup.py develop --user

CUDA extensions:

# Rotated IOU
cd src/LiDAR_RCNN/ops/iou3d/
python setup.py build_ext --inplace

Preparing Data

Please refer to the data processor to generate the proposal data.

Training

After preparing the WOD data, we can train the vehicle-only model from the paper by running:

python -m torch.distributed.launch --nproc_per_node=4 tools/train.py --cfg config/lidar_rcnn.yaml --name lidar_rcnn

For the 3-class model on WOD:

python -m torch.distributed.launch --nproc_per_node=8 tools/train.py --cfg config/lidar_rcnn_all_cls.yaml --name lidar_rcnn_all
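On recent PyTorch releases, torch.distributed.launch is deprecated in favor of torchrun; assuming your installed PyTorch provides it, the same training run can be launched as:

```shell
# Equivalent launch with torchrun (assumes a PyTorch version that ships it)
torchrun --nproc_per_node=8 tools/train.py --cfg config/lidar_rcnn_all_cls.yaml --name lidar_rcnn_all
```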

The models and logs will be saved to work_dirs/outputs.

NOTE: for multi-frame training, please set MODEL.Frame = n in the config.
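As a sketch, a 3-frame setup might look like the excerpt below; the exact schema is defined in config/lidar_rcnn.yaml, and any keys other than MODEL.Frame are assumptions for illustration.

```yaml
# Hypothetical excerpt of config/lidar_rcnn.yaml -- only MODEL.Frame
# is taken from the note above; the surrounding structure is assumed.
MODEL:
  Frame: 3   # number of input point-cloud frames per proposal
```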

Evaluation

To evaluate, run distributed testing with 4 GPUs:

python -m torch.distributed.launch --nproc_per_node=4 tools/test.py --cfg config/lidar_rcnn.yaml --checkpoint outputs/lidar_rcnn/checkpoint_lidar_rcnn_59.pth.tar
python tools/create_results.py --cfg config/lidar_rcnn.yaml

Note that you should keep nGPUS in the config equal to nproc_per_node. This will generate a val.bin file in work_dir/results. You can create a submission to the Waymo server using the waymo-open-dataset code by following the instructions here.

Results

Our model achieves the following performance on:

Waymo Open Dataset Challenges (3D Detection)

| Proposals from | Class   | Frame/Channel | 3D AP L1 Vehicle | 3D AP L1 Pedestrian | 3D AP L1 Cyclist |
|----------------|---------|---------------|------------------|---------------------|------------------|
| PointPillars   | Vehicle | 1 / 1x        | 75.6             | -                   | -                |
| PointPillars   | Vehicle | 1 / 2x        | 75.6             | -                   | -                |
| PointPillars   | Vehicle | 3 / 2x        | 77.8             | -                   | -                |
| SST            | Vehicle | 3 / 2x        | 78.6             | -                   | -                |
| PointPillars   | 3 Class | 1 / 1x        | 73.4             | 70.7                | 67.4             |
| PointPillars   | 3 Class | 1 / 2x        | 73.8             | 71.9                | 69.4             |

| Proposals from | Class   | Frame/Channel | 3D AP L2 Vehicle | 3D AP L2 Pedestrian | 3D AP L2 Cyclist |
|----------------|---------|---------------|------------------|---------------------|------------------|
| PointPillars   | Vehicle | 1 / 1x        | 66.8             | -                   | -                |
| PointPillars   | Vehicle | 1 / 2x        | 67.9             | -                   | -                |
| PointPillars   | Vehicle | 3 / 2x        | 69.1             | -                   | -                |
| SST            | Vehicle | 3 / 2x        | 69.9             | -                   | -                |
| PointPillars   | 3 Class | 1 / 1x        | 64.8             | 62.4                | 64.8             |
| PointPillars   | 3 Class | 1 / 2x        | 65.1             | 63.5                | 66.8             |

Note: The proposals provided by PointPillars are detected on single-frame point clouds.

Citation

If you find our paper or repository useful, please consider citing:

@inproceedings{li2021lidar,
  title={LiDAR R-CNN: An Efficient and Universal 3D Object Detector},
  author={Li, Zhichao and Wang, Feng and Wang, Naiyan},
  booktitle={CVPR},
  year={2021}
}

Acknowledgement

This project draws on the following codebases.