DSVT: an efficient and deployment-friendly sparse backbone for large-scale point clouds


This repo is the official implementation of the CVPR paper DSVT: Dynamic Sparse Voxel Transformer with Rotated Sets, as well as its follow-ups. DSVT achieves state-of-the-art performance on the large-scale Waymo Open Dataset with real-time inference speed (27 Hz). We have made every effort to keep the codebase clean, concise, easily readable, state-of-the-art, and dependent on only minimal packages.

DSVT: Dynamic Sparse Voxel Transformer with Rotated Sets

Haiyang Wang*, Chen Shi*, Shaoshuai Shi $^\dagger$, Meng Lei, Sen Wang, Di He, Bernt Schiele, Liwei Wang $^\dagger$

<div align="center"> <img src="assets/Figure2.png" width="500"/> </div>

News

Overview

TODO

Introduction

Dynamic Sparse Voxel Transformer (DSVT) is an efficient yet deployment-friendly 3D transformer backbone for outdoor 3D object detection. It partitions each window into a series of local regions according to the window's sparsity and then computes the features of all regions in a fully parallel manner. Moreover, to allow cross-set connections, it uses a rotated set partitioning strategy that alternates between two partitioning configurations in consecutive self-attention layers.
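
As a rough illustration of the idea (not the repo's actual API; names such as `partition_sets` and `set_size` are invented for this sketch), the following PyTorch snippet sorts a window's sparse voxels along one axis, chunks them into fixed-size sets that attend in parallel, and alternates the sorting axis in the next layer:

```python
# Illustrative sketch of rotated set partitioning; all names are invented
# here and do not match the repo's implementation.
import torch
import torch.nn as nn

def partition_sets(voxel_coords, set_size, sort_axis):
    """Sort a window's sparse voxels along `sort_axis` (0: x, 1: y) and
    chunk them into equal-size sets, padding the tail by repetition."""
    order = torch.argsort(voxel_coords[:, sort_axis])
    num_sets = (len(order) + set_size - 1) // set_size
    pad = num_sets * set_size - len(order)
    order = torch.cat([order, order[-1:].repeat(pad)])
    return order.view(num_sets, set_size)  # (num_sets, set_size)

class RotatedSetAttention(nn.Module):
    """Two consecutive attention layers that alternate the sorting axis,
    so voxels separated by one partitioning can interact in the next."""
    def __init__(self, dim, num_heads):
        super().__init__()
        self.attns = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads, batch_first=True)
            for _ in range(2))

    def forward(self, feats, coords, set_size=36):
        # feats: (N, dim) voxel features; coords: (N, 2) voxel x/y indices
        for axis, attn in enumerate(self.attns):  # x-axis sets, then y-axis
            sets = partition_sets(coords, set_size, sort_axis=axis)
            x = feats[sets]                       # (num_sets, set_size, dim)
            x, _ = attn(x, x, x)                  # all sets attend in parallel
            feats = feats.clone()
            feats[sets.reshape(-1)] = x.reshape(-1, feats.shape[-1])
        return feats
```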

DSVT achieves state-of-the-art performance on large-scale Waymo one-sweep 3D object detection (78.2 mAPH L1 and 72.1 mAPH L2 in the one-stage setting; 78.9 mAPH L1 and 72.8 mAPH L2 in the two-stage setting), surpassing previous models by a large margin. Moreover, in the multi-sweep settings (2, 3, and 4 sweeps), our model reaches 74.6, 75.0, and 75.6 mAPH L2 with the one-stage framework and 75.1, 75.5, and 76.2 mAPH L2 with the two-stage framework, outperforming the previous best multi-frame methods by a large margin. Note that our model is not specifically designed for multi-frame detection; it only takes concatenated point clouds as input.
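
For clarity, "concatenated point clouds" refers to the standard multi-sweep trick of aligning past sweeps to the current frame and appending a per-point time-lag channel. A generic sketch follows (not necessarily the repo's exact data pipeline; the function name and interface are invented for illustration):

```python
# Generic multi-sweep concatenation sketch (illustrative, not the repo's
# data pipeline): past sweeps are transformed into the current frame and
# tagged with a time-lag channel, then simply concatenated.
import numpy as np

def concat_sweeps(sweeps, poses, timestamps):
    """sweeps: list of (N_i, 3) xyz arrays, oldest to current;
    poses: list of (4, 4) sensor-to-current-frame transforms;
    timestamps: list of floats in seconds, current sweep last."""
    merged = []
    t_cur = timestamps[-1]
    for pts, pose, t in zip(sweeps, poses, timestamps):
        xyz1 = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
        xyz = (xyz1 @ pose.T)[:, :3]           # align to current frame
        lag = np.full((len(pts), 1), t_cur - t)
        merged.append(np.concatenate([xyz, lag], axis=1))
    return np.concatenate(merged, axis=0)      # (sum N_i, 4): x, y, z, dt
```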

Pipeline

Main results

We provide both the pillar and voxel versions of one-stage DSVT. The two-stage versions with CT3D are also listed below.

3D Object Detection (on Waymo validation)

We run training 3 times and report the average metrics across all runs. Regrettably, we are unable to provide the pre-trained model weights due to the Waymo Dataset License Agreement, but we do provide the training logs.

One-Sweeps Setting

| Model | #Sweeps | mAP/H_L1 | mAP/H_L2 | Veh_L1 | Veh_L2 | Ped_L1 | Ped_L2 | Cyc_L1 | Cyc_L2 | Log |
|-------|---------|----------|----------|--------|--------|--------|--------|--------|--------|-----|
| DSVT(Pillar) | 1 | 79.5/77.1 | 73.2/71.0 | 79.3/78.8 | 70.9/70.5 | 82.8/77.0 | 75.2/69.8 | 76.4/75.4 | 73.6/72.7 | Log |
| DSVT(Voxel) | 1 | 80.3/78.2 | 74.0/72.1 | 79.7/79.3 | 71.4/71.0 | 83.7/78.9 | 76.1/71.5 | 77.5/76.5 | 74.6/73.7 | Log |
| DSVT(Pillar-TS) | 1 | 80.6/78.2 | 74.3/72.1 | 80.2/79.7 | 72.0/71.6 | 83.7/78.0 | 76.1/70.7 | 77.8/76.8 | 74.9/73.9 | Log |
| DSVT(Voxel-TS) | 1 | 81.1/78.9 | 74.8/72.8 | 80.4/79.9 | 72.2/71.8 | 84.2/79.3 | 76.5/71.8 | 78.6/77.6 | 75.7/74.7 | Log |

Multi-Sweeps Setting

2-Sweeps

| Model | #Sweeps | mAP/H_L1 | mAP/H_L2 | Veh_L1 | Veh_L2 | Ped_L1 | Ped_L2 | Cyc_L1 | Cyc_L2 | Log |
|-------|---------|----------|----------|--------|--------|--------|--------|--------|--------|-----|
| DSVT(Pillar) | 2 | 81.4/79.8 | 75.4/73.9 | 80.8/80.3 | 72.7/72.3 | 84.5/81.3 | 77.2/74.1 | 78.8/77.9 | 76.3/75.4 | Log |
| DSVT(Voxel) | 2 | 81.9/80.4 | 76.0/74.6 | 81.1/80.6 | 73.0/72.6 | 84.9/81.7 | 77.8/74.8 | 79.8/78.9 | 77.3/76.4 | Log |
| DSVT(Pillar-TS) | 2 | 81.9/80.4 | 76.0/74.5 | 81.3/80.9 | 73.4/73.0 | 85.2/81.9 | 77.9/74.7 | 79.2/78.3 | 76.7/75.9 | Log |
| DSVT(Voxel-TS) | 2 | 82.3/80.8 | 76.6/75.1 | 81.4/81.0 | 73.5/73.1 | 85.4/82.2 | 78.4/75.3 | 80.2/79.3 | 77.8/76.9 | Log |
3-Sweeps

| Model | #Sweeps | mAP/H_L1 | mAP/H_L2 | Veh_L1 | Veh_L2 | Ped_L1 | Ped_L2 | Cyc_L1 | Cyc_L2 | Log |
|-------|---------|----------|----------|--------|--------|--------|--------|--------|--------|-----|
| DSVT(Pillar) | 3 | 81.9/80.5 | 76.2/74.8 | 81.2/80.8 | 73.3/72.9 | 85.0/82.0 | 78.0/75.0 | 79.6/78.8 | 77.2/76.4 | Log |
| DSVT(Voxel) | 3 | 82.1/80.8 | 76.3/75.0 | 81.5/81.1 | 73.6/73.2 | 85.3/82.4 | 78.2/75.4 | 79.6/78.8 | 77.2/76.4 | Log |
| DSVT(Pillar-TS) | 3 | 82.5/81.0 | 76.7/75.4 | 81.8/81.3 | 74.0/73.6 | 85.6/82.6 | 78.5/75.6 | 80.1/79.2 | 77.7/76.9 | Log |
| DSVT(Voxel-TS) | 3 | 82.6/81.2 | 76.8/75.5 | 81.8/81.4 | 74.0/73.6 | 85.8/82.9 | 78.8/75.9 | 80.1/79.2 | 77.7/76.9 | Log |
4-Sweeps

| Model | #Sweeps | mAP/H_L1 | mAP/H_L2 | Veh_L1 | Veh_L2 | Ped_L1 | Ped_L2 | Cyc_L1 | Cyc_L2 | Log |
|-------|---------|----------|----------|--------|--------|--------|--------|--------|--------|-----|
| DSVT(Pillar) | 4 | 82.5/81.0 | 76.7/75.3 | 81.7/81.2 | 73.8/73.4 | 85.4/82.3 | 78.5/75.5 | 80.3/79.4 | 77.9/77.1 | Log |
| DSVT(Voxel) | 4 | 82.6/81.3 | 76.9/75.6 | 81.8/81.4 | 74.1/73.6 | 85.6/82.8 | 78.6/75.9 | 80.4/79.6 | 78.1/77.3 | Log |
| DSVT(Pillar-TS) | 4 | 82.9/81.5 | 77.3/75.9 | 82.1/81.6 | 74.4/74.0 | 85.8/82.8 | 79.0/76.1 | 80.9/80.0 | 78.6/77.7 | Log |
| DSVT(Voxel-TS) | 4 | 83.1/81.7 | 77.5/76.2 | 82.1/81.6 | 74.5/74.1 | 86.0/83.2 | 79.1/76.4 | 81.1/80.3 | 78.8/78.0 | Log |

3D Object Detection (on NuScenes validation)

| Model | mAP | NDS | mATE | mASE | mAOE | mAVE | mAAE | ckpt | Log |
|-------|-----|-----|------|------|------|------|------|------|-----|
| DSVT(Pillar) | 66.4 | 71.1 | 27.0 | 24.8 | 27.2 | 22.6 | 18.9 | ckpt | Log |

3D Object Detection (on NuScenes test)

| Model | mAP | NDS | mATE | mASE | mAOE | mAVE | mAAE | results |
|-------|-----|-----|------|------|------|------|------|---------|
| DSVT(Pillar) | 68.4 | 72.7 | 24.8 | 23.0 | 29.6 | 24.6 | 13.6 | result.json |

BEV Map Segmentation (on NuScenes validation)

| Model | Drivable | Ped.Cross. | Walkway | StopLine | Carpark | Divider | mIoU |
|-------|----------|------------|---------|----------|---------|---------|------|
| DSVT(Pillar) | 87.6 | 67.2 | 72.7 | 59.7 | 62.7 | 58.2 | 68.0 |

What's new here?

🔥 Deployment-friendly and fast inference speed

We present a comparison with other state-of-the-art methods on both inference speed and accuracy. After deployment with NVIDIA TensorRT, our model achieves real-time running speed (27 Hz). We hope DSVT can lead the wave of sparse point cloud network design, replacing sparse convolution and enabling the deployment of sparse networks in real-world applications.

<div align="left"> <img src="assets/Figure1_arxiv.png" width="600"/> </div>

| Model | Latency | mAP_L2 | mAPH_L2 |
|-------|---------|--------|---------|
| Centerpoint-Pillar | 35ms | 66.0 | 62.2 |
| Centerpoint-Voxel | 40ms | 68.2 | 65.8 |
| PV-RCNN++(center) | 113ms | 71.7 | 69.5 |
| DSVT(Pillar) | 67ms | 73.2 | 71.0 |
| DSVT(Voxel) | 97ms | 74.0 | 72.1 |
| DSVT(Pillar+TensorRT) | 37ms | 73.2 | 71.0 |

🔥 Beats previous SOTAs of outdoor 3D Object Detection and BEV Segmentation

Our approach achieves the best performance on multiple datasets (e.g., Waymo and NuScenes) and tasks (e.g., 3D object detection and BEV map segmentation), and it is highly versatile, requiring only the replacement of the backbone.

<div align="left"> <img src="assets/Figure4.png" width="700"/> </div>

🔥 More powerful than Sparse Convolution

Thanks to the large receptive field of the Transformer, our DSVT-P gains +1.78 L2 mAPH over sparse convolution at slightly higher latency. Moreover, because DSVT is deployment-friendly (SpConv cannot be easily deployed), our model runs about 2x faster after TensorRT acceleration.

<div align="left"> <img src="assets/Figure5.png" width="500"/> </div>

See our paper for more analysis, discussions, and evaluations.

Usage

Installation

Please refer to INSTALL.md for installation.

Dataset Preparation

Please follow the instructions from OpenPCDet. We adopt the same data generation process.

Training

```shell
# multi-gpu training
cd tools
bash scripts/dist_train.sh 8 --cfg_file <CONFIG_FILE> --sync_bn [other optional arguments]
```

You can train the model with the fp16 setting to save CUDA memory, though it may occasionally report a gradient NaN error (see the AMP sketch after the command below for how such steps are typically skipped).

```shell
# fp16 training
cd tools
bash scripts/dist_train.sh 8 --cfg_file <CONFIG_FILE> --sync_bn --fp16 [other optional arguments]
```
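
For reference, the standard AMP recipe already copes with occasional fp16 overflows: torch.cuda.amp.GradScaler skips any optimizer step whose gradients contain inf/NaN and lowers the loss scale. Below is a minimal sketch of such a loop, a generic PyTorch example rather than this repo's exact trainer (the `model(batch)['loss']` interface is hypothetical):

```python
# Minimal AMP training-step sketch (generic PyTorch, not the repo's exact
# trainer; the loss-dict access is hypothetical). GradScaler skips optimizer
# steps whose gradients contain inf/NaN and lowers the loss scale, so
# occasional fp16 overflows do not corrupt training.
import torch

scaler = torch.cuda.amp.GradScaler()

def train_step(model, batch, optimizer):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = model(batch)['loss']     # hypothetical interface
    scaler.scale(loss).backward()
    scaler.unscale_(optimizer)          # so clipping sees unscaled gradients
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)
    scaler.step(optimizer)              # skipped if gradients are inf/NaN
    scaler.update()
    return loss.detach()
```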

Testing

```shell
# multi-gpu testing
cd tools
bash scripts/dist_test.sh 8 --cfg_file <CONFIG_FILE> --ckpt <CHECKPOINT_FILE>
```

Quick Start

| Performance@(20% Data for 12 epoch) | Batch Size | Training Time | mAP/H_L1 | mAP/H_L2 | Veh_L1 | Veh_L2 | Ped_L1 | Ped_L2 | Cyc_L1 | Cyc_L2 | Log |
|-------------------------------------|------------|---------------|----------|----------|--------|--------|--------|--------|--------|--------|-----|
| DSVT(Pillar&Dim192) | 1 | ~5.5h | 75.3/72.4 | 69.3/66.4 | 75.3/74.8 | 66.9/66.4 | 79.4/71.7 | 71.7/64.6 | 71.9/70.8 | 69.2/68.1 | Log |
| DSVT(Voxel&Dim192) | 1 | ~6.5h | 76.2/73.6 | 69.9/67.4 | 75.7/75.2 | 67.2/66.8 | 80.1/73.7 | 72.5/66.4 | 72.8/71.8 | 70.1/69.1 | Log |

```shell
# example DSVT-P@fp32 ~5.5h on RTX3090
cd tools
bash scripts/dist_train.sh 8 --cfg_file ./cfgs/dsvt_models/dsvt_plain_D512e.yaml --sync_bn --logger_iter_interval 500

# example DSVT-P@fp16 ~4.0h on RTX3090
cd tools
bash scripts/dist_train.sh 8 --cfg_file ./cfgs/dsvt_models/dsvt_plain_D512e.yaml --sync_bn --fp16 --logger_iter_interval 500
```

| Performance@(100% Data for 24 epoch) | Batch Size | Training Time | mAP/H_L1 | mAP/H_L2 | Veh_L1 | Veh_L2 | Ped_L1 | Ped_L2 | Cyc_L1 | Cyc_L2 | Log |
|--------------------------------------|------------|---------------|----------|----------|--------|--------|--------|--------|--------|--------|-----|
| DSVT(Pillar) | 3 | ~22.5h | 79.5/77.1 | 73.2/71.0 | 79.3/78.8 | 70.9/70.5 | 82.8/77.0 | 75.2/69.8 | 76.4/75.4 | 73.6/72.7 | Log |
| DSVT(Voxel) | 3 | ~27.5h | 80.3/78.2 | 74.0/72.1 | 79.7/79.3 | 71.4/71.0 | 83.7/78.9 | 76.1/71.5 | 77.5/76.5 | 74.6/73.7 | Log |

```shell
# example DSVT-P@fp32 ~22.5h on NVIDIA A100
cd tools
bash scripts/dist_train.sh 8 --cfg_file ./cfgs/dsvt_models/dsvt_plain_1f_onestage.yaml --sync_bn --logger_iter_interval 500
```

TensorRT Deployment

We provide deployment details for DSVT, including converting the torch model to an ONNX model and building a TensorRT engine from the ONNX model. Note that, for the sake of universality, we only deploy the 3D backbone (i.e., DSVT itself); the 2D backbone and center head can be deployed in a similar fashion. The speeds reported in the paper are the results of full deployment.
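
Concretely, the torch-to-ONNX step boils down to a torch.onnx.export call with dynamic axes on the voxel and set-count dimensions. The sketch below is only an illustration that mirrors the input names and opt-profile shapes of the trtexec command in step 1; tools/deploy.py is the authoritative export script:

```python
# Illustrative torch -> ONNX export sketch (see tools/deploy.py for the
# real script). Input names and shapes mirror the trtexec command below.
import torch

# Placeholder for the module to export; deploy.py shows how the DSVT
# backbone is actually built and loaded from a checkpoint.
dsvt_backbone = ...

# Dummy inputs at the "opt" profile shapes used by trtexec below.
inputs = (
    torch.rand(20000, 192).cuda(),                  # src (voxel features)
    torch.randint(0, 20000, (2, 1000, 36)).cuda(),  # set_voxel_inds_tensor_shift_0
    torch.randint(0, 20000, (2, 700, 36)).cuda(),   # set_voxel_inds_tensor_shift_1
    torch.rand(2, 1000, 36).cuda() > 0.5,           # set_voxel_masks_tensor_shift_0
    torch.rand(2, 700, 36).cuda() > 0.5,            # set_voxel_masks_tensor_shift_1
    torch.rand(4, 2, 20000, 192).cuda(),            # pos_embed_tensor
)
input_names = ['src',
               'set_voxel_inds_tensor_shift_0', 'set_voxel_inds_tensor_shift_1',
               'set_voxel_masks_tensor_shift_0', 'set_voxel_masks_tensor_shift_1',
               'pos_embed_tensor']
# Mark voxel/set counts dynamic so one engine serves varying point clouds.
dynamic_axes = {
    'src': {0: 'num_voxels'},
    'set_voxel_inds_tensor_shift_0': {1: 'num_sets_0'},
    'set_voxel_inds_tensor_shift_1': {1: 'num_sets_1'},
    'set_voxel_masks_tensor_shift_0': {1: 'num_sets_0'},
    'set_voxel_masks_tensor_shift_1': {1: 'num_sets_1'},
    'pos_embed_tensor': {2: 'num_voxels'},
}
torch.onnx.export(dsvt_backbone, inputs, './deploy_files/dsvt.onnx',
                  input_names=input_names, output_names=['output'],
                  dynamic_axes=dynamic_axes, opset_version=14)
```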

The code has been tested on Ubuntu 18.04 with the following libraries. We recommend installing TensorRT from the TAR package, following this guide.

1. Download the input_data and specify the `input_data_path` and `ckpt_path` in the code. Then run the following commands to create the TRT engine:

```shell
cd tools

# torch model -> ONNX
python deploy.py

# ONNX -> TensorRT engine
trtexec --onnx=./deploy_files/dsvt.onnx --saveEngine=./deploy_files/dsvt.engine \
--memPoolSize=workspace:4096 --verbose --buildOnly --device=1 --fp16 \
--tacticSources=+CUDNN,+CUBLAS,-CUBLAS_LT,+EDGE_MASK_CONVOLUTIONS \
--minShapes=src:3000x192,set_voxel_inds_tensor_shift_0:2x170x36,set_voxel_inds_tensor_shift_1:2x100x36,set_voxel_masks_tensor_shift_0:2x170x36,set_voxel_masks_tensor_shift_1:2x100x36,pos_embed_tensor:4x2x3000x192 \
--optShapes=src:20000x192,set_voxel_inds_tensor_shift_0:2x1000x36,set_voxel_inds_tensor_shift_1:2x700x36,set_voxel_masks_tensor_shift_0:2x1000x36,set_voxel_masks_tensor_shift_1:2x700x36,pos_embed_tensor:4x2x20000x192 \
--maxShapes=src:35000x192,set_voxel_inds_tensor_shift_0:2x1500x36,set_voxel_inds_tensor_shift_1:2x1200x36,set_voxel_masks_tensor_shift_0:2x1500x36,set_voxel_masks_tensor_shift_1:2x1200x36,pos_embed_tensor:4x2x35000x192 \
> debug.log 2>&1
```

The ONNX file and TRT engine will be saved in tools/deploy_files/, or you can directly download the engine from here.
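
If you want to sanity-check the built engine outside the provided test configs, a generic TensorRT Python runtime snippet can run one dummy inference. The sketch below assumes TensorRT 8.x with pycuda and feeds zeros at the opt-profile shapes; it is not the repo's evaluation path:

```python
# Generic TensorRT 8.x runtime sanity check with dummy inputs (assumes
# pycuda; not the repo's evaluation path).
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open('./deploy_files/dsvt.engine', 'rb') as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
ctx = engine.create_execution_context()

# Pick concrete shapes for the dynamic inputs (here: the opt profile above).
shapes = {
    'src': (20000, 192),
    'set_voxel_inds_tensor_shift_0': (2, 1000, 36),
    'set_voxel_inds_tensor_shift_1': (2, 700, 36),
    'set_voxel_masks_tensor_shift_0': (2, 1000, 36),
    'set_voxel_masks_tensor_shift_1': (2, 700, 36),
    'pos_embed_tensor': (4, 2, 20000, 192),
}
for name, shape in shapes.items():
    ctx.set_binding_shape(engine.get_binding_index(name), shape)

# Allocate a device buffer per binding (dummy zeros) and run one inference.
bindings, bufs = [], []
for i in range(engine.num_bindings):
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = np.zeros(tuple(ctx.get_binding_shape(i)), dtype=dtype)
    dev = cuda.mem_alloc(host.nbytes)
    cuda.memcpy_htod(dev, host)
    bindings.append(int(dev))
    bufs.append((host, dev))
ctx.execute_v2(bindings)

host_out, dev_out = bufs[-1]  # assuming the last binding is the output
cuda.memcpy_dtoh(host_out, dev_out)
print('output shape:', host_out.shape)
```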

2. To test with the TRT engine, specify the engine path in the config (e.g., `./deploy_files/dsvt.engine`):

```shell
bash scripts/dist_test.sh 8 --cfg_file ./cfgs/dsvt_models/dsvt_plain_1f_onestage_trtengine.yaml --ckpt <CHECKPOINT_FILE>
```

3. After deployment with TensorRT, the runtime of DSVT (excluding the InputLayer) on a single RTX 3090 GPU drops from 36.0 ms to 13.8 ms, an almost twofold increase in speed.
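
For reference, GPU latencies like these are typically measured with CUDA events after a warm-up phase. Below is a generic sketch (not the repo's benchmarking code); `fn` would wrap a forward pass such as `lambda: backbone(batch)`, where `backbone` and `batch` are hypothetical names:

```python
# Generic GPU latency measurement with CUDA events (not the repo's
# benchmarking code): warm up first, then average over many timed runs.
import torch

def measure_latency_ms(fn, warmup=50, iters=200):
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    for _ in range(warmup):             # let clocks and allocator settle
        fn()
    torch.cuda.synchronize()
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # milliseconds per call
```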

Possible Issues

Citation

Please consider citing our work as follows if you find it helpful.

```
@inproceedings{wang2023dsvt,
    title={DSVT: Dynamic Sparse Voxel Transformer with Rotated Sets},
    author={Haiyang Wang and Chen Shi and Shaoshuai Shi and Meng Lei and Sen Wang and Di He and Bernt Schiele and Liwei Wang},
    booktitle={CVPR},
    year={2023}
}
```

Potential Research

We welcome everyone to join us in designing 3D vision data-processing networks that are truly useful in industry, and feel free to contact us about any potential contributions.

Acknowledgments

DSVT uses code from a few open-source repositories. Without the efforts of these folks (and their willingness to release their implementations), DSVT would not be possible. We thank these authors for their efforts!

We would like to thank Lue Fan, Lihe Ding and Shaocong Dong for their helpful discussions. This project is partially supported by the National Key R&D Program of China (2022ZD0160302) and National Science Foundation of China (NSFC62276005).