<div align="center"> <h1>Azimuth Normalization </h1> <span><font size="5", > AziNorm: Exploiting the Radial Symmetry of Point Cloud <br> for Azimuth-Normalized 3D Perception [CVPR 2022] </font></span> <br> by <br> <a href="https://scholar.google.com/citations?user=PIeNN2gAAAAJ&hl=en&oi=ao">Shaoyu Chen</a>, <a href="https://xinggangw.info/">Xinggang Wang</a><sup><span>&#8224;</span></sup>, <a href="https://scholar.google.com/citations?user=PH8rJHYAAAAJ&hl=en&oi=ao">Tianheng Cheng</a>, <a href="https://github.com/mulinmeng">Wenqiang Zhang</a>, <a href="https://scholar.google.com/citations?user=pCY-bikAAAAJ&hl=zh-CN">Qian Zhang</a>, <a href="https://scholar.google.com/citations?user=IyyEKyIAAAAJ&hl=zh-CN">Chang Huang</a>, <a href="http://eic.hust.edu.cn/professor/liuwenyu/"> Wenyu Liu</a> </br> (<span>&#8224;</span>: corresponding author) <div>Paper: <a href="https://arxiv.org/abs/2203.13090">[arXiv version] </a><a href="https://openaccess.thecvf.com/content/CVPR2022/papers/Chen_AziNorm_Exploiting_the_Radial_Symmetry_of_Point_Cloud_for_Azimuth-Normalized_CVPR_2022_paper.pdf"> [CVPR version]</a></div> </div>

Highlight

[Figure: the AziNorm framework]

Introduction

Studying the inherent symmetry of data is of great importance in machine learning. Point clouds, the most important data format for 3D environmental perception, are naturally endowed with strong radial symmetry. In this work, we exploit this radial symmetry via a divide-and-conquer strategy to boost 3D perception performance and ease optimization. We propose Azimuth Normalization (AziNorm), which normalizes point clouds along the radial direction and eliminates the variability caused by differences in azimuth. AziNorm can be flexibly incorporated into most LiDAR-based perception methods. To validate its effectiveness and generalization ability, we apply AziNorm to both object detection and semantic segmentation. For detection, we integrate AziNorm into two representative detection methods, the one-stage SECOND detector and the state-of-the-art two-stage PV-RCNN detector. Experiments on the Waymo Open Dataset demonstrate that AziNorm improves SECOND and PV-RCNN by 7.03 mAPH and 3.01 mAPH, respectively. For segmentation, we integrate AziNorm into KPConv. On the SemanticKITTI dataset, AziNorm improves KPConv by 1.6/1.1 mIoU on the val/test sets. Besides, AziNorm remarkably improves data efficiency and accelerates convergence, reducing the required amount of data or number of training epochs by an order of magnitude. SECOND with AziNorm significantly outperforms fully trained vanilla SECOND even when trained with only 10% of the data or for only 10% of the epochs.
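To make the idea concrete, below is a minimal NumPy sketch of the normalization step, assuming a simple patch-center convention; the function and variable names are illustrative, and this is not the repository's actual implementation. Each local patch is rotated about the z-axis by the negative azimuth of its center, mapping patches at all azimuths into a common, azimuth-free frame (predictions made in that frame would then be mapped back with the inverse rotation).

```python
import numpy as np

def azimuth_normalize(points, center):
    """Rotate a point-cloud patch so that its center lies on the +x axis.

    points: (N, 3) array of xyz coordinates in the sensor frame.
    center: (3,) center of the patch in the sensor frame.
    """
    azimuth = np.arctan2(center[1], center[0])   # azimuth of the patch center
    c, s = np.cos(-azimuth), np.sin(-azimuth)
    rot_z = np.array([[c,  -s,  0.0],            # rotation by -azimuth about z
                      [s,   c,  0.0],
                      [0.0, 0.0, 1.0]])
    return points @ rot_z.T                      # azimuth-normalized patch
```

Applying this to the same patch observed at two different azimuths yields (up to sensor noise) identical normalized coordinates, which is exactly the variability AziNorm removes from the learning problem.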

Usage

The code is built on the OpenPCDet-v0.3.0 toolbox. Please refer to INSTALLATION.md and GETTING_STARTED.md of OpenPCDet-v0.3.0 for instructions on data and environment preparation.
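For reference, the final build step in an OpenPCDet-style setup is typically the following (INSTALLATION.md remains the authoritative guide):

```bash
# Standard OpenPCDet workflow: install dependencies, then build the CUDA ops.
pip install -r requirements.txt
python setup.py develop
```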

Test and evaluate the pretrained models

# Test with a pretrained model (single GPU):
python test.py --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE} --ckpt ${CKPT}

# Test all saved checkpoints of a specific training setting:
python test.py --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE} --eval_all

# Test with multiple GPUs:
sh scripts/dist_test.sh ${NUM_GPUS} \
    --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE}

# or with slurm:
sh scripts/slurm_test_mgpu.sh ${PARTITION} ${NUM_GPUS} \
    --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE}
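As a concrete example, evaluating a single downloaded checkpoint might look like this (the config and checkpoint paths are hypothetical; substitute your own):

```bash
# Hypothetical paths for illustration only:
python test.py --cfg_file cfgs/waymo_models/second_azinorm.yaml \
    --batch_size 4 --ckpt checkpoints/second_azinorm_epoch_5.pth
```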

Train a model

# Train with multiple GPUs:
sh scripts/dist_train.sh ${NUM_GPUS} --cfg_file ${CONFIG_FILE}

# or with slurm:
sh scripts/slurm_train.sh ${PARTITION} ${JOB_NAME} ${NUM_GPUS} --cfg_file ${CONFIG_FILE}

# Train with a single GPU:
python train.py --cfg_file ${CONFIG_FILE}
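For instance, a distributed run on 8 GPUs with a hypothetical config path:

```bash
# Hypothetical config path for illustration only:
sh scripts/dist_train.sh 8 --cfg_file cfgs/waymo_models/second_azinorm.yaml
```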
Main results of SECOND vs. AziNorm-based SECOND on the Waymo Open Dataset (each cell reports mAP/mAPH):

| Model | Training Data | Epochs | Veh. L1 | Veh. L2 | Ped. L1 | Ped. L2 | Cyc. L1 | Cyc. L2 | Download |
|-------|---------------|--------|---------|---------|---------|---------|---------|---------|----------|
| SECOND | 16k (10%) | 5 | 58.50/57.59 | 51.35/50.53 | 47.68/25.56 | 40.56/21.74 | 39.44/25.76 | 38.10/24.89 | ckpt |
| AziNorm-based SECOND | 16k (10%) | 5 | 63.85/63.17 | 55.60/55.00 | 58.17/44.70 | 49.95/38.35 | 57.95/54.99 | 56.01/53.15 | ckpt |
| SECOND | 32k (20%) | 5 | 62.95/62.21 | 54.71/54.06 | 51.79/37.61 | 44.69/32.40 | 44.27/39.87 | 42.78/38.53 | ckpt |
| AziNorm-based SECOND | 32k (20%) | 5 | 67.24/66.62 | 58.79/58.24 | 62.20/50.77 | 53.80/43.85 | 60.86/59.62 | 58.70/57.50 | ckpt |
| SECOND | 160k (100%) | 5 | 68.29/67.67 | 59.71/59.16 | 58.80/48.41 | 51.32/42.17 | 52.82/51.64 | 51.11/49.96 | ckpt |
| AziNorm-based SECOND | 160k (100%) | 5 | 70.01/69.47 | 62.22/61.72 | 65.76/54.89 | 57.15/47.62 | 64.05/62.79 | 61.72/60.51 | ckpt |

Citing AziNorm

If you find AziNorm useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry.

@InProceedings{Chen_2022_CVPR,
    author    = {Chen, Shaoyu and Wang, Xinggang and Cheng, Tianheng and Zhang, Wenqiang and Zhang, Qian and Huang, Chang and Liu, Wenyu},
    title     = {AziNorm: Exploiting the Radial Symmetry of Point Cloud for Azimuth-Normalized 3D Perception},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year      = {2022},
}