
AeDet: Azimuth-invariant Multi-view 3D Object Detection

Paper     Website

News

AeDet achieves state-of-the-art results on the camera-only nuScenes detection task with 53.1% mAP and 62.0% NDS!

Introduction

Recent LSS-based multi-view 3D object detection has made tremendous progress by processing the features in Bird's-Eye-View (BEV) via a convolutional detector. However, the typical convolution ignores the radial symmetry of the BEV features and increases the difficulty of the detector optimization. To preserve the inherent property of the BEV features and ease the optimization, we propose an azimuth-equivariant convolution (AeConv) and an azimuth-equivariant anchor. The sampling grid of AeConv is always in the radial direction, so it can learn azimuth-invariant BEV features. The proposed anchor enables the detection head to learn to predict azimuth-irrelevant targets. In addition, we introduce a camera-decoupled virtual depth to unify the depth prediction for images with different camera intrinsic parameters. The resulting detector is dubbed Azimuth-equivariant Detector (AeDet). Extensive experiments are conducted on nuScenes, and AeDet achieves a 62.0% NDS, surpassing recent multi-view 3D object detectors such as PETRv2 and BEVDepth by a large margin.
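The core idea of AeConv is to rotate the convolution's sampling grid by the azimuth of each BEV cell, so that one grid axis always points along the radial direction. The snippet below is a minimal NumPy sketch of that rotation (the helper name and layout are illustrative assumptions, not the actual AeDet implementation, which realizes this via deformable-convolution-style offsets):

```python
import numpy as np

def aeconv_sampling_grid(theta, k=3):
    """Rotate the standard k x k convolution sampling grid by azimuth
    `theta`, so one grid axis aligns with the radial direction.
    Toy sketch of the AeConv idea; not the authors' code."""
    r = k // 2
    gy, gx = np.meshgrid(np.arange(-r, r + 1), np.arange(-r, r + 1),
                         indexing="ij")
    pts = np.stack([gx.ravel(), gy.ravel()], axis=-1).astype(float)  # (k*k, 2)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])  # 2D rotation matrix
    return pts @ rot.T  # rotated (x, y) sampling offsets, shape (k*k, 2)

# Azimuth of a BEV cell at (x, y) relative to the ego center:
x, y = 10.0, 10.0
theta = np.arctan2(y, x)
grid = aeconv_sampling_grid(theta)
```

Because every cell's grid is rotated by its own azimuth, two objects at the same radius but different azimuths are seen through the same (radially aligned) sampling pattern, which is what makes the learned BEV features azimuth-invariant.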

Method overview

method overview

Quick Start

Installation

Step 0. Install PyTorch (v1.9.0).

Step 1. Install MMDetection3D (v1.0.0rc4).

# install mmcv
pip install mmcv-full

# install mmdetection
pip install git+https://github.com/open-mmlab/mmdetection.git

# install mmsegmentation
pip install git+https://github.com/open-mmlab/mmsegmentation.git

# install mmdetection3d
cd AeDet/mmdetection3d
pip install -v -e .

Step 2. Install requirements.

pip install -r requirements.txt

Step 3. Install AeDet (GPU required).

python setup.py develop

Data preparation

Step 0. Download nuScenes official dataset.

Step 1. Symlink the dataset root to ./data/.

ln -s ${NUSCENES_ROOT} ./data/

The directory structure should look as follows.

AeDet
├── data
│   ├── nuScenes
│   │   ├── maps
│   │   ├── samples
│   │   ├── sweeps
│   │   ├── v1.0-test
│   │   ├── v1.0-trainval

Step 2. Prepare infos.

python scripts/gen_info.py

Step 3. Prepare depth ground truth.

python scripts/gen_depth_gt.py

Tutorials

Train.

python ${EXP_PATH} --amp_backend native -b 8 --gpus 8 [--ckpt_path ${ORIGIN_CKPT_PATH}]

Eval.

python ${EXP_PATH} --ckpt_path ${EMA_CKPT_PATH} -e -b 8 --gpus 8

Results

| Model | Image size | #Key frames | CBGS | mAP | NDS | Download |
| --- | --- | --- | --- | --- | --- | --- |
| BEVDepth_R50 (Baseline) | 256x704 | 1 | | 0.315 | 0.367 | - |
| BEVDepth_R50_2KEY (Baseline) | 256x704 | 2 | | 0.330 | 0.442 | - |
| AeDet_R50 | 256x704 | 1 | | 0.334 | 0.401 | google |
| AeDet_R50_2KEY | 256x704 | 2 | | 0.359 | 0.473 | google |
| AeDet_R50_2KEY_CBGS | 256x704 | 2 | ✓ | 0.381 | 0.502 | google |
| AeDet_R101_2KEY_CBGS | 512x1408 | 2 | ✓ | 0.449 | 0.562 | google |

Acknowledgement

Thanks to the BEVDepth and MMDetection3D teams for their wonderful open-source projects!

Citation

If you find AeDet useful in your research, please consider citing:

@inproceedings{feng2023aedet,
    title={AeDet: Azimuth-invariant Multi-view 3D Object Detection},
    author={Feng, Chengjian and Jie, Zequn and Zhong, Yujie and Chu, Xiangxiang and Ma, Lin},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year={2023}
}