
<p align="center"> <img height="150" src="./miscellaneous/active-3d-logo.png" /> </p>

This repository is the official PyTorch implementation of our work:

[ICLR 2023] CRB: Exploring Active 3D Object Detection from a Generalization Perspective.

[OpenReview] [arXiv] [Supplementary Material]

[In Submission] Open-CRB: Towards Open World Active Learning for 3D Object Detection.

[Open-CRB Branch] [arXiv]

:fire: 11/23 update: released the code and the preprint of Open-CRB

:fire: 02/23 update: checkpoints are available at https://drive.google.com/drive/folders/1PMb6tu84AIw66vCRrMBCHpnBeL5WMkuv?usp=sharing

Framework

To alleviate the high annotation cost in LiDAR-based 3D object detection, active learning is a promising solution that learns to select only a small portion of unlabeled data to annotate, without compromising model performance. Our empirical study, however, suggests that mainstream uncertainty-based and diversity-based active learning policies are not effective when applied to the 3D detection task, as they fail to balance the trade-off between point cloud informativeness and box-level annotation costs. To overcome this limitation, our framework CRB jointly investigates three novel criteria for point cloud acquisition - label conciseness, feature representativeness and geometric balance - which hierarchically filter out point clouds with redundant 3D bounding box labels, latent features and geometric characteristics (e.g., point cloud density) from the unlabeled sample pool and greedily select informative ones with fewer objects to annotate. Our theoretical analysis demonstrates that the proposed criteria align the marginal distributions of the selected subset with the prior distributions of the unseen test set, and minimize the upper bound of the generalization error. To validate the effectiveness and applicability of CRB, we conduct extensive experiments on the two benchmark 3D object detection datasets KITTI and Waymo, and examine both one-stage (i.e., SECOND) and two-stage (i.e., PV-RCNN) 3D detectors. Experiments show that the proposed approach outperforms existing active learning strategies and achieves fully supervised performance while requiring only 1% of the bounding box annotations and 8% of the point clouds.

<p align="center"> <img src="miscellaneous/flowchart.png" width="70%"> </p>

Contents

Installation

Requirements

All the code is tested in the following environment:

Install pcdet v0.5

Our implementations of 3D detectors are based on the latest OpenPCDet. To install the pcdet library and its dependent libraries, please run the following command:

```shell script
python setup.py develop
```

NOTE: Please re-install pcdet even if you have already installed it previously.
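To quickly verify the installation, you can check that the package imports from Python (a simple sanity check, assuming the standard OpenPCDet layout that exposes `__version__`):

```python
# Sanity check: pcdet should be importable after `python setup.py develop`.
import pcdet
print(pcdet.__version__)
```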

Getting Started

The active learning configs for the different AL methods are located at tools/cfgs/active-kitti_models and tools/cfgs/active-waymo_models. The dataset configs are located in tools/cfgs/dataset_configs, and the model configs for the different datasets are located in tools/cfgs.

Dataset Preparation

Currently we provide dataloaders for the KITTI and Waymo datasets; support for more datasets is on the way.

KITTI Dataset

Please download the official KITTI 3D object detection dataset and organize the downloaded files as follows:

```
CRB-active-3Ddet
├── data
│   ├── kitti
│   │   │── ImageSets
│   │   │── training
│   │   │   ├──calib & velodyne & label_2 & image_2 & (optional: planes) & (optional: depth_2)
│   │   │── testing
│   │   │   ├──calib & velodyne & image_2
├── pcdet
├── tools
```

Generate the data infos by running the following command:

```python
python -m pcdet.datasets.kitti.kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/kitti_dataset.yaml
```
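After the command finishes, you can optionally inspect the generated info files. The file name below follows the usual OpenPCDet naming convention (an assumption; adjust the path if your config writes elsewhere):

```python
# Optional sanity check on the generated KITTI infos.
# 'kitti_infos_train.pkl' is the usual OpenPCDet output name (assumption).
import pickle

with open('data/kitti/kitti_infos_train.pkl', 'rb') as f:
    infos = pickle.load(f)

print(len(infos), 'training frames indexed')
print(sorted(infos[0].keys()))  # per-frame metadata (calib, annotations, point cloud info, ...)
```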
<!-- ### NuScenes Dataset * Please download the official [NuScenes 3D object detection dataset](https://www.nuscenes.org/download) and organize the downloaded files as follows: ``` CRB-active-3Ddet ├── data │ ├── nuscenes │ │ │── v1.0-trainval (or v1.0-mini if you use mini) │ │ │ │── samples │ │ │ │── sweeps │ │ │ │── maps │ │ │ │── v1.0-trainval ├── pcdet ├── tools ``` * Install the `nuscenes-devkit` with version `1.0.5` by running the following command: ```shell script pip install nuscenes-devkit==1.0.5 ``` * Generate the data infos by running the following command (it may take several hours): ```python python -m pcdet.datasets.nuscenes.nuscenes_dataset --func create_nuscenes_infos \ --cfg_file tools/cfgs/dataset_configs/nuscenes_dataset.yaml \ --version v1.0-trainval ``` -->

Waymo Open Dataset

Please download the official Waymo Open Dataset and organize the files as follows (the processed directories and `.pkl` files are generated by the command below):

```
CRB-active-3Ddet
├── data
│   ├── waymo
│   │   │── ImageSets
│   │   │── raw_data
│   │   │   │── segment-xxxxxxxx.tfrecord
│   │   │   │── ...
│   │   │── waymo_processed_data_v0_5_0
│   │   │   │── segment-xxxxxxxx/
│   │   │   │── ...
│   │   │── waymo_processed_data_v0_5_0_gt_database_train_sampled_1/
│   │   │── waymo_processed_data_v0_5_0_waymo_dbinfos_train_sampled_1.pkl
│   │   │── waymo_processed_data_v0_5_0_gt_database_train_sampled_1_global.npy (optional)
│   │   │── waymo_processed_data_v0_5_0_infos_train.pkl (optional)
│   │   │── waymo_processed_data_v0_5_0_infos_val.pkl (optional)
├── pcdet
├── tools
```

Install the official `waymo-open-dataset` package by running the following commands:

```shell script
pip3 install --upgrade pip
pip3 install waymo-open-dataset-tf-2-0-0==1.2.0 --user
```

The waymo-open-dataset version used in our project is 1.2.0.

Extract the point cloud data from the tfrecord files and generate the data infos by running the following command (it may take several hours):

```python
python -m pcdet.datasets.waymo.waymo_dataset --func create_waymo_infos \
    --cfg_file tools/cfgs/dataset_configs/waymo_dataset.yaml
```

Note that you do not need to install `waymo-open-dataset` if you have already processed the data and do not need to evaluate with the official Waymo metrics.

<!-- ### Lyft Dataset * Please download the official [Lyft Level5 perception dataset](https://level-5.global/data/perception) and organize the downloaded files as follows: ``` CRB-active-3Ddet ├── data │ ├── lyft │ │ │── ImageSets │ │ │── trainval │ │ │ │── data & maps & images & lidar & train_lidar ├── pcdet ├── tools ``` * Install the `lyft-dataset-sdk` with version `0.0.8` by running the following command: ```shell script pip install -U lyft_dataset_sdk==0.0.8 ``` * Generate the data infos by running the following command (it may take several hours): ```python python -m pcdet.datasets.lyft.lyft_dataset --func create_lyft_infos \ --cfg_file tools/cfgs/dataset_configs/lyft_dataset.yaml ``` * You need to check carefully since we don't provide a benchmark for it. -->

Training & Testing

Test and evaluate the pretrained models

The weights of our pre-trained model will be released upon acceptance.

Test with a pretrained model:

```shell script
python test.py --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE} --ckpt ${CKPT}
```

To test all the saved checkpoints of a specific training setting, add the `--eval_all` argument:

```shell script
python test.py --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE} --eval_all
```

To test with multiple GPUs:

```shell script
sh scripts/dist_test.sh ${NUM_GPUS} \
    --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE}
```

or

```shell script
sh scripts/slurm_test_mgpu.sh ${PARTITION} ${NUM_GPUS} \
    --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE}
```

Train a backbone

In our active learning setting, the 3D detector is first pre-trained with a small labeled set $\mathcal{D}_L$ which is randomly sampled from the training set. To train such a backbone, please run:

```shell script
sh scripts/${DATASET}/train_${DATASET}_backbone.sh
```
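Conceptually, the initial labeled pool amounts to a uniform random draw from the training split, as in the sketch below; the split size, pool size, and seed are illustrative values, not the ones used by the provided scripts (which set them through their configs):

```python
# Illustrative sketch of building the small initial labeled pool D_L (placeholder values).
import numpy as np

num_train_frames = 3712    # e.g. size of the KITTI train split (assumption)
initial_pool_size = 100    # small labeled budget for pre-training (assumption)

rng = np.random.default_rng(seed=0)
labeled_idx = rng.choice(num_train_frames, size=initial_pool_size, replace=False)
unlabeled_idx = np.setdiff1d(np.arange(num_train_frames), labeled_idx)

print(len(labeled_idx), 'labeled /', len(unlabeled_idx), 'unlabeled frames')
```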

Train with different active learning strategies

We provide several options for active learning strategies; the available algorithms correspond to the configs under tools/cfgs/active-kitti_models and tools/cfgs/active-waymo_models.

You can optionally add the command-line arguments `--batch_size ${BATCH_SIZE}` and `--epochs ${EPOCHS}` to specify your preferred settings.

To train with the selected active learning strategy, run:

```shell script
python train.py --cfg_file ${CONFIG_FILE}
```
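At a high level, each run alternates between retraining the detector and querying new frames from the unlabeled pool. The sketch below is a generic, framework-agnostic outline of that loop; `train_detector` and `select` are placeholders for whatever detector and strategy the chosen config enables, not functions from this repository:

```python
# Generic pool-based active learning loop (placeholder callables, NOT the repository code).
from typing import Callable, List, Set

def active_learning_loop(all_idx: List[int],
                         initial_labeled: Set[int],
                         rounds: int,
                         budget_per_round: int,
                         train_detector: Callable[[Set[int]], object],
                         select: Callable[[object, List[int], int], List[int]]) -> Set[int]:
    labeled = set(initial_labeled)
    for _ in range(rounds):
        detector = train_detector(labeled)                        # retrain on current labels
        unlabeled = [i for i in all_idx if i not in labeled]
        new_idx = select(detector, unlabeled, budget_per_round)   # e.g. CRB acquisition
        labeled.update(new_idx)                                   # annotate and add to D_L
    return labeled
```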