Learning to Detect Mobile Objects from LiDAR Scans Without Labels
This is the official code release for
[CVPR 2022] Learning to Detect Mobile Objects from LiDAR Scans Without Labels.
by Yurong You*, Katie Z Luo*, Cheng Perng Phoo, Wei-Lun Chao, Wen Sun, Bharath Hariharan, Mark Campbell, and Kilian Q. Weinberger
Interested in perception with multiple traversals? Also see Hindsight is 20/20.
Abstract
Current 3D object detectors for autonomous driving are almost entirely trained on human-annotated data. Although of high quality, the generation of such data is laborious and costly, restricting them to a few specific locations and object types. This paper proposes an alternative approach entirely based on unlabeled data, which can be collected cheaply and in abundance almost everywhere on earth. Our approach leverages several simple common sense heuristics to create an initial set of approximate seed labels. For example, relevant traffic participants are generally not persistent across multiple traversals of the same route, do not fly, and are never under ground. We demonstrate that these seed labels are highly effective to bootstrap a surprisingly accurate detector through repeated self-training without a single human annotated label.
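To make the common-sense heuristics above concrete, here is a minimal, self-contained sketch of that style of filtering. It is purely illustrative: the persistence-score convention, field names, and thresholds are assumptions, not this repository's implementation.

```python
# Illustrative sketch only -- not this repository's implementation.
# Each candidate cluster of LiDAR points is summarized by:
#   pp_score : how persistently its region is occupied across traversals
#              (assumed convention: high = static background)
#   z_bottom : height of its lowest point above the estimated ground plane (m)
#   z_top    : height of its highest point above the ground plane (m)

def looks_mobile(cluster, persistence_threshold=0.7, max_gap_below=0.5):
    persistent = cluster["pp_score"] > persistence_threshold  # static scenery persists across traversals
    flying = cluster["z_bottom"] > max_gap_below              # mobile objects rest on the ground, they do not fly
    underground = cluster["z_top"] < 0.0                      # ... and are never under ground
    return not (persistent or flying or underground)

# Toy example: a car-like cluster, a building facade, and an elevated sign.
clusters = [
    {"pp_score": 0.15, "z_bottom": 0.1, "z_top": 1.6},  # likely mobile        -> kept as a seed candidate
    {"pp_score": 0.95, "z_bottom": 0.0, "z_top": 8.0},  # persistent background -> rejected
    {"pp_score": 0.20, "z_bottom": 2.5, "z_top": 3.0},  # "flying" cluster      -> rejected
]
seed_candidates = [c for c in clusters if looks_mobile(c)]
print(len(seed_candidates))  # 1
```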
Citation
@inproceedings{you2022learning,
title = {Learning to Detect Mobile Objects from LiDAR Scans Without Labels},
author = {You, Yurong and Luo, Katie Z and Phoo, Cheng Perng and Chao, Wei-Lun and Sun, Wen and Hariharan, Bharath and Campbell, Mark and Weinberger, Kilian Q.},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
month = jun
}
Environment
conda create --name modest python=3.8
conda activate modest
conda install pytorch=1.9.0 torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia
pip install opencv-python matplotlib wandb scipy tqdm easydict scikit-learn pyquaternion pillow==8.3.2
# for managing experiments
pip install hydra-core --upgrade
pip install hydra_colorlog --upgrade
pip install rich
cd generate_cluster_mask/utils/iou3d_nms
python setup.py install
For OpenPCDet, follow downstream/OpenPCDet/docs/INSTALL.md to install it.
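To confirm the environment matches the versions above, a quick sanity check along these lines can help (a minimal sketch; the expected version strings simply mirror the install commands above):

```python
# Minimal environment sanity check (sketch, not part of the repository).
import torch
import torchvision

print("torch:", torch.__version__)           # expected 1.9.0 per the install command above
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA version:", torch.version.cuda)   # expected 11.1 per cudatoolkit=11.1
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```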
Data Pre-processing
Please refer to data_preprocessing/lyft/LYFT_PREPROCESSING.md and data_preprocessing/nuscenes/NUSCENES_PREPROCESSING.md.
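After preprocessing, the converted data is expected under downstream/OpenPCDet/data/ in KITTI format. The exact layout is defined in the preprocessing docs above; the tree below is only an illustration based on the paths used by the commands later in this README:

```
downstream/OpenPCDet/data/lyft/training/
├── calib/      # per-frame calibration files
├── velodyne/   # LiDAR point clouds
└── ...         # remaining KITTI-style folders produced by the preprocessing scripts
```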
Training
Generate Seed Labels
Lyft data
cd generate_cluster_mask
# generate pp score
python pre_compute_pp_score.py data_root=$(pwd)/../downstream/OpenPCDet/data/lyft/training
# generate seed labels
python generate_mask.py data_root=$(pwd)/../downstream/OpenPCDet/data/lyft/training
python generate_label_files.py data_root=$(pwd)/../downstream/OpenPCDet/data/lyft/training
nuScenes data
cd generate_cluster_mask
# generate pp score
python pre_compute_pp_score.py data_paths=nusc.yaml data_root=NUSCENES_KITTI_FORMAT_20HZ/training \
nusc=True
# generate seed labels
python generate_mask.py data_paths=nusc.yaml data_root=$(pwd)/../downstream/OpenPCDet/data/nuscenes_boston/training plane_estimate.max_hs=-1.3
python generate_label_files.py data_paths=nusc.yaml data_root=$(pwd)/../downstream/OpenPCDet/data/nuscenes_boston/training image_shape="[900, 1600]"
Run 0-th Round Training with seed labels
Lyft (default PRCNN model)
bash scripts/seed_training_lyft.sh
nuScenes (default PRCNN model)
bash scripts/seed_training_nuscenes.sh
Self-training
Lyft (default PRCNN model)
bash scripts/self_training_lyft.sh -C "det_filtering.pp_score_threshold=0.7 det_filtering.pp_score_percentile=20 data_paths.bbox_info_save_dst=null"
nuScenes (default PRCNN model)
bash scripts/self_training_nuscenes.sh -C "data_paths=nusc.yaml det_filtering.pp_score_threshold=0.7 det_filtering.pp_score_percentile=20 data_paths.bbox_info_save_dst=null calib_path=$(pwd)/downstream/OpenPCDet/data/nuscenes_boston/training/calib ptc_path=$(pwd)/downstream/OpenPCDet/data/nuscenes_boston/training/velodyne image_shape=[900,1600]"
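Conceptually, each self-training round retrains the detector on its own filtered predictions. The sketch below shows the loop structure only; the function names and filtering rule are placeholders, not this repository's API, and the actual rounds are driven by the scripts above together with the hydra overrides shown (e.g. det_filtering.pp_score_threshold and det_filtering.pp_score_percentile).

```python
# Conceptual sketch of repeated self-training -- placeholder functions, not the repo's API.
def self_train(num_rounds, train_detector, run_inference, filter_with_pp_score, seed_labels):
    labels = seed_labels                      # round 0 trains on the heuristic seed labels
    model = None
    for round_idx in range(num_rounds):
        model = train_detector(labels)        # e.g. PointRCNN via OpenPCDet
        detections = run_inference(model)     # pseudo-labels on the unlabeled training split
        # keep boxes whose points look ephemeral (non-persistent across traversals)
        labels = filter_with_pp_score(detections, threshold=0.7, percentile=20)
    return model
```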
Evaluation
cd downstream/OpenPCDet/tools
bash scripts/dist_test.sh 4 --cfg_file <cfg> --ckpt <ckpt_path>
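Here the leading 4 is the number of GPUs used for distributed testing, <cfg> is one of the config files listed under Checkpoints below, and <ckpt_path> points to the corresponding downloaded checkpoint.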
Checkpoints
Lyft experiments
Model | ST rounds | Checkpoint | Config file |
---|---|---|---|
PointRCNN | 0 | link | cfg |
PointRCNN | 1 | link | cfg |
PointRCNN | 10 | link | cfg |
PointRCNN | 20 | link | cfg |
PointRCNN | 30 | link | cfg |
PointRCNN | 40 | link | cfg |
Model | ST rounds | Checkpoint | Config file |
---|---|---|---|
PointPillars | 0 | link | cfg |
PointPillars | 10 | link | cfg |
Model | ST rounds | Checkpoint | Config file |
---|---|---|---|
SECOND | 0 | link | cfg |
SECOND | 10 | link | cfg |
nuScenes experiments
Model | ST rounds | Checkpoint | Config file |
---|---|---|---|
PointRCNN | 0 | link | cfg |
PointRCNN | 10 | link | cfg |
License
This project is under the MIT License. We use OpenPCDet in this project, which is under the Apache-2.0 License. We list our changes here.
Contact
Please open an issue if you have any questions about using this repo.
Acknowledgement
This work uses OpenPCDet. We also use the scripts from 3D_adapt_auto_driving for converting the Lyft and nuScenes datasets into KITTI format.