BEVContrast: Self-Supervision in BEV Space for Automotive Lidar Point Clouds

Official PyTorch implementation of BEVContrast. More details can be found in the paper:

BEVContrast: Self-Supervision in BEV Space for Automotive Lidar Point Clouds, 3DV 2024 [arXiv] by Corentin Sautier, Gilles Puy, Alexandre Boulch, Renaud Marlet, and Vincent Lepetit

Overview of the method

If you use BEVContrast in your research, please cite:

@inproceedings{Sautier_3DV24,
  author    = {Corentin Sautier and Gilles Puy and Alexandre Boulch and Renaud Marlet and Vincent Lepetit},
  title     = {{BEVContrast}: Self-Supervision in BEV Space for Automotive Lidar Point Clouds},
  booktitle = {International Conference on 3D Vision (3DV)},
  year      = 2024
}

Dependencies

To install the various dependencies, you can run pip install -r requirements.txt.

Datasets

The code provided can be used with nuScenes, SemanticKITTI, and SemanticPOSS. Put the datasets you intend to use in the datasets folder (a symbolic link is accepted).
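For instance, a symbolic link can be created as follows; the source path below is a placeholder to replace with wherever your copy of the dataset actually lives:

```shell
# Create the datasets folder and link an existing dataset location into it.
# /data/nuscenes is a hypothetical path; point it at your actual dataset root.
mkdir -p datasets
ln -sfn /data/nuscenes datasets/nuscenes
```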

Pre-trained models

Minkowski SR-UNet

SR-UNet pre-trained on nuScenes

SR-UNet pre-trained on SemanticKITTI

SPconv VoxelNet

VoxelNet pre-trained on nuScenes

Reproducing the results

When using MinkowskiEngine (on SemanticKITTI), please set the OMP_NUM_THREADS environment variable to the number of CPU cores on your machine.
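For example, on Linux this can be done before launching training; querying the core count with `nproc` is one option, and an explicit value works just as well:

```shell
# Set OMP_NUM_THREADS to the number of available CPU cores.
# nproc is a GNU coreutils command; substitute an explicit value if needed.
export OMP_NUM_THREADS=$(nproc)
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
```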

Semantic segmentation pre-training

The SemanticKITTI config file uses MinkowskiEngine by default, to keep backward compatibility with previous work, while the nuScenes config uses Torchsparse, which is generally faster. Switching between libraries in the config files is easy, but note that, although the architectures are similar, weights from one library cannot easily be transferred to the other.

python train.py --config_file cfgs/pretrain_sk_minkunet.yaml --name minkunet_bevcontrast_sk

python train.py --config_file cfgs/pretrain_ns_minkunet.yaml --name minkunet_bevcontrast_ns

Semantic segmentation downstream

The specific code for downstream semantic segmentation has been adapted from ALSO.

Results on nuScenes' validation set using a Minkowski SR-UNet 34:

| Method        | 0.1% | 1%   | 10%  | 50%  | 100% |
|---------------|------|------|------|------|------|
| Random init.  | 21.6 | 35.0 | 57.3 | 69.0 | 71.2 |
| PointContrast | 27.1 | 37.0 | 58.9 | 69.4 | 71.1 |
| DepthContrast | 21.7 | 34.6 | 57.4 | 69.2 | 71.2 |
| ALSO          | 26.2 | 37.4 | 59.0 | 69.8 | 71.8 |
| BEVContrast   | 26.6 | 37.9 | 59.0 | 70.5 | 72.2 |

To launch a downstream experiment with a Torchsparse SR-UNet, you can use the following commands, together with cfg.downstream.checkpoint_dir=[checkpoint directory] cfg.downstream.checkpoint_name=[checkpoint name]:

cd downstream

# 100%
python train_downstream_semseg.py cfg=nuscenes_torchsparse cfg.downstream.max_epochs=30 cfg.downstream.val_interval=5 cfg.downstream.skip_ratio=1

# 50%
python train_downstream_semseg.py cfg=nuscenes_torchsparse cfg.downstream.max_epochs=50 cfg.downstream.val_interval=5 cfg.downstream.skip_ratio=2

# 10%
python train_downstream_semseg.py cfg=nuscenes_torchsparse cfg.downstream.max_epochs=100 cfg.downstream.val_interval=10 cfg.downstream.skip_ratio=10

# 1%
python train_downstream_semseg.py cfg=nuscenes_torchsparse cfg.downstream.max_epochs=500 cfg.downstream.val_interval=50 cfg.downstream.skip_ratio=100

# 0.1%
python train_downstream_semseg.py cfg=nuscenes_torchsparse cfg.downstream.max_epochs=1000 cfg.downstream.val_interval=100 cfg.downstream.skip_ratio=1000
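Putting the pieces together, a complete 10% run might look as follows; the checkpoint directory and file name are placeholders for wherever your pre-training run stored its weights, and this command is meant to be run from the downstream folder:

```shell
# Example 10% fine-tuning run. checkpoint_dir and checkpoint_name below are
# placeholders; replace them with the output location of your pre-training run.
python train_downstream_semseg.py cfg=nuscenes_torchsparse \
    cfg.downstream.max_epochs=100 cfg.downstream.val_interval=10 cfg.downstream.skip_ratio=10 \
    cfg.downstream.checkpoint_dir=path/to/checkpoints cfg.downstream.checkpoint_name=checkpoint.ckpt
```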

Results on SemanticKITTI's validation set using a Minkowski SR-UNet 18:

| Method        | 0.1% | 1%   | 10%  | 50%  | 100% |
|---------------|------|------|------|------|------|
| Random init.  | 30.0 | 46.2 | 57.6 | 61.8 | 62.7 |
| PointContrast | 32.4 | 47.9 | 59.7 | 62.7 | 63.4 |
| SegContrast   | 32.3 | 48.9 | 58.7 | 62.1 | 62.3 |
| DepthContrast | 32.5 | 49.0 | 60.3 | 62.9 | 63.9 |
| STSSL         | 32.0 | 49.4 | 60.0 | 62.9 | 63.3 |
| ALSO          | 35.0 | 50.0 | 60.5 | 63.4 | 63.6 |
| TARL          | 37.9 | 52.5 | 61.2 | 63.4 | 63.7 |
| BEVContrast   | 39.7 | 53.8 | 61.4 | 63.4 | 61.1 |

To launch a downstream experiment with a Minkowski SR-UNet, you can use the following commands, together with cfg.downstream.checkpoint_dir=[checkpoint directory] cfg.downstream.checkpoint_name=[checkpoint name]:

cd downstream

# 100%
python train_downstream_semseg.py cfg=semantickitti_minkowski cfg.downstream.max_epochs=30 cfg.downstream.val_interval=5 cfg.downstream.skip_ratio=1

# 50%
python train_downstream_semseg.py cfg=semantickitti_minkowski cfg.downstream.max_epochs=50 cfg.downstream.val_interval=5 cfg.downstream.skip_ratio=2

# 10%
python train_downstream_semseg.py cfg=semantickitti_minkowski cfg.downstream.max_epochs=100 cfg.downstream.val_interval=10 cfg.downstream.skip_ratio=10

# 1%
python train_downstream_semseg.py cfg=semantickitti_minkowski cfg.downstream.max_epochs=500 cfg.downstream.val_interval=50 cfg.downstream.skip_ratio=100

# 0.1%
python train_downstream_semseg.py cfg=semantickitti_minkowski cfg.downstream.max_epochs=1000 cfg.downstream.val_interval=100 cfg.downstream.skip_ratio=1000

Object detection pre-training

python train.py --config_file cfgs/pretrain_ns_spconv.yaml --name voxelnet_bevcontrast_ns

Object detection downstream

To retain compatibility with previous work and with this one, please use the OpenPCDet codebase with default parameters for SECOND or PV-RCNN, and without multiprocessing.

Acknowledgment

Part of the codebase has been adapted from OpenPCDet, ALSO, and SLidR.

License

BEVContrast is released under the Apache 2.0 license.