
Open-world semantic segmentation for Lidar Point Clouds

Official implementation of "Open-world semantic segmentation for Lidar Point Clouds", ECCV 2022. After saving the corresponding inference result files with this repository, use semantic_kitti_api and nuScenes_api to evaluate the performance.

Installation

Requirements

Data Preparation

SemanticKITTI

./
├── ...
└── path_to_data_shown_in_config/
    └── sequences/
        ├── 00/
        │   ├── velodyne/
        │   │   ├── 000000.bin
        │   │   ├── 000001.bin
        │   │   └── ...
        │   └── labels/
        │       ├── 000000.label
        │       ├── 000001.label
        │       └── ...
        ├── 08/ # for validation
        ├── 11/ # 11-21 for testing
        ├── ...
        └── 21/

nuScenes

./
├── ...
├── v1.0-trainval/
├── v1.0-test/
├── samples/
├── sweeps/
├── maps/
├── lidarseg/
│   ├── v1.0-trainval/
│   ├── v1.0-mini/
│   ├── v1.0-test/
│   ├── nuscenes_infos_train.pkl
│   ├── nuscenes_infos_val.pkl
│   └── nuscenes_infos_test.pkl
└── panoptic/
    ├── v1.0-trainval/
    ├── v1.0-mini/
    └── v1.0-test/

Checkpoints

We provide the checkpoints of the open-set model and the incremental learning model here: checkpoints

Open-set semantic segmentation

Training for SemanticKITTI

All scripts for the SemanticKITTI dataset are in ./semantickitti_scripts.

MSP/Maxlogit method

./train_naive.sh
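For reference, the MSP (maximum softmax probability) and MaxLogit baselines score each point's uncertainty directly from its class logits. The sketch below is a minimal pure-Python illustration of the two scores, not the repository's implementation; function and variable names are assumptions.

```python
import math

def msp_and_maxlogit(logits):
    """Return (MSP uncertainty, MaxLogit uncertainty) for one point's logits."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]   # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    msp_uncertainty = 1.0 - max(probs)         # low when the model is confident
    maxlogit_uncertainty = -m                  # MaxLogit variant: negative max logit
    return msp_uncertainty, maxlogit_uncertainty

# A peaked logit vector yields lower uncertainty than a flat one.
confident = msp_and_maxlogit([8.0, 0.5, -1.0])
ambiguous = msp_and_maxlogit([1.0, 0.9, 0.8])
```

Points whose score exceeds a threshold are then flagged as unknown.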

Upper bound

./train_upper.sh

RCF - Predictive Distribution Calibration

./train_ood_basic.sh

RCF - Unknown Object Synthesis

./train_ood_final.sh

MC-Dropout

./train_dropout.sh
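The MC-Dropout baseline keeps dropout active at test time, runs several stochastic forward passes per point, and uses the predictive entropy of the averaged class distribution as the uncertainty score. A toy sketch of that idea, with the network's stochastic pass simulated by noisy logits (an assumption for illustration, not the repository's model):

```python
import math
import random

def mc_dropout_uncertainty(forward_passes):
    """Average the per-pass class probabilities, return predictive entropy."""
    n = len(forward_passes)
    num_classes = len(forward_passes[0])
    mean = [sum(p[c] for p in forward_passes) / n for c in range(num_classes)]
    return -sum(p * math.log(p) for p in mean if p > 0)

def noisy_softmax(logits, sigma=0.5):
    """Stand-in for one stochastic forward pass: softmax over perturbed logits."""
    noisy = [l + random.gauss(0.0, sigma) for l in logits]
    m = max(noisy)
    exps = [math.exp(x - m) for x in noisy]
    total = sum(exps)
    return [e / total for e in exps]

random.seed(0)
# T = 20 stochastic passes for a single point over 3 classes.
passes_confident = [noisy_softmax([6.0, 0.0, 0.0]) for _ in range(20)]
passes_ambiguous = [noisy_softmax([1.0, 1.0, 1.0]) for _ in range(20)]
```

Higher entropy marks the point as more likely to belong to an unknown class.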

Evaluation for SemanticKITTI

We save the in-distribution prediction labels and uncertainty scores for every point in the validation set; these files are then used to calculate the closed-set mIoU and the open-set metrics, including AUPR, AUROC, and FPR95.
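As a reference for how the open-set metrics are typically computed from the saved uncertainty scores, the sketch below shows FPR95 (false-positive rate at 95% true-positive rate), treating unknown points as positives. This is a minimal pure-Python illustration, not the evaluation code itself; AUPR and AUROC are usually computed analogously from the same scores (e.g. with scikit-learn's average_precision_score and roc_auc_score).

```python
def fpr_at_95_tpr(scores_unknown, scores_known):
    """FPR at the loosest threshold that still flags >= 95% of unknown points.

    Higher uncertainty should indicate 'unknown', so a point is predicted
    positive when its score >= threshold.
    """
    thresholds = sorted(set(scores_unknown + scores_known), reverse=True)
    for t in thresholds:
        tpr = sum(s >= t for s in scores_unknown) / len(scores_unknown)
        if tpr >= 0.95:
            # fraction of known (in-distribution) points wrongly flagged
            return sum(s >= t for s in scores_known) / len(scores_known)
    return 1.0

# Toy scores: unknown points mostly high, known points mostly low.
fpr95 = fpr_at_95_tpr([0.9, 0.8, 0.85, 0.7, 0.95],
                      [0.1, 0.2, 0.3, 0.75, 0.4])
```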

MSP/Maxlogit

./val.sh

Upper bound

./val_upper.sh

RCF

./val_ood.sh

MC-Dropout

./val_dropout.sh

Training for nuScenes

All scripts for the nuScenes dataset are in ./nuScenes_scripts.

MSP/Maxlogit method

./train_nusc_naive.sh

Upper bound

./train_nusc.sh

RCF - Predictive Distribution Calibration

./train_nusc_ood_basic.sh

RCF - Unknown Object Synthesis

./train_nusc_ood_final.sh

MC-Dropout

./train_nusc_dropout.sh

Evaluation for nuScenes

MSP/Maxlogit

./val_nusc.sh

Upper bound

./val_nusc_upper.sh

RCF

./val_nusc_ood.sh

MC-Dropout

./val_nusc_dropout.sh

Incremental learning

Training for SemanticKITTI

All scripts for the SemanticKITTI dataset are in ./semantickitti_scripts.

First, use the trained base model to generate and save the pseudo labels of the training set:

./val_generate_incre_labels.sh

Then, change the loading path of the pseudo labels in /dataloader/pc_dataset.py, line 177.
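The edit at that line is of this general shape; the variable name below is hypothetical and only an illustration, so check your copy of the file for the actual identifier:

```python
# dataloader/pc_dataset.py (around line 177) -- hypothetical illustration:
# point the pseudo-label directory at the folder written by
# val_generate_incre_labels.sh
incre_label_path = "/path/to/generated/pseudo_labels"
```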

Now, conduct incremental learning using the pseudo labels:

./train_incre.sh

Evaluation for SemanticKITTI

For validation set:

./val_incre.sh

For test set:

./test_incre.sh

Training for nuScenes

All scripts for nuScenes dataset are in ./nuScenes_scripts.

For new class 1 (barrier)

First, generate and save the pseudo labels of the training set:

./val_nusc_generate_incre_labels.sh

Then, change the loading path of the pseudo labels in /dataloader/pc_dataset.py, line 266.

Now, conduct incremental learning using the pseudo labels:

./train_nusc_incre.sh

For new classes 5 (construction-vehicle), 8 (traffic-cone), and 9 (trailer)

First, generate and save the pseudo labels of the training set:

./val_nusc_generate_incre_labels.sh

Then, change the loading path of the pseudo labels in /dataloader/pc_dataset.py, line 266.

Now, conduct incremental learning using the pseudo labels:

./train_nusc_incre.sh

Evaluation for nuScenes

For validation set:

./val_incre.sh

For test set:

./test_incre.sh

Then, upload the generated files to the evaluation server.