API for SemanticKITTI

This repository contains helper scripts to open, visualize, process, and evaluate results for point clouds and labels from the SemanticKITTI dataset.


Example of a 3D point cloud from sequence 13:
<img src="https://image.ibb.co/kyhCrV/scan1.png" width="1000">
Example of a 2D spherical projection from sequence 13:
<img src="https://image.ibb.co/hZtVdA/scan2.png" width="1000">
Example of voxelized point clouds for semantic scene completion:
<img src="https://user-images.githubusercontent.com/11506664/70214770-4d43ff80-173c-11ea-940d-3950d8f24eaf.png" width="1000">

Data organization

The data is organized in the following format:

/kitti/dataset/
          └── sequences/
                  ├── 00/
                  │   ├── poses.txt
                  │   ├── image_2/
                  │   ├── image_3/
                  │   ├── labels/
                  │   │     ├ 000000.label
                  │   │     └ 000001.label
                  │   ├── voxels/
                  │   │     ├ 000000.bin
                  │   │     ├ 000000.label
                  │   │     ├ 000000.occluded
                  │   │     ├ 000000.invalid
                  │   │     ├ 000001.bin
                  │   │     ├ 000001.label
                  │   │     ├ 000001.occluded
                  │   │     └ 000001.invalid
                  │   └── velodyne/
                  │         ├ 000000.bin
                  │         └ 000001.bin
                  ├── 01/
                  ├── 02/
                  .
                  .
                  .
                  └── 21/
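For reference, here is a minimal sketch of reading a single scan and its labels with numpy, assuming the standard SemanticKITTI binary layout (each velodyne .bin stores float32 tuples of x, y, z, remission; each .label stores one uint32 per point, with the semantic label in the lower 16 bits and the instance id in the upper 16 bits). The paths are placeholders:

```python
import numpy as np

# Read one scan: float32 values grouped as (x, y, z, remission) per point.
scan = np.fromfile("sequences/00/velodyne/000000.bin", dtype=np.float32)
scan = scan.reshape((-1, 4))

# Read the matching labels: one uint32 per point.
label = np.fromfile("sequences/00/labels/000000.label", dtype=np.uint32)
semantic_label = label & 0xFFFF   # lower 16 bits: semantic class id
instance_id = label >> 16         # upper 16 bits: instance id

assert scan.shape[0] == label.shape[0]  # one label per point
```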

The main configuration file for the data is config/semantic-kitti.yaml. Among other things, it defines the label names and colors and the mapping between the original label ids and the cross-entropy class ids used for training (learning_map and its inverse, learning_map_inv).
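As a small sketch (assuming PyYAML is available and that the file exposes a learning_map dictionary as described above), the configuration can be loaded and turned into a numpy lookup table for remapping labels to the training class ids:

```python
import yaml
import numpy as np

# Load the dataset configuration (sketch; assumes PyYAML).
with open("config/semantic-kitti.yaml") as f:
    config = yaml.safe_load(f)

# Build a lookup table from original label ids to training (cross-entropy) ids.
learning_map = config["learning_map"]
lut = np.zeros(max(learning_map.keys()) + 1, dtype=np.uint32)
for original_id, train_id in learning_map.items():
    lut[original_id] = train_id

# Example: remap the semantic part of the labels read as shown above.
# train_labels = lut[semantic_label]
```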

Dependencies for API:

System dependencies

$ sudo apt install python3-dev python3-pip python3-pyqt5.qtopengl # for visualization

Python dependencies

$ sudo pip3 install -r requirements.txt

Scripts:

ALL OF THE SCRIPTS CAN BE INVOKED WITH THE --help (-h) FLAG, FOR EXTRA INFORMATION AND OPTIONS.

Visualization

Point Clouds

To visualize the data, use the visualize.py script. It opens an interactive OpenGL visualization of the point clouds along with a spherical projection of each scan into a 64 x 1024 image.

$ ./visualize.py --sequence 00 --dataset /path/to/kitti/dataset/

where --sequence is the sequence to visualize and --dataset is the path to the KITTI dataset, i.e. the directory containing the sequences folder.

Navigation:

To visualize your predictions instead, use the --predictions option, which replaces the visualization of the labels with the visualization of your predictions:

$ ./visualize.py --sequence 00 --dataset /path/to/kitti/dataset/ --predictions /path/to/your/predictions

To directly compare two sets of data, use the compare.py script. It opens an interactive OpenGL visualization of the point cloud labels from both sets.

$ ./compare.py --sequence 00 --dataset_a /path/to/dataset_a/ --dataset_b /path/to/kitti/dataset_b/

where --dataset_a and --dataset_b are the paths to the two datasets (e.g. ground truth and predictions) to compare for the given sequence.

Navigation:

Voxel Grids for Semantic Scene Completion

To visualize the voxel data, use the visualize_voxels.py script. It opens an interactive OpenGL visualization of the voxel grids, with options to show the provided voxelizations of the LiDAR data.

$ ./visualize_voxels.py --sequence 00 --dataset /path/to/kitti/dataset/

where the options have the same meaning as for visualize.py.

Navigation:

Note: Holding the forward/backward buttons triggers the playback mode.
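As a rough, hedged sketch (the 256 x 256 x 32 grid size and the bit-packed encoding of the .bin/.occluded/.invalid files are assumptions not stated on this page; .label is assumed to store one uint16 per voxel), the voxel files can be read like this:

```python
import numpy as np

GRID_DIMS = (256, 256, 32)  # assumed voxel grid dimensions

def read_bitmap(path, dims=GRID_DIMS):
    """Read a bit-packed voxel mask (.bin, .occluded or .invalid)."""
    bits = np.unpackbits(np.fromfile(path, dtype=np.uint8))
    return bits.reshape(dims)

occupancy = read_bitmap("sequences/00/voxels/000000.bin")     # 1 = occupied voxel
invalid = read_bitmap("sequences/00/voxels/000000.invalid")   # 1 = invalid voxel

labels = np.fromfile("sequences/00/voxels/000000.label", dtype=np.uint16)
labels = labels.reshape(GRID_DIMS)                            # semantic label per voxel
```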

LiDAR-based Moving Object Segmentation (LiDAR-MOS)

To visualize the data, use the visualize_mos.py script. It opens an interactive OpenGL visualization of the point clouds with their moving object labels.

$ ./visualize_mos.py --sequence 00 --dataset /path/to/kitti/dataset/

where the options have the same meaning as for visualize.py.

Navigation:

Note: Holding the forward/backward buttons triggers the playback mode.

Evaluation

To evaluate the predictions of a method, use evaluate_semantics.py for semantic segmentation, evaluate_completion.py for semantic scene completion, and evaluate_panoptic.py for panoptic segmentation.

Important: Both the labels and the predictions need to be in the original label format. If a method learns the cross-entropy mapped classes, its outputs need to be passed through the learning_map_inv dictionary to map them back to the original dataset format. This prevents changes in the dataset's interest classes from affecting the intermediate outputs of approaches, since the original labels stay the same. For semantic segmentation, we provide the remap_semantic_labels.py script to perform this mapping before training and again before evaluation, selecting the interest classes in the configuration file.

The predictions need to be either in a separate directory that mirrors the dataset structure (sequences/XX/predictions/*.label, as in the submission layout below) or directly inside the dataset directory.

If instead the IoU vs. distance is wanted, the evaluation is performed in the same way but with the evaluate_semantics_by_distance.py script. This analyzes the IoU for a set of 5 distance ranges: {[0m, 10m), [10m, 20m), [20m, 30m), [30m, 40m), [40m, 50m)}.
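For illustration only (remap_semantic_labels.py is the provided tool for this), here is a sketch of the inverse mapping described above, assuming the predictions are stored as uint32 training class ids; the file names are placeholders:

```python
import yaml
import numpy as np

with open("config/semantic-kitti.yaml") as f:
    config = yaml.safe_load(f)

# Lookup table from training (cross-entropy) ids back to original label ids.
learning_map_inv = config["learning_map_inv"]
lut = np.zeros(max(learning_map_inv.keys()) + 1, dtype=np.uint32)
for train_id, original_id in learning_map_inv.items():
    lut[train_id] = original_id

pred_train = np.fromfile("000000.label", dtype=np.uint32)  # predictions in training ids
pred_original = lut[pred_train & 0xFFFF]                   # back to original label ids
pred_original.astype(np.uint32).tofile("000000_remapped.label")
```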

Validation

To ensure that your zip file is valid, we provide a small validation script validate_submission.py that checks for the correct folder structure and consistent number of labels for each scan.

The submission is expected to be a zip file containing the following folder structure (matching the separate-directory case above):

├ description.txt (optional)
sequences
  ├── 11
  │   └── predictions
  │         ├ 000000.label
  │         ├ 000001.label
  │         ├ ...
  ├── 12
  │   └── predictions
  │         ├ 000000.label
  │         ├ 000001.label
  │         ├ ...
  ├── 13
  .
  .
  .
  └── 21

In summary, you only have to provide the label files containing your predictions for every point of each scan; this is also checked by our validation script.

Run:

$ ./validate_submission.py --task {segmentation|completion|panoptic} /path/to/submission.zip /path/to/kitti/dataset

to check your submission.zip.

Note: We don't check if the labels are valid, since invalid labels are simply ignored by the evaluation script.
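For intuition, here is a rough sketch of the kind of consistency check the validation script performs for the semantic segmentation task (the paths are placeholders; use validate_submission.py for the actual validation):

```python
import os

dataset = "/path/to/kitti/dataset"            # placeholder paths
submission = "/path/to/unzipped/submission"

# Every test sequence (11..21) should contain one prediction per velodyne scan.
for seq in range(11, 22):
    scan_dir = os.path.join(dataset, "sequences", f"{seq:02d}", "velodyne")
    pred_dir = os.path.join(submission, "sequences", f"{seq:02d}", "predictions")
    n_scans = len([f for f in os.listdir(scan_dir) if f.endswith(".bin")])
    n_preds = len([f for f in os.listdir(pred_dir) if f.endswith(".label")])
    if n_scans != n_preds:
        print(f"sequence {seq:02d}: {n_preds} predictions for {n_scans} scans")
```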

(New!) Adding Approach Information

If you want to show more information about your approach under "Detailed Results" on the updated CodaLab leaderboards, provide an additional description.txt file in the submission archive containing, for example:

name: Auto-MOS
pdf url: https://arxiv.org/pdf/2201.04501.pdf
code url: https://github.com/PRBonn/auto-mos

where name is the name of the method, pdf url is a link to the paper PDF (or empty), and code url is a link to the code (or empty). If the information is not available, we use Anonymous for the name and n/a for the urls.
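For example, the description.txt can be added to an existing submission archive with Python's zipfile module (the archive name is a placeholder):

```python
import zipfile

# Append a description.txt with the fields shown above to the archive.
with zipfile.ZipFile("submission.zip", "a") as zf:
    zf.writestr(
        "description.txt",
        "name: Auto-MOS\n"
        "pdf url: https://arxiv.org/pdf/2201.04501.pdf\n"
        "code url: https://github.com/PRBonn/auto-mos\n",
    )
```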

Statistics

Generation

Docker for API

If you prefer not to install the requirements locally, a Docker container is provided to run the scripts.

To build and run the container in an interactive session, which allows running X11 apps (and GL) and copies this repo to the working directory, use

$ ./docker.sh /path/to/dataset

where /path/to/dataset is the location of your SemanticKITTI dataset; it will be available inside the container in ~/data (i.e. /home/developer/data) for use with the API. This is done by creating a shared volume, so it can be any directory containing data that is to be used by the API scripts.

Citation:

If you use this dataset and/or this API in your work, please cite the corresponding paper:

@inproceedings{behley2019iccv,
    author = {J. Behley and M. Garbade and A. Milioto and J. Quenzel and S. Behnke and C. Stachniss and J. Gall},
     title = {{SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences}},
 booktitle = {Proc.~of the IEEE/CVF International Conf.~on Computer Vision (ICCV)},
      year = {2019}
}

And the paper for the original KITTI dataset:

@inproceedings{geiger2012cvpr,
    author = {A. Geiger and P. Lenz and R. Urtasun},
     title = {{Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite}},
 booktitle = {Proc.~of the IEEE Conf.~on Computer Vision and Pattern Recognition (CVPR)},
     pages = {3354--3361},
      year = {2012}
}