License: CC BY-NC-SA 4.0

GrowSP: Unsupervised Semantic Segmentation of 3D Point Clouds (CVPR 2023)

Zihui Zhang, Bo Yang, Bing Wang, Bo Li

Overview

We propose the first unsupervised 3D semantic segmentation method, learning from growing superpoints in point clouds.

<p align="center"> <img src="figs/opening.png" alt="drawing" width=800/> </p>

Our method demonstrates promising results on multiple datasets:

<p align="center"> <img src="figs/s3dis_train.gif" alt="drawing" width=1000/> <img src="figs/s3dis_test.gif" alt="drawing" width=1000/> </p> <p align="center"> <img src="figs/scannet_train.gif" alt="drawing" width=1000/> <img src="figs/scannet_test.gif" alt="drawing" width=1000/> </p> <p align="center"> <img src="figs/semantickitti_train.gif" alt="drawing" width=1000/> <img src="figs/semantickitti_test.gif" alt="drawing" width=1000/> </p>

Full demo (YouTube)

<p align="center"> <a href="https://youtu.be/x_UW7hU3Ows"><img src="figs/GrowSP_thumbnail.png" width=500></a> </p>

1. Setup

Setting up this project only involves installing its dependencies.

Installing dependencies

To install all the dependencies, please run the following:

```
sudo apt install build-essential python3-dev libopenblas-dev
conda env create -f env.yml
conda activate growsp
pip install -U MinkowskiEngine --install-option="--blas=openblas" -v --no-deps
```

2. Running codes

2.1 S3DIS

The S3DIS dataset can be found here. Download the file named "Stanford3dDataset_v1.2_Aligned_Version.zip", uncompress it, and move it to ${your_S3DIS}. Note that line 180389 of the file Area_5/hallway_6/Annotations/ceiling_1.txt contains an invalid character and needs to be fixed manually before preprocessing.
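If you want to locate the broken line programmatically, the standalone sketch below scans an annotation file and reports any line whose tokens do not all parse as numbers. The helper name `find_malformed_lines` is our own illustration, not part of the GrowSP codebase:

```python
def find_malformed_lines(path):
    """Return (line_number, line) pairs whose whitespace-separated
    tokens do not all parse as floats (e.g. stray control characters)."""
    bad = []
    with open(path, "r", errors="replace") as f:
        for i, line in enumerate(f, start=1):
            try:
                # S3DIS annotation rows are "x y z r g b"; every token
                # should be numeric, so a parse failure flags corruption.
                [float(token) for token in line.split()]
            except ValueError:
                bad.append((i, line.rstrip("\n")))
    return bad
```

For the file above you would call `find_malformed_lines("Area_5/hallway_6/Annotations/ceiling_1.txt")`; the fix itself (editing the offending character) still has to be done by hand.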

```
python data_prepare/data_prepare_S3DIS.py --data_path ${your_S3DIS}
```

This code will preprocess S3DIS and put the results under ./data/S3DIS/input

```
python data_prepare/initialSP_prepare_S3DIS.py
```

This code will construct superpoints on S3DIS and put them under ./data/S3DIS/initial_superpoints

```
CUDA_VISIBLE_DEVICES=0 python train_S3DIS.py
```

The output model and log file will be saved in ./ckpt/S3DIS by default.

2.2 ScanNet

Download the ScanNet dataset from the official website. You need to sign the terms of use. Uncompress the folder and move it to ${your_ScanNet}.

```
python data_prepare/data_prepare_ScanNet.py --data_path ${your_ScanNet}
```

This code will preprocess ScanNet and put the results under ./data/ScanNet/processed

```
python data_prepare/initialSP_prepare_ScanNet.py
```

This code will construct superpoints on ScanNet and put them under ./data/ScanNet/initial_superpoints

```
CUDA_VISIBLE_DEVICES=0 python train_ScanNet.py
```

The output model and log file will be saved in ./ckpt/ScanNet by default.

2.3 SemanticKITTI

Please first download the following items from SemanticKITTI:

Uncompress and merge the velodyne and labels of each sequence. The organized dataset should be as follows:

```
your_SemanticKITTI
└── sequences
    ├── 00
    │   ├── velodyne
    │   └── labels
    ├── 01
    ...
```
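Before running the preprocessing script, it can help to verify that your directory structure matches the tree above. The helper below is a hypothetical convenience sketch, not part of the repository; `your_SemanticKITTI` stands for wherever you unpacked the data:

```python
import os

def check_kitti_layout(root, sequences=("00", "01")):
    """Return a list of directories missing from the expected
    root/sequences/<seq>/{velodyne,labels} layout."""
    missing = []
    for seq in sequences:
        for sub in ("velodyne", "labels"):
            path = os.path.join(root, "sequences", seq, sub)
            if not os.path.isdir(path):
                missing.append(path)
    return missing
```

`check_kitti_layout("your_SemanticKITTI")` returns an empty list when the layout matches; otherwise it lists the directories you still need to create or merge.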
```
python data_prepare/data_prepare_SemanticKITTI.py --data_path ${your_SemanticKITTI}/sequences
```

This code will preprocess SemanticKITTI and put the results under ./data/SemanticKITTI/dataset

```
python data_prepare/initialSP_prepare_SemanticKITTI.py
```

This code will construct superpoints on SemanticKITTI and put them under ./data/SemanticKITTI/initial_superpoints

```
CUDA_VISIBLE_DEVICES=0 python train_SemanticKITTI.py
```

The output model and log file will be saved in ./ckpt/SemanticKITTI by default.

3. Trained models

The trained models for these three datasets can be found here.