GrowSP: Unsupervised Semantic Segmentation of 3D Point Clouds (CVPR 2023)
Zihui Zhang, Bo Yang, Bing Wang, Bo Li
Overview
We propose the first unsupervised 3D semantic segmentation method, learning from growing superpoints in point clouds.
<p align="center"> <img src="figs/opening.png" alt="drawing" width=800/> </p>Our method demonstrates promising results on multiple datasets:
- S3DIS Dataset
- ScanNet Dataset
- SemanticKITTI Dataset
Full demo (YouTube)
<p align="center"> <a href="https://youtu.be/x_UW7hU3Ows"><img src="figs/GrowSP_thumbnail.png" width=500></a> </p>

1. Setup
Setting up this project only requires installing its dependencies.
Installing dependencies
To install all the dependencies, please run the following:
sudo apt install build-essential python3-dev libopenblas-dev
conda env create -f env.yml
conda activate growsp
pip install -U MinkowskiEngine --install-option="--blas=openblas" -v --no-deps
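After installation, a quick sanity check can confirm that PyTorch and MinkowskiEngine import correctly and that a GPU is visible. This is only a minimal sketch; the exact versions depend on env.yml.

```python
# Minimal environment sanity check (assumes the 'growsp' conda env is active).
import torch
import MinkowskiEngine as ME

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("MinkowskiEngine:", ME.__version__)
```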
2. Running codes
2.1 S3DIS
The S3DIS dataset can be found here.
Download the file named "Stanford3dDataset_v1.2_Aligned_Version.zip", uncompress it, and move it to ${your_S3DIS}.
Note that there is an error in line 180389 of the file Area_5/hallway_6/Annotations/ceiling_1.txt, which needs to be fixed manually (a short inspection sketch is given below).
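To locate the problematic line before editing it, a short script like the following can print it. The path prefix is an assumption about where you uncompressed the dataset; what exactly needs to change depends on what you see, so the fix itself remains manual.

```python
# Print line 180389 of the known-problematic annotation file so it can be fixed by hand.
# The path prefix is an assumption; point it at your own ${your_S3DIS} location.
path = "Stanford3dDataset_v1.2_Aligned_Version/Area_5/hallway_6/Annotations/ceiling_1.txt"

with open(path, "rb") as f:          # read as bytes to expose any stray characters
    for i, line in enumerate(f, start=1):
        if i == 180389:
            print(repr(line))        # inspect, then correct this line in a text editor
            break
```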
- Preparing the dataset:
python data_prepare/data_prepare_S3DIS.py --data_path ${your_S3DIS}
This script preprocesses S3DIS and puts the result under ./data/S3DIS/input
- Construct initial superpoints:
python data_prepare/initialSP_prepare_S3DIS.py
This script constructs initial superpoints for S3DIS and puts them under ./data/S3DIS/initial_superpoints
- Training:
CUDA_VISIBLE_DEVICES=0 python train_S3DIS.py
The output model and log file will be saved in ./ckpt/S3DIS by default. An optional check of the prepared data is sketched below.
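Before launching training, it can help to verify that the two preparation steps actually wrote their outputs. The sketch below only counts entries under the output folders stated above; the same check can be adapted for ScanNet and SemanticKITTI by swapping the paths.

```python
# Optional pre-flight check: confirm the preprocessing steps produced output files.
# Directory names follow the paths stated above; entry counts are only a rough indicator.
from pathlib import Path

for d in ["./data/S3DIS/input", "./data/S3DIS/initial_superpoints"]:
    entries = list(Path(d).rglob("*"))
    print(f"{d}: {len(entries)} entries")
```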
2.2 ScanNet
Download the ScanNet dataset from the official website.
You need to sign the terms of use. Uncompress the folder and move it to ${your_ScanNet}.
- Preparing the dataset:
python data_prepare/data_prepare_ScanNet.py --data_path ${your_ScanNet}
This script preprocesses ScanNet and puts the result under ./data/ScanNet/processed
- Construct initial superpoints:
python data_prepare/initialSP_prepare_ScanNet.py
This script constructs initial superpoints for ScanNet and puts them under ./data/ScanNet/initial_superpoints
- Training:
CUDA_VISIBLE_DEVICES=0 python train_ScanNet.py
The output model and log file will be saved in ./ckpt/ScanNet by default.
2.3 SemanticKITTI
Please first download the required items (the velodyne point clouds and the labels) from SemanticKITTI. Uncompress the archives and merge the velodyne and labels folders of each sequence; a sketch of this merging step is given after the directory tree below.
The organized dataset should be as follows:
${your_SemanticKITTI}
└── sequences
    ├── 00
    │   ├── velodyne
    │   └── labels
    ├── 01
    └── ...
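One way to produce this layout is to copy the per-sequence velodyne and labels folders from the two downloads into a single tree. The sketch below is only an illustration: the two source roots are assumptions about where the archives were uncompressed, so adjust them to your setup.

```python
# Illustrative merge of the downloaded velodyne scans and labels into one tree.
# The two source roots below are assumptions about where the archives were uncompressed.
import shutil
from pathlib import Path

velo_root = Path("data_odometry_velodyne/dataset/sequences")   # assumed extraction path
label_root = Path("data_odometry_labels/dataset/sequences")    # assumed extraction path
out_root = Path("your_SemanticKITTI/sequences")                # target layout shown above

for seq_dir in sorted(velo_root.iterdir()):
    seq = seq_dir.name
    shutil.copytree(seq_dir / "velodyne", out_root / seq / "velodyne", dirs_exist_ok=True)
    label_seq = label_root / seq / "labels"
    if label_seq.is_dir():                                      # some sequences may have no labels
        shutil.copytree(label_seq, out_root / seq / "labels", dirs_exist_ok=True)
```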
- Preparing the dataset:
python data_prepare/data_prepare_SemanticKITTI.py --data_path ${your_SemanticKITTI}/sequences
This script preprocesses SemanticKITTI and puts the result under ./data/SemanticKITTI/dataset
- Construct initial superpoints:
python data_prepare/initialSP_prepare_SemanticKITTI.py
This script constructs initial superpoints for SemanticKITTI and puts them under ./data/SemanticKITTI/initial_superpoints
- Training:
CUDA_VISIBLE_DEVICES=0 python train_SemanticKITTI.py
The output model and log file will be saved in ./ckpt/SemanticKITTI by default.
3. Trained models
The trained models for all three datasets can be found here.
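If you only want to inspect a downloaded checkpoint rather than run the full pipeline, a standard PyTorch load is usually enough. The file name below is a placeholder, and the assumption that the checkpoint is a plain dictionary may not hold for every file.

```python
# Peek at a downloaded checkpoint (file name below is a placeholder).
import torch

ckpt = torch.load("ckpt/S3DIS/model.pth", map_location="cpu")  # assumed path and name
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])   # e.g. parameter names or wrapper keys
else:
    print(type(ckpt))
```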