# Pointnet2.ScanNet

PointNet++ Semantic Segmentation on ScanNet in PyTorch with CUDA acceleration, based on the original PointNet++ repo and the PyTorch implementation with CUDA.
## Performance
The semantic segmentation results in percentage on the ScanNet train/val split in `data/`.
| use XYZ | use color | use normal | use multiview | use MSG | mIoU | weights |
|---|---|---|---|---|---|---|
| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | - | - | 50.48 | download |
| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | - | :heavy_check_mark: | 52.50 | download |
| :heavy_check_mark: | - | :heavy_check_mark: | :heavy_check_mark: | - | 65.75 | download |
| :heavy_check_mark: | - | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | 67.60 | download |
If you want to play around with the pre-trained model, please download the zip file and unzip it under `outputs/`.
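If you just want to poke at a downloaded checkpoint before running the evaluation script, a plain PyTorch load is enough. The folder and file names below are placeholders, not the actual archive contents; substitute whatever the unzipped folder contains.

```python
import torch

# Placeholder path -- replace with the actual folder/file names from the unzipped archive.
ckpt = torch.load("outputs/<unzipped_folder>/model.pth", map_location="cpu")
# Depending on how it was saved, this is either a bare state_dict or a dict wrapping one.
print(ckpt.keys() if isinstance(ckpt, dict) else type(ckpt))
```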
## Installation
### Requirements
- Linux (tested on Ubuntu 14.04/16.04)
- Python 3.6+
- PyTorch 1.8
- TensorBoardX
Please run `conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=10.2 -c pytorch` to install PyTorch 1.8, and `pip install -r requirements.txt` to install the other required packages.
### Install CUDA accelerated PointNet++ library
Install this library by running the following commands:

```shell
cd pointnet2
python setup.py install
```
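To verify the build, you can try calling one of the compiled ops directly. The import path below is an assumption based on the package name; adjust it to whatever `setup.py` actually installs.

```python
import torch
# Assumed import path -- adjust if the extension is installed under a different module name.
from pointnet2.pointnet2_utils import furthest_point_sample

xyz = torch.rand(4, 4096, 3).cuda()     # (batch, num_points, 3) point coordinates
idx = furthest_point_sample(xyz, 1024)  # indices of 1024 farthest-point-sampled points
print(idx.shape)                        # expected: torch.Size([4, 1024])
```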
### Configure
Change the path configurations for the ScanNet data in `lib/config.py`.
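As a rough guide, these are the kinds of paths to point at. The attribute names here are illustrative only, not the real ones defined in `lib/config.py`; check that file for the actual keys.

```python
# Illustrative sketch only -- lib/config.py defines the real attribute names.
from easydict import EasyDict

CONF = EasyDict()
CONF.SCANNET_DIR = "/path/to/ScanNet/scans"       # raw ScanNet scans
CONF.SCANNET_FRAMES = "/path/to/frames_square"    # extracted frames (for multiview)
CONF.PREP_SCANS = "preprocessing/scannet_scenes"  # preprocessed *.npy scenes
CONF.OUTPUT_ROOT = "outputs"                      # checkpoints and logs
```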
### Prepare multiview features (optional)
- Download the ScanNet frames here (~13GB) and unzip it under the project directory.

- Extract the multiview features from ENet:

  ```shell
  python scripts/compute_multiview_features.py
  ```

- Generate the projection mapping between image and point cloud:

  ```shell
  python scripts/compute_multiview_projection.py
  ```

- Project the multiview features from image space to point cloud (see the sketch after this list):

  ```shell
  python scripts/project_multiview_features.py
  ```
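Conceptually, the projection step gathers per-frame ENet features at the pixel each point projects to. The sketch below illustrates that gather with made-up shapes; the real logic and the actual index format live in `scripts/project_multiview_features.py`.

```python
import torch

# Made-up shapes for illustration: 20 frames of 128-dim ENet feature maps at 32x41,
# plus a precomputed flat (frame, pixel) index per point from the projection step.
frame_feats = torch.rand(20, 128, 32, 41)
point_to_pixel = torch.randint(0, 20 * 32 * 41, (50000,))

flat = frame_feats.permute(0, 2, 3, 1).reshape(-1, 128)  # (20*32*41, 128)
point_feats = flat[point_to_pixel]                       # (50000, 128) per-point features
print(point_feats.shape)
```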
Note: you might need ~100GB of RAM to train the model with multiview features.
## Usage
### Preprocess ScanNet scenes
Parse the ScanNet data into `*.npy` files and save them in `preprocessing/scannet_scenes/`:

```shell
python preprocessing/collect_scannet_scenes.py
```
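To spot-check a parsed scene, you can load one of the `.npy` files directly. The per-point channel layout is an assumption here (typically XYZ, RGB, normals, and a label), so verify it against `preprocessing/collect_scannet_scenes.py`.

```python
import numpy as np

# Channel layout assumed (XYZ + RGB + normals + label); confirm it in
# preprocessing/collect_scannet_scenes.py. scene0000_00 is an example scene id.
scene = np.load("preprocessing/scannet_scenes/scene0000_00.npy")
print(scene.shape)  # (num_points, num_channels)
print(scene[:2])    # first two points
```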
### Sanity check
Don't forget to visualize the preprocessed scenes to check the consistency:

```shell
python preprocessing/visualize_prep_scene.py --scene_id <scene_id>
```
The visualized `<scene_id>.ply` is stored in `preprocessing/label_point_clouds/`. Drag that file into MeshLab and you'll see something like this:
### Train
Train the PointNet++ semantic segmentation model on ScanNet scenes with raw RGB values, point normals, and multi-scale grouping (for more training options, see `python scripts/train.py -h`):

```shell
python scripts/train.py --use_color --use_normal --use_msg
```
The trained models and logs will be saved in `outputs/<time_stamp>/`.
### Eval
Evaluate the trained models and report the segmentation performance in point accuracy, voxel accuracy, and calibrated voxel accuracy:

```shell
python scripts/eval.py --folder <time_stamp>
```
Note that all model options must match the ones used for training.
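For reference, point accuracy and the mean IoU reported in the performance table follow their standard definitions. The snippet below uses those textbook formulas; it is not code taken from `eval.py`.

```python
import numpy as np

def point_accuracy(pred, gt):
    """Fraction of points whose predicted label matches the ground truth."""
    return float((pred == gt).mean())

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union over classes that appear in pred or gt."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```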
### Vis
Visualize the semantic segmentation results on points in a given scene:

```shell
python scripts/visualize.py --folder <time_stamp> --scene_id <scene_id>
```
Note that all model options must match the ones used for training.
The generated `<scene_id>.ply` is stored in `outputs/<time_stamp>/preds`. Drag that file into MeshLab and you'll see something like the one below. See the class palette here.
## Changelog
- 07/29/2021 Upgrade to PyTorch 1.8 & fix existing issues
- 03/29/2020 Release the code
## TODOs
- [ ] Release all pretrained models
- [x] Upgrade to PyTorch 1.8
- [x] Fix issues with loading pre-trained models
## Acknowledgement
- charlesq34/pointnet2: Paper author and official code repo.
- erikwijmans/Pointnet2_PyTorch: Initial work of PyTorch implementation of PointNet++ with CUDA acceleration.