STPLS3D: A Large-Scale Synthetic and Real Aerial Photogrammetry 3D Point Cloud Dataset
Meida Chen, Qingyong Hu, Zifan Yu, Hugues Thomas, Andrew Feng, Yu Hou, Kyle McCullough, Fengbo Ren, Lucio Soibelman. <br /> [Project page] [Paper] [BMVC presentation] [Demo video] [Poster] [Urban3D workshop@ICCV2023] [Urban3D workshop@ECCV2022] [Instance segmentation competition] <br />
Updates
- 03/31/2023: We are organizing the Urban3D@ICCV2023 - The 3rd Challenge on Large-Scale Point Clouds Analysis for Urban Scenes Understanding!
- 11/29/2022: All source images are released Here.
- 10/23/2022: Congrats to our Urban3D team for successfully organizing the Urban3D workshop at ECCV 2022. Over 300 teams participated and competed on the SensatUrban (semantic segmentation) and STPLS3D (instance segmentation) datasets, and all winners surpassed our baseline methods by a large margin. A replay of the workshop video is available.
- 10/14/2022: Special thanks to Jonas Schult for implementing Mask3D for STPLS3D - instance segmentation! Please refer to the official Mask3D for implementation details, and download their pretrained model.
- 10/13/2022: Our Paper is accepted as oral presentation at BMVC2022!
- 06/28/2022: Special thanks to Thang Vu for implementing SoftGroup for STPLS3D - instance segmentation! Please refer to the official SoftGroup for implementation details, and download their pretrained model.
- 03/25/2022: We are organizing the Urban3D@ECCV2022 - The 2nd Challenge on Large-Scale Point Clouds Analysis for Urban Scenes Understanding!
- 11/01/2021: Initial release!
(1) Our Focus
- Our project aims to provide a large database of annotated ground truth point clouds reconstructed using aerial photogrammetry.
- Our database can be used for training and validating 3D semantic and instance segmentation algorithms.
- We are developing a synthetic data generation pipeline to create synthetic training data that can augment or even replace real-world training data.
(2) Dataset
2.1 Download
- To download the STPLS3D point clouds for Semantic Segmentation click Here.
- To download the STPLS3D point clouds for Instance Segmentation click Here.
- To download the unlabeled testing datasets for the STPLS3D instance segmentation competition click Here.
- To download the Source Images for both synthetic and real-world datasets click Here.
2.2 Overview
We have built a large-scale photogrammetry 3D point cloud dataset, termed Semantic Terrain Points Labeling - Synthetic 3D (STPLS3D), which is composed of high-quality, richly annotated point clouds from real-world and synthetic environments.
<p align="center"> <img src="imgs/STPLS3D.png" width="80%"> </p>2.3 Data Collection
We first collected real-world aerial images following photogrammetry best practices, flying a quadcopter drone at low altitude with significant overlap between adjacent photos. We then reconstructed point clouds covering a 1.27 km^2 landscape with the standard photogrammetry pipeline. Next, we followed the same UAV paths and flight patterns to generate 62 synthetic point clouds with different architectural styles, vegetation types, and terrain shapes. The synthetic dataset covers about 16 km^2 of city landscape, with up to 18 fine-grained semantic classes and 14 instance classes.
2.4 Synthetic data generation workflow demo
<p align="center"> <a href="https://youtu.be/6wYWVo6Cmfs"><img src="imgs/STPLS3D_workflow.png" width="80%"></a> </p>2.5 Semantic Annotations
- 0-Ground: including grass, paved road, dirt, etc.
- 1-Building: including commercial, residential, and educational buildings.
- 2-LowVegetation: 0.5 m < vegetation height < 2.0 m.
- 3-MediumVegetation: 2.0 m < vegetation height < 5.0 m.
- 4-HighVegetation: 5.0 m < vegetation height.
- 5-Vehicle: including sedans and hatchback cars.
- 6-Truck: including pickup trucks, cement trucks, flat-bed trailers, trailer trucks, etc.
- 7-Aircraft: including helicopters and airplanes.
- 8-MilitaryVehicle: including tanks and Humvees.
- 9-Bike: bicycles.
- 10-Motorcycle: motorcycles.
- 11-LightPole: including light poles and traffic lights.
- 12-StreetSign: including road signs erected at the side of roads.
- 13-Clutter: including city furniture, construction equipment, barricades, and other 3D shapes.
- 14-Fence: including timber, brick, concrete, and metal fences.
- 15-Road: including asphalt and concrete roads.
- 17-Windows: glass windows.
- 18-Dirt: bare earth.
- 19-Grass: including grass lawn, wild grass, etc.
Note that not all of the datasets we currently provide include every semantic label; ground points for which the material is unavailable (15-Road, 18-Dirt, 19-Grass) are labeled 0-Ground. If you mix such datasets during training, the material classes can be collapsed back to 0-Ground, as in the sketch below.
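A minimal sketch of that remapping (illustrative only, not part of the official scripts):

```python
# Illustrative sketch: collapse the fine-grained ground materials back to the
# generic Ground class so tiles with and without material labels can be
# trained together. `labels` is an integer array of per-point semantic ids.
import numpy as np

GROUND_MATERIALS = (15, 18, 19)  # 15-Road, 18-Dirt, 19-Grass

def merge_ground_materials(labels: np.ndarray) -> np.ndarray:
    """Map per-point semantic ids 15/18/19 to 0 (Ground)."""
    merged = labels.copy()
    merged[np.isin(merged, GROUND_MATERIALS)] = 0
    return merged
```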
2.6 Instance annotations
The ground is labeled with -100. Window instances are currently annotated per building rather than per window, but they can be split into per-window instances in post-processing with a connected-component pass (see the sketch below). Our experiments did not include the window instances.
Only the synthetic datasets v2 and v3 have instance labels.
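A minimal sketch of such a post-processing step, using DBSCAN as a stand-in for a connected-component algorithm (illustrative only; eps is a guess that depends on the point density of your reconstruction):

```python
# Illustrative sketch: split one building's per-building window instance into
# per-window components. DBSCAN with min_samples=1 behaves like connected
# components under a neighborhood radius `eps`.
import numpy as np
from sklearn.cluster import DBSCAN

def split_window_instance(window_xyz: np.ndarray, eps: float = 0.3) -> np.ndarray:
    """Return a per-point component id for the window points of one building."""
    return DBSCAN(eps=eps, min_samples=1).fit_predict(window_xyz)
```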
(3) Benchmarks
3.1 Semantic segmentation:
<p align="center"> <img src="imgs/SemanticSegmentationEvaluationOnWMSC.JPG" width="80%"> </p>3.2 Instance segmentation:
<p align="center"> <img src="imgs/InstanceSegmentation_06202022.PNG" width="80%"> </p>(4) Training and Evaluation
Here we provide the training and evaluation scripts for both semantic and instance segmentation.
4.1 Semantic segmentation:
KPConv (Ubuntu and Windows 10): The environment setup is the same as for the official KPConv release. We follow the same steps as shown here to evaluate KPConv on our STPLS3D dataset.
- Preparing the dataset
Download the data and unzip it. Change the variable self.path of the STPLS3DDataset class (here) to the location where STPLS3D is stored. A small sanity-check snippet follows the directory tree below.
STPLS3D
├── RealWorldData
│ ├── OCCC_points.ply
│ ├── ...
│ └── WMSC_points.ply
├── Synthetic_v1
│ ├── Austin.ply
│ ├── ...
│ └── TownshipofWashington.ply
├── Synthetic_v2
│ ├── 2_points_GTv2.ply
│ ├── ...
│ └── j_points_GTv2.ply
└── Synthetic_v3
├── 1_points_GTv3.ply
├── ...
└── 25_points_GTv3.ply
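To sanity-check a tile before training, something like the following works (a sketch assuming the plyfile package, pip install plyfile; the per-point property names used here are assumptions, so inspect vertex.dtype.names for your files):

```python
# Illustrative sketch: inspect one STPLS3D tile after unzipping.
import numpy as np
from plyfile import PlyData  # pip install plyfile

ply = PlyData.read("STPLS3D/Synthetic_v3/1_points_GTv3.ply")
vertex = ply["vertex"].data          # structured numpy array of per-point fields
print(vertex.dtype.names)            # see which properties the tile actually stores
xyz = np.stack([vertex["x"], vertex["y"], vertex["z"]], axis=1)
print("points:", len(xyz), "extent:", xyz.max(0) - xyz.min(0))
```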
- Start training:
python3 train_STPLS3D.py
- Evaluation:
python3 test_models.py
Point Transformer (Ubuntu): Please refer to Point Transformer to test it on our STPLS3D dataset.
4.2 Instance segmentation:
Mask3D: Special thanks to Jonas Schult for implementing Mask3D for STPLS3D! Please refer to the official Mask3D for implementation details, and download their pretrained model.
SoftGroup: Special thanks to Thang Vu for implementing SoftGroup for STPLS3D! Please refer to the official SoftGroup for implementation details, and download their pretrained model.
HAIS (Ubuntu): The environment setup is the same as for the official HAIS release.
- Setup the environment
git clone https://github.com/meidachen/STPLS3D.git
cd STPLS3D/HAIS
conda create -n hais python=3.7
conda activate hais
pip install -r requirements.txt
conda install -c bioconda google-sparsehash
conda install libboost
conda install -c daleydeng gcc-5
cd STPLS3D/HAIS/lib/spconv
export CUDACXX=$PATH_TO_NVCC$
python setup.py bdist_wheel
cd STPLS3D/HAIS/lib/spconv/dist
pip install {wheel_file_name}.whl
cd STPLS3D/HAIS/lib/hais_ops
export CPLUS_INCLUDE_PATH={conda_env_path}/hais/include:$CPLUS_INCLUDE_PATH
python setup.py build_ext develop
- Preparing the dataset
Download the data, unzip it and place it under STPLS3D/HAIS/dataset.
HAIS
├── dataset
└── Synthetic_v3_InstanceSegmentation
├── 1_points_GTv3.txt
├── 2_points_GTv3.txt
├── 3_points_GTv3.txt
├── ...
├── 23_points_GTv3.txt
├── 24_points_GTv3.txt
└── 25_points_GTv3.txt
cd STPLS3D/HAIS/data
python prepare_data_inst_instance_stpls3d.py
By default, scenes 5, 10, 15, 20, and 25 are used as the validation set. This can be changed at https://github.com/meidachen/STPLS3D/blob/6eec7abe760a45dc970714f62f6b0e555a2f44b7/HAIS/data/prepare_data_inst_instance_stpls3d.py#L179 and https://github.com/meidachen/STPLS3D/blob/6eec7abe760a45dc970714f62f6b0e555a2f44b7/HAIS/data/prepare_data_inst_instance_stpls3d.py#L186; the split logic amounts to something like the sketch below.
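An illustrative sketch of the default split (the linked lines in prepare_data_inst_instance_stpls3d.py are the authoritative version):

```python
# Illustrative sketch of the default train/val split by scene id.
VAL_SCENES = {5, 10, 15, 20, 25}

def is_validation(filename: str) -> bool:
    scene_id = int(filename.split("_")[0])  # e.g. "5_points_GTv3.txt" -> 5
    return scene_id in VAL_SCENES
```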
(optional) If you change the training data (e.g., skip data augmentation or augment it differently), please run prepare_data_statistic_stpls3d.py to get the class_weight, class_radius_mean, and class_numpoint_mean_dict. Change them in hais_run_stpls3d.yaml, hierarchical_aggregation.cpp, and hierarchical_aggregation.cu accordingly, and make sure you rebuild hais_ops. The sketch below outlines what these statistics measure.
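Roughly, they are per-class means over instances, along the lines of this sketch (assuming the tiles are comma-separated x,y,z,r,g,b,semantic,instance rows; prepare_data_statistic_stpls3d.py remains the authoritative implementation):

```python
# Illustrative sketch: per-class mean instance size and radius.
import glob
from collections import defaultdict

import numpy as np

num_points = defaultdict(list)  # semantic class -> instance point counts
radii = defaultdict(list)       # semantic class -> instance radii

for path in glob.glob("dataset/Synthetic_v3_InstanceSegmentation/*_points_GTv3.txt"):
    data = np.loadtxt(path, delimiter=",")
    xyz, sem, inst = data[:, :3], data[:, 6].astype(int), data[:, 7].astype(int)
    for inst_id in np.unique(inst):
        if inst_id < 0:  # -100 marks points that belong to no instance
            continue
        mask = inst == inst_id
        cls = int(sem[mask][0])
        pts = xyz[mask]
        num_points[cls].append(int(mask.sum()))
        radii[cls].append(float(np.linalg.norm(pts - pts.mean(0), axis=1).mean()))

class_numpoint_mean = {c: float(np.mean(v)) for c, v in num_points.items()}
class_radius_mean = {c: float(np.mean(v)) for c, v in radii.items()}
print(class_numpoint_mean)
print(class_radius_mean)
```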
- Start training:
CUDA_VISIBLE_DEVICES=1 python train.py --config config/hais_run_stpls3d.yaml
- Evaluation:
CUDA_VISIBLE_DEVICES=1 python test.py --config config/hais_run_stpls3d.yaml --pretrain exp/Synthetic_v3_InstanceSegmentation/hais/hais_run_stpls3d/hais_run_stpls3d-000000500.pth
- Testing on unlabeled data and submitting to our evaluation server:
The unlabeled data can be downloaded here. Unzip it and place the three .txt files under STPLS3D/HAIS/dataset/Synthetic_v3_InstanceSegmentation.
HAIS
├── dataset
└── Synthetic_v3_InstanceSegmentation
├── 26_points_GTv3.txt
├── 27_points_GTv3.txt
└── 28_points_GTv3.txt
Run the preparation script again
cd STPLS3D/HAIS/data
python prepare_data_inst_instance_stpls3d.py
Set split to test https://github.com/meidachen/STPLS3D/blob/6eec7abe760a45dc970714f62f6b0e555a2f44b7/HAIS/config/hais_run_stpls3d.yaml#L71
Set save_instance to True https://github.com/meidachen/STPLS3D/blob/6eec7abe760a45dc970714f62f6b0e555a2f44b7/HAIS/config/hais_run_stpls3d.yaml#L84
Run evaluation again
CUDA_VISIBLE_DEVICES=1 python test.py --config config/hais_run_stpls3d.yaml --pretrain exp/Synthetic_v3_InstanceSegmentation/hais/hais_run_stpls3d/hais_run_stpls3d-000000500.pth
Once completed, you may find the results under exp/Synthetic_v3_InstanceSegmentation/hais/hais_run_stpls3d/result/test
Keep only the 300 txt files and the predicted_masks folder, and zip them for submission to our evaluation server. An example of the submission zip can be found here.
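For example, the archive could be packaged along these lines (a sketch assuming the result layout produced by the evaluation step above):

```python
# Illustrative sketch: package only the per-scene .txt files and the
# predicted_masks folder into submission.zip.
import os
import zipfile

result_dir = "exp/Synthetic_v3_InstanceSegmentation/hais/hais_run_stpls3d/result/test"
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for name in os.listdir(result_dir):
        if name.endswith(".txt"):
            zf.write(os.path.join(result_dir, name), name)
    masks = os.path.join(result_dir, "predicted_masks")
    for name in os.listdir(masks):
        zf.write(os.path.join(masks, name), os.path.join("predicted_masks", name))
```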
(5) Instance segmentation challenge and evaluation server
We are organizing the Urban3D@ICCV2023 - The 3rd Challenge on Large-Scale Point Clouds Analysis for Urban Scenes Understanding. The instance segmentation challenge is on CodaLab! Please feel free to submit your results to our evaluation server.
Citation
If you find our work useful in your research, please consider citing:
@inproceedings{Chen_2022_BMVC,
author = {Meida Chen and Qingyong Hu and Zifan Yu and Hugues Thomas and Andrew Feng and Yu Hou and Kyle McCullough and Fengbo Ren and Lucio Soibelman},
title = {STPLS3D: A Large-Scale Synthetic and Real Aerial Photogrammetry 3D Point Cloud Dataset},
booktitle = {33rd British Machine Vision Conference 2022, {BMVC} 2022, London, UK, November 21-24, 2022},
publisher = {{BMVA} Press},
year = {2022},
url = {https://bmvc2022.mpi-inf.mpg.de/0429.pdf}
}
Related Repos
Semantic segmentation:
- RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds
- KPConv: Flexible and Deformable Convolution for Point Clouds
- SCF-Net: Learning Spatial Contextual Features for Large-Scale Point Cloud Segmentation
- Point Transformer
Instance segmentation:
- Mask3D for 3D Semantic Instance Segmentation
- SoftGroup for 3D Instance Segmentation on Point Clouds
- Hierarchical Aggregation for 3D Instance Segmentation
- PointGroup: Dual-Set Point Grouping for 3D Instance Segmentation
Others:
- 3D-BoNet: Learning Object Bounding Boxes for 3D Instance Segmentation on Point Clouds
- SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration
- SQN: Weakly-Supervised Semantic Segmentation of Large-Scale 3D Point Clouds
- SoTA-Point-Cloud: Deep Learning for 3D Point Clouds: A Survey