# Deep Surface Reconstruction from Point Clouds with Visibility Information
Data, code and pretrained models for the ICPR 2022 paper (arXiv).
<table>
  <thead>
    <tr align="center">
      <th><img style="width:250px;" src="teaser/sofa_0751_scan.png"></th>
      <th><img style="width:200px;" src="teaser/sofa_0751_co_con.png"></th>
      <th><img style="width:250px;" src="teaser/sofa_0751_scan_aux_los_yellow.png"></th>
      <th><img style="width:200px;" src="teaser/sofa_0751_co_aux.png"></th>
    </tr>
  </thead>
  <tbody align="center">
    <tr>
      <td>Point cloud</td>
      <td>Reconstruction</td>
      <td>Point cloud with visibility</td>
      <td>Reconstruction</td>
    </tr>
  </tbody>
</table>

## Data
### ModelNet10
- The ModelNet10 models made watertight using ManifoldPlus can be downloaded here on Zenodo.
- The ModelNet10 scans used in our paper can be downloaded here on Zenodo. The dataset also includes training and evaluation data for ConvONet, Points2Surf, Shape As Points, POCO and DGNN.
### ShapeNet v1 (13-class subset of Choy et al.)
- The watertight ShapeNet models can be downloaded here (provided by the authors of ONet).
- Please open an issue if you are interested in the scans used in our paper.
### Synthetic Rooms Dataset
- The watertight scenes can be downloaded here (provided by the authors of ConvONet).
- Please open an issue if you are interested in the scans used in our paper.
## Scanning Procedure
You can create point clouds with visibility information for your own dataset using the scan tool.
You can use the precompiled scan executable from this repository (which should work on most Ubuntu systems), or compile it yourself using mesh-tools (see the sketch below).
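If you compile it yourself, a minimal, untested build sketch is shown here; it assumes mesh-tools uses a standard out-of-source CMake setup, so check the mesh-tools README for the actual dependencies and build targets.

```bash
# Hypothetical build sketch; assumes mesh-tools builds with plain CMake.
# Consult the mesh-tools README for the real dependency list and targets.
cd path/to/mesh-tools
mkdir build && cd build
cmake ..
make   # should produce the scan executable if the setup matches
```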
Basic usage:

```bash
./scan -w path/to/workingDir -i filenameMeshToScan --export npz
```
The following settings were used to create the scans in the paper:
```bash
--points 3000 --noise 0.005 --outliers 0.0 --cameras 10
```
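Putting the two together, a full invocation reproducing the paper's scan settings looks like this (the working directory and mesh filename are placeholders from the usage line above):

```bash
./scan -w path/to/workingDir -i filenameMeshToScan --export npz \
  --points 3000 --noise 0.005 --outliers 0.0 --cameras 10
```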
## Data Loading
You can use the dataloader.py script to load visibility-augmented point clouds from the scan.npz files.
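As a rough illustration, loading such a file with plain NumPy might look like the sketch below; the key names "points" and "sensor_position" are assumptions, and dataloader.py in this repository is the authoritative loader.

```python
import numpy as np

# Minimal loading sketch; the key names below are assumptions --
# inspect data.files or dataloader.py for the actual keys.
data = np.load("path/to/scan.npz")
print(data.files)  # arrays actually stored in the archive

points = data["points"]            # assumed: (N, 3) scanned surface points
sensors = data["sensor_position"]  # assumed: (N, 3) per-point sensor positions

# The visibility cue: the line of sight from each sensor to its point
# is known to be unoccluded, which the reconstruction networks can exploit.
view_dirs = sensors - points
view_dirs /= np.linalg.norm(view_dirs, axis=1, keepdims=True)
```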
## Code and Pretrained Models
You can find our modified code and pretrained models for the surface reconstruction methods tested in our paper below. All methods support point clouds with and without visibility information.
## References
If you find the code or data in this repository useful, please consider citing:
```bibtex
@INPROCEEDINGS{sulzer2022deep,
  author={Sulzer, Raphael and Landrieu, Loïc and Boulch, Alexandre and Marlet, Renaud and Vallet, Bruno},
  booktitle={2022 26th International Conference on Pattern Recognition (ICPR)},
  title={Deep Surface Reconstruction from Point Clouds with Visibility Information},
  year={2022},
  pages={2415-2422},
  doi={10.1109/ICPR56361.2022.9956560}}
```