A Neural Galerkin Solver for Accurate Surface Reconstruction
Paper | Video | Talk
This repository contains the implementation of the above paper, which was accepted to ACM SIGGRAPH Asia 2022.
- Authors: Jiahui Huang, Hao-Xiang Chen, Shi-Min Hu
- Contact: reach Jiahui via email or GitHub issues.
If you find our code or paper useful, please consider citing:
@article{huang2022neuralgalerkin,
author = {Huang, Jiahui and Chen, Hao-Xiang and Hu, Shi-Min},
title = {A Neural Galerkin Solver for Accurate Surface Reconstruction},
year = {2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {41},
number = {6},
doi = {10.1145/3550454.3555457},
journal = {ACM Trans. Graph.},
}
Introduction
NeuralGalerkin is a method for reconstructing triangular meshes from point clouds.
Please note that this implementation is accelerated by the Jittor deep learning framework, which is based on just-in-time (JIT) compilation and is maintained by a great team led by Prof. Shi-Min Hu.
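For readers new to Jittor, here is a minimal sketch of its JIT-compiled usage (illustrative only; not part of this repository):
import jittor as jt
# Minimal Jittor usage: operators are JIT-compiled on first execution.
jt.flags.use_cuda = 1 if jt.has_cuda else 0  # prefer CUDA when available
x = jt.rand(4, 3)       # the first run triggers kernel compilation
y = (x * 2.0).sum()     # later runs reuse the compiled cache
print(y.item())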
To fully unlock the training and inference code used to reproduce our results, please check out the pth branch.
Getting started
We suggest using Anaconda to manage your environment. The following is the suggested way to install the dependencies:
# Create a new conda environment
conda create -n ngs python=3.10
conda activate ngs
# Install other packages
pip install -r requirements.txt
# Compile CUDA kernels inplace
# [!] (For installable package please use the pth branch!)
python setup.py build_ext --inplace
To test your environment setup, run the non-learned SPSR example with:
python examples/main.py
You should see Jittor output similar to the following:
[i ...:43:10.528642 56 lock.py:85] Create lock file:~/.cache/jittor/jt1.3.6/g++9.4.0/py3.10.4/Linux-5.15.0-5x6b/IntelRCoreTMi9xa7/jittor.lock
[i ...:43:10.537673 56 compiler.py:955] Jittor(1.3.6.4) src: .../envs/ngs/lib/python3.10/site-packages/jittor
[i ...:43:10.538776 56 compiler.py:956] g++ at /usr/bin/g++(9.4.0)
[i ...:43:10.538817 56 compiler.py:957] cache_path: ~/.cache/jittor/jt1.3.6/g++9.4.0/py3.10.4/Linux-5.15.0-5x6b/IntelRCoreTMi9xa7/default
[i ...:43:10.540310 56 __init__.py:411] Found nvcc(11.1.105) at ...
[i ...:43:10.615808 56 __init__.py:411] Found gdb(10.2) at ...
as well as the visualization window.
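If SPSR here refers to screened Poisson surface reconstruction (as the acronym usually does), a comparable non-learned baseline can also be sketched with Open3D; this uses Open3D's implementation rather than this repository's, and input.ply is a placeholder file name:
import open3d as o3d
# Illustrative screened Poisson reconstruction via Open3D,
# not this repository's SPSR example.
pcd = o3d.io.read_point_cloud("input.ply")  # placeholder input file
pcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamKNN(knn=30))
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
o3d.visualization.draw_geometries([mesh])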
Experiments
Please follow the commands below to run all of our experiments.
As our framework is accelerated by Jittor, we only provide inference interfaces here; please check out the pth branch for the full training loops.
ShapeNet
Please download the dataset from here, and put the extracted onet folder under data/shapenet.
- 1K input, No noise (trained model download here)
# Test our trained model (add -v to visualize)
python test.py none --ckpt checkpoints/shapenet-perfect1k/main/paper/checkpoints/best.ckpt
- 3K input, Small noise (trained model download here)
# Test our trained model (add -v to visualize)
python test.py none --ckpt checkpoints/shapenet-noise3k/main/paper/checkpoints/best.ckpt
- 3K input, Large noise (trained model download here)
# Test our trained model (add -v to visualize)
python test.py none --ckpt checkpoints/shapenet-noiser3k/main/paper/checkpoints/best.ckpt
Matterport3D
Please download the dataset from here, and put the extracted matterport folder under data/.
- Without Normal (trained model download here)
# Test our trained model (add -v to visualize)
python test.py none --ckpt checkpoints/matterport/without_normal/paper/checkpoints/best.ckpt
- With Normal (trained model download here)
# Test our trained model (add -v to visualize)
python test.py none --ckpt checkpoints/matterport/with_normal/paper/checkpoints/best.ckpt
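If you want to try the with-normal model on your own scans that lack normals, per-point normals can be estimated beforehand, e.g. with Open3D (an illustrative sketch; scan.ply is a placeholder file name):
import open3d as o3d
# Illustrative normal estimation with Open3D; scan.ply is a placeholder.
pcd = o3d.io.read_point_cloud("scan.ply")
pcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamKNN(knn=30))
pcd.orient_normals_consistent_tangent_plane(k=30)  # orient normals coherently
o3d.io.write_point_cloud("scan_with_normals.ply", pcd)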
D-FAUST
Please download the dataset from here, and put the extracted dfaust folder under data/.
- Original split (trained models download here)
# Test our trained model (add -v to visualize)
python test.py none --ckpt checkpoints/dfaust/origin/paper/checkpoints/best.ckpt
- Novel split (test only)
# Test our trained model (add -v to visualize)
python test.py configs/dfaust/data_10k_novel.yaml --ckpt checkpoints/dfaust/origin/paper/checkpoints/best.ckpt -v
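To evaluate every released checkpoint in sequence, a small driver can loop over the commands above (a sketch that assumes the checkpoint paths listed in this README and that all trained models have been downloaded):
import subprocess
# Sketch: run each documented test command in turn.
RUNS = [
    ("none", "checkpoints/shapenet-perfect1k/main/paper/checkpoints/best.ckpt"),
    ("none", "checkpoints/shapenet-noise3k/main/paper/checkpoints/best.ckpt"),
    ("none", "checkpoints/shapenet-noiser3k/main/paper/checkpoints/best.ckpt"),
    ("none", "checkpoints/matterport/without_normal/paper/checkpoints/best.ckpt"),
    ("none", "checkpoints/matterport/with_normal/paper/checkpoints/best.ckpt"),
    ("none", "checkpoints/dfaust/origin/paper/checkpoints/best.ckpt"),
    ("configs/dfaust/data_10k_novel.yaml", "checkpoints/dfaust/origin/paper/checkpoints/best.ckpt"),
]
for config, ckpt in RUNS:
    subprocess.run(["python", "test.py", config, "--ckpt", ckpt], check=True)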
Acknowledgements
We thank the anonymous reviewers for their constructive feedback. This work was supported by the National Key R&D Program of China (No. 2021ZD0112902), the Research Grant of Beijing Higher Institution Engineering Research Center, and the Tsinghua-Tencent Joint Laboratory for Internet Innovation Technology.
Part of the code is directly borrowed from torchsparse and Convolutional Occupancy Networks.