<div align="center">
  <h1>Voxurf: Voxel-based Efficient and Accurate Neural Surface Reconstruction</h1>
  <div>
    <a href='https://wutong16.github.io/' target='_blank'>Tong Wu</a> 
    <a href='https://myownskyw7.github.io/' target='_blank'>Jiaqi Wang</a> 
    <a href='https://xingangpan.github.io/' target='_blank'>Xingang Pan</a> 
    <a href='https://sheldontsui.github.io/' target='_blank'>Xudong Xu</a> 
    <a href='https://people.mpi-inf.mpg.de/~theobalt/' target='_blank'>Christian Theobalt</a> 
    <a href='https://liuziwei7.github.io/' target='_blank'>Ziwei Liu</a> 
    <a href='https://scholar.google.com/citations?user=GMzzRRUAAAAJ&hl=zh-CN' target='_blank'>Dahua Lin</a>
  </div>
  <strong>Accepted to <a href='https://iclr.cc/' target='_blank'>ICLR 2023</a> (Spotlight)</strong>
  <strong><a href='https://arxiv.org/abs/2208.12697' target='_blank'>Paper</a></strong>
</div>

## Updates
- [2023-03] Code released.
- [2023-01] :partying_face: Voxurf is accepted to ICLR 2023 (Spotlight)!
## Installation

Please first install a suitable version of PyTorch and torch_scatter on your machine. We tested with PyTorch 1.10.0 on CUDA 11.1.
```bash
git clone git@github.com:wutong16/Voxurf.git
cd Voxurf
pip install -r requirements.txt
```
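For example, with the tested setup (CUDA 11.1 + PyTorch 1.10.0), one possible way to install the two prerequisites is:

```bash
# Versions and wheel indexes below assume CUDA 11.1; adjust them to your toolkit.
pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.10.0+cu111.html
```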
## Datasets

### Public datasets

Extract the datasets to `./data/`.
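The loaders expect a per-scene folder in the NeuS/IDR style. The layout below is only illustrative (directory names vary per dataset; the data paths in the config files are authoritative):

```
data
└── dtu_scan122                # one scene (name illustrative)
    ├── image/                 # multi-view RGB images
    ├── mask/                  # foreground masks, if available
    └── cameras_sphere.npz     # normalized cameras (NeuS/IDR format)
```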
### Custom data

For your own data (e.g., a video or multi-view images), go through the preprocessing steps below.
<details>
<summary> Preprocessing (click to expand) </summary>

- Extract video frames (if needed), remove the background, and save the masks.
  ```bash
  mkdir data/<your-data-dir>
  cd tools/preprocess
  bash run_process_video.sh ../../data/<your-data-dir> <your-video-dir>
  ```
- Estimate camera poses using COLMAP, and normalize them following IDR.
  ```bash
  bash run_convert_camera.sh ../../data/<your-data-dir>
  ```
- Finally, train with the config folder `configs/custom_e2e` and run with `--scene <your-data-dir>`; a full worked sequence is sketched after this list.

</details>
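Putting the steps together for a hypothetical capture named `toy` (the scene name is a placeholder; `<your-video-dir>` stays as in the scripts above):

```bash
mkdir data/toy
cd tools/preprocess
bash run_process_video.sh ../../data/toy <your-video-dir>  # frames + masks
bash run_convert_camera.sh ../../data/toy                  # COLMAP poses, IDR-style normalization
cd ../..
bash single_runner.sh configs/custom_e2e exp toy           # train (see Training below)
```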
## Running

### Training

- You can find all the config files for the included datasets under `./configs`.
- To train on a set of images with a white/black background (recommended), use the corresponding config file and select a scene:
  ```bash
  bash single_runner.sh <config_folder> <workdir> <scene>
  # DTU example
  bash single_runner.sh configs/dtu_e2e exp 122
  ```
- To train without a foreground mask on DTU:
  ```bash
  # DTU example
  bash single_runner_womask.sh configs/dtu_e2e_womask exp 122
  ```
- To train without a foreground mask on MobileBrick (the full evaluation on MobileBrick against other methods can be found here):
  ```bash
  # MobileBrick example
  bash single_runner_womask.sh configs/mobilebrick_e2e_womask exp <scene>
  ```
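For reference, the runner scripts chain Voxurf's coarse and fine stages through `run.py`. The sketch below shows roughly equivalent direct calls; the coarse config name `coarse.py` and mode `voxurf_coarse` are assumptions mirroring the fine-stage flags used in the evaluation commands further down:

```bash
# Assumed two-stage equivalent of single_runner.sh (coarse file/mode names unverified)
python run.py --config configs/dtu_e2e/coarse.py -p exp --sdf_mode voxurf_coarse --scene 122
python run.py --config configs/dtu_e2e/fine.py -p exp --sdf_mode voxurf_fine --scene 122
```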
> **Note**: For Windows users, please use the provided batch scripts (extension `.bat`) instead of the bash scripts (extension `.sh`), and replace the forward slashes `/` in paths with backslashes `\`. A batch script is run simply as `<script_name>.bat <arg1> ... <argN>`.
### NVS evaluation

```bash
python run.py --config <config_folder>/fine.py -p <workdir> --sdf_mode voxurf_fine --scene <scene> --render_only --render_test
```
### Extracting the mesh & evaluation

```bash
python run.py --config <config_folder>/fine.py -p <workdir> --sdf_mode voxurf_fine --scene <scene> --render_only --mesh_from_sdf
```

Add `--extract_color` to get a colored mesh as below. Estimating the material, albedo, and illumination is beyond the scope of this work; we simply use the normal direction as the view direction to obtain the vertex colors.
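To illustrate that trick, here is a minimal, self-contained Python sketch (not the repository's implementation; `color_fn` is a hypothetical stand-in for the trained color query, and the sphere is a placeholder mesh):

```python
import numpy as np
import trimesh

def color_fn(xyz: np.ndarray, viewdirs: np.ndarray) -> np.ndarray:
    """Hypothetical radiance query: points + view directions -> RGB in [0, 1].
    In Voxurf this role is played by the fine-stage color network."""
    return 0.5 * (viewdirs + 1.0)  # dummy: visualize directions as colors

mesh = trimesh.creation.icosphere(subdivisions=3)  # placeholder for the extracted mesh
normals = np.asarray(mesh.vertex_normals)          # unit outward normals per vertex

# The trick from above: query colors with the normal as the view direction,
# since material, albedo, and illumination are not modeled separately.
rgb = color_fn(np.asarray(mesh.vertices), normals)
mesh.visual.vertex_colors = (np.clip(rgb, 0.0, 1.0) * 255).astype(np.uint8)
mesh.export("colored_mesh.ply")
```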
## Citation
If you find the code useful for your research, please cite our paper.
```bibtex
@inproceedings{wu2022voxurf,
  title={Voxurf: Voxel-based Efficient and Accurate Neural Surface Reconstruction},
  author={Tong Wu and Jiaqi Wang and Xingang Pan and Xudong Xu and Christian Theobalt and Ziwei Liu and Dahua Lin},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2023},
}
```
## Acknowledgement
Our code is heavily based on DirectVoxGO and NeuS. Some of the preprocessing code is borrowed from IDR and LLFF. Thanks to the authors for their awesome works and great implementations! Please check out their papers for more details.