News
- 06/03/2022 We provide the instructions for running on custom data here.
- 05/10/2022 To make comparisons on ScanNet easier, we provide all quantitative and qualitative results of the baselines here, including COLMAP, COLMAP*, ACMP, NeRF, UNISURF, NeuS, and VolSDF.
- 05/10/2022 To make it easier for follow-up works to compare with our model, we provide our quantitative and qualitative results, as well as the trained models on ScanNet, here.
- 05/10/2022 We upload our processed ScanNet scene data to Google Drive.
Neural 3D Scene Reconstruction with the Manhattan-world Assumption
Project Page | Video | Paper
Haoyu Guo<sup>*</sup>, Sida Peng<sup>*</sup>, Haotong Lin, Qianqian Wang, Guofeng Zhang, Hujun Bao, Xiaowei Zhou
CVPR 2022 (Oral Presentation)
Setup
Installation
conda env create -f environment.yml
conda activate manhattan
Data preparation
Download the ScanNet scene data evaluated in the paper from Google Drive and extract it into data/. Make sure that the path is consistent with the config file.
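If you want to verify that the data landed where the config expects it before training, a minimal sanity check like the one below can help; the directory name used here is only an assumption, so adjust it to whatever path the config file actually references.

```python
# Sanity check that the extracted scene data sits where the config expects it.
# NOTE: "data/scannet_0050" is a hypothetical directory name used for
# illustration; replace it with the path referenced in configs/scannet/0050.yaml.
import os

scene_dir = os.path.join("data", "scannet_0050")
assert os.path.isdir(scene_dir), f"{scene_dir} not found; check the path in the config file"
print(f"found {len(os.listdir(scene_dir))} entries in {scene_dir}")
```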
We provide the instructions for running on custom data here.
Usage
Training
python train_net.py --cfg_file configs/scannet/0050.yaml gpus 0, exp_name scannet_0050
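To train all of the evaluated scenes in one go, you can loop over the per-scene configs with the same command. This is just a convenience sketch; the scene IDs other than 0050 are assumptions about how the config files are named.

```python
# Convenience sketch: train several ScanNet scenes sequentially by reusing the
# command above. Only 0050 appears in this README; the other scene IDs are
# placeholders and assume matching files exist under configs/scannet/.
import subprocess

scenes = ["0050", "0084", "0580", "0616"]  # hypothetical scene list
for scene in scenes:
    subprocess.run(
        [
            "python", "train_net.py",
            "--cfg_file", f"configs/scannet/{scene}.yaml",
            "gpus", "0,",
            "exp_name", f"scannet_{scene}",
        ],
        check=True,  # stop if a run fails
    )
```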
Mesh extraction
python run.py --type mesh_extract --output_mesh result.obj --cfg_file configs/scannet/0050.yaml gpus 0, exp_name scannet_0050
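After extraction, you may want a quick look at result.obj before running the evaluation. The snippet below is an optional check using trimesh, which is not necessarily a dependency of this repo.

```python
# Optional sanity check on the extracted mesh using trimesh
# (install separately with `pip install trimesh` if needed).
import trimesh

mesh = trimesh.load("result.obj", force="mesh")
print("vertices:", len(mesh.vertices))
print("faces:   ", len(mesh.faces))
print("bounds:  ", mesh.bounds)  # axis-aligned bounding box of the reconstruction
mesh.show()                      # interactive viewer, if a backend is available
```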
Evaluation
python run.py --type evaluate --cfg_file configs/scannet/0050.yaml gpus 0, exp_name scannet_0050
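For reference, the geometry metrics commonly reported on ScanNet (accuracy, completeness, Chamfer distance, precision/recall at 5 cm) can be sketched as below. This is not the repo's evaluate implementation; the ground-truth mesh path, sample count, and threshold are assumptions.

```python
# Illustrative sketch of standard reconstruction metrics; NOT the evaluation
# code shipped with this repo. File names, sample count, and the 5 cm
# threshold are assumptions.
import numpy as np
import trimesh
from scipy.spatial import cKDTree

def sample_points(mesh_path, n=200_000):
    mesh = trimesh.load(mesh_path, force="mesh")
    points, _ = trimesh.sample.sample_surface(mesh, n)
    return np.asarray(points)

pred = sample_points("result.obj")    # mesh from the extraction step above
gt = sample_points("gt_mesh.ply")     # hypothetical ground-truth mesh path

acc, _ = cKDTree(gt).query(pred)      # accuracy: pred-to-gt distances
comp, _ = cKDTree(pred).query(gt)     # completeness: gt-to-pred distances

print("accuracy (m):    ", acc.mean())
print("completeness (m):", comp.mean())
print("chamfer (m):     ", 0.5 * (acc.mean() + comp.mean()))
print("precision@5cm:   ", (acc < 0.05).mean())
print("recall@5cm:      ", (comp < 0.05).mean())
```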
Citation
If you find this code useful for your research, please use the following BibTeX entry.
@inproceedings{guo2022manhattan,
title={Neural 3D Scene Reconstruction with the Manhattan-world Assumption},
author={Guo, Haoyu and Peng, Sida and Lin, Haotong and Wang, Qianqian and Zhang, Guofeng and Bao, Hujun and Zhou, Xiaowei},
booktitle={CVPR},
year={2022}
}
Acknowledgement
- Thanks to Lior Yariv for her excellent work VolSDF.
- Thanks to Jianfei Guo for his implementation of VolSDF, neurecon.
- Thanks to Johannes Schönberger for his excellent work COLMAP.
- Thanks to Shaohui Liu for his customized implementation of COLMAP as a submodule of NerfingMVS.