
<p align="center"> HelixSurf: A Robust and Efficient Neural Implicit Surface Learning of Indoor Scenes with Iterative Intertwined Regularization </p>

<p align="center"> Zhihao Liang, Zhangjin Huang, Changxing Ding, Kui Jia </p>

<p align="center"> CVPR 2023 </p>

<p align="center">Paper | Arxiv | Project Page </p>

<p align="center"> <img width="100%" src="assets/overview.png"/> </p>

Requirements

pip install -r requirements.txt
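
Optionally, you can create an isolated environment first. This is a common workflow rather than a repository requirement; the environment name and Python version below are our assumptions:

conda create -n helixsurf python=3.8 -y
conda activate helixsurf
pip install -r requirements.txt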

Installation

git submodule update --init --recursive
python setup.py develop
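
If you have not cloned the repository with its submodules yet, a recursive clone fetches everything in one step (the URL below assumes the official repository location):

git clone --recursive https://github.com/Gorilla-Lab-SCUT/HelixSurf.git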

Prim3D and gmvs are provided as submodules. Please follow their respective README.md files to install them.

Training

Take scene 0616_00 in ScanNet as an example. The data are organized as follows:

HelixSurf_data
  ├── scene_data # stores our processed data
  └── mvs_results # stores our MVS results

For custom data, please refer to ManhattanSDF to generate the data.

You can also use our MVS results from HelixSurf_data/mvs_results and skip the MVS step below.

sh run_scripts/first_mvs.sh
sh run_scripts/0616_train.sh mvs "--casting"
sh run_scripts/next_mvs.sh
sh run_scripts/0616_train.sh mvs_from_1epoch "--load_ckpt ckpt/0616_00_default/ckpt_1.pth --consistant -im_psize 11"
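
These four commands form one pass of the iterative intertwined optimization: an initial MVS run, a first training epoch with ray casting, a refined MVS run, and continued training from the epoch-1 checkpoint. A further round would plausibly look like the sketch below; the mvs_from_2epoch tag and ckpt_2.pth filename are our assumptions about the scripts' naming conventions, not verified against the repository:

# hypothetical extra round; the tag and checkpoint names are assumed, not verified
sh run_scripts/next_mvs.sh
sh run_scripts/0616_train.sh mvs_from_2epoch "--load_ckpt ckpt/0616_00_default/ckpt_2.pth --consistant -im_psize 11"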

Training using pretrained geometric cues

Take scene 0616_00 in ScanNet as an example: we provide the run script for 0616_00, which you can modify slightly to obtain launch scripts for other scenes. You can also refer to MonoSDF to generate geometric cues using Omnidata. The following commands extract the normal cues and launch training:

python scripts/pretrained_geometric.py --task normal \
  --img_path HelixSurf_data/scene_data/0616_00/images/ \
  --output_path HelixSurf_data/scene_data/0616_00/pretrained \
  --omnidata_path $OMNIDATA_PROJECT/omnidata_tools/torch \
  --pretrained_models $OMNIDATA_PROJECT/omnidata_tools/torch/pretrained_models/
sh run_scripts/0616_pretrained_train.sh mvs "--casting"
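
MonoSDF uses both depth and normal cues from Omnidata. If you also need depth cues, the invocation below simply mirrors the normal command above; whether scripts/pretrained_geometric.py accepts --task depth is our assumption:

python scripts/pretrained_geometric.py --task depth \
  --img_path HelixSurf_data/scene_data/0616_00/images/ \
  --output_path HelixSurf_data/scene_data/0616_00/pretrained \
  --omnidata_path $OMNIDATA_PROJECT/omnidata_tools/torch \
  --pretrained_models $OMNIDATA_PROJECT/omnidata_tools/torch/pretrained_models/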

Inference

Run the following command to perform inference:

python scripts/inference.py --config configs/default.yaml --data_dir $DATA_DIR --scene 0616_00 --ckpt $CKPT_PATH
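
For example, with the processed data in HelixSurf_data/scene_data and a downloaded checkpoint (the checkpoint filename here is hypothetical):

python scripts/inference.py --config configs/default.yaml --data_dir HelixSurf_data/scene_data --scene 0616_00 --ckpt ckpt/0616_00_pretrained.pth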

We provide pretrained models: [Google Drive] [Baidu Cloud] (extraction code: yil8).

TODO

Acknowledgements

Citation

If you find this work useful in your research, please cite:

@misc{liang2023helixsurf,
      title={HelixSurf: A Robust and Efficient Neural Implicit Surface Learning of Indoor Scenes with Iterative Intertwined Regularization}, 
      author={Zhihao Liang and Zhangjin Huang and Changxing Ding and Kui Jia},
      year={2023},
      eprint={2302.14340},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}