NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing
Project Page | Video | Paper
<div align=center> <img src="assets/teaser.gif" width="100%"/> </div>

<!-- ⚠️ Note: This is only a preview version of the code. Full code (with training scripts) will be released soon. -->
[Bangbang Yang, Chong Bao]<sup>Co-first Authors</sup>, Junyi Zeng, Hujun Bao, Yinda Zhang, Zhaopeng Cui, Guofeng Zhang.
ECCV 2022 Oral
Installation
We have tested the code with Python 3.8.0 and PyTorch 1.8.1; newer versions of PyTorch should also work. The installation steps are as follows:
- create a virtual environment:
conda env create --file environment.yml
- install PyTorch 1.8.1:
pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
- install open3d development version:
pip install [open3d development package url]
- install FRNN, a fixed-radius nearest-neighbor search implemented in CUDA.
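To make it concrete what FRNN computes, here is a brute-force, pure-Python sketch of a fixed-radius nearest-neighbor query. This is only for illustration; the actual FRNN package does this on the GPU with a grid acceleration structure, and its API differs from this toy function.

```python
def fixed_radius_neighbors(queries, points, radius):
    """For each 3D query point, return the indices of all points
    within `radius` (inclusive). Brute-force O(Q*N) reference version."""
    r2 = radius * radius
    result = []
    for qx, qy, qz in queries:
        idx = [i for i, (px, py, pz) in enumerate(points)
               if (px - qx) ** 2 + (py - qy) ** 2 + (pz - qz) ** 2 <= r2]
        result.append(idx)
    return result

points = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(fixed_radius_neighbors([(0.0, 0.0, 0.0)], points, 1.0))  # -> [[0, 1]]
```

Comparing squared distances against the squared radius avoids a square root per pair, which is the standard trick in such queries.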
Data
We use the NeuS version of the DTU data and the NeRF synthetic data.
<!-- Our code reads the poses following the format of `camera_sphere.npz`. Therefore, we convert the poses of NeRF synthetic data to [`camera_sphere.npz`](). -->

[Update]: We release the test image names for our pre-trained models on the DTU dataset, which were randomly selected for evaluating PSNR/SSIM/LPIPS. Each sequence has a `val_names.txt` that contains the names of the test images.
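As a small sketch of how `val_names.txt` can be used, the snippet below splits a sequence's image list into train and test frames. The helper name and the assumption that the file lists one image name per line are ours, not taken from the repository:

```python
from pathlib import Path

def split_by_val_names(image_names, val_names_file):
    """Split image names into (train, test) using a val_names.txt
    that lists one held-out image name per line (assumed format)."""
    val_names = set(Path(val_names_file).read_text().split())
    test = [n for n in image_names if n in val_names]
    train = [n for n in image_names if n not in val_names]
    return train, test
```

The held-out `test` frames are the ones to score with PSNR/SSIM/LPIPS; everything else stays in `train`.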
P.S. Please enable `intrinsic_from_cammat: True` for `hotdog`, `chair`, and `mic` if you use the provided NeRF synthetic dataset.
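For instance, the flag might appear in the scene's config as follows; the key name comes from the note above, but its placement inside the YAML is an assumption:

```yaml
# e.g. in the config for hotdog / chair / mic (nesting illustrative)
intrinsic_from_cammat: True   # read camera intrinsics from the camera matrix
```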
Train
Here we show how to run our code on one example scene.
Note that the `data_dir` should be specified in `configs/*.yaml`.
- Train the teacher network (NeuS) from multi-view images.
python train.py --config configs/neus_dtu_scan63.yaml
- Extract a triangle mesh from a trained teacher network.
python extract_mesh.py --config configs/neus_dtu_scan63.yaml --ckpt_path logs/neus_dtuscan63/ckpts/latest.pt --output_dir out/neus_dtuscan63/mesh
- Train NeuMesh from multi-view images and the teacher network. Note that the `prior_mesh`, `teacher_ckpt`, and `teacher_config` should be specified in the `neumesh*.yaml`.
python train.py --config configs/neumesh_dtu_scan63.yaml
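The entries to fill in before this step can be sketched as below. The key names come from the notes above; the paths and the flat nesting are illustrative assumptions, so check them against the shipped config files:

```yaml
# configs/neumesh_dtu_scan63.yaml (excerpt; paths are examples)
data_dir: /path/to/dtu_scan63                      # dataset location
prior_mesh: out/neus_dtuscan63/mesh/mesh.ply       # mesh from extract_mesh.py (file name assumed)
teacher_ckpt: logs/neus_dtuscan63/ckpts/latest.pt  # trained NeuS checkpoint
teacher_config: configs/neus_dtu_scan63.yaml       # config of the teacher run
```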
Evaluation
Here we provide all pre-trained models for the DTU and NeRF synthetic datasets.
You can evaluate images with the trained models.
python -m render --config configs/neumesh_dtu_scan63.yaml --load_pt logs/neumesh_dtuscan63/ckpts/latest.pt --camera_path spiral --background 1 --test_frame 24 --spiral_rad 1.2
P.S. If inference takes too long, `--downscale` can be enabled for acceleration.
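The reason `--downscale` helps is simple arithmetic: volume-rendering cost scales with the number of rays, i.e. the number of pixels, so shrinking both image sides by a factor D cuts the ray count by roughly D². A quick sketch (the 1600x1200 resolution is just an example, and the flag's exact semantics are assumed):

```python
def ray_count(height, width, downscale=1):
    """Number of rays cast for an image rendered at 1/downscale resolution."""
    return (height // downscale) * (width // downscale)

full = ray_count(1200, 1600)     # 1,920,000 rays at full resolution
fast = ray_count(1200, 1600, 4)  # 120,000 rays, i.e. ~16x fewer
```

So `--downscale 4` trades a 4x smaller image side for roughly a 16x reduction in render time.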
Manipulation
Please refer to `editing/README.md`.
Citing
@inproceedings{neumesh,
title={NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing},
author={{Chong Bao and Bangbang Yang} and Zeng, Junyi and Bao, Hujun and Zhang, Yinda and Cui, Zhaopeng and Zhang, Guofeng},
booktitle={European Conference on Computer Vision (ECCV)},
year={2022}
}
Note: joint first-authorship is not really supported in BibTeX; you may need to modify the `author` field above if not using CVPR's format. For the SIGGRAPH (or ACM) format you can try the following:
@inproceedings{neumesh,
title={NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing},
author={{Bao and Yang} and Zeng, Junyi and Bao, Hujun and Zhang, Yinda and Cui, Zhaopeng and Zhang, Guofeng},
booktitle={European Conference on Computer Vision (ECCV)},
year={2022}
}
Acknowledgement
In this project we use parts of the implementations of the following works:
We thank the respective authors for open-sourcing their methods.