# PVA-MVSNet
## About
This is the official repository for the paper *Pyramid Multi-view Stereo Net with Self-adaptive View Aggregation* (ECCV 2020).
<img src="doc/architecture.png" width="600">

## How to Use
### Requirements
- Python 3.6
- PyTorch >= 1.0.0
- CUDA >= 9.0
### Install

Run `./conda_install.sh` to install the dependencies.
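If you prefer to set up the environment by hand, a rough equivalent is sketched below (a minimal sketch based only on the requirements above; the environment name is arbitrary and `conda_install.sh` may pin different package versions):

```bash
# Manual setup sketch -- conda_install.sh may differ in exact versions
conda create -n pva_mvsnet python=3.6 -y
conda activate pva_mvsnet
# PyTorch >= 1.0.0 with a CUDA >= 9.0 build; pick the cudatoolkit matching your driver
conda install pytorch torchvision cudatoolkit=9.0 -c pytorch -y
```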
### Training

- Download the preprocessed DTU training data (also available at Baiduyun, code: s2v2) and unzip it as the `MVS_TRAINING` folder (borrowed from MVSNet: https://raw.githubusercontent.com/YoYo000/MVSNet).
- Set `dtu_data_root` to your `MVS_TRAINING` path in `env.sh`.
- Create a log folder and a model folder wherever you like to store the training outputs, and set `log_dir` and `save_dir` in `train.sh` accordingly (see the sketch below this list).
- Train VA-MVSNet (GTX 1080Ti): run `./train.sh`.
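For reference, the variables mentioned above might be set roughly like this (a sketch only; the paths are placeholders and the exact layout of `env.sh`/`train.sh` may differ):

```bash
# env.sh (sketch) -- point dtu_data_root at the unzipped MVS_TRAINING folder
dtu_data_root="/path/to/MVS_TRAINING"   # placeholder path

# train.sh (sketch) -- the folders you created for the training outputs
log_dir="/path/to/logs"      # placeholder: training logs
save_dir="/path/to/models"   # placeholder: saved model checkpoints
```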
### Testing

- Download the test data for scan9 and unzip it as the `TEST_DATA_FOLDER` folder, which should contain one `cams` folder, one `images` folder and one `pair.txt` file.
- Download the pre-trained VA-MVSNet models and unzip the file as `MODEL_FOLDER`.
- In `eval_pyramid.sh`, set `MODEL_FOLDER` to `ckpt` and `model_ckpt_index` to `checkpoint_list` (see the sketch below this list).
- Run `./eval_pyramid.sh`.
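The corresponding settings might look roughly like this (a sketch under the assumption that `ckpt` and `checkpoint_list` are plain shell variables in `eval_pyramid.sh`; the actual names, values and checkpoint naming may differ):

```bash
# eval_pyramid.sh (sketch) -- placeholder values, adjust to your setup
ckpt="/path/to/MODEL_FOLDER"   # folder with the unzipped pre-trained VA-MVSNet models
checkpoint_list="8"            # hypothetical model_ckpt_index of the checkpoint to evaluate
```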
### MMP and Filter&Fusion

- We use the `depthfusion_pytorch.py` script for fusion (from MVSNet-pytorch).
- Set `use_mmp` to `True` in `tools/postprocess.sh` to enable Multi-metric Pyramid Depth Aggregation.
- Enter the `./tools` directory, then run `./postprocess.sh` to generate the final point cloud (see the sketch below).
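Put together, the post-processing step looks roughly like this (a sketch; `use_mmp` is edited inside `tools/postprocess.sh` as described above):

```bash
# Edit tools/postprocess.sh first: use_mmp=True enables Multi-metric Pyramid
# Depth Aggregation during fusion.
cd tools
./postprocess.sh   # runs filtering/fusion and writes the final point cloud
```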
## Reproduce Benchmark Results
### Results on DTU

| Method | Acc. (mm) | Comp. (mm) | Overall (mm) |
|---|---|---|---|
| MVSNet (D=256) | 0.396 | 0.527 | 0.462 |
| PVA-MVSNet (D=192) | 0.379 | 0.336 | 0.357 |
PVA-MVSNet point cloud results with full post-processing are also provided: DTU evaluation point clouds (extraction code: zau7).
### Results on Tanks and Temples

| Mean | Family | Francis | Horse | Lighthouse | M60 | Panther | Playground | Train |
|---|---|---|---|---|---|---|---|---|
| 54.46 | 69.36 | 46.80 | 46.01 | 55.74 | 57.23 | 54.75 | 56.70 | 49.06 |
Please refer to the Tanks and Temples leaderboard.
## Citation
If you find this project useful for your research, please cite:
@inproceedings{yi2020PVAMVSNET,
title={Pyramid multi-view stereo net with self-adaptive view aggregation},
author={Yi, Hongwei and Wei, Zizhuang and Ding, Mingyu and Zhang, Runze and Chen, Yisong and Wang, Guoping and Tai, Yu-Wing},
booktitle={ECCV},
year={2020}
}
## Acknowledgement
Thanks to Xiaoyang Guo for his re-implementation of MVSNet in PyTorch (MVSNet-pytorch), and to Yao Yao for his previous works MVSNet and R-MVSNet.