Effi-MVS (CVPR2022)

Official source code of the paper 'Efficient Multi-view Stereo by Iterative Dynamic Cost Volume'.

Introduction

Effi-MVS is an efficient framework for high-resolution multi-view stereo. This work aims to improve reconstruction accuracy while reducing computational and memory consumption at the same time. If you find this project useful for your research, please cite:

@inproceedings{wang2022efficient,
  title={Efficient Multi-View Stereo by Iterative Dynamic Cost Volume},
  author={Wang, Shaoqian and Li, Bo and Dai, Yuchao},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={8655--8664},
  year={2022}
}

Installation

Requirements

pip install -r requirements.txt

Reproducing Results

The test data should be organized as follows (every scan folder shares the same structure):

root_directory
├── scan1 (scene_name1)
├── scan2 (scene_name2)
│   ├── images
│   │   ├── 00000000.jpg
│   │   ├── 00000001.jpg
│   │   └── ...
│   ├── cams_1
│   │   ├── 00000000_cam.txt
│   │   ├── 00000001_cam.txt
│   │   └── ...
│   └── pair.txt
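
As a quick sanity check, the layout above can be verified with a short script. This is only an illustrative sketch; the function name and root path are placeholders, not part of the released code:

import os

# Illustrative check that each scan folder under the root directory
# contains the expected images/, cams_1/ and pair.txt entries.
def check_scan_layout(root_dir):
    for scan in sorted(os.listdir(root_dir)):
        scan_dir = os.path.join(root_dir, scan)
        if not os.path.isdir(scan_dir):
            continue
        complete = (os.path.isdir(os.path.join(scan_dir, "images"))
                    and os.path.isdir(os.path.join(scan_dir, "cams_1"))
                    and os.path.isfile(os.path.join(scan_dir, "pair.txt")))
        print(scan, "ok" if complete else "incomplete")

check_scan_layout("root_directory")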

The camera file cam.txt stores the camera parameters, which include the 4x4 extrinsic matrix, the 3x3 intrinsic matrix, and the minimum and maximum depth:

extrinsic
E00 E01 E02 E03
E10 E11 E12 E13
E20 E21 E22 E23
E30 E31 E32 E33

intrinsic
K00 K01 K02
K10 K11 K12
K20 K21 K22

DEPTH_MIN DEPTH_MAX 
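
A minimal sketch of how a cam.txt file in this layout could be parsed; the function name and the use of NumPy are assumptions for illustration, not the project's own data loader:

import numpy as np

# Illustrative parser for a cam.txt file in the layout above.
def read_cam_file(path):
    with open(path) as f:
        lines = [line.strip() for line in f if line.strip()]
    # lines[0] is the word "extrinsic"; lines[1:5] hold the 4x4 matrix
    extrinsic = np.array([list(map(float, lines[i].split())) for i in range(1, 5)])
    # lines[5] is the word "intrinsic"; lines[6:9] hold the 3x3 matrix
    intrinsic = np.array([list(map(float, lines[i].split())) for i in range(6, 9)])
    # the last non-empty line holds DEPTH_MIN and DEPTH_MAX
    depth_min, depth_max = map(float, lines[9].split()[:2])
    return extrinsic, intrinsic, depth_min, depth_max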

pair.txt stores the view selection result. For each reference image, the 10 best source views are stored in the file:

TOTAL_IMAGE_NUM
IMAGE_ID0                       # index of reference image 0 
10 ID0 SCORE0 ID1 SCORE1 ...    # 10 best source images for reference image 0 
IMAGE_ID1                       # index of reference image 1
10 ID0 SCORE0 ID1 SCORE1 ...    # 10 best source images for reference image 1 
...
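
Likewise, a small illustrative parser for pair.txt in the format above (again a sketch with a hypothetical helper name, not the repository's own reader):

# Illustrative parser for pair.txt as described above.
def read_pair_file(path):
    pairs = []
    with open(path) as f:
        num_images = int(f.readline())        # TOTAL_IMAGE_NUM
        for _ in range(num_images):
            ref_id = int(f.readline())        # index of the reference image
            tokens = f.readline().split()     # "10 ID0 SCORE0 ID1 SCORE1 ..."
            num_src = int(tokens[0])
            src_ids = [int(tokens[1 + 2 * i]) for i in range(num_src)]
            pairs.append((ref_id, src_ids))
    return pairs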

To evaluate the reconstructed point clouds on DTU, arrange the DTU evaluation data (SampleSet and Points) as follows:

SampleSet
└── MVS Data
    └── Points

In evaluations/dtu/BaseEvalMain_web.m, set dataPath to the path of SampleSet/MVS Data/, plyPath to the directory that stores the reconstructed point clouds, and resultsPath to the directory where the evaluation results should be stored. Then run evaluations/dtu/BaseEvalMain_web.m in MATLAB.

The results look like:

DTU
Acc. (mm)    Comp. (mm)    Overall (mm)
0.321        0.313         0.317

Tanks and Temples, trained on DTU (mean F-score)

Intermediate    Advanced
56.88           34.39

Tanks and Temples, trained on BlendedMVS (mean F-score)

Intermediate    Advanced
62.38           38.14

The performance on the Tanks and Temples dataset is better if the model is fine-tuned on the BlendedMVS dataset.

DTU Training dataset:
Download the preprocessed DTU training data and Depths_raw (both from the original MVSNet), and unzip them into the $MVS_TRAINING folder.

Thanks to Yao Yao for open-sourcing his excellent work MVSNet. Thanks to Xiaoyang Guo for open-sourcing his PyTorch implementation of MVSNet, MVSNet-pytorch. Thanks to Zachary Teed for his excellent work RAFT, which inspired this work.