
M<sup>3</sup>VSNet

The code is now available: M3VSNet

About

Current multi-view stereo (MVS) methods based on supervised learning achieve impressive performance compared with traditional MVS methods. However, the ground-truth depth maps needed for training are hard to obtain and are limited to certain kinds of scenarios. In this paper, we propose a novel unsupervised multi-metric MVS network, named M<sup>3</sup>VSNet, for dense point cloud reconstruction without any supervision. To improve the robustness and completeness of point cloud reconstruction, we propose a novel multi-metric loss function that combines pixel-wise and feature-wise losses to learn the inherent constraints of matching correspondences from different perspectives. In addition, we incorporate normal-depth consistency in the 3D point cloud domain to improve the accuracy and continuity of the estimated depth maps. Experimental results show that M<sup>3</sup>VSNet establishes the state of the art among unsupervised methods, achieves performance comparable to the previous supervised MVSNet on the DTU dataset, and demonstrates strong generalization ability on the Tanks and Temples benchmark with effective improvement.

Please cite:

@inproceedings{huang2021m3vsnet,
  title={M3VSNet: Unsupervised multi-metric multi-view stereo network},
  author={Huang, Baichuan and Yi, Hongwei and Huang, Can and He, Yijia and Liu, Jingbin and Liu, Xiao},
  booktitle={2021 IEEE International Conference on Image Processing (ICIP)},
  pages={3163--3167},
  year={2021},
  organization={IEEE}
}

How to use

Environment

The conda environment is listed in requirements.txt.
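
Setting up from requirements.txt can be sketched as follows. This is a minimal sketch: the environment name `m3vsnet` and the Python version are assumptions for illustration, not prescribed by the repository.

```shell
# Hypothetical setup -- the env name "m3vsnet" and Python 3.7 are assumptions;
# match them to the versions actually pinned in requirements.txt.
conda create -n m3vsnet python=3.7 -y
conda activate m3vsnet
pip install -r requirements.txt
```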

Train

Eval

Results

| Method | Acc. | Comp. | Overall |
| --- | --- | --- | --- |
| MVSNet (D=196) | 0.444 | 0.741 | 0.592 |
| Unsup_MVS | 0.881 | 1.073 | 0.977 |
| MVS2 | 0.760 | 0.515 | 0.637 |
| M<sup>3</sup>VSNet (D=192) | 0.636 | 0.531 | 0.583 |

T&T Benchmark

The best unsupervised MVS network as of April 17, 2020. See the leaderboard.

Acknowledgement

Thanks to Megvii Technology Limited for funding this work. We acknowledge the following repositories: MVSNet and MVSNet_pytorch.

We are happy to be acknowledged by the AAAI 2020 paper.