KD-MVS: Knowledge Distillation Based Self-supervised Learning for Multi-view Stereo

Code for the paper KD-MVS: Knowledge Distillation Based Self-supervised Learning for Multi-view Stereo (ECCV 2022).

Tips: If you run into any problems when reproducing our results, please contact Yikang Ding (dyk20@mails.tsinghua.edu.cn). We are happy to help and to share our experience.

Change log

Installation

Clone this repo:

git clone https://github.com/megvii-research/KD-MVS.git
cd KD-MVS

We recommend using Anaconda to manage the Python environment:

conda create -n kdmvs python=3.6
conda activate kdmvs
pip install -r requirements.txt

We also recommend installing apex; you can get it from the official repo.

Data preparation

Training data

Download the preprocessed DTU training data (provided by the original MVSNet) and unzip it to construct a dataset folder laid out as follows:

dtu_training
 ├── Cameras
 └── Rectified
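
As a quick sanity check before training, you can verify that the dataset root matches the tree above (a minimal sketch; `check_dtu_layout` is a hypothetical helper, not part of this repo):

```python
import os

def check_dtu_layout(root):
    """Return the expected subfolders that are missing under the DTU training root."""
    expected = ["Cameras", "Rectified"]  # per the folder tree above
    return [name for name in expected if not os.path.isdir(os.path.join(root, name))]

# Example: check_dtu_layout("dtu_training") returns [] when the layout is complete.
```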

Testing data

Download our processed DTU testing data and unzip it as the test data folder, which should contain a cams folder, an images folder, and a pair.txt file.
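
The pair.txt file follows the common MVSNet convention (an assumption here; check the downloaded file if in doubt): the first line gives the number of reference views, and each view then contributes two lines, its id and a line listing its source views with matching scores. A minimal parser sketch:

```python
def parse_pair_file(text):
    """Parse an MVSNet-style pair.txt.

    Line 1: number of reference views. Then, per view: a line with the
    reference id, and a line with the source-view count followed by
    (view_id, score) pairs.
    """
    lines = text.strip().splitlines()
    num_views = int(lines[0])
    pairs = {}
    for i in range(num_views):
        ref_id = int(lines[1 + 2 * i])
        tokens = lines[2 + 2 * i].split()
        num_src = int(tokens[0])
        pairs[ref_id] = [(int(tokens[1 + 2 * j]), float(tokens[2 + 2 * j]))
                         for j in range(num_src)]
    return pairs
```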

Training

Unsupervised training

First, set the configuration in scripts/run_train_unsup.sh.

To train your model, run:

bash scripts/run_train_unsup.sh
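
Unsupervised MVS training is typically driven by photometric consistency between views. As a generic illustration only (not the exact loss used by KD-MVS), a masked L1 photometric term over a source image warped into the reference view looks like:

```python
import numpy as np

def photometric_l1(ref, warped_src, valid_mask):
    """Mean L1 difference between a reference image (H, W, C) and a source image
    warped into the reference view, averaged over valid warped pixels only.
    Generic sketch for illustration; not KD-MVS's exact loss."""
    diff = np.abs(ref - warped_src) * valid_mask[..., None]
    denom = max(valid_mask.sum(), 1)  # avoid division by zero for empty masks
    return diff.sum() / (denom * ref.shape[-1])
```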

KD training

Note: to reproduce our results, set the configuration in scripts/run_train_kd.sh before starting training.

Then run:

bash scripts/run_train_kd.sh
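
In knowledge-distillation training, a teacher model's depth predictions are turned into pseudo-labels for the student. As a simplified illustration (a plain confidence threshold; KD-MVS validates its pseudo-labels with additional checks), masking low-confidence pixels out of the student's loss could look like:

```python
import numpy as np

def make_pseudo_labels(teacher_depth, confidence, thresh=0.8):
    """Keep teacher depth only where confidence exceeds the threshold.
    Rejected pixels are zeroed and excluded (via the mask) from the student loss.
    Simplified sketch; not KD-MVS's actual pseudo-label validation."""
    mask = confidence > thresh
    pseudo_depth = np.where(mask, teacher_depth, 0.0)
    return pseudo_depth, mask
```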

Testing

For easy testing, you can download our pretrained models and put them in the ckpt folder, or use your own models and follow the instructions below.

Make sure the configuration in scripts/run_test_dtu.sh is set correctly.

Run:

bash scripts/run_test_dtu.sh

The reconstructed point clouds will be stored in outputs/test_dtu/gipuma_pcd. You can also download the fused point clouds of our KD-trained model from here.

To obtain quantitative results for the fused point clouds with the official MATLAB evaluation tools, you can refer to TransMVSNet.

Using the latest code, the pretrained models, and the default parameters, you should obtain final results similar to:

| Model | Acc.   | Comp.  | Overall |
|-------|--------|--------|---------|
| unsup | 0.4166 | 0.4335 | 0.4251  |
| KD    | 0.3674 | 0.2847 | 0.3260  |
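
For reference, DTU's accuracy and completeness are (roughly) mean nearest-neighbor distances from the reconstruction to the ground truth and vice versa, and Overall is their mean, e.g. (0.3674 + 0.2847) / 2 ≈ 0.3260. A brute-force sketch of these distances (the official MATLAB evaluation additionally applies distance thresholds and observability masks):

```python
import numpy as np

def acc_comp_overall(pred, gt):
    """Nearest-neighbor distances between point sets pred (N, 3) and gt (M, 3).
    Accuracy: mean distance pred -> gt; completeness: mean distance gt -> pred.
    Brute-force sketch; the official DTU evaluation adds thresholds and masking."""
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)  # (N, M) pairwise
    acc = d.min(axis=1).mean()   # each predicted point to its closest GT point
    comp = d.min(axis=0).mean()  # each GT point to its closest predicted point
    return acc, comp, (acc + comp) / 2
```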

Citation

@inproceedings{ding2022kdmvs,
  title={KD-MVS: Knowledge Distillation Based Self-supervised Learning for Multi-view Stereo},
  author={Ding, Yikang and Zhu, Qingtian and Liu, Xiangyue and Yuan, Wentao and Zhang, Haotian and Zhang, Chi},
  booktitle={European Conference on Computer Vision},
  year={2022},
  organization={Springer}
}

Acknowledgments

We borrow some code from CasMVSNet and U-MVS. We thank the authors for releasing the source code.