Less is More: Consistent Video Depth Estimation with Masked Frames Modeling (ACM MM 2022)
Yiran Wang<sup>1</sup>, Zhiyu Pan<sup>1</sup>, Xingyi Li<sup>1</sup>, Zhiguo Cao<sup>1</sup>, Ke Xian<sup>1*</sup>, Jianming Zhang<sup>2</sup>
<sup>1</sup>Huazhong University of Science and Technology, <sup>2</sup>Adobe Research
The official repository of the ACM MM 2022 paper
"Less is More: Consistent Video Depth Estimation with Masked Frames Modeling".
Arxiv | Paper | Supp | Poster | Video | Video (Chinese)
Abstract
Temporal consistency is the key challenge of video depth estimation. Previous works rely on additional optical flow or camera poses, which are time-consuming to compute. By contrast, we derive consistency from less information. Since videos inherently carry heavy temporal redundancy, a missing frame can be recovered from its neighboring ones. Inspired by this, we propose the frame masking network (FMNet), a spatial-temporal transformer that predicts the depth of masked frames based on their neighboring frames. By reconstructing masked temporal features, FMNet learns intrinsic inter-frame correlations, which lead to consistency. Experimental results demonstrate that, compared with prior methods, our approach achieves comparable spatial accuracy and higher temporal consistency without any additional information. Our work provides a new perspective on consistent video depth estimation.
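As a rough illustration of the masked-frames idea, the sketch below hides whole frames of a clip so a model can only predict their depth from temporal neighbors. This is a hedged sketch, not the actual FMNet code: the tensor layout, the mask ratio, and the `fmnet`/`depth_loss` names are assumptions for illustration.

```python
import torch

def mask_frames(frames, mask_ratio=0.4):
    """Randomly mask whole frames in a clip so the model must
    reconstruct them from their temporal neighbors.

    frames: (B, T, C, H, W) tensor of video frames.
    Returns the masked clip and a boolean mask over the T axis.
    """
    B, T = frames.shape[:2]
    # Sample which frames to hide; masked positions are zeroed out.
    mask = torch.rand(B, T) < mask_ratio   # (B, T), True = masked
    masked = frames.clone()
    masked[mask] = 0.0
    return masked, mask

# Training-step sketch: the network sees the masked clip and is
# supervised on depth for every frame, so masked frames can only be
# solved by exploiting inter-frame correlations.
# depth_pred = fmnet(masked)              # hypothetical model call
# loss = depth_loss(depth_pred, depth_gt) # hypothetical loss
```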
Installation
Our code is based on `python=3.6.13` and `pytorch==1.7.1`.
You can refer to `environment.yml` or `requirements.txt` for installation.
Note that some libraries listed in those files are not strictly required by the code.
```bash
conda create -n fmnet python=3.6
conda activate fmnet
conda install pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=11.0 -c pytorch -c conda-forge
pip install numpy imageio opencv-python scipy tensorboard timm scikit-image tqdm h5py
```

(`glob` is part of the Python standard library, so it does not need to be installed via pip.)
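After installation, a minimal sanity check confirms the pinned versions and that CUDA is visible to PyTorch (expected versions follow the commands above):

```python
import torch
import torchvision

print(torch.__version__)         # expected: 1.7.1
print(torchvision.__version__)   # expected: 0.8.2
print(torch.cuda.is_available()) # should be True on a CUDA 11.0 setup
```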
Demo
Download our checkpoint trained on the NYUDV2 dataset and put it in the `checkpoint` folder.
Place the input RGB frames in `./demo/rgb`. The visualization results will be saved in the `./demo/results` folder.
```bash
python demo.py
```
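If you want to post-process raw predictions yourself, a minimal depth-colorization sketch is shown below. The file names and the min-max normalization are hypothetical assumptions, not the demo's exact output format:

```python
import cv2
import numpy as np

def colorize_depth(depth):
    """Normalize a depth map to [0, 255] and apply a colormap."""
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    d = (d * 255).astype(np.uint8)
    return cv2.applyColorMap(d, cv2.COLORMAP_INFERNO)

# Example: colorize one predicted depth map and save it.
# depth = np.load('./demo/results/frame_000.npy')  # hypothetical file name
# cv2.imwrite('./demo/results/frame_000_vis.png', colorize_depth(depth))
```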
Evaluation
Download the 654 testing sequences of the NYUDV2 dataset and put them in the `./data/testnyu_data/` folder.
Each sequence contains 12 consecutive RGB frames, along with the ground-truth depth of the 654 testing images for evaluation.
```bash
python testfmnet_nyu.py
```
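For reference, the spatial-accuracy metrics commonly reported on NYUDV2 (AbsRel, RMSE, and the δ thresholds) can be computed as below. This is a generic sketch of the standard metrics, not the repository's evaluation script:

```python
import numpy as np

def eval_depth(pred, gt):
    """Standard single-image depth metrics over valid ground-truth pixels."""
    valid = gt > 0                      # ignore pixels with no ground truth
    pred, gt = pred[valid], gt[valid]
    thresh = np.maximum(pred / gt, gt / pred)
    return {
        'abs_rel': np.mean(np.abs(pred - gt) / gt),
        'rmse':    np.sqrt(np.mean((pred - gt) ** 2)),
        'delta1':  np.mean(thresh < 1.25),
        'delta2':  np.mean(thresh < 1.25 ** 2),
        'delta3':  np.mean(thresh < 1.25 ** 3),
    }
```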
Future Work
Our paper "Neural Video Depth Stabilizer" (NVDS) was accepted by ICCV 2023. NVDS is the first plug-and-play stabilizer that can remove flicker from any single-image depth model without extra effort. Besides, we also introduce a large-scale dataset, Video Depth in the Wild (VDW), which consists of 14,203 videos with over two million frames, making it the largest natural-scene video depth dataset. If you are interested, please refer to our paper and repo:
Arxiv: https://arxiv.org/abs/2307.08695
Github: https://github.com/RaymondWang987/NVDS
Project Page: https://raymondwang987.github.io/NVDS/
Citation
If you find our work useful in your research, please consider citing our paper:
```bibtex
@inproceedings{Wang2022fmnet,
  title     = {Less is More: Consistent Video Depth Estimation with Masked Frames Modeling},
  author    = {Wang, Yiran and Pan, Zhiyu and Li, Xingyi and Cao, Zhiguo and Xian, Ke and Zhang, Jianming},
  booktitle = {Proceedings of the 30th ACM International Conference on Multimedia (MM '22)},
  year      = {2022},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
}
```