NeW CRFs: Neural Window Fully-connected CRFs for Monocular Depth Estimation
This is the official PyTorch implementation code for NeWCRFs. For technical details, please refer to:
NeW CRFs: Neural Window Fully-connected CRFs for Monocular Depth Estimation <br /> Weihao Yuan, Xiaodong Gu, Zuozhuo Dai, Siyu Zhu, Ping Tan <br /> CVPR 2022 <br /> [Project Page] | [Paper] <br />
<p float="left"> <img src="files/intro.png" width="400" /> </p>
<!-- <p float="left"> <img src="files/office_00633.jpg" width="200" /> <img src="files/office_00633_depth.jpg" width="200" /> <img src="files/office_00633_pcd.jpg" width="240" /> </p> -->
Bibtex
If you find this code useful in your research, please cite:
@inproceedings{yuan2022newcrfs,
title={NeWCRFs: Neural Window Fully-connected CRFs for Monocular Depth Estimation},
author={Yuan, Weihao and Gu, Xiaodong and Dai, Zuozhuo and Zhu, Siyu and Tan, Ping},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
pages={},
year={2022}
}
Contents
- Installation
- Datasets
- Training
- Evaluation
- Models
- Demo
Installation
conda create -n newcrfs python=3.8
conda activate newcrfs
conda install pytorch=1.10.0 torchvision cudatoolkit=11.1 -c pytorch -c conda-forge
pip install matplotlib tqdm tensorboardX timm mmcv
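A quick sanity check of the environment (a minimal sketch, nothing specific to this repository) confirms that PyTorch sees the GPU and that the extra dependencies import cleanly:

```python
import matplotlib, mmcv, tensorboardX, timm, tqdm  # extra deps from the pip step
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```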
Datasets
You can prepare the KITTI and NYUv2 datasets following the instructions here, and then modify the data paths in the config files to point to your dataset locations.
Or you can download the NYUv2 data from here and download the KITTI data from here.
Training
First download the pretrained encoder backbone from here, and then modify the pretrain path in the config files.
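For reference, the entries that typically need editing in configs/arguments_train_nyu.txt (and the KITTI config) look roughly like the lines below. The argument names and paths here are illustrative rather than copied from the shipped files, so check the actual configs for the exact spellings:

```
--data_path       /path/to/nyu_depth_v2/sync/
--gt_path         /path/to/nyu_depth_v2/sync/
--filenames_file  data_splits/<train_split_file>.txt
--pretrain        /path/to/swin_pretrained.pth
```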
Training the NYUv2 model:
python newcrfs/train.py configs/arguments_train_nyu.txt
Training the KITTI model:
python newcrfs/train.py configs/arguments_train_kittieigen.txt
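Since tensorboardX is among the dependencies, the training script presumably writes TensorBoard summaries; if so, progress can be watched with the standard command (the path below is a placeholder for whatever log directory is set in your config):

```
tensorboard --logdir /path/to/your/log_directory
```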
Evaluation
Evaluate the NYUv2 model:
python newcrfs/eval.py configs/arguments_eval_nyu.txt
Evaluate the KITTI model:
python newcrfs/eval.py configs/arguments_eval_kittieigen.txt
Models
| Model | Abs. Rel. | Sq. Rel. | RMSE | RMSE log | δ1 | δ2 | δ3 | SILog |
|---|---|---|---|---|---|---|---|---|
| NYUv2 | 0.0952 | 0.0443 | 0.3310 | 0.1185 | 0.923 | 0.992 | 0.998 | 9.1023 |
| KITTI_Eigen | 0.0520 | 0.1482 | 2.0716 | 0.0780 | 0.975 | 0.997 | 0.999 | 6.9859 |
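For reference, these columns are the standard monocular-depth error and accuracy metrics. Below is a minimal NumPy sketch of how they are commonly computed (gt and pred are aligned depth maps in metres with invalid pixels already masked out; this mirrors the usual evaluation protocol, not necessarily the exact code in newcrfs/eval.py):

```python
import numpy as np

def compute_depth_metrics(gt, pred):
    """Standard monocular depth metrics over valid (masked) depth arrays."""
    thresh = np.maximum(gt / pred, pred / gt)
    d1 = (thresh < 1.25).mean()        # δ1
    d2 = (thresh < 1.25 ** 2).mean()   # δ2
    d3 = (thresh < 1.25 ** 3).mean()   # δ3

    abs_rel = np.mean(np.abs(gt - pred) / gt)                       # Abs. Rel.
    sq_rel = np.mean(((gt - pred) ** 2) / gt)                       # Sq. Rel.
    rmse = np.sqrt(np.mean((gt - pred) ** 2))                       # RMSE
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))   # RMSE log

    err = np.log(pred) - np.log(gt)
    silog = np.sqrt(np.mean(err ** 2) - np.mean(err) ** 2) * 100    # SILog

    return abs_rel, sq_rel, rmse, rmse_log, d1, d2, d3, silog
```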
Demo
Test images with the indoor model:
python newcrfs/test.py --data_path datasets/test_data --dataset nyu --filenames_file data_splits/test_list.txt --checkpoint_path model_nyu.ckpt --max_depth 10 --save_viz
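The --save_viz flag above saves visualizations of the predictions; if you want to colorize a depth map yourself, a minimal matplotlib sketch is enough (the depth array below is a random stand-in for a real prediction):

```python
import numpy as np
import matplotlib.pyplot as plt

# `depth` stands in for a real H x W prediction in metres
# (e.g. an NYUv2 output, clipped to max_depth = 10).
depth = np.random.uniform(0.1, 10.0, size=(480, 640))

plt.imshow(depth, cmap="plasma", vmin=0.0, vmax=10.0)
plt.axis("off")
plt.colorbar(label="depth (m)")
plt.savefig("depth_viz.png", bbox_inches="tight", dpi=150)
```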
Play with the live demo from a video or your webcam:
python newcrfs/demo.py --dataset nyu --checkpoint_path model_zoo/model_nyu.ckpt --max_depth 10 --video video.mp4
Acknowledgements
Thanks to Jin Han Lee for open-sourcing the excellent work BTS, and to Microsoft Research Asia for open-sourcing the excellent work Swin Transformer.