# DIFFNet
This repo is for *Self-Supervised Monocular Depth Estimation with Internal Feature Fusion* (arXiv), BMVC 2021.
A new backbone for self-supervised depth estimation.
If you find this work useful, please consider citing it:
```
@inproceedings{zhou_diffnet,
  title={Self-Supervised Monocular Depth Estimation with Internal Feature Fusion},
  author={Zhou, Hang and Greenwood, David and Taylor, Sarah},
  booktitle={British Machine Vision Conference (BMVC)},
  year={2021}
}
```
## Updates
- [16-05-2022] Added Cityscapes training and testing based on ManyDepth.
- [22-01-2022] Uploaded a diffnet_640x192 model (slightly improved over the results in the original paper).
- [07-12-2021] A multi-GPU training version is available on the multi-gpu branch.
## Comparing with other methods
Evaluation on selected hard cases:
## Trained weights on KITTI
*Please note: the results of diffnet_1024x320_ms are not reported in the paper.*
| Model | abs rel | sq rel | RMSE | RMSE log | δ < 1.25 | δ < 1.25² | δ < 1.25³ |
|---|---|---|---|---|---|---|---|
| 1024x320 | 0.097 | 0.722 | 4.345 | 0.174 | 0.907 | 0.967 | 0.984 |
| 1024x320_ms | 0.094 | 0.678 | 4.250 | 0.172 | 0.911 | 0.968 | 0.984 |
| 1024x320_ms_ttr | 0.079 | 0.640 | 3.934 | 0.159 | 0.932 | 0.971 | 0.984 |
| 640x192 | 0.102 | 0.753 | 4.459 | 0.179 | 0.897 | 0.965 | 0.983 |
| 640x192_ms | 0.101 | 0.749 | 4.445 | 0.179 | 0.898 | 0.965 | 0.983 |
## Setting up before training and testing
- Data preparation: please refer to monodepth2; a rough sketch of the typical steps is shown below.
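For convenience, here is a minimal sketch of KITTI data preparation in the monodepth2 style. The monodepth2 README is authoritative; the `kitti_data/` path and the use of its `splits/kitti_archives_to_download.txt` file are assumptions.

```sh
# Download the KITTI raw sequences listed in monodepth2's split file
# (assumes a monodepth2-style splits/kitti_archives_to_download.txt).
wget -i splits/kitti_archives_to_download.txt -P kitti_data/

# Unpack the archives.
cd kitti_data
unzip "*.zip"
cd ..

# Optionally convert PNGs to JPEGs to save disk space, as monodepth2 suggests.
find kitti_data/ -name '*.png' | parallel 'convert -quality 92 -sampling-factor 2x2,1x1,1x1 {.}.png {.}.jpg && rm {}'
```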
Training:
```
sh start2train.sh
```
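If you prefer to call the trainer directly, the script is expected to wrap a command along these lines. This is only a sketch assuming monodepth2-style options; the paths, model name, and resolution are placeholders, so check `start2train.sh` for the actual flags used.

```sh
# Hypothetical expansion of start2train.sh: a monodepth2-style training call.
# --data_path, --log_dir and --model_name values are placeholders.
python train.py \
  --data_path ./kitti_data \
  --log_dir ./logs \
  --model_name diffnet_640x192 \
  --height 192 --width 640 \
  --batch_size 12 --num_epochs 20
```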
Testing:
```
sh disp_evaluation.sh
```
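Roughly, the evaluation script is expected to wrap a monodepth2-style call such as the one below. The weights folder path is a placeholder and the flags are assumptions, so check `disp_evaluation.sh` for what is actually run.

```sh
# Hypothetical expansion of disp_evaluation.sh: monodepth2-style KITTI evaluation.
# Point --load_weights_folder at the downloaded DIFFNet checkpoint directory.
python evaluate_depth.py \
  --data_path ./kitti_data \
  --load_weights_folder ./weights/diffnet_640x192 \
  --eval_mono
```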
Infer a single depth map from an RGB image:
```
sh test_sample.sh
```
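For a single image, the script presumably follows monodepth2's `test_simple.py` pattern. The script name, image path, and model name below are placeholders, so refer to `test_sample.sh` for the exact invocation.

```sh
# Hypothetical expansion of test_sample.sh: predict a depth/disparity map for one image.
python test_simple.py \
  --image_path assets/example.jpg \
  --model_name diffnet_640x192
```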
## Acknowledgement
Thanks to the authors of the following works: