Learning Depth from Focus in the Wild

[ECCV 2022] Official PyTorch implementation of "Learning Depth from Focus in the Wild"

Requirements

Depth Estimation Network

1. Download Datasets

2. Use pretrained model

Alternatively, train a model yourself with the training scripts:

python train_code_[Dataset].py --lr [learning rate]

3. Run test.py

python test.py --dataset [Dataset]
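For background, the classical depth-from-focus idea the network builds on can be sketched in a few lines (an illustrative baseline only, not the network in this repository): measure per-pixel sharpness in every slice of the focal stack, then take the index of the sharpest slice at each pixel as the depth label.

```python
# Minimal classical depth-from-focus baseline (NOT this repository's network):
# sharpness is measured per focal slice with a Laplacian response, locally
# aggregated, and depth is the slice index of maximum sharpness per pixel.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focus(stack):
    """stack: (S, H, W) grayscale focal stack -> (H, W) map of depth indices."""
    # Focus measure: squared Laplacian, averaged over a local window for robustness.
    focus = np.stack([uniform_filter(laplace(s) ** 2, size=9) for s in stack])
    return focus.argmax(axis=0)
```

Learned approaches such as this paper replace the hand-crafted focus measure and argmax with a trained cost volume, which handles textureless regions where the Laplacian response is uninformative.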

Simulator

1. Download NYU v2 dataset [4]

2. Run the code.

python synthetic_blur_movement.py
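The general idea behind such a simulator can be sketched with a thin-lens defocus model (a hedged illustration under assumed camera parameters, not the contents of `synthetic_blur_movement.py`): the circle of confusion grows with the gap between a pixel's depth and the focus distance, so each synthetic focal slice is the sharp image blurred by a depth-dependent kernel.

```python
# Sketch of thin-lens defocus rendering (illustrative, with assumed optics;
# not this repository's simulator). Depth is quantized into layers, each layer
# is blurred by a Gaussian whose sigma follows its circle of confusion (CoC),
# and the layers are composited with normalized weights.
import numpy as np
from scipy.ndimage import gaussian_filter

def coc_radius(depth, focus_dist, focal_len=0.05, f_number=2.0):
    """Signed CoC radius (sensor units) for a point at `depth` (same units)."""
    aperture = focal_len / f_number
    return (aperture * focal_len * (depth - focus_dist)
            / (depth * (focus_dist - focal_len)))

def render_slice(img, depth, focus_dist, px_per_unit=2000.0):
    """Blur a sharp image `img` using a per-layer Gaussian sized by the CoC."""
    out = np.zeros_like(img, dtype=float)
    weight = np.zeros_like(img, dtype=float)
    for d in np.unique(np.round(depth, 1)):          # quantized depth layers
        mask = (np.round(depth, 1) == d).astype(float)
        sigma = abs(coc_radius(d, focus_dist)) * px_per_unit
        out += gaussian_filter(img * mask, sigma)
        weight += gaussian_filter(mask, sigma)
    return out / np.maximum(weight, 1e-8)            # normalized composite
```

Rendering one slice per focus distance yields a synthetic focal stack; the actual simulator additionally models camera movement between slices.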

End-To-End Network

1. Upload your dataset.

2. Run test_real_scenes.py in the 'End_to_End' directory.

python test_real_scenes.py
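Focal stacks captured in the wild exhibit focus breathing and hand shake, so the slices must be aligned before depth inference (compare the before/after-warp results below). A minimal sketch of recovering a translational offset between two slices via FFT phase correlation (an illustrative assumption; the paper learns a more general warp):

```python
# Estimate the integer translation between two focal slices with phase
# correlation (illustrative only -- real focal-stack alignment also needs
# scale/warp handling for focus breathing).
import numpy as np

def translation_offset(ref, mov):
    """Return integer (dy, dx) that would translate `mov` onto `ref`."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))      # cross-power spectrum
    corr = np.fft.ifft2(F / np.maximum(np.abs(F), 1e-12)).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)  # correlation peak
    h, w = ref.shape
    return (dy - h if dy > h // 2 else dy,                # wrap to signed shift
            dx - w if dx > w // 2 else dx)
```

Phase correlation normalizes out per-slice blur differences to first order, which is why it is a common pre-alignment step for focal stacks.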

Results

<img src="./Results/Balls_before_warp.gif" width="40%" height="40%"> <img src="./Results/Plants_before_warp.gif" width="40%" height="40%">

<img src="./Results/Balls_after_warp.gif" width="40%" height="40%"> <img src="./Results/Plants_after_warp.gif" width="40%" height="40%">

<img src="./Results/ball_depth.jpg" width="40%" height="40%"> <img src="./Results/plants_depth.jpg" width="40%" height="40%">

Limitation

Sources

[1] Chang, Jia-Ren, and Yong-Sheng Chen. "Pyramid stereo matching network." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018. code paper

[2] Shen, Zhelun, Yuchao Dai, and Zhibo Rao. "Cfnet: Cascade and fused cost volume for robust stereo matching." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. code paper

[3] Abuolaim, Abdullah, et al. "Learning to reduce defocus blur by realistically modeling dual-pixel data." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. code paper

[4] Silberman, Nathan, et al. "Indoor segmentation and support inference from rgbd images." European conference on computer vision. Springer, Berlin, Heidelberg, 2012. page

[5] Hazirbas, Caner, et al. "Deep depth from focus." Asian Conference on Computer Vision. Springer, Cham, 2018. code paper

[6] Maximov, Maxim, Kevin Galim, and Laura Leal-Taixé. "Focus on defocus: bridging the synthetic to real domain gap for depth estimation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020. code paper

[7] Wang, Ning-Hsu, et al. "Bridging Unsupervised and Supervised Depth from Focus via All-in-Focus Supervision." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. code paper

[8] Scharstein, Daniel, et al. "High-resolution stereo datasets with subpixel-accurate ground truth." German conference on pattern recognition. Springer, Cham, 2014. page

[9] Honauer, Katrin, et al. "A dataset and evaluation methodology for depth estimation on 4D light fields." Asian Conference on Computer Vision. Springer, Cham, 2016. page

[10] Herrmann, Charles, et al. "Learning to autofocus." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020. page