OACC-Net

<br> <p align="center"> <img src="https://raw.github.com/YingqianWang/OACC-Net/master/Figs/OACC-Net.png" width="90%"> </p>

PyTorch implementation of our paper "Occlusion-Aware Cost Constructor for Light Field Depth Estimation". [CVPR 2022]<br>

News and Updates:

Preparation:

Requirements:

Datasets:

Path structure:

├──./datasets/
│    ├── training
│    │    ├── antinous
│    │    │    ├── gt_disp_lowres.pfm
│    │    │    ├── valid_mask.png
│    │    │    ├── input_Cam000.png
│    │    │    ├── input_Cam001.png
│    │    │    ├── ...
│    │    ├── boardgames
│    │    ├── ...
│    ├── validation
│    │    ├── backgammon
│    │    │    ├── gt_disp_lowres.pfm
│    │    │    ├── input_Cam000.png
│    │    │    ├── input_Cam001.png  
│    │    │    ├── ...
│    │    ├── boxes
│    │    ├── ...
│    ├── test
│    │    ├── bedroom
│    │    │    ├── input_Cam000.png
│    │    │    ├── input_Cam001.png  
│    │    │    ├── ...
│    │    ├── bicycle
│    │    ├── herbs
│    │    ├── origami
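
The ground-truth disparity files above (`gt_disp_lowres.pfm`) are stored in the simple PFM format: an ASCII header (`Pf`, dimensions, endianness scale) followed by raw 32-bit floats with rows running bottom-to-top. A minimal reader sketch for loading them with NumPy (the `read_pfm` helper name is ours, not part of this repo's code):

```python
import numpy as np

def read_pfm(path):
    """Read a grayscale .pfm disparity map (e.g. gt_disp_lowres.pfm) into an
    (H, W) float32 array in normal top-to-bottom image order."""
    with open(path, 'rb') as f:
        header = f.readline().decode('ascii').strip()
        if header != 'Pf':
            raise ValueError('expected a grayscale PFM file (header "Pf")')
        width, height = map(int, f.readline().split())
        # A negative scale marks little-endian floats, positive big-endian.
        scale = float(f.readline())
        endian = '<' if scale < 0 else '>'
        data = np.fromfile(f, dtype=endian + 'f4', count=width * height)
    # PFM stores rows bottom-to-top, so flip vertically.
    return data.reshape(height, width)[::-1]
```

The `input_Cam000.png` … views themselves are ordinary PNGs and can be loaded with any image library.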

Train:

Test on your own LFs:

Reproduce the scores on the HCI 4D LF benchmark:

Reproduce the inference time reported in our paper:

Results:

Quantitative Results:

<p align="center"> <img src="https://raw.github.com/YingqianWang/OACC-Net/master/Figs/QuantitativeMSE.png" width="95%"> </p>

Visual Comparisons:

<p align="center"> <img src="https://raw.github.com/YingqianWang/OACC-Net/master/Figs/Visual.png" width="95%"> </p>

Screenshot on the HCI 4D LF Benchmark (March 2022):

<p align="center"> <img src="https://raw.github.com/YingqianWang/OACC-Net/master/Figs/Screenshot.png" width="75%"> </p>

Performance on real LFs:

<p align="center"> <img src="https://raw.github.com/YingqianWang/OACC-Net/master/Figs/VisualReal.png" width="65%"> </p>

Please refer to our supplemental material for additional quantitative and visual comparisons.

Citation

If you find this work helpful, please consider citing:

@InProceedings{OACC-Net,
    author    = {Wang, Yingqian and Wang, Longguang and Liang, Zhengyu and Yang, Jungang and An, Wei and Guo, Yulan},
    title     = {Occlusion-Aware Cost Constructor for Light Field Depth Estimation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {19809-19818}
}
<br>

Contact

Feel free to raise an issue or email wangyingqian16@nudt.edu.cn with any questions about this work.
