LFT

PyTorch implementation of "Light Field Image Super-Resolution with Transformers", IEEE SPL 2022. [<a href="https://arxiv.org/abs/2108.07597">pdf</a>].<br><br>

<p align="center"> <img src="https://raw.github.com/ZhengyuLiang24/LFT/main/Figs/LFT_overview.png" width="100%"> </p>
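As sketched in the overview above, LFT incorporates angular and spatial information with dedicated Transformers. The snippet below is a minimal conceptual sketch, assuming a [B, C, U·V, H, W] feature layout and interleaved angular/spatial attention; it is illustrative only and does not reproduce the repository's actual modules or hyper-parameters.

```python
import torch
import torch.nn as nn

# Conceptual sketch only (not the repository's modules): one block that
# interleaves an angular Transformer (attention across the U*V sub-aperture
# views at each spatial position) with a spatial Transformer (attention
# across spatial positions within each view). The tensor layout
# [B, C, A, H, W] with A = U*V views is an assumption for illustration.
class AngularSpatialBlock(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.ang_attn = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads, batch_first=True)
        self.spa_attn = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, a, h, w = x.shape
        # Angular stage: sequences of length A, one per pixel location.
        t = x.permute(0, 3, 4, 2, 1).reshape(b * h * w, a, c)
        t = self.ang_attn(t)
        x = t.reshape(b, h, w, a, c).permute(0, 4, 3, 1, 2)
        # Spatial stage: sequences of length H*W, one per view.
        t = x.permute(0, 2, 3, 4, 1).reshape(b * a, h * w, c)
        t = self.spa_attn(t)
        return t.reshape(b, a, h, w, c).permute(0, 4, 1, 2, 3)

if __name__ == "__main__":
    lf = torch.randn(1, 32, 25, 32, 32)  # 5x5 views, 32x32 patch, 32 channels
    print(AngularSpatialBlock(32)(lf).shape)  # torch.Size([1, 32, 25, 32, 32])
```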

Contributions:

Codes and Models:

Requirements

Datasets

We use the EPFL, HCInew, HCIold, INRIA, and STFgantry datasets for both training and testing. Please first download the datasets via Baidu Drive (key: 7nzy) or OneDrive, and place the five datasets in the folder ./datasets/.
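After downloading, a quick check that the five datasets sit under ./datasets/ might look like the sketch below; the folder names are assumed to match the dataset names above, so verify them against the downloaded archives.

```python
from pathlib import Path

# Illustrative sanity check, assuming each dataset unpacks into a sub-folder
# of ./datasets/ named as listed below; the actual folder names may differ.
datasets = ["EPFL", "HCInew", "HCIold", "INRIA", "STFgantry"]
root = Path("./datasets")
missing = [name for name in datasets if not (root / name).is_dir()]
print("All 5 datasets found." if not missing else f"Missing: {missing}")
```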

Train

Test

Results:

<p align="center"> <img src="https://raw.github.com/ZhengyuLiang24/LFT/main/Figs/LFT_Quantitative.png" width="100%"> </p>

<p align="center"> <img src="https://raw.github.com/ZhengyuLiang24/LFT/main/Figs/LFT_Efficiency.png" width="60%"> </p>

<p align="center"> <img src="https://raw.github.com/ZhengyuLiang24/LFT/main/Figs/LFT_Qualitative.png" width="100%"> </p>

<p align="center"> <a href="https://wyqdatabase.s3.us-west-1.amazonaws.com/LFT_video.mp4"><img src="https://raw.github.com/ZhengyuLiang24/LFT/main/Figs/LFT_video.png" width="80%"></a> </p>

<p align="center"> <img src="https://raw.github.com/ZhengyuLiang24/LFT/main/Figs/LFT_attmap.png" width="80%"> </p>

<br>

Citation

If you find this work helpful, please consider citing:

@Article{LFT,
    author    = {Liang, Zhengyu and Wang, Yingqian and Wang, Longguang and Yang, Jungang and Zhou, Shilin},
    title     = {Light Field Image Super-Resolution with Transformers},
    journal   = {IEEE Signal Processing Letters},
    year      = {2022},
}

<br>

Contact

Any questions regarding this work can be addressed to zyliang@nudt.edu.cn.