# DPT

<p align="center"> <img src="https://raw.github.com/BITszwang/DPT/master/Figs/framework.png" width="90%"> </p>

Official PyTorch implementation of the paper "Detail-Preserving Transformer for Light Field Image Super-Resolution", accepted by AAAI 2022.
## Updates
- 2022.01: Our method is available in the newly released repository BasicLFSR, an open-source and easy-to-use toolbox for LF image SR.
- 2022.01: The code is released.
## Requirements
- Python 3.7.7
- PyTorch 1.5.0
- torchvision 0.6.0
- h5py 2.8.0
- Matlab
## Dataset
We use the EPFL, HCInew, HCIold, INRIA, and STFgantry datasets for both training and testing. You can download them from Baidu Drive (key: 912V).
## Download the pretrained weights
We share the model weights of DPT. You can download them from Baidu Drive (key: 912V).
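Once downloaded, the weights can be loaded in PyTorch along these lines. This is only a sketch: the DPT network class and the checkpoint's internal layout (a dict with a `state_dict` entry) are assumptions, so a tiny `nn.Linear` stands in for the model and a dummy checkpoint is created on the spot to keep the example self-contained.

```python
import torch
import torch.nn as nn

# Stand-in for the DPT network; replace with the model class from this repo.
model = nn.Linear(4, 4)

# Create a dummy checkpoint so the sketch runs end-to-end; in practice you
# would point torch.load at the downloaded weight file instead.
torch.save({"state_dict": model.state_dict()}, "demo_ckpt.pth")

# map_location="cpu" lets the weights load on machines without a GPU.
checkpoint = torch.load("demo_ckpt.pth", map_location="cpu")
model.load_state_dict(checkpoint["state_dict"])
model.eval()  # switch to inference mode before testing
```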
## Download the visual results
We share the super-resolved results generated by our DPT so that researchers can compare their methods with ours without running inference. The results are available at Baidu Drive (key: 912V).
## Prepare the datasets
To generate the training data, run `GenerateTrainingData.m` in Matlab.

To generate the testing data, run `GenerateTestData.m` in Matlab.
We also provide the processed datasets used in the paper. They are available at Baidu Drive (key: 912V).
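The processed files are HDF5 archives (h5py appears in the requirements above). A quick way to inspect one before training is sketched below; the dataset names `data` and `label` and the patch shapes are assumptions, and a dummy file is created here so the example is self-contained.

```python
import h5py
import numpy as np

# Create a small dummy .h5 file for illustration; in practice you would open
# one of the processed files instead. Dataset names/shapes are assumptions.
with h5py.File("demo_patches.h5", "w") as f:
    f.create_dataset("data", data=np.zeros((10, 32, 32), dtype=np.float32))
    f.create_dataset("label", data=np.zeros((10, 64, 64), dtype=np.float32))

# List the datasets and their shapes to verify the file layout.
with h5py.File("demo_patches.h5", "r") as f:
    keys = list(f.keys())
    shapes = {k: f[k].shape for k in keys}
```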
## Train
To train DPT, run

```bash
python train.py
```

Checkpoints will be saved to `./log/`.
## Test
To evaluate DPT, run

```bash
python test.py
```

The performance of DPT on the five datasets will be printed to the screen. The visual result of each scene will be saved in `./Results/`, and the PSNR and SSIM values of each scene will also be saved in `./PSNRSSIM/`.
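The PSNR values reported by the test script follow the standard definition, so they can be sanity-checked against saved results with a few lines of numpy. This is a generic implementation, not the repo's code; the function name and default data range are illustrative.

```python
import numpy as np

def psnr(sr, hr, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a super-resolved and a
    ground-truth image, both assumed to lie in [0, data_range]."""
    mse = np.mean((sr.astype(np.float64) - hr.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((data_range ** 2) / mse)

# Toy check: a constant offset of 0.1 on images in [0, 1] gives MSE = 0.01,
# hence PSNR = 10 * log10(1 / 0.01) = 20 dB.
hr = np.zeros((8, 8))
sr = hr + 0.1
value = psnr(sr, hr)
```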
## Generate visual results
To generate the visual super-resolved results, run `GenerateResultImages.m` in Matlab. The `.mat` files in `./Results/` will be converted to `.png` images in `./SRimages/`.
To generate the visual gradient results, run

```bash
python generate_visual_gradient_map.py
```

Gradient results will be saved to `./GRAimages/`.
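For a rough idea of what a gradient map shows, a simple finite-difference gradient magnitude can be computed with numpy as below. This is only a stand-in: the actual operator used by `generate_visual_gradient_map.py` may differ.

```python
import numpy as np

def gradient_magnitude(img):
    """Finite-difference gradient magnitude of a 2-D image; edges in the
    image show up as high-magnitude responses."""
    gy, gx = np.gradient(img.astype(np.float64))  # per-axis derivatives
    return np.sqrt(gx ** 2 + gy ** 2)

# A vertical step edge produces a strong response along the boundary columns.
img = np.zeros((4, 4))
img[:, 2:] = 1.0
grad = gradient_magnitude(img)
```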
## Performance
### Quantitative comparisons

<p align="center"> <img src="https://raw.github.com/BITszwang/DPT/master/Figs/quantitativeresults.png" width="95%"> </p>

### Visual comparisons

<p align="center"> <img src="https://raw.github.com/BITszwang/DPT/master/Figs/visualresults.png" width="95%"> </p>

## Citation
If you find this work helpful, please consider citing the following paper:

```bibtex
@inproceedings{wang2022detail,
  title={Detail-Preserving Transformer for Light Field Image Super-Resolution},
  author={Wang, Shunzhou and Zhou, Tianfei and Lu, Yao and Di, Huijun},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2022}
}
```
## Acknowledgements
This code is heavily based on LF-DFNet. We also refer to the code of VSR-Transformer, COLA-Net, and SPSR, and we thank the authors for sharing it. We would like to thank Yingqian Wang for his help with LFSR, and Zhengyu Liang for adding our DPT to the repository BasicLFSR.
## Contact
If you have any questions about this work, feel free to contact me at shunzhouwang@bit.edu.cn.