DualSR: Zero-Shot Dual Learning for Real-World Super-Resolution

This repository is the official implementation of "DualSR: Zero-Shot Dual Learning for Real-World Super-Resolution".

Requirements

To install requirements:

conda env create -f environment.yml
conda activate dualsr_env

Datasets

You can download the datasets mentioned in the paper from the following links.

Evaluation

To super-resolve an image using DualSR, put the image in the 'test/LR' folder and run:

python main.py

To compute PSNR values, provide the ground-truth image and/or ground-truth blur kernel directories:

python main.py --gt_dir 'path to the ground-truth image' --kernel_dir 'path to the ground-truth blur kernel'

You can use the --debug argument to print PSNR and loss values during training.

To evaluate DualSR on a dataset, specify the directory that contains LR images:

python main.py --input_dir 'path to the LR input images' --output_dir 'path to save results'
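
If you want to run this evaluation over several datasets in one go, the sketch below shows one possible way to do it from Python. It only uses the documented --input_dir and --output_dir arguments; the dataset paths are placeholders and should be adjusted to your local setup.

# Minimal sketch (assumed local dataset paths): run DualSR over several
# LR folders by invoking main.py with the documented arguments.
import subprocess
from pathlib import Path

datasets = {
    "DIV2KRK": "data/DIV2KRK/lr",      # hypothetical paths; adjust to your setup
    "Urban100": "data/Urban100/LR",
}

for name, lr_dir in datasets.items():
    out_dir = Path("results") / name
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["python", "main.py",
         "--input_dir", lr_dir,
         "--output_dir", str(out_dir)],
        check=True,
    )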

Results

Our model achieves the following performance (PSNR / SSIM) on the DIV2KRK, Urban100, and NTIRE2017 datasets:

Model name    DIV2KRK          Urban100         NTIRE2017
DualSR        30.92 / 0.8728   25.04 / 0.7803   28.82 / 0.8045

All PSNR and SSIM values are calculated using the 'Evaluate_PSNR_SSIM.m' script provided by RCAN.
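
For a quick sanity check without MATLAB, a rough Python approximation of that evaluation could look like the sketch below. It assumes the common RCAN-style protocol (PSNR on the luminance channel after shaving a border equal to the scale factor); it is a sketch under those assumptions, not a replacement for the official script.

# Rough approximation of RCAN-style PSNR evaluation on the Y channel.
# Official numbers come from Evaluate_PSNR_SSIM.m; this is only a sanity check.
import numpy as np

def rgb_to_y(img):
    # Convert an 8-bit RGB image to the ITU-R BT.601 luma channel.
    img = img.astype(np.float64)
    return 16.0 + (65.481 * img[..., 0] + 128.553 * img[..., 1] + 24.966 * img[..., 2]) / 255.0

def psnr_y(sr, hr, scale=2):
    # PSNR between super-resolved and ground-truth images on the Y channel,
    # with a border of 'scale' pixels shaved on each side (assumed protocol).
    sr_y = rgb_to_y(sr)[scale:-scale, scale:-scale]
    hr_y = rgb_to_y(hr)[scale:-scale, scale:-scale]
    mse = np.mean((sr_y - hr_y) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)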

Acknowledgement

The code is built on KernelGAN. We thank the authors for sharing their code.