
News

Our new work on blind image super-resolution has been accepted by IJCV. The paper is available at End-to-end Alternating Optimization for Real-World Blind Super Resolution, and the code is released at RealDAN.

This is an official implementation of Unfolding the Alternating Optimization for Blind Super Resolution (DANv1) and End-to-end Alternating Optimization for Blind Super Resolution (DANv2).

If this repo works for you, please cite our papers:

@article{luo2020unfolding,
  title={Unfolding the Alternating Optimization for Blind Super Resolution},
  author={Luo, Zhengxiong and Huang, Yan and Li, Shang and Wang, Liang and Tan, Tieniu},
  journal={Advances in Neural Information Processing Systems (NeurIPS)},
  volume={33},
  year={2020}
}
@misc{luo2021endtoend,
  title={End-to-end Alternating Optimization for Blind Super Resolution},
  author={Luo, Zhengxiong and Huang, Yan and Li, Shang and Wang, Liang and Tan, Tieniu},
  year={2021},
  eprint={2105.06878},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

This repo is built on the basis of [MMSR] and [IKC].


Main Results

Results on Setting 1

| Method | Scale | Set5 (PSNR / SSIM) | Set14 (PSNR / SSIM) | B100 (PSNR / SSIM) | Urban100 (PSNR / SSIM) | Manga109 (PSNR / SSIM) |
| ------ | ----- | ------------------ | ------------------- | ------------------ | ---------------------- | ---------------------- |
| IKC    | x2    | 37.19 / 0.9526     | 32.94 / 0.9024      | 31.51 / 0.8790     | 29.85 / 0.8928         | 36.93 / 0.9667         |
| DANv1  | x2    | 37.34 / 0.9526     | 33.08 / 0.9041      | 31.76 / 0.8858     | 30.60 / 0.9060         | 37.23 / 0.9710         |
| DANv2  | x2    | 37.60 / 0.9544     | 33.44 / 0.9094      | 32.00 / 0.8904     | 31.43 / 0.9174         | 38.07 / 0.9734         |
| IKC    | x3    | 33.06 / 0.9146     | 29.38 / 0.8233      | 28.53 / 0.7899     | 27.43 / 0.8302         | 32.43 / 0.9316         |
| DANv1  | x3    | 34.04 / 0.9199     | 30.09 / 0.8287      | 28.94 / 0.7919     | 27.65 / 0.8352         | 33.16 / 0.9382         |
| DANv2  | x3    | 34.19 / 0.9209     | 30.20 / 0.8309      | 29.03 / 0.7948     | 27.83 / 0.8395         | 33.28 / 0.9400         |
| IKC    | x4    | 31.67 / 0.8829     | 28.31 / 0.7643      | 27.37 / 0.7192     | 25.33 / 0.7504         | 28.91 / 0.8782         |
| DANv1  | x4    | 31.89 / 0.8864     | 28.42 / 0.7687      | 27.51 / 0.7248     | 25.86 / 0.7721         | 30.50 / 0.9037         |
| DANv2  | x4    | 32.00 / 0.8885     | 28.50 / 0.7715      | 27.56 / 0.7277     | 25.94 / 0.7748         | 30.45 / 0.9037         |

Results on Setting 2 (DIV2KRK)

| Method | x2 (PSNR / SSIM) | x4 (PSNR / SSIM) |
| ------ | ---------------- | ---------------- |
| KernelGAN + ZSSR | 30.36 / 0.8669 | 26.81 / 0.7316 |
| DANv1 | 32.56 / 0.8997 | 27.55 / 0.7582 |
| DANv2 | 32.58 / 0.9048 | 28.74 / 0.7893 |
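
PSNR in these tables is reported in dB, and SSIM is a similarity index in which higher is better. As a reference point, below is a minimal numpy sketch of PSNR on 8-bit images; note that SR papers commonly evaluate on the luminance (Y) channel with a small border crop, so the repo's own test scripts remain the authoritative source of the numbers above.

import numpy as np

def psnr(img1, img2, max_val=255.0):
    # Peak signal-to-noise ratio (in dB) between two images of the same shape.
    diff = img1.astype(np.float64) - img2.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

For example, a mean squared error of about 41 on 8-bit images corresponds to roughly 32 dB, the level DANv2 reaches on Set5 at x4.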

Dependencies

The code is written in Python 3 and built on PyTorch (the commands below use python3 and torch.distributed.launch).

Pretrained Weights

Pretrained weights of DANv1 and IKC are available at BaiduYun (Password: cbjv) or GoogleDrive. Download the weights into the checkpoints folder, organized as follows:

.
`-- checkpoints
    |-- DANv1
    |   |-- ...
    |-- DANv2
    |   |-- ...
    `-- IKC
        |-- ...
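
As a quick sanity check that a downloaded checkpoint is intact, it can be loaded with PyTorch. This is only a sketch, assuming the weights are stored as ordinary PyTorch state dicts; the filename below is a placeholder, so substitute whichever .pth file you placed under checkpoints/.

import torch

# Placeholder path: replace with the actual .pth file you downloaded.
ckpt = torch.load("checkpoints/DANv1/placeholder_model.pth", map_location="cpu")

# If the file is a plain state dict, this prints a few parameter names.
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:5])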

Dataset Preparation

We use DIV2K and Flickr2K as our training datasets.

For evaluation of Setting 1, we use five datasets, i.e., Set5, Set14, Urban100, BSD100 and Manga109.

We use DIV2KRK for evaluation of Setting 2.

To train a model on the full dataset (DIV2K + Flickr2K, 3450 images in total), first download the datasets from their official websites. Then run codes/scripts/generate_mod_blur_LR_bic.py to generate the LRblur/LR/HR/Bicubic datasets. (You need to modify the file paths in the script yourself.)

python3 codes/scripts/generate_mod_blur_LR_bic.py
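
As a rough illustration of the degradation this step synthesizes, the sketch below blurs an HR image with an isotropic Gaussian kernel and then downsamples it bicubically. The scale, kernel size, sigma, and image path are placeholder values; the actual script additionally handles the kernel settings and directory layout used for training, so this is not a substitute for it.

import cv2
import numpy as np

scale = 4        # placeholder scale factor
sigma = 2.0      # placeholder Gaussian blur width
ksize = 21       # placeholder kernel size

# Build an isotropic 2D Gaussian kernel from the 1D separable kernel.
k1d = cv2.getGaussianKernel(ksize, sigma)
kernel = np.outer(k1d, k1d)

hr = cv2.imread("path/to/hr_image.png")                    # placeholder path
h, w = hr.shape[:2]
hr = hr[: h - h % scale, : w - w % scale]                  # crop so the size is a multiple of the scale
blurred = cv2.filter2D(hr, -1, kernel)                     # blur the HR image with the kernel
lr_blur = cv2.resize(blurred, (hr.shape[1] // scale, hr.shape[0] // scale),
                     interpolation=cv2.INTER_CUBIC)        # bicubic downsampling
cv2.imwrite("lr_blur.png", lr_blur)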

For efficient IO, run codes/scripts/create_lmdb.py to pack the datasets into binary LMDB files. (You need to modify the file paths in the script yourself.)

python3 codes/scripts/create_lmdb.py
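
For intuition about why LMDB helps, the toy sketch below packs a folder of PNG files into a single memory-mapped database with key-based random access, which is what makes training IO fast. The folder path and map size are placeholders, and create_lmdb.py defines its own keys and metadata, so treat this as an illustration rather than a drop-in replacement.

import glob
import os
import lmdb

src_dir = "path/to/images"   # placeholder folder of PNG files
files = sorted(glob.glob(os.path.join(src_dir, "*.png")))

# map_size is an upper bound on the database size (here ~10 GB).
env = lmdb.open("toy_images.lmdb", map_size=10 * 1024 ** 3)
with env.begin(write=True) as txn:
    for path in files:
        key = os.path.basename(path).encode("ascii")
        with open(path, "rb") as f:
            txn.put(key, f.read())   # store the raw PNG bytes under the filename key
env.close()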

Train

For single GPU:

cd codes/config/DANv1
python3 train.py -opt=options/setting1/train_setting1_x4.yml

For distributed training:

cd codes/config/DANv1
python3 -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 train.py -opt=options/setting1/train_setting1_x4.yml --launcher pytorch

Test on Synthetic Images

cd codes/config/DANv1
python3 test.py -opt=options/setting1/test_setting1_x4.yml

Test on Real Images

cd codes/config/DANv1
python3 inference.py -input_dir=/path/to/real/images/ -output_dir=/path/to/save/sr/results/