IKC: Blind Super-Resolution With Iterative Kernel Correction
ArXiv | BibTeX | Project Website
This is the implementation of 'Blind Super-Resolution With Iterative Kernel Correction'.<br/> Based on [BasicSR] and [MMSR]. For more details, see BasicSR.<br/> Thanks to Jinjin Gu and Xintao Wang.
Updates
[2019-09-22] IKC v0.1 released.<br/> [2019-09-25] IKC v0.2 released: settings (scale, sigma, etc.) can now be changed through the .yaml files.
Architecture
<p align="center"> <img height="300" src="./data_samples/samples/pipeline.jpg"> </p>Kernel mismatch
<p align="center"> <img height="360" src="./data_samples/samples/kernel.jpg"> </p>Dependencies
- Python 3 (Anaconda is recommended)
- PyTorch >= 1.0
- NVIDIA GPU + CUDA
- Python packages:
pip install numpy opencv-python lmdb pyyaml
- TensorBoard:
- PyTorch >= 1.1:
pip install tb-nightly future
- PyTorch == 1.0:
pip install tensorboardX
Installation
- Clone this repo:
git clone https://github.com/yuanjunchai/IKC.git
cd IKC
- Install PyTorch and dependencies from http://pytorch.org
Dataset Preparation
We use the DIV2K, Flickr2K, Set5, Set14, Urban100, and BSD100 datasets.
To train a model on the full dataset (DIV2K + Flickr2K, 3450 images in total), download the datasets from their official websites.
After downloading, run codes/scripts/generate_mod_LR_bic.py to generate the LRblur/LR/HR/Bicubic dataset paths and the corresponding kernel maps.
python codes/scripts/generate_mod_LR_bic.py
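For reference, the LRblur data follows the paper's degradation model: the HR image is blurred with an isotropic Gaussian kernel and then downsampled. The sketch below only illustrates that model; the kernel size, sigma, scale, and file paths are illustrative and are not the actual parameters of generate_mod_LR_bic.py (edit the script or the .yaml files to change the real settings).

```python
# Illustrative sketch of the degradation behind the LRblur data (not the
# actual script): blur HR with an isotropic Gaussian kernel, then downsample.
import cv2
import numpy as np

def isotropic_gaussian_kernel(ksize=21, sigma=2.6):
    """Return a normalized isotropic Gaussian blur kernel."""
    ax = np.arange(ksize) - ksize // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()

def degrade(hr, scale=4, sigma=2.6):
    """Blur the HR image with a Gaussian kernel, then downsample by `scale`."""
    kernel = isotropic_gaussian_kernel(sigma=sigma)
    blurred = cv2.filter2D(hr, -1, kernel, borderType=cv2.BORDER_REFLECT)
    h, w = blurred.shape[:2]
    lr = cv2.resize(blurred, (w // scale, h // scale), interpolation=cv2.INTER_CUBIC)
    return lr, kernel

hr = cv2.imread('datasets/DIV2K/0001.png')          # example path only
lr_blur, kernel = degrade(hr, scale=4, sigma=2.6)
cv2.imwrite('datasets/DIV2K_LRblur/0001.png', lr_blur)
```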
About data
During training, dataset_GT is used in train_IKC.py and train_SFTMD.py to produce the actual LR images and the corresponding kernels on the fly, so dataset_LQ is not used.<br/>
During testing, test_SFTMD.py does the same in order to obtain the kernel maps.<br/>
However, for test_IKC.py you do need to set dataset_LQ!<br/>
Alternatively, you can generate the test data with generate_mod_LR_bic.py.
Getting Started
Pretrained model
You can download the pre-trained models from the ./checkpoints directory.<br/>
Remember: set opt['path']['pretrain_model_G'] in the .yaml files to the path where you saved the models.
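If you prefer to set the checkpoint path programmatically instead of editing the file by hand, a minimal sketch with PyYAML (already in the dependency list) could look like the following. The checkpoint filename is only an example, and note that re-dumping the file drops any comments in it.

```python
# Minimal sketch: point path/pretrain_model_G of a .yml option file at a
# saved checkpoint using PyYAML. Editing the file by hand works just as well.
import yaml

yml_path = 'codes/options/test/test_SFTMD.yml'
ckpt_path = './checkpoints/SFTMD_latest_G.pth'   # example filename, adjust to yours

with open(yml_path) as f:
    opt = yaml.safe_load(f)

opt.setdefault('path', {})['pretrain_model_G'] = ckpt_path

with open(yml_path, 'w') as f:
    yaml.safe_dump(opt, f, default_flow_style=False)
```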
Train
First, train the SFTMD network, then use the pretrained SFTMD to train the Predictor and Corrector networks iteratively.
- To train the SFTMD model, change the image paths in codes/options/train/train_SFTMD.yml, especially dataroot_GT and dataroot_LQ. You can change opt['name'] to save checkpoints under a different name, and opt['gpu_ids'] to assign specific GPUs.
python codes/train_SFTMD.py -opt_F codes/options/train/train_SFTMD.yml
- To train the Predictor and Corrector models, first change opt_F['sftmd']['path']['pretrain_model_G'] to the path of the pretrained SFTMD checkpoint. Also, fill dataroot_GT and dataroot_LQ of opt_P and opt_C with the corresponding training and validation data paths.
python codes/train_IKC.py -opt_F codes/options/train/train_SFTMD.yml -opt_P codes/options/train/train_Predictor.yml -opt_C codes/options/train/train_Corrector.yml
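For orientation, the alternating scheme behind the Predictor/Corrector training works roughly as follows. This is a simplified sketch of the idea in the paper, not the actual code in codes/train_IKC.py; all module and variable names are placeholders, and the number of correction steps is just an example value.

```python
# Hedged sketch of the IKC training idea: with SFTMD frozen, the Predictor
# learns to estimate the kernel map from the LR image, and the Corrector
# learns to refine that estimate from the intermediate SR result.
import torch
import torch.nn.functional as F

def ikc_train_step(sftmd, predictor, corrector, opt_P, opt_C,
                   lr_img, kernel_map_gt, num_corrections=7):
    # 1) Predictor: estimate the kernel map directly from the LR input.
    k_est = predictor(lr_img)
    loss_p = F.mse_loss(k_est, kernel_map_gt)
    opt_P.zero_grad()
    loss_p.backward()
    opt_P.step()

    # 2) Corrector: iteratively refine the kernel estimate using the SR image
    #    produced by the frozen SFTMD under the current estimate.
    k_est = k_est.detach()
    for _ in range(num_corrections):
        with torch.no_grad():
            sr_img = sftmd(lr_img, k_est)        # SFTMD takes LR + kernel map
        delta_k = corrector(sr_img, k_est)
        k_est = k_est + delta_k
        loss_c = F.mse_loss(k_est, kernel_map_gt)
        opt_C.zero_grad()
        loss_c.backward()
        opt_C.step()
        k_est = k_est.detach()
    return loss_p.item(), loss_c.item()
```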
Test
- First, run codes/scripts/generate_mod_LR_bic.py to generate the LRblur/LR/HR/Bicubic dataset paths and the corresponding kernel maps.
python codes/scripts/generate_mod_LR_bic.py
- To test the SFTMD model, change the test dataset paths in codes/options/test/test_SFTMD.yml.
python codes/test_SFTMD.py -opt_F codes/options/test/test_SFTMD.yml
- To test the Predictor and Corrector models, change the dataset paths in codes/options/test/test_Predictor.yml and codes/options/test/test_Corrector.yml.
python codes/test_IKC.py -opt_F codes/options/test/test_SFTMD.yml -opt_P codes/options/test/test_Predictor.yml -opt_C codes/options/test/test_Corrector.yml
dataroot_GT is only used for PSNR calculation. If you'd like to do blind SR on your own data, you can set dataroot_GT: ~ and just use your own LR images.
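At test time, IKC alternates between SFTMD and the Corrector, starting from the Predictor's initial estimate. Below is a simplified sketch of that loop; module names and the iteration count are placeholders, not the actual code in codes/test_IKC.py.

```python
# Hedged sketch of IKC inference: the Predictor gives an initial kernel-map
# estimate, then SFTMD and the Corrector are applied alternately.
import torch

@torch.no_grad()
def ikc_inference(sftmd, predictor, corrector, lr_img, num_iters=7):
    k_est = predictor(lr_img)                      # initial kernel-map estimate
    sr_img = sftmd(lr_img, k_est)
    for _ in range(num_iters):
        k_est = k_est + corrector(sr_img, k_est)   # refine the kernel estimate
        sr_img = sftmd(lr_img, k_est)              # re-run SR with the new estimate
    return sr_img, k_est
```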
Citation
@InProceedings{gu2019blind,
author = {Gu, Jinjin and Lu, Hannan and Zuo, Wangmeng and Dong, Chao},
title = {Blind super-resolution with iterative kernel correction},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}