# FeMaSR
This is the official PyTorch code for the paper
Real-World Blind Super-Resolution via Feature Matching with Implicit High-Resolution Priors (MM22 Oral)
Chaofeng Chen*, Xinyu Shi*, Yipeng Qin, Xiaoming Li, Xiaoguang Han, Tao Yang, Shihui Guo
(* indicates equal contribution)
<a href="https://colab.research.google.com/drive/1Yzb4o5OKjK46jbQ-_HGFOVJOPMVtJQjw?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>
## Update
- 2022.10.10 Release reproduced training log for the SR stage. It reaches performance similar to the paper: LPIPS 0.329 @ 415k iterations on DIV2K (x4). See the snippet after this list to compute LPIPS on your own outputs.
- 2022.09.26 Add example training log with 70k iterations.
- 2022.09.23 Add colab demo <a href="https://colab.research.google.com/drive/1Yzb4o5OKjK46jbQ-_HGFOVJOPMVtJQjw?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>
- 2022.07.02
  - Update codes of the new version FeMaSR.
  - Please find the old QuanTexSR in the `quantexsr` branch.
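To compare a reproduced run against the reported number, LPIPS can be computed with the `lpips` pip package. A minimal sketch; the `alex` backbone and the file paths below are assumptions for illustration, not the paper's verified evaluation settings:

```python
# Hypothetical LPIPS check between a super-resolved output and its ground truth.
# Assumes `pip install lpips`; backbone choice and file paths are illustrative.
import lpips
import numpy as np
import torch
from PIL import Image

loss_fn = lpips.LPIPS(net='alex')  # 'alex' is a common choice for SR evaluation

def load_as_tensor(path):
    # HWC uint8 -> NCHW float in [-1, 1], which is what lpips expects by default
    img = np.asarray(Image.open(path).convert('RGB'), dtype=np.float32) / 255.0
    return torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0) * 2 - 1

with torch.no_grad():
    d = loss_fn(load_as_tensor('results_x4/0801.png'),       # hypothetical SR output
                load_as_tensor('DIV2K_valid_HR/0801.png'))   # hypothetical GT path
print(f'LPIPS: {d.item():.4f}')
```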
Here are some example results on test images from BSRGAN and RealESRGAN.
Left: real images | Right: super-resolved images with scale factor 4
<img src="testset/butterfly.png" width="390px"/> <img src="results_x4/butterfly.png" width="390px"/> <img src="testset/0003.jpg" width="390px"/> <img src="results_x4/0003.jpg" width="390px"/> <img src="testset/00003.png" width="390px"/> <img src="results_x4/00003.png" width="390px"/> <img src="testset/Lincoln.png" width="390px"/> <img src="results_x4/Lincoln.png" width="390px"/> <img src="testset/0014.jpg" width="390px"/> <img src="results_x4/0014.jpg" width="390px"/>
## Dependencies and Installation
- Ubuntu >= 18.04
- CUDA >= 11.0
- Other required packages in `requirements.txt`
```
# git clone this repository
git clone https://github.com/chaofengc/FeMaSR.git
cd FeMaSR

# create new anaconda env
conda create -n femasr python=3.8
source activate femasr

# install python dependencies
pip3 install -r requirements.txt
python setup.py develop
```
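Before running anything, it may help to confirm that the installed PyTorch build can actually see your GPU. A quick sanity check, not part of the official setup:

```python
# Verify that PyTorch was installed with CUDA support and a GPU is visible.
import torch
print(torch.__version__)          # should report a CUDA build of PyTorch
print(torch.cuda.is_available())  # expect True given the CUDA >= 11.0 requirement
```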
## Quick Inference

```
python inference_femasr.py -s 4 -i ./testset -o results_x4/
python inference_femasr.py -s 2 -i ./testset -o results_x2/
```
## Train the model

### Preparation

#### Dataset
Please prepare the training and testing data following the descriptions in the main paper and supplementary material. In brief, you need to crop 512 x 512 high-resolution patches and generate the low-resolution patches with the `degradation_bsrgan` function provided by BSRGAN, while the synthetic testing LR images are generated with the `degradation_bsrgan_plus` function for a fair comparison.
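As a rough illustration of that pipeline, the sketch below feeds an HR patch through `degradation_bsrgan` from the BSRGAN repo (https://github.com/cszn/BSRGAN). The file paths and the `lq_patchsize` value are assumptions; check the paper and BSRGAN's code for the exact settings:

```python
# Sketch of LR patch synthesis with BSRGAN's degradation pipeline.
# Assumes the BSRGAN repo is on PYTHONPATH; paths and patch sizes are illustrative.
import cv2
import numpy as np
from utils import utils_blindsr as blindsr  # module shipped with BSRGAN

# Load a 512 x 512 HR crop as float32 RGB in [0, 1], the format blindsr expects.
hr = cv2.imread('hr_patches/0001.png').astype(np.float32) / 255.0
hr = cv2.cvtColor(hr, cv2.COLOR_BGR2RGB)

# degradation_bsrgan returns (lq, hq); the HQ crop ends up lq_patchsize * sf,
# so lq_patchsize=128 with sf=4 keeps the full 512 x 512 patch.
# For the synthetic test LR images, swap in blindsr.degradation_bsrgan_plus.
lq, hq = blindsr.degradation_bsrgan(hr, sf=4, lq_patchsize=128)

lq_bgr = cv2.cvtColor((lq * 255.0).round().astype(np.uint8), cv2.COLOR_RGB2BGR)
cv2.imwrite('lr_patches/0001.png', lq_bgr)
```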
#### Model preparation

Before training, you need to:

- Download the pretrained HRP models: generator, discriminator
- Put the pretrained models in `experiments/pretrained_models`
- Specify their paths in the corresponding option file (a sketch of the relevant entries follows this list)
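For reference, BasicSR-style option files usually point to pretrained weights under a `path` section. The key names and file names below follow BasicSR's common convention and are assumptions, so follow whatever the option file shipped with this repo already uses:

```yaml
# Hypothetical excerpt of an option file; key and file names may differ
# from the ones actually used by FeMaSR's option files.
path:
  pretrain_network_g: experiments/pretrained_models/hrp_generator.pth      # downloaded HRP generator
  pretrain_network_d: experiments/pretrained_models/hrp_discriminator.pth  # downloaded HRP discriminator
```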
### Train SR model

```
python basicsr/train.py -opt options/train_FeMaSR_LQ_stage.yml
```
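Since this codebase builds on BasicSR, multi-GPU training presumably follows BasicSR's standard distributed launcher. A sketch under that assumption (the GPU count and port are arbitrary, and this exact invocation is not verified against this repo):

```
python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 \
    basicsr/train.py -opt options/train_FeMaSR_LQ_stage.yml --launcher pytorch
```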
### Model pretrain

In case you want to pretrain your own HRP model, we also provide the training option file:

```
python basicsr/train.py -opt options/train_FeMaSR_HQ_pretrain_stage.yml
```
## Citation

```
@inproceedings{chen2022femasr,
    author = {Chaofeng Chen and Xinyu Shi and Yipeng Qin and Xiaoming Li and Xiaoguang Han and Tao Yang and Shihui Guo},
    title = {Real-World Blind Super-Resolution via Feature Matching with Implicit High-Resolution Priors},
    booktitle = {ACM International Conference on Multimedia},
    year = {2022},
}
```
## License
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
## Acknowledgement
This project is based on BasicSR.