Restore Globally, Refine Locally: A Mask-Guided Scheme to Accelerate Super-Resolution Networks
Introduction
This project is the implementation of the ECCV 2022 paper Restore Globally, Refine Locally: A Mask-Guided Scheme to Accelerate Super-Resolution Networks.
ECCV Poster | ECCV 5-min presentation
Requirements
Train Data
- DIV2K
- Flickr2K
Please download the datasets and put them in the data folder; see the official websites for DIV2K and Flickr2K. Note that you must update the dataset paths in the code (__init__.py in the dataset folder).
Test Data
Download Set5, Set14, Urban100, BSDS100, and Manga109 from the Google Drive uploaded by BasicSR, then update the dataset locations in ./dataset/__init__.py.
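The exact contents of __init__.py depend on the repository, but the change typically amounts to pointing path variables at your local copies. A minimal sketch, with hypothetical variable names:

# dataset/__init__.py (illustrative; actual variable names may differ)
DIV2K_ROOT = './data/DIV2K'
FLICKR2K_ROOT = './data/Flickr2K'
TEST_ROOTS = {
    'Set5': './data/Set5',
    'Set14': './data/Set14',
    'BSDS100': './data/BSDS100',
    'Urban100': './data/Urban100',
    'Manga109': './data/Manga109',
}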
Training
Train the model
To train the model, run the following commands:
python3 -m torch.distributed.launch --nproc_per_node=$1 --master_port=$2 train_all.py
python3 -m torch.distributed.launch --nproc_per_node=$1 --master_port=$2 train_mask.py
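Here $1 is the number of processes per node (one per GPU) and $2 is a free port for the distributed rendezvous; both are shell positional arguments. For example, on a single machine with 4 GPUs (port chosen arbitrarily):

python3 -m torch.distributed.launch --nproc_per_node=4 --master_port=29500 train_all.py

On recent PyTorch versions, torch.distributed.launch is deprecated; torchrun --nproc_per_node=4 train_all.py is the modern equivalent, provided the script reads the local rank from the LOCAL_RANK environment variable.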
Testing
Please refer to validate.py in each experiment folder, or use the quick test above.
FLOPs and Parameters
Run the following command to compute the FLOPs and parameter count of the model:
python3 cal_flops_params.py
For more information, please refer to the ECCVW paper "AIM 2020 Challenge on Efficient Super-Resolution: Methods and Results". cuDNN (https://developer.nvidia.com/rdp/cudnn-archive) must be installed.
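Independent of cal_flops_params.py, the parameter count can be verified directly in PyTorch; a minimal sketch (the network here is a stand-in, not the paper's model):

import torch.nn as nn

def count_params(model: nn.Module) -> int:
    # Sum the element counts of all trainable tensors in the network.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(count_params(nn.Conv2d(3, 64, 3)))  # 1792 = 64*3*3*3 weights + 64 biases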
Acknowledgement
We refer to BasicSR and Simple-SR for some implementation details. Thanks to Kai Zhang for providing the code for calculating FLOPs and parameters.
Citation
@inproceedings{hu2022restore,
title={Restore Globally, Refine Locally: A Mask-Guided Scheme to Accelerate Super-Resolution Networks},
author={Hu, Xiaotao and Xu, Jun and Gu, Shuhang and Cheng, Ming-Ming and Liu, Li},
booktitle={European Conference on Computer Vision},
pages={74--91},
year={2022},
organization={Springer}
}