# SLS-CVPR2022
Junghun Oh, Heewon Kim, Seungjun Nah, Cheeun Hong, Jonghyun Choi, Kyoung Mu Lee
This repository is a PyTorch implementation of the paper "Attentive Fine-Grained Structured Sparsity for Image Restoration" from CVPR 2022. [arXiv]
If you find this code useful for your research, please consider citing our paper:
@InProceedings{Oh_2022_CVPR,
  author    = {Oh, Junghun and Kim, Heewon and Nah, Seungjun and Hong, Cheeun and Choi, Jonghyun and Lee, Kyoung Mu},
  title     = {Attentive Fine-Grained Structured Sparsity for Image Restoration},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2022}
}
## Proposed Method
## Results

### Quantitative results

### Qualitative results
## Dataset and Pre-trained Models
For super-resolution, we use the DIV2K dataset to train and validate a model. You can download it here.

After training, we evaluate the trained models on the benchmark datasets (Set14, Zeyde et al., LNCS 2010; B100, Martin et al., ICCV 2001; and Urban100, Huang et al., CVPR 2015). You can download them here.

Unpack the downloaded tar files and change `args.dir_data` in `super-resolution/src/option.py` to the directory where the DIV2K and benchmark datasets are located.
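Below is a minimal sketch of how such a data-directory option is typically declared; the actual contents of `super-resolution/src/option.py` may differ, and the default path is only a placeholder.

```python
# Sketch of the relevant argument in super-resolution/src/option.py (assumed structure).
# Set the default of --dir_data (or pass it on the command line) to the directory
# that contains the DIV2K and benchmark dataset folders.
import argparse

parser = argparse.ArgumentParser(description='SLS super-resolution options')
parser.add_argument('--dir_data', type=str, default='/path/to/datasets',
                    help='root directory containing DIV2K and the benchmark sets')
args, _ = parser.parse_known_args()
print(args.dir_data)
```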
Since our method is applied to pre-trained models, you should download them through the link, create a directory (`mkdir super-resolution/pretrained`), and place the downloaded models in that directory.
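To check that a downloaded checkpoint is readable before training, a quick sanity check might look like the sketch below; the filename `carn_x4.pt` is hypothetical, so substitute the actual name of the downloaded file.

```python
# Sanity check: load a downloaded pre-trained checkpoint and list a few of its keys.
# The filename below is hypothetical; replace it with the actual downloaded model.
import torch

ckpt = torch.load('super-resolution/pretrained/carn_x4.pt', map_location='cpu')
state = ckpt['state_dict'] if isinstance(ckpt, dict) and 'state_dict' in ckpt else ckpt
print(f'{len(state)} entries, e.g. {list(state)[:3]}')
```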
## Usage
Clone this repository.
git clone https://github.com/JungHunOh/SLS_CVPR2022.git
cd SLS_CVPR2022
cd super-resolution/src
For training, run:
bash ./scripts/train_sls_carnX4.sh $gpu $target_budget # Training on DIV2K
For testing, run:
bash ./scripts/test_sls_carnX4.sh $gpu $exp_name # Test on Set14, B100, Urban100
To see the computational cost (MACs and number of parameters) of a trained model, run:
bash ./scripts/compute_costs.sh $gpu $model_dir
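If you only need the parameter count, it can also be read directly from a checkpoint file; the sketch below uses a placeholder path, and MACs still require a forward pass (use the script above for those).

```python
# Count parameters directly from a saved checkpoint (placeholder path).
# MACs depend on the input resolution and require running the model,
# so use compute_costs.sh above for the full cost report.
import torch

ckpt = torch.load('super-resolution/pretrained/model.pt', map_location='cpu')
state = ckpt.get('state_dict', ckpt) if isinstance(ckpt, dict) else ckpt.state_dict()
num_params = sum(t.numel() for t in state.values() if torch.is_tensor(t))
print(f'Num. Params.: {num_params / 1e6:.2f}M')
```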
## Acknowledgment
Our implementation is based on the following repositories: