DRCT: Saving Image Super-resolution away from Information Bottleneck

✨✨ [CVPR NTIRE Oral Presentation]

[Paper Link] [Project Page] [Poster] [Model zoo] [Visual Results] [Slide] [Video]

Chih-Chung Hsu, Chia-Ming Lee, Yi-Shiuan Chou

Advanced Computer Vision LAB, National Cheng Kung University

Overview

In CNN-based super-resolution (SR) methods, dense connections are widely regarded as an effective way to preserve information and improve performance (as introduced by RDN, RRDB in ESRGAN, etc.).

However, SwinIR-based methods such as HAT, CAT, and DAT generally rely on channel attention blocks or on novel, sophisticated shifted-window attention mechanisms to improve SR performance. These works overlook the information bottleneck: the information carried by the feature maps is gradually lost as it flows deeper into the network.

Our work simply adds dense connections to SwinIR to improve performance, and re-emphasizes the importance of dense connections in SwinIR-based SR methods. Adding dense connections within the deep-feature-extraction stage stabilizes the information flow, boosting performance while keeping the design lightweight compared with SOTA methods such as HAT. A minimal sketch of the idea follows the figures below.

<img src="./figures/overview.png" width="500"/> <img src="./figures/drct_fix.gif" width="600"/> <img src="./figures/4.png" width="400"/>
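
To make the dense-connection idea concrete, here is a minimal, self-contained sketch of a dense group in the spirit of DRCT. This is not the repository code: the Swin-style attention layers are replaced by plain convolutional blocks so the snippet runs on its own, and the names (`DenseGroup`, `growth`, `num_blocks`) are illustrative.

```python
import torch
import torch.nn as nn

class DenseGroup(nn.Module):
    """Dense connections inside one deep-feature-extraction group (illustrative)."""
    def __init__(self, channels=64, growth=32, num_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList()
        in_ch = channels
        for _ in range(num_blocks):
            # Placeholder for a Swin-style layer; a 3x3 conv keeps the sketch runnable.
            self.blocks.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, 3, padding=1),
                nn.GELU(),
            ))
            in_ch += growth  # each block sees all previously produced features
        self.fusion = nn.Conv2d(in_ch, channels, 1)  # local feature fusion back to `channels`

    def forward(self, x):
        feats = [x]
        for block in self.blocks:
            feats.append(block(torch.cat(feats, dim=1)))
        # Residual connection keeps the channel count fixed and the information flow stable.
        return x + self.fusion(torch.cat(feats, dim=1))

if __name__ == "__main__":
    y = DenseGroup()(torch.randn(1, 64, 64, 64))
    print(y.shape)  # torch.Size([1, 64, 64, 64])
```

The key point is that every block receives the concatenation of all earlier features, so information produced early in the group cannot be squeezed out by later layers; a 1x1 fusion plus a residual keeps the design lightweight.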

Benchmark results on SRx4 without x2 pretraining. Multi-Adds are calculated for a 64x64 input.

| Model | Params | Multi-Adds | Forward | FLOPs | Set5 | Set14 | BSD100 | Urban100 | Manga109 | Training Log |
|---|---|---|---|---|---|---|---|---|---|---|
| HAT | 20.77M | 11.22G | 2053M | 42.18G | 33.04 | 29.23 | 28.00 | 27.97 | 32.48 | - |
| DRCT | 14.13M | 5.92G | 1857M | 7.92G | 33.11 | 29.35 | 28.18 | 28.06 | 32.59 | - |
| HAT-L | 40.84M | 76.69G | 5165M | 79.60G | 33.30 | 29.47 | 28.09 | 28.60 | 33.09 | - |
| DRCT-L | 27.58M | 9.20G | 4278M | 11.07G | 33.37 | 29.54 | 28.16 | 28.70 | 33.14 | - |
| DRCT-XL (pretrained on ImageNet) | - | - | - | - | 32.97 / 0.91 | 29.08 / 0.80 | - | - | - | log |

Real DRCT GAN SRx4. (Coming Soon)

| Model | Training Data | Checkpoint | Log |
|---|---|---|---|
| Real-DRCT-GAN_MSE_Model | DF2K + OST300 | Checkpoint | Log |
| Real-DRCT-GAN_Finetuned from MSE | DF2K + OST300 | Checkpoint | Log |

Updates

[Training log on ImageNet] [Pretrained Weight (without fine-tuning on DF2K)]

Environment

Installation

```
git clone https://github.com/ming053l/DRCT.git
conda create --name drct python=3.8 -y
conda activate drct
# CUDA 11.6
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6 -c pytorch -c conda-forge
cd DRCT
pip install -r requirements.txt
python setup.py develop
```
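
As a quick sanity check that PyTorch and CUDA were installed correctly (this command is only illustrative, not part of the official setup):

```
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```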

How To Run Inference on Your Own Dataset

```
python inference.py --input_dir [input_dir] --output_dir [output_dir] --model_path [model_path]
```
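
For example (the paths and checkpoint filename below are placeholders, not files shipped with the repository):

```
python inference.py --input_dir ./datasets/my_images --output_dir ./results/my_images --model_path ./experiments/pretrained_models/DRCT_SRx4.pth
```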

How To Test

```
python drct/test.py -opt options/test/DRCT_SRx4_ImageNet-pretrain.yml
```

The testing results will be saved in the ./results folder.

Note that a tile mode is also provided for testing with limited GPU memory. You can modify the tile settings in your own testing option file by referring to ./options/test/DRCT_tile_example.yml. A conceptual sketch of tiled inference is given below.
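
Conceptually, tiled inference works as in the following sketch: the low-resolution input is cut into overlapping tiles, each tile is super-resolved independently, and the outputs are pasted back at `scale` times their original coordinates. This is only an illustration of the idea, not the repository's implementation; the function and argument names (`tiled_sr`, `tile`, `overlap`) are assumptions.

```python
import torch

@torch.no_grad()
def tiled_sr(model, lr, scale=4, tile=256, overlap=16):
    """Super-resolve `lr` tile by tile so peak GPU memory stays bounded."""
    b, c, h, w = lr.shape
    out = lr.new_zeros(b, c, h * scale, w * scale)
    stride = tile - overlap
    for y in range(0, h, stride):
        for x in range(0, w, stride):
            # Clamp the tile origin so tiles at the border still have the full size.
            y0 = min(y, max(h - tile, 0))
            x0 = min(x, max(w - tile, 0))
            patch = lr[..., y0:y0 + tile, x0:x0 + tile]
            sr = model(patch)  # the SR model upscales the patch by `scale`
            out[..., y0 * scale:(y0 + tile) * scale,
                     x0 * scale:(x0 + tile) * scale] = sr
    return out
```

A production implementation would blend or crop the overlapping borders rather than simply overwriting them, but the memory-saving principle is the same.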

How To Train

```
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 drct/train.py -opt options/train/train_DRCT_SRx2_from_scratch.yml --launcher pytorch
```
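
For a single-GPU run, a non-distributed launch along the lines below should also work, since the training script follows the BasicSR conventions (treat this as an assumption and adjust the option file to your setup):

```
CUDA_VISIBLE_DEVICES=0 python drct/train.py -opt options/train/train_DRCT_SRx2_from_scratch.yml
```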

The training logs and weights will be saved in the ./experiments folder.

Citations

If our work is helpful to your research, please kindly cite it. Thanks!

BibTeX

```
@misc{hsu2024drct,
  title         = {DRCT: Saving Image Super-resolution away from Information Bottleneck},
  author        = {Hsu, Chih-Chung and Lee, Chia-Ming and Chou, Yi-Shiuan},
  year          = {2024},
  eprint        = {2404.00722},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV}
}

@InProceedings{Hsu_2024_CVPR,
  author    = {Hsu, Chih-Chung and Lee, Chia-Ming and Chou, Yi-Shiuan},
  title     = {DRCT: Saving Image Super-Resolution Away from Information Bottleneck},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {6133-6142}
}
```

Thanks

Part of our work has been facilitated by HAT, SwinIR, and the LAM framework; we are grateful for their outstanding contributions.

Part of our work was contributed by @zelenooki87; thanks for the great contributions and suggestions!

Contact

If you have any questions, please email zuw408421476@gmail.com to discuss with the authors.