
HAT

Activating More Pixels in Image Super-Resolution Transformer [Paper Link]

Xiangyu Chen, Xintao Wang, Jiantao Zhou, Yu Qiao and Chao Dong

HAT: Hybrid Attention Transformer for Image Restoration [Paper Link]

Xiangyu Chen, Xintao Wang, Wenlong Zhang, Xiangtao Kong, Jiantao Zhou, Yu Qiao and Chao Dong

Updates

Overview

<img src="https://raw.githubusercontent.com/chxy95/HAT/master/figures/Performance_comparison.png" width="600"/>

Benchmark results on SRx4 without ImageNet pretraining. Multi-Adds are calculated for a 64x64 input.

| Model  | Params (M) | Multi-Adds (G) | Set5  | Set14 | BSD100 | Urban100 | Manga109 |
|--------|-----------:|---------------:|------:|------:|-------:|---------:|---------:|
| SwinIR | 11.9       | 53.6           | 32.92 | 29.09 | 27.92  | 27.45    | 32.03    |
| HAT-S  | 9.6        | 54.9           | 32.92 | 29.15 | 27.97  | 27.87    | 32.35    |
| HAT    | 20.8       | 102.4          | 33.04 | 29.23 | 28.00  | 27.97    | 32.48    |

Real-World SR Results

Note that the results in this section are produced by Real_HAT_GAN_SRx4_sharper.pth.

<img src="https://raw.githubusercontent.com/chxy95/HAT/master/figures/Visual_Results.png" width="800"/>

Comparison with the state-of-the-art Real-SR methods.

<img src="https://raw.githubusercontent.com/chxy95/HAT/master/figures/Comparison.png" width="800"/>

Citations

BibTeX

```bibtex
@InProceedings{chen2023activating,
  author    = {Chen, Xiangyu and Wang, Xintao and Zhou, Jiantao and Qiao, Yu and Dong, Chao},
  title     = {Activating More Pixels in Image Super-Resolution Transformer},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2023},
  pages     = {22367-22377}
}

@article{chen2023hat,
  title   = {HAT: Hybrid Attention Transformer for Image Restoration},
  author  = {Chen, Xiangyu and Wang, Xintao and Zhang, Wenlong and Kong, Xiangtao and Qiao, Yu and Zhou, Jiantao and Dong, Chao},
  journal = {arXiv preprint arXiv:2309.05239},
  year    = {2023}
}
```

Environment

Installation

Install PyTorch first. Then,

```sh
pip install -r requirements.txt
python setup.py develop
```
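For the first step (installing PyTorch), the exact command depends on your platform and CUDA version; the line below is only an illustrative default-wheel sketch, so check pytorch.org for the build matching your setup.

```sh
# Illustrative sketch only: select the wheel matching your CUDA version at pytorch.org.
pip install torch torchvision
```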

How To Test

If you do not want to touch the code, chaiNNer is a nice tool for running our models.

Otherwise,

```sh
python hat/test.py -opt options/test/HAT_SRx4_ImageNet-pretrain.yml
```

The testing results will be saved in the ./results folder.

Note that a tile mode is also provided for testing with limited GPU memory. You can modify the tile settings in your custom testing option by referring to ./options/test/HAT_tile_example.yml; a sketch of what these settings typically look like follows below.
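As a rough illustration, tile options in BasicSR-style option files are usually a small block like the one below. The key names and values here are assumptions, so treat ./options/test/HAT_tile_example.yml as the authoritative source.

```yml
# Hypothetical tile settings (verify key names and values against HAT_tile_example.yml).
tile:
  tile_size: 224  # side length of each tile processed at once; smaller tiles need less GPU memory
  tile_pad: 32    # overlap between neighboring tiles to reduce visible seams at tile borders
```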

How To Train

```sh
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 hat/train.py -opt options/train/train_HAT_SRx2_from_scratch.yml --launcher pytorch
```

The training logs and weights will be saved in the ./experiments folder.
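If you only have a single GPU, BasicSR-based repos typically also support a plain, non-distributed launch; the command below is a sketch under that assumption rather than part of the original instructions.

```sh
# Hypothetical single-GPU launch (assumes the non-distributed default launcher works in this repo).
CUDA_VISIBLE_DEVICES=0 python hat/train.py -opt options/train/train_HAT_SRx2_from_scratch.yml
```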

Results

The inference results on benchmark datasets are available at Google Drive or Baidu Netdisk (access code: 63p5).

Contact

If you have any questions, please email chxy95@gmail.com or join the WeChat group of BasicSR to discuss with the authors.