
VSR-Transformer

By Jiezhang Cao, Yawei Li, Kai Zhang, Luc Van Gool

This repository is the official implementation of the paper "Video Super-Resolution Transformer", which proposes a new Transformer for video super-resolution (VSR), called VSR-Transformer. Each VSR-Transformer block consists of a spatial-temporal convolutional self-attention layer and a bidirectional optical flow-based feed-forward layer, and together these components improve VSR performance.

<p align="center"><img width="100%" src="figs/framework.png" /></p> <p align="center"><img width="100%" src="figs/attention.png" /></p> <p align="center"><img width="100%" src="figs/feedforward.png" /></p>
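The key idea of the spatial-temporal self-attention layer is that tokens from every frame attend to tokens from all frames, so information is aggregated across both space and time. As a rough illustration only (a minimal NumPy sketch of plain spatio-temporal self-attention with identity query/key/value projections; it is not the paper's convolutional variant, and all function and variable names here are ours, not from this repository):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatio_temporal_attention(feats):
    """Joint attention over all frames' spatial positions.

    feats: array of shape (T, N, C) -- T frames, N spatial tokens, C channels.
    Flattening the frame axis lets every token attend to tokens of every
    other frame, which is the core idea of spatial-temporal self-attention.
    (Illustrative sketch: real implementations use learned Q/K/V projections.)
    """
    T, N, C = feats.shape
    tokens = feats.reshape(T * N, C)      # merge frames into one token set
    q = k = v = tokens                    # identity projections for brevity
    scores = q @ k.T / np.sqrt(C)         # (T*N, T*N) similarity matrix
    attn = softmax(scores, axis=-1)       # attention weights over all tokens
    out = attn @ v                        # weighted aggregation
    return out.reshape(T, N, C)

# toy input: 3 frames, 4 spatial tokens, 8 channels
x = np.random.randn(3, 4, 8)
y = spatio_temporal_attention(x)
```

The output keeps the input shape, so such a layer can be stacked inside a residual Transformer block as in the figure above.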

Dependencies and Installation

  1. Clone the repository

    git clone https://github.com/caojiezhang/VSR-Transformer.git
    
  2. Install the dependencies

    cd VSR-Transformer
    pip install -r requirements.txt
    
  3. Install the project in development mode

    python setup.py develop
    

Dataset Preparation

Training

Testing

Citation

If you find this code useful for your research, please cite our paper:

@article{cao2021vsrt,
  title={Video Super-Resolution Transformer},
  author={Cao, Jiezhang and Li, Yawei and Zhang, Kai and Van Gool, Luc},
  journal={arXiv},
  year={2021}
}

Acknowledgments

This repository is built on BasicSR. If you use this code, please consider citing BasicSR as well.