# VSR-Transformer
By Jiezhang Cao, Yawei Li, Kai Zhang, Luc Van Gool
This paper proposes a new Transformer for video super-resolution (VSR), called VSR-Transformer. Each VSR-Transformer block contains a spatial-temporal convolutional self-attention layer and a bidirectional optical flow-based feed-forward layer, which together improve VSR performance. This repository is the official implementation of "Video Super-Resolution Transformer".
<p align="center"><img width="100%" src="figs/framework.png" /></p>
<p align="center"><img width="100%" src="figs/attention.png" /></p>
<p align="center"><img width="100%" src="figs/feedforward.png" /></p>
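To make the block structure concrete, below is a rough, untested PyTorch sketch of a spatial-temporal convolutional self-attention layer applied to a stack of frame features. The layer choices, shapes, and scaling are assumptions for illustration, not the released implementation, and the bidirectional optical flow-based feed-forward layer is only indicated by a comment.

```python
# Illustrative sketch only (not the released code): spatial-temporal convolutional
# self-attention over features of T consecutive frames. Layer choices, shapes,
# and the scaling factor are assumptions for exposition.
import torch
import torch.nn as nn

class SpatialTemporalConvAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Convolutional Q/K/V projections keep spatial structure,
        # unlike the linear projections of a standard Transformer.
        self.to_q = nn.Conv2d(channels, channels, 3, padding=1)
        self.to_k = nn.Conv2d(channels, channels, 3, padding=1)
        self.to_v = nn.Conv2d(channels, channels, 3, padding=1)
        self.proj = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        # x: (B, T, C, H, W) features of T frames from a shallow feature extractor
        b, t, c, h, w = x.shape
        flat = x.reshape(b * t, c, h, w)
        q = self.to_q(flat).reshape(b, t, -1)            # (B, T, C*H*W)
        k = self.to_k(flat).reshape(b, t, -1)
        v = self.to_v(flat).reshape(b, t, -1)
        attn = torch.softmax(q @ k.transpose(1, 2) / (c * h * w) ** 0.5, dim=-1)  # (B, T, T)
        out = (attn @ v).reshape(b * t, c, h, w)
        out = self.proj(out).reshape(b, t, c, h, w)
        # The full block would follow this with the bidirectional optical
        # flow-based feed-forward layer (omitted here).
        return x + out

features = torch.randn(2, 5, 64, 32, 32)                 # 5 frames, 64 channels
print(SpatialTemporalConvAttention(64)(features).shape)  # torch.Size([2, 5, 64, 32, 32])
```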
## Dependencies and Installation

- Python >= 3.7 (Anaconda or Miniconda is recommended)
- PyTorch >= 1.3
- NVIDIA GPU + CUDA
- Clone repository

  ```bash
  git clone https://github.com/caojiezhang/VSR-Transformer.git
  ```

- Install dependent packages

  ```bash
  cd VSR-Transformer
  pip install -r requirements.txt
  ```

- Compile environment

  ```bash
  python setup.py develop
  ```
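After installation, a quick way to verify that the environment matches the requirements above is to check the PyTorch and CUDA setup from Python:

```python
# Quick environment sanity check: PyTorch (>= 1.3) and a visible CUDA device.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```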
## Dataset Preparation

- Please refer to DatasetPreparation.md for more details.
- The descriptions of currently supported datasets (`torch.utils.data.Dataset` classes) are in Datasets.md.
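As a rough illustration of what such a `torch.utils.data.Dataset` class can look like for paired LR/HR clips, here is a hypothetical sketch; the folder layout, file pattern, dictionary keys, and clip length are assumptions, so use Datasets.md and the existing classes as the actual reference.

```python
# Hypothetical example of a paired LR/HR video clip dataset in the
# torch.utils.data.Dataset style; field names and folder layout are
# illustrative, not the repository's actual dataset classes.
import os
from glob import glob

import torch
from torch.utils.data import Dataset
from torchvision.io import read_image  # requires torchvision

class PairedVideoClipDataset(Dataset):
    def __init__(self, lq_root, gt_root, num_frames=5):
        # Each subfolder of lq_root/gt_root is assumed to hold the frames of one clip.
        self.lq_clips = sorted(glob(os.path.join(lq_root, "*")))
        self.gt_clips = sorted(glob(os.path.join(gt_root, "*")))
        self.num_frames = num_frames

    def __len__(self):
        return len(self.lq_clips)

    def _load_clip(self, clip_dir):
        frames = sorted(glob(os.path.join(clip_dir, "*.png")))[: self.num_frames]
        # Stack frames into a (T, C, H, W) float tensor in [0, 1].
        return torch.stack([read_image(f).float() / 255.0 for f in frames])

    def __getitem__(self, idx):
        return {
            "lq": self._load_clip(self.lq_clips[idx]),  # low-quality input frames
            "gt": self._load_clip(self.gt_clips[idx]),  # ground-truth HR frames
        }
```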
## Training

- Please refer to the training configurations for more details and pretrained models.

```bash
# Train on REDS
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/train_vsrTransformer_x4_REDS.yml --launcher pytorch

# Train on Vimeo-90K
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/train_vsrTransformer_x4_Vimeo.yml --launcher pytorch
```
## Testing

- Please refer to the testing configurations for more details.

```bash
# Test on REDS
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/test.py -opt options/test/test_vsrTransformer_x4_REDS.yml --launcher pytorch

# Test on Vimeo-90K
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/test.py -opt options/test/test_vsrTransformer_x4_Vimeo.yml --launcher pytorch

# Test on Vid4
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/test.py -opt options/test/test_vsrTransformer_x4_Vid4.yml --launcher pytorch
```
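For a quick quantitative check on saved outputs outside the BasicSR metrics pipeline, PSNR between a restored frame and its ground-truth frame can be computed directly. The file paths below are placeholders, and this helper is independent of the repository's evaluation code.

```python
# Standalone PSNR computation between a restored frame and its ground truth.
# File paths are placeholders; not the repository's evaluation pipeline.
import numpy as np
from PIL import Image

def psnr(img1, img2, max_val=255.0):
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 20 * np.log10(max_val / np.sqrt(mse))

restored = np.array(Image.open("results/frame_00000000.png"))
gt = np.array(Image.open("gt/frame_00000000.png"))
print(f"PSNR: {psnr(restored, gt):.2f} dB")
```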
## Citation

If you use the code of our paper, please cite:

```bibtex
@article{cao2021vsrt,
  title={Video Super-Resolution Transformer},
  author={Cao, Jiezhang and Li, Yawei and Zhang, Kai and Van Gool, Luc},
  journal={arXiv},
  year={2021}
}
```
## Acknowledgments

This repository is built on BasicSR. If you use this repository, please also consider citing BasicSR.