
Video Super Resolution

A collection of state-of-the-art video and single-image super-resolution architectures, reimplemented in TensorFlow.

The project is now uploaded to PyPI. Try installing it from PyPI:

    pip install VSR

Pretrained weights are being uploaded now.

Several referenced PyTorch implementations are also included now.

Network list and reference (Updating)

Each hyperlink directs to the paper's site; the Code column follows the official code where the authors have open-sourced it.

All these models are implemented in ONE framework.

| Model | Published | Code* | VSR (TF)** | VSR (Torch) | Keywords | Pretrained |
|---|---|---|---|---|---|---|
| SRCNN | ECCV14 | -, Keras | Y | Y | Kaiming | |
| RAISR | arXiv | - | - | - | Google, Pixel 3 | |
| ESPCN | CVPR16 | -, Keras | Y | Y | Real time | |
| VDSR | CVPR16 | - | Y | Y | Deep, Residual | |
| DRCN | CVPR16 | - | Y | Y | Recurrent | |
| DRRN | CVPR17 | Caffe, PyTorch | Y | Y | Recurrent | |
| LapSRN | CVPR17 | Matlab | Y | - | Huber loss | |
| EDSR | CVPR17 | - | Y | Y | NTIRE17 Champion | |
| SRGAN | CVPR17 | - | Y | - | 1st proposed GAN | |
| VESPCN | CVPR17 | - | Y | Y | VideoSR | |
| MemNet | ICCV17 | Caffe | Y | - | | |
| SRDenseNet | ICCV17 | -, PyTorch | Y | - | Dense | |
| SPMC | ICCV17 | Tensorflow | T | Y | VideoSR | |
| DnCNN | TIP17 | Matlab | Y | Y | Denoise | |
| DCSCN | arXiv | Tensorflow | Y | - | | |
| IDN | CVPR18 | Caffe | Y | - | Fast | |
| RDN | CVPR18 | Torch | Y | - | Deep, BI-BD-DN | |
| SRMD | CVPR18 | Matlab | - | Y | Denoise/Deblur/SR | |
| DBPN | CVPR18 | PyTorch | Y | Y | NTIRE18 Champion | |
| ZSSR | CVPR18 | Tensorflow | - | - | Zero-shot | |
| FRVSR | CVPR18 | PDF | T | Y | VideoSR | |
| DUF | CVPR18 | Tensorflow | T | - | VideoSR | |
| CARN | ECCV18 | PyTorch | Y | Y | Fast | |
| RCAN | ECCV18 | PyTorch | Y | Y | Deep, BI-BD-DN | |
| MSRN | ECCV18 | PyTorch | Y | Y | | |
| SRFeat | ECCV18 | Tensorflow | Y | Y | GAN | |
| NLRN | NIPS18 | Tensorflow | T | - | Non-local, Recurrent | |
| SRCliqueNet | NIPS18 | - | - | - | Wavelet | |
| FFDNet | TIP18 | Matlab | Y | Y | Conditional denoise | |
| CBDNet | CVPR19 | Matlab | T | - | Blind-denoise | |
| SOFVSR | ACCV18 | PyTorch | - | Y | VideoSR | |
| ESRGAN | ECCVW18 | PyTorch | - | Y | 1st place PIRM 2018 | |
| TecoGAN | arXiv | Tensorflow | - | T | VideoSR GAN | |
| RBPN | CVPR19 | PyTorch | - | Y | VideoSR | |
| DPSR | CVPR19 | PyTorch | - | - | | |
| SRFBN | CVPR19 | PyTorch | - | - | | |
| SRNTT | CVPR19 | Tensorflow | - | - | Adobe | |
| SAN | CVPR19 | empty | - | - | AliDAMO SOTA | |
| AdaFM | CVPR19 | PyTorch | - | - | SenseTime Oral | |

*The first repo listed is the paper authors' own.

**Y: included; -: not included; T: under-testing.

You can download pre-trained weights through prepare_data, or via the hyperlinks in the Pretrained column above.

Links to datasets

(please contact me if any link infringes your rights or is broken)

| Name | Usage | # | Site | Comments |
|---|---|---|---|---|
| SET5 | Test | 5 | download | jbhuang0604 |
| SET14 | Test | 14 | download | jbhuang0604 |
| SunHay80 | Test | 80 | download | jbhuang0604 |
| Urban100 | Test | 100 | download | jbhuang0604 |
| VID4 | Test | 4 | download | 4 videos |
| BSD100 | Train | 300 | download | jbhuang0604 |
| BSD300 | Train/Val | 300 | download | - |
| BSD500 | Train/Val | 500 | download | - |
| 91-Image | Train | 91 | download | Yang |
| DIV2K | Train/Val | 900 | website | NTIRE17 |
| Waterloo | Train | 4741 | website | - |
| MCL-V | Train | 12 | website | 12 videos |
| GOPRO | Train/Val | 33 | website | 33 videos, deblur |
| CelebA | Train | 202599 | website | Human faces |
| Sintel | Train/Val | 35 | website | Optical flow |
| FlyingChairs | Train | 22872 | website | Optical flow |
| DND | Test | 50 | website | Real noisy photos |
| RENOIR | Train | 120 | website | Real noisy photos |
| NC | Test | 60 | website | Noisy photos |
| SIDD(M) | Train/Val | 200 | website | NTIRE 2019 Real Denoise |
| RSR | Train/Val | 80 | download | NTIRE 2019 Real SR |
| Vimeo-90k | Train/Test | 89800 | website | 90k HQ videos |

Other open datasets: Kaggle, ImageNet, COCO.

VSR package

This package offers a training and data processing framework based on TensorFlow. What I made is a simple, easy-to-use framework without lots of encapsulations and abstractions. Moreover, VSR can handle raw NV12/YUV frames as well as sequences of images as inputs.
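
As a rough illustration of what consuming raw NV12 input involves (a standalone sketch, not VSR's actual loader; the function name and signature are invented for this example):

    import numpy as np

    def read_nv12_frames(path, width, height):
        """Yield (Y, UV) planes per frame from a raw NV12 file.

        NV12 stores a full-resolution luma (Y) plane followed by an
        interleaved chroma (UV) plane at half resolution, so one frame
        occupies width * height * 3 // 2 bytes.
        """
        frame_size = width * height * 3 // 2
        with open(path, "rb") as f:
            while True:
                buf = f.read(frame_size)
                if len(buf) < frame_size:
                    break  # end of stream (or truncated frame)
                frame = np.frombuffer(buf, dtype=np.uint8)
                y = frame[:width * height].reshape(height, width)
                uv = frame[width * height:].reshape(height // 2, width // 2, 2)
                yield y, uv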

Install

  1. Prepare a proper TensorFlow build and, optionally, PyTorch. For example, for a GPU with CUDA 10.0 (conda is recommended):

    conda install tensorflow-gpu==1.15.0
    # optional
    # conda install pytorch
    
  2. Install VSR package

    # If you are reading this doc online
    # git clone https://github.com/loseall/VideoSuperResolution && cd VideoSuperResolution
    pip install -e .
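
After installation, a quick sanity check (a minimal sketch; it assumes the TensorFlow backend and that pip has installed a top-level VSR package, as the steps above do):

    # Both imports should succeed if the install steps worked.
    import tensorflow as tf
    import VSR

    print("TF:", tf.__version__)         # e.g. 1.15.0 per the conda install above
    print("VSR loaded from:", VSR.__file__)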
    

Getting Started

  1. Download pre-trained weights and (optional) training datasets. For instance, let's begin with VESPCN and the VID4 test data:

    python prepare_data.py --filter vespcn vid4
    
  2. Customize the backend: create ~/.vsr/config.yml (e.g. cd ~/.vsr/ && touch config.yml) and fill in the following (a programmatic sketch follows these steps):

    backend: tensorflow  # (tensorflow, pytorch)
    verbose: info        # (debug, info, warning, error)
    
  3. Evaluate

    cd Train
    python eval.py srcnn -t vid4 --pretrain=/path/srcnn.pth
    
  4. Train

    python prepare_data.py --filter mcl-v
    cd Train
    python train.py vespcn --dataset mcl-v --memory_limit 1GB --epochs 100
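
Since config.yml is plain YAML, the backend from step 2 can also be switched programmatically. A minimal sketch, assuming PyYAML is available (it is not a stated VSR dependency):

    # Illustrative only: rewrite ~/.vsr/config.yml to switch the backend.
    from pathlib import Path
    import yaml  # PyYAML; assumed installed via pip install pyyaml

    cfg_path = Path.home() / ".vsr" / "config.yml"
    cfg = yaml.safe_load(cfg_path.read_text()) if cfg_path.exists() else None
    cfg = cfg or {}
    cfg["backend"] = "pytorch"   # or "tensorflow"
    cfg["verbose"] = "info"      # debug | info | warning | error
    cfg_path.parent.mkdir(parents=True, exist_ok=True)
    cfg_path.write_text(yaml.safe_dump(cfg))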
    

OK, that's all you need. For more details, run any of the scripts with --help.


More documentation can be found in Docs.