EQVI-Enhanced Quadratic Video Interpolation

Winning solution of the AIM2020 VTSR Challenge

Authors: Yihao Liu*, Liangbin Xie*, Li Siyao, Wenxiu Sun, Yu Qiao, Chao Dong [paper]
*equal contribution

If you find our work useful, please cite it:

@InProceedings{liu2020enhanced,  
author = {Yihao Liu and Liangbin Xie and Li Siyao and Wenxiu Sun and Yu Qiao and Chao Dong},  
title = {Enhanced quadratic video interpolation},  
booktitle = {European Conference on Computer Vision Workshops},  
year = {2020},  
}

*(Figure: visual comparison)*

News

TODO

:construction_worker: The list goes on and on...
So many things to do, let me have a break... :see_no_evil:

Preparation

Dependencies

Install correlation package

In our implementation, we use ScopeFlow as the pretrained flow estimation module.
Please follow the instructions below to install the required correlation package:

cd models/scopeflow_models/correlation_package
python setup.py install

Note:
if you use CUDA>=9.0, just execute the above commands as they are;
if you use CUDA==8.0, you need to rename the folder correlation_package_init to correlation_package, and then execute the above commands.

Please refer to ScopeFlow and irr for more information.
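To check whether the build succeeded, you can try importing the compiled extension from Python. A minimal check, assuming the extension is named correlation_cuda (a common name in irr-style correlation packages; verify the actual name in setup.py):

```python
# Sanity check for the compiled correlation extension.
# NOTE: "correlation_cuda" is an assumed name (common in irr-style
# correlation packages); check setup.py for the actual extension name.
try:
    import correlation_cuda  # noqa: F401
    print("correlation package is installed")
except ImportError as err:
    print(f"correlation package not found: {err}")
```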

Download pretrained models

unzip checkpoints.zip
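After unzipping, a quick way to sanity-check the extracted files is to load one of them with PyTorch. A minimal sketch (the key layout inside the .ckpt files is an assumption, so simply inspect what gets printed):

```python
import torch

# Load one extracted checkpoint on CPU and inspect its contents.
# NOTE: the internal key layout is an assumption; print it to see how
# the weights are actually stored.
ckpt = torch.load("checkpoints/Stage123_scratch_checkpoint.ckpt", map_location="cpu")
if isinstance(ckpt, dict):
    print("top-level keys:", list(ckpt.keys()))
else:
    print("checkpoint object of type:", type(ckpt))
```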

There should be five models in the checkpoints folder:

Model Performance Comparison on REDS_VTSR (PSNR/SSIM)

| Model | baseline | RCSN | RQFP | MS-Fusion | REDS_VTSR val (30 clips)<sup>*</sup> | REDS_VTSR5 (5 clips)<sup>**</sup> |
|---|---|---|---|---|---|---|
| Stage3 RCSN+RQFP | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: | 24.0354 | 24.9633/0.7268 |
| Stage4 MS-Fusion | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | 24.0562 | 24.9706/0.7263 |
| Stage123 scratch | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: | 24.0962 | 25.0699/0.7296 |
| Stage123 scratch vgg | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: | 24.0069 | 24.9684/0.7237 |

* The performance is evaluated with x2 interpolation (interpolating 1 frame between two given frames).
** Proposed in our [EQVI paper]: clips 002, 005, 010, 017 and 025 of the REDS_VTSR validation set.
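For reference, the PSNR/SSIM values above correspond to standard full-reference image metrics; a minimal sketch of how such numbers can be computed with scikit-image (the file paths below are placeholders):

```python
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Placeholder paths: one interpolated frame and its ground-truth counterpart.
pred = imread("results/002/00000001.png")
gt = imread("gt/002/00000001.png")

# PSNR over 8-bit RGB frames.
psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
# SSIM over RGB; channel_axis requires scikit-image >= 0.19.
ssim = structural_similarity(gt, pred, data_range=255, channel_axis=-1)
print(f"PSNR: {psnr:.4f} dB, SSIM: {ssim:.4f}")
```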

Clarification:

Sample interpolated results

For convenient comparison, we provide two sets of predicted results on the REDS_VTSR validation set, produced by EQVI (Stage123_scratch_checkpoint.ckpt) and EQVI-P (Stage123_scratch_vgg_checkpoint). You can download them from Google Drive.

Data preparation

The REDS_VTSR training and validation dataset can be found here.
More datasets and models will be included soon.

Quick Testing

  1. Specify the inference settings by modifying configs/config_xxx.py (a hypothetical sketch of such a config is given at the end of this section).
  2. Execute the following command to start inference:
CUDA_VISIBLE_DEVICES=0 python interpolate_REDS_VTSR.py configs/config_xxx.py

Note: interpolate_REDS_VTSR.py is specifically coded for the REDS_VTSR dataset.

:zap: Now we support testing on arbitrary datasets with a generic inference script, interpolate_EQVI.py.

CUDA_VISIBLE_DEVICES=0 python interpolate_EQVI.py configs/config_xxx.py

The output results will be stored in the specified $store_path$.
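The exact fields of configs/config_xxx.py depend on the release; below is a hypothetical sketch of what such a config might contain. All field names are assumptions except store_path, which is referenced above:

```python
# configs/config_xxx.py -- hypothetical sketch; every field name except
# store_path is an assumption, not necessarily the repository's actual key.
testset_root = "data/REDS_VTSR/val"                          # input frames
checkpoint = "checkpoints/Stage123_scratch_checkpoint.ckpt"  # EQVI weights
inter_frames = 1      # x2 interpolation: 1 frame between two given frames
store_path = "outputs/EQVI_REDS_VTSR_val"                    # results are written here
```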

Training

  1. Specify the training settings in configs/config_train_EQVI_VTSR.py
  2. Execute the following command:
    CUDA_VISIBLE_DEVICES=0,1,2,3 python train_EQVI_lap_l1.py --config configs/config_train_EQVI_VTSR.py
    Note:
    (1) This trains the EQVI model equipped with RCSN and RQFP from scratch. The performance is better than the results we reported in the paper.
    (2) We print training logs after each epoch, so it does take a while for the logs to appear. Specifically, we use 4 RTX 2080 Ti GPUs to train the model; one epoch takes about 3600s, and the whole training procedure lasts about 3-5 days.
    (3) The dataloader is coded for the REDS_VTSR dataset. If you want to train on your own dataset, you may need to modify or rewrite the dataloader file.
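The training script name (train_EQVI_lap_l1.py) suggests a Laplacian-pyramid L1 objective. A minimal PyTorch sketch of such a loss, offered only as an illustration of the general technique and not as the repository's actual implementation:

```python
import torch
import torch.nn.functional as F

def gauss_downsample(x: torch.Tensor) -> torch.Tensor:
    """Blur with a fixed 5x5 Gaussian kernel, then downsample by 2."""
    k1d = torch.tensor([1.0, 4.0, 6.0, 4.0, 1.0], device=x.device, dtype=x.dtype)
    kernel = (k1d[:, None] * k1d[None, :]) / 256.0
    kernel = kernel.expand(x.shape[1], 1, 5, 5)  # depthwise kernel, one per channel
    x = F.pad(x, (2, 2, 2, 2), mode="reflect")
    x = F.conv2d(x, kernel, groups=x.shape[1])
    return x[:, :, ::2, ::2]

def laplacian_l1(pred: torch.Tensor, target: torch.Tensor, levels: int = 5) -> torch.Tensor:
    """L1 distance summed over the Laplacian pyramids of pred and target."""
    loss = 0.0
    for _ in range(levels - 1):
        pred_down, target_down = gauss_downsample(pred), gauss_downsample(target)
        # Laplacian band = current level minus the upsampled coarser level.
        lap_p = pred - F.interpolate(pred_down, size=pred.shape[-2:], mode="bilinear", align_corners=False)
        lap_t = target - F.interpolate(target_down, size=target.shape[-2:], mode="bilinear", align_corners=False)
        loss = loss + F.l1_loss(lap_p, lap_t)
        pred, target = pred_down, target_down
    return loss + F.l1_loss(pred, target)  # low-frequency residual term
```

Compared with a plain pixel-wise L1, penalizing each band-pass level separately puts extra weight on fine details and tends to reduce blur in the interpolated frames.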