DeMFI (ECCV2022)

This is the official repository of DeMFI (Deep Joint Deblurring and Multi-Frame Interpolation).

[ArXiv_ver.] [ECCV2022_ver.] [Supp.] [Demo] [Poster] [Video5mins(YouTube)]

Last Update: 10 JULY 2022 - This work has been accepted to ECCV2022. We have uploaded a camera-ready version (ECCV format) and supplementary material in the links above. Please note that the overall architecture and main experiments are the same as in the initial arXiv version.

Reference

Jihyong Oh and Munchurl Kim, "DeMFI: Deep Joint Deblurring and Multi-Frame Interpolation with Flow-Guided Attentive Correlation and Recursive Boosting", European Conference on Computer Vision, 2022.

BibTeX

@inproceedings{Oh2022DeMFI,
  title={DeMFI: Deep Joint Deblurring and Multi-Frame Interpolation with Flow-Guided Attentive Correlation and Recursive Boosting},
  author={Oh, Jihyong and Kim, Munchurl},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2022}
}

If you find this repository useful, please consider citing our paper.

Examples of the Demo (x8 Multi-Frame Interpolation) videos (240fps) interpolated from blurry videos (30fps)


The 30fps blurry input frames are interpolated into 240fps sharp frames. All results are encoded at 30fps so they play as x8 slow motion, and they are spatially downscaled due to file size limits. Please watch their full versions, including additional scenes, in this demo.

Table of Contents

  1. Requirements
  2. Test
  3. Test_Custom
  4. Training
  5. Collection_of_Visual_Results
  6. Visualizations
  7. Arbitrary_M
  8. Contact

Requirements

Our code is implemented in PyTorch 1.7 and was tested under the following setting:

Caution: since our code relies on the "align_corners=True" option of "nn.functional.interpolate" and "nn.functional.grid_sample" in PyTorch 1.7, we recommend that you follow our settings. In particular, using other PyTorch versions may yield different performance.
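To see why this option matters, here is a minimal sketch (pure Python, not from the repository) of how `align_corners` changes the source-coordinate mapping that bilinear resizing uses; the same convention affects `grid_sample`:

```python
def src_coord(dst, in_size, out_size, align_corners):
    """Map an output pixel index to its (fractional) source coordinate,
    following PyTorch's bilinear resizing conventions."""
    if align_corners:
        # Corner pixels of input and output are aligned exactly.
        return dst * (in_size - 1) / (out_size - 1)
    # Pixel centers are aligned instead; edge samples can fall
    # outside the range [0, in_size - 1].
    return (dst + 0.5) * in_size / out_size - 0.5

# Upsampling a length-2 row to length 4 under both conventions:
coords_true = [src_coord(i, 2, 4, True) for i in range(4)]    # 0.0 ... 1.0
coords_false = [src_coord(i, 2, 4, False) for i in range(4)]  # -0.25 ... 1.25
```

The two conventions sample different source locations, which is why results from a checkpoint trained under one setting can drift under another.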

Test

Quick Start for Evaluations on Test Datasets (Deblurring and Multi-Frame Interpolation (x8) as in Table 2)

  1. Download the source codes in a directory of your choice <source_path>.
  2. We follow the blurry formation setting of BIN (Blurry Video Frame Interpolation): 11 consecutive frames are averaged at a temporal stride of 8 frames to synthesize blurry frames as if captured with a long exposure, which finally yields 30fps blurry frames with K = 8 and τ = 5 in Eq. 1.
  3. Download datasets from the dropbox links; Adobe240 (main, split1, split2, split3, split4) (split zip files, 49.7GB), GoPro(HD) (14.4GB). Since the copyrights for diverse videos of YouTube240 belong to each creator, we appreciate your understanding that it cannot be distributed. Original copyrights for Adobe240 and GoPro are provided via link1 and link2, respectively.
  4. The directory structure should look as below:
DeMFI
└── Datasets
      ├──── Adobe_240fps_blur
         ├──── test
             ├──── 720p_240fps_1
                 ├──── 00001.png
                 ├──── ...
                 └──── 00742.png
             ...
             ├──── IMG_0183           
         ├──── test_blur
            ├──── ...          
         ├──── train
            ├──── ...          
         ├──── train_blur 
            ├──── ...
  5. Download the pre-trained weights of DeMFI-Net<sub>rb</sub>(5,N<sub>tst</sub>), trained on Adobe240, from this link and place them in <source_path>/checkpoint_dir/DeMFInet_exp1.
DeMFI
└── checkpoint_dir
   └── DeMFInet_exp1
         ├── DeMFInet_exp1_latest.pt           
  6. Run main.py with the following options in parse_args:
# For evaluating on Adobe240
python main.py --gpu 0 --phase 'test' --exp_num 1 --test_data_path './Datasets/Adobe_240fps_blur' --N_tst 3 --multiple_MFI 8 
# For evaluating on GoPro(HD)
python main.py --gpu 0 --phase 'test' --exp_num 1 --test_data_path './Datasets/GoPro_blur' --N_tst 3 --multiple_MFI 8 
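The blurry formation in step 2 can be sketched as follows (an illustrative NumPy implementation, not the repository's actual preprocessing code; frame loading is omitted):

```python
import numpy as np

def synthesize_blurry(frames, window=11, stride=8):
    """Average `window` consecutive frames at a temporal `stride` to emulate
    a long exposure, turning a 240fps sharp sequence into 30fps blurry
    frames (K = 8 and tau = 5 in Eq. 1 of the paper)."""
    blurry = []
    for start in range(0, len(frames) - window + 1, stride):
        clip = np.stack(frames[start:start + window]).astype(np.float64)
        blurry.append(np.round(clip.mean(axis=0)).astype(frames[0].dtype))
    return blurry

# One second of 240fps "frames" with constant intensities 0..239:
frames = [np.full((4, 4), i, dtype=np.uint8) for i in range(240)]
blurry = synthesize_blurry(frames)
# The first blurry frame averages frames 0..10, i.e. intensity 5
# (the window center, matching tau = 5).
```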

Test_Custom

Quick Start for your own blurry video data ('--custom_path') for any Multi-Frame Interpolation (x M)

  1. Download the source codes in a directory of your choice <source_path>.
  2. First prepare your own blurry videos as '.png' frames in <source_path>/custom_path, following the hierarchy below:
DeMFI
└── custom_path
   ├── scene1
       ├── 'xxx.png'
       ├── ...
       └── 'xxx.png'
   ...
   
   ├── sceneN
       ├── 'xxxxx.png'
       ├── ...
       └── 'xxxxx.png'

  3. Since DeMFI-Net takes 4 input frames, each scene must have at least 4 frames.
  4. Download the pre-trained weights of DeMFI-Net<sub>rb</sub>(5,N<sub>tst</sub>), trained on Adobe240, from this link and place them in <source_path>/checkpoint_dir/DeMFInet_exp1.
DeMFI
└── checkpoint_dir
   └── DeMFInet_exp1
         ├── DeMFInet_exp1_latest.pt           
  5. Run main.py with the following options in parse_args (e.g., joint deblurring and Multi-Frame Interpolation (x8)):
python main.py --gpu 0 --phase 'test_custom' --exp_num 1 --N_tst 3 --multiple_MFI 8 --custom_path './custom_path' --patch_boundary 32
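Since each scene must contain at least 4 '.png' frames (step 3), a quick sanity check before running might look like this (an illustrative helper, not part of the repository):

```python
from pathlib import Path

def find_short_scenes(custom_path, min_frames=4):
    """Return (scene_name, frame_count) for every scene folder holding
    fewer than `min_frames` '.png' frames; DeMFI-Net needs 4 inputs."""
    short = []
    for scene in sorted(p for p in Path(custom_path).iterdir() if p.is_dir()):
        n_frames = len(list(scene.glob('*.png')))
        if n_frames < min_frames:
            short.append((scene.name, n_frames))
    return short

# Any scene reported by find_short_scenes('./custom_path') should be
# padded with more frames or removed before running test_custom.
```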

Training

Quick Start for Adobe240

  1. Download the source codes in a directory of your choice <source_path>.
  2. First download our Adobe240 (main, split1, split2, split3, split4) (split zip files, 49.7GB) and unzip & place them as below:
DeMFI
└── Datasets
      ├──── Adobe_240fps_blur
         ├──── test
             ├──── 720p_240fps_1
                 ├──── 00001.png
                 ├──── ...
                 └──── 00742.png
             ...
             ├──── IMG_0183           
         ├──── test_blur
            ├──── ...          
         ├──── train
            ├──── ...          
         ├──── train_blur 
            ├──── ...
  3. Run main.py with the following options in parse_args:
python main.py --phase 'train' --exp_num 1 --train_data_path './Datasets/Adobe_240fps_blur' --test_data_path './Datasets/Adobe_240fps_blur' --N_trn 5 --N_tst 3
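For reference, the options used in the commands throughout this README might be declared roughly as below. This is a hypothetical sketch with assumed defaults; parse_args in main.py is the authoritative definition:

```python
import argparse

def build_parser():
    # Hypothetical sketch of the CLI options shown in this README;
    # see parse_args in main.py for the real definitions and defaults.
    p = argparse.ArgumentParser(description='DeMFI options (sketch)')
    p.add_argument('--gpu', type=int, default=0)
    p.add_argument('--phase', choices=['train', 'test', 'test_custom'],
                   default='train')
    p.add_argument('--exp_num', type=int, default=1)
    p.add_argument('--train_data_path', default='./Datasets/Adobe_240fps_blur')
    p.add_argument('--test_data_path', default='./Datasets/Adobe_240fps_blur')
    p.add_argument('--custom_path', default='./custom_path')
    p.add_argument('--N_trn', type=int, default=5)   # boosting count, training
    p.add_argument('--N_tst', type=int, default=3)   # boosting count, testing
    p.add_argument('--multiple_MFI', type=int, default=8)  # interpolation x M
    p.add_argument('--patch_boundary', type=int, default=32)
    return p

args = build_parser().parse_args(['--phase', 'train', '--N_trn', '5', '--N_tst', '3'])
```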

Collection_of_Visual_Results

Visualizations

Arbitrary_M

Contact

If you have any questions, please send an email to Jihyong Oh (jhoh94@kaist.ac.kr).

License

The source code, including the checkpoint, may be freely used for research and education purposes only. Any commercial use must obtain formal permission first.

Acknowledgement

Our blurry formation setting follows BIN (Blurry Video Frame Interpolation), as described in the Test section above. We thank the authors for sharing the code for their awesome works.