<div align="center"> <h2>FMA-Net (CVPR 2024 Oral)</h2> <div> <a href='https://sites.google.com/view/geunhyukyouk/' target='_blank'>Geunhyuk Youk</a><sup>1</sup>&nbsp; <a href='https://sites.google.com/view/ozbro/' target='_blank'>Jihyong Oh</a><sup>ā€  2</sup>&nbsp; <a href='https://www.viclab.kaist.ac.kr/' target='_blank'>Munchurl Kim</a><sup>ā€  1</sup> </div> <div> <sup>ā€ </sup>Co-corresponding authors </div> <div> <sup>1</sup>Korea Advanced Institute of Science and Technology, South Korea </div> <div> <sup>2</sup>Chung-Ang University, South Korea </div> <div> <h4 align="center"> <a href="https://kaist-viclab.github.io/fmanet-site/" target='_blank'> <img src="https://img.shields.io/badge/šŸ³-Project%20Page-blue"> </a> <a href="https://arxiv.org/abs/2401.03707" target='_blank'> <img src="https://img.shields.io/badge/arXiv-2401.03707-b31b1b.svg"> </a> <a href="https://www.youtube.com/watch?v=kO7KavOH6vw" target='_blank'> <img src="https://img.shields.io/badge/Demo%20Video-%23FF0000.svg?logo=YouTube&logoColor=white"> </a> <a href="https://www.youtube.com/watch?v=G6qqJXztJDM" target='_blank'> <img src="https://img.shields.io/badge/Presentation-%23FF0000.svg?logo=YouTube&logoColor=white"> </a> <img alt="GitHub Repo stars" src="https://img.shields.io/github/stars/KAIST-VICLab/FMA-Net"> </h4> </div>
<div align="center"> <h4> This repository is the official PyTorch implementation of "FMA-Net: Flow-Guided Dynamic Filtering and Iterative Feature Refinement with Multi-Attention for Joint Video Super-Resolution and Deblurring". </h4> </div> </div>

## šŸ“§ News

## šŸ“ TODO

## Reference

If you find FMA-Net useful, please consider citing:

```bibtex
@InProceedings{Youk_2024_CVPR,
    author    = {Youk, Geunhyuk and Oh, Jihyong and Kim, Munchurl},
    title     = {FMA-Net: Flow-Guided Dynamic Filtering and Iterative Feature Refinement with Multi-Attention for Joint Video Super-Resolution and Deblurring},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {44-55}
}
```

## Contents

## Requirements

## Data Preprocessing

## Pretrained Model

The pre-trained model can be downloaded from here.
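
As a quick sanity check after downloading, you can open the checkpoint with PyTorch and inspect its top-level keys. The sketch below is illustrative only: the file name and location (`pretrained/FMA-Net.pth`) are assumptions, not paths defined by this repository.

```python
# Illustrative check of a downloaded checkpoint (not part of this repository).
# The path below is an assumption; point it at wherever you saved the file.
import torch

ckpt_path = "pretrained/FMA-Net.pth"  # hypothetical file name and location
ckpt = torch.load(ckpt_path, map_location="cpu")

# The file may hold a raw state_dict or a dict wrapping one (e.g. under a
# "state_dict" or "model" key); printing the top-level keys shows which.
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])
else:
    print(type(ckpt))
```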

## Training

```bash
# download code
git clone https://github.com/KAIST-VICLab/FMA-Net
cd FMA-Net

# train FMA-Net on REDS dataset
python main.py --train --config_path experiment.cfg
```
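
Before starting a long run, it can help to confirm that the config file is where `--config_path` expects it and that PyTorch can see a CUDA device. A minimal pre-flight sketch using only generic checks that are not part of the FMA-Net code base:

```python
# Generic pre-flight checks before training (not part of the FMA-Net code base).
import os
import torch

config_path = "experiment.cfg"  # the same file passed via --config_path
assert os.path.isfile(config_path), f"config not found: {config_path}"

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```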

## Testing

```bash
# test FMA-Net on REDS dataset
python main.py --test --config_path experiment.cfg

# test on your own datasets
python main.py --test_custom --config_path experiment.cfg
```
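
For a quick, independent check of restored frames (for example, when comparing `--test_custom` outputs against your own ground truth), a plain PSNR computation is often enough. A minimal sketch with hypothetical file paths; use the repository's own evaluation code for reported numbers.

```python
# Generic PSNR check between a restored frame and its ground truth.
# Paths are hypothetical; this is independent of the repository's metrics.
import numpy as np
from PIL import Image

def psnr(a: np.ndarray, b: np.ndarray, max_val: float = 255.0) -> float:
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

out = np.array(Image.open("results/00000000.png"))  # restored frame (hypothetical)
gt = np.array(Image.open("gt/00000000.png"))        # ground-truth frame (hypothetical)
print(f"PSNR: {psnr(out, gt):.2f} dB")
```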

## Results

Please visit our project page and demo video for diverse visual results.

## License

The source code and the pretrained checkpoint may be used freely for research and education purposes only. Any commercial use requires formal permission from the principal investigator (Prof. Munchurl Kim, mkimee@kaist.ac.kr).

## Acknowledgement

This work was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT): No. 2021-0-00087 (Development of high-quality conversion technology for SD/HD low-quality media) and No. RS2022-00144444 (Deep Learning Based Visual Representational Learning and Rendering of Static and Dynamic Scenes).