<h2 align="center"> <b>MoTIF: Learning Motion Trajectories with Local Implicit Neural Functions <br> for Continuous Space-Time Video Super-Resolution</b> <br> <b><i>ICCV 2023</i></b>
<div align="center"> <a href="https://github.com/sichun233746/MoTIF" target="_blank"> <img src="https://img.shields.io/badge/ICCV 2023-red"></a> <a href="https://arxiv.org/abs/2307.07988" target="_blank"> <img src="https://img.shields.io/badge/Paper-orange" alt="paper"></a> <a href="https://sichun233746.github.io/MoTIF/" target="_blank"> <img src="https://img.shields.io/badge/Project Page-blue" alt="Project Page"/></a> </div> </h2>

This is the official repository of the paper "MoTIF: Learning Motion Trajectories with Local Implicit Neural Functions for Continuous Space-Time Video Super-Resolution".
For more information, please visit our project website.
Authors: Yi-Hsin Chen*, Si-Cun Chen*, Yi-Hsin Chen, Yen-Yu Lin, Wen-Hsiao Peng
## Abstract
This work addresses continuous space-time video super-resolution (C-STVSR), which aims to up-scale an input video both spatially and temporally by arbitrary scaling factors. One key challenge of C-STVSR is to propagate information temporally among the input video frames. To this end, we introduce a space-time local implicit neural function. It has the striking feature of learning forward motion for a continuum of pixels. We motivate the use of forward motion from the perspective of learning individual motion trajectories, as opposed to learning a mixture of motion trajectories with backward motion. To ease motion interpolation, we encode sparsely sampled forward motion extracted from the input video as the contextual input. Along with a reliability-aware splatting and decoding scheme, our framework, termed MoTIF, achieves state-of-the-art performance on C-STVSR.
## Code

A draft of the test code is available.
## Testing

1. Install all the dependencies.
2. Download the pretrained weights.
3. Edit `test.yml` for the dataset you want to evaluate.
4. Run:

```shell
python test.py
```
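The `test.yml` referenced above holds the dataset-specific settings. The fragment below is a purely hypothetical sketch of the kind of options such a config typically carries; every key name and value here is an assumption, so consult the actual `test.yml` in the repository for the real schema.

```yaml
# Hypothetical example only -- the real test.yml defines its own keys;
# these names and values are illustrative placeholders.
dataset: Vid4                      # which evaluation dataset to use
data_root: /path/to/dataset        # root directory of the test frames
scale: 4                           # spatial up-scaling factor
pretrained: ./weights/motif.pth    # path to the downloaded weights
```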
## Pre-trained weights
## Citation

If you find this work useful in your research, please consider citing:

```bibtex
@inproceedings{chen2023MoTIF,
  title={MoTIF: Learning Motion Trajectories with Local Implicit Neural Functions for Continuous Space-Time Video Super-Resolution},
  author={Yi-Hsin Chen and Si-Cun Chen and Yi-Hsin Chen and Yen-Yu Lin and Wen-Hsiao Peng},
  booktitle={ICCV},
  year={2023}
}
```
## Contact

If you have any questions, please contact Si-Cun Chen (sicun.mapl.cs09@nycu.edu.tw).