FlowGrounded-VideoPrediction

Torch implementation of our ECCV 2018 paper on video prediction from a single still image.

<p> <img src='examples/walk0.png' width=100 /> <img src='examples/walk_pred16.gif' width=100 /> <img src='examples/flag0.png' width=100 /> <img src='examples/flag_pred16.gif' width=100 /> <img src='examples/cloud0.png' width=100 /> <img src='examples/cloud_pred16.gif' width=100 /> </p>

In each panel, from left to right: the single starting frame and the predicted sequence (the next 16 frames).

Getting started

git clone https://github.com/Yijunmaverick/FlowGrounded-VideoPrediction
cd FlowGrounded-VideoPrediction

Preparation

Run data_process.sh to prepare the data and download_models.sh to fetch the pretrained models:

cd datasets/
sh data_process.sh
cd ..
sh download_models.sh
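
As an optional sanity check before training, the short Python sketch below confirms that the expected folders are in place. The datasets/DTexture path is taken from the commands in this README; the models directory is only an assumption about where download_models.sh stores its files.

import os

# Quick existence check (sketch). "datasets/DTexture" is the path used by the
# training/testing commands in this README; "models" is an assumed location
# for the files fetched by download_models.sh.
print("datasets/DTexture:", "found" if os.path.isdir("datasets/DTexture") else "MISSING")
print("models:", "found" if os.path.isdir("models") else "MISSING")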

Training

Training proceeds in two stages: train_3DcVAE.lua trains the 3D-cVAE that predicts a flow sequence from a single still image, and train_flow2rgb.lua trains the flow-to-frame model that synthesizes RGB frames from the predicted flows.

th train_3DcVAE.lua --dataRoot datasets/DTexture
th train_flow2rgb.lua --dataRoot datasets/DTexture

Testing

Run test.lua to generate predictions from the starting frames, then gif.py to assemble the results into GIFs like those shown above:

th test.lua --dataRoot datasets/DTexture
python gif.py
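
The GIF assembly is handled by the repository's gif.py. For illustration only, the sketch below shows one way to stitch a folder of predicted frames into a GIF with imageio; the results/ directory and the PNG naming are assumptions, not the actual output layout of test.lua.

import glob
import imageio

# Illustrative sketch (not the repository's gif.py): collect the predicted
# frames in order and write them out as a single animated GIF.
frame_paths = sorted(glob.glob("results/*.png"))  # hypothetical output location
imageio.mimsave("pred.gif", [imageio.imread(p) for p in frame_paths])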

Citation

@inproceedings{Prediction-ECCV-2018,
    author = {Li, Yijun and Fang, Chen and Yang, Jimei and Wang, Zhaowen and Lu, Xin and Yang, Ming-Hsuan},
    title = {Flow-Grounded Spatial-Temporal Video Prediction from Still Images},
    booktitle = {European Conference on Computer Vision},
    year = {2018}
}

Acknowledgement