
<img src='imgs/illustration.gif' align="right" width=200>

<br><br><br><br>

# Few-shot vid2vid

Project | YouTube | arXiv

<h3><b>[Note] This repo is now deprecated. Please refer to the new Imaginaire repo: https://github.com/NVlabs/imaginaire.</b></h3>

PyTorch implementation of few-shot photorealistic video-to-video translation. It can be used to generate human motions from poses, synthesize talking people from edge maps, or turn semantic label maps into photorealistic videos. The core of video-to-video translation is image-to-image translation; some of our work in that space can be found in pix2pixHD and SPADE.

### Few-shot Video-to-Video Synthesis
Ting-Chun Wang, Ming-Yu Liu, Andrew Tao, Guilin Liu, Jan Kautz, Bryan Catanzaro
NVIDIA Corporation
In Neural Information Processing Systems (NeurIPS) 2019
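The image-to-image core mentioned above builds on SPADE-style conditional normalization. As a minimal sketch (assuming PyTorch; this is an illustration of the general technique, not the exact layer used in this repo), a spatially-adaptive denormalization layer normalizes activations and then modulates them with scale and shift maps predicted from the semantic label map:

```python
import torch
import torch.nn as nn

class SPADE(nn.Module):
    """Spatially-adaptive denormalization (sketch).

    Normalizes the input without learned affine parameters, then applies
    per-pixel scale (gamma) and shift (beta) maps predicted from a
    semantic map by small convolutions.
    """
    def __init__(self, num_features, label_channels, hidden=64):
        super().__init__()
        self.norm = nn.BatchNorm2d(num_features, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(label_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU())
        self.gamma = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)

    def forward(self, x, segmap):
        # Resize the semantic map to the activation resolution.
        segmap = nn.functional.interpolate(segmap, size=x.shape[2:],
                                           mode='nearest')
        h = self.shared(segmap)
        # Modulate normalized activations per spatial location.
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)
```

Because gamma and beta vary spatially, the semantic layout steers the generator at every pixel instead of through a single global conditioning vector.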

## License

Copyright (C) 2019 NVIDIA Corporation. All rights reserved.

This work is made available under the Nvidia Source Code License (1-Way Commercial). To view a copy of this license, visit https://nvlabs.github.io/few-shot-vid2vid/License.txt

The code is released for academic research use only. For business inquiries, please visit our website and submit the form: NVIDIA Research Licensing

## Example Results

<p align='center'> <img src='imgs/dance.gif' width='400'/> <img src='imgs/statue.gif' width='400'/> </p> <p align='center'> <img src='imgs/face.gif' width='400'/> <img src='imgs/mona_lisa.gif' width='400'/> </p> <p align='center'> <img src='imgs/street.gif' width='400'/> </p>

## Prerequisites

## Getting Started

### Installation


```bash
# Install the required Python libraries.
pip install dominate requests
# dlib is needed only if you plan to train with face datasets.
pip install dlib

# Clone this repo.
git clone https://github.com/NVlabs/few-shot-vid2vid
cd few-shot-vid2vid
```

### Dataset

### Training


#### Training with pose datasets

#### Training with face datasets

#### Training with street dataset

```bash
python train.py --name street --dataset_mode fewshot_street --adaptive_spade --loadSize 512 --fineSize 512 --batchSize 6
```

#### Training with your own dataset

### Testing

### More Training/Test Details

## Citation

If you find this useful for your research, please cite the following paper.

```
@inproceedings{wang2019fewshotvid2vid,
   author    = {Ting-Chun Wang and Ming-Yu Liu and Andrew Tao
                and Guilin Liu and Jan Kautz and Bryan Catanzaro},
   title     = {Few-shot Video-to-Video Synthesis},
   booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
   year      = {2019},
}
```

## Acknowledgments

We thank Karan Sapra for generating the segmentation maps for us.<br> This code borrows heavily from pix2pixHD and vid2vid.