
<div align="center" width="100%"> <h1>🎬 Show-1</h1> </div>
<div align="center">
<a href='https://junhaozhang98.github.io/' target='_blank'>David Junhao Zhang<sup>*</sup></a>&emsp;
<a href='https://zhangjiewu.github.io/' target='_blank'>Jay Zhangjie Wu<sup>*</sup></a>&emsp;
<a href='https://jia-wei-liu.github.io/' target='_blank'>Jia-Wei Liu<sup>*</sup></a>
<br>
<a href='https://ruizhaocv.github.io/' target='_blank'>Rui Zhao</a>&emsp;
<a href='https://siacorplab.nus.edu.sg/people/ran-lingmin/' target='_blank'>Lingmin Ran</a>&emsp;
<a href='https://ycgu.site/' target='_blank'>Yuchao Gu</a>&emsp;
<a href='https://scholar.google.com/citations?user=No9OsocAAAAJ&hl=en' target='_blank'>Difei Gao</a>&emsp;
<a href='https://sites.google.com/view/showlab/home?authuser=0' target='_blank'>Mike Zheng Shou<sup>&#x2709;</sup></a>
</div>
<div align="center">
<a href='https://sites.google.com/view/showlab/home?authuser=0' target='_blank'>Show Lab, National University of Singapore</a>
<br>
<sup>*</sup> Equal Contribution&emsp; <sup>&#x2709;</sup> Corresponding Author
</div>


Project Page | arXiv | PDF | 🤗 Space | Colab | Replicate Demo

News

Setup

Requirements

```bash
pip install -r requirements.txt
```

Note: PyTorch 2.0+ is highly recommended for improved efficiency and speed on GPUs.

Weights

All model weights for Show-1 are available on Show Lab's HuggingFace page: the Base Model (show-1-base), the Interpolation Model (show-1-interpolation), and the Super-Resolution Models (show-1-sr1, show-1-sr2).

Note that our show-1-sr1 incorporates the image super-resolution model from DeepFloyd-IF, DeepFloyd/IF-II-L-v1.0, to upsample the first frame of the video. To obtain the respective weights, follow their official instructions.

Usage

To generate a video from a text prompt, run the command below:

```bash
python run_inference.py
```

By default, the videos generated from each stage are saved to the outputs folder in the GIF format. The script will automatically fetch the necessary model weights from HuggingFace. If you prefer, you can manually download the weights using git lfs and then update the pretrained_model_path to point to your local directory. Here's how:

```bash
git lfs install
git clone https://huggingface.co/showlab/show-1-base
```
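As a side note on output handling: each stage writes its result as a GIF. A minimal sketch of how a list of frames can be bundled into a looping GIF with Pillow (the frames here are synthetic placeholders for illustration; the actual script's frame decoding is not shown):

```python
import numpy as np
from PIL import Image

# Synthetic stand-in for decoded video frames: 8 grayscale-ramp RGB frames, 64x64.
frames = [
    Image.fromarray(np.full((64, 64, 3), i * 32, dtype=np.uint8))
    for i in range(8)
]

# Save all frames into a single looping GIF at 8 fps (125 ms per frame).
frames[0].save(
    "outputs_demo.gif",
    save_all=True,
    append_images=frames[1:],
    duration=125,
    loop=0,
)
```

`duration` controls per-frame display time in milliseconds, and `loop=0` makes the GIF repeat indefinitely.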

An online demo is also available on the showlab/Show-1 🤗 Space. You can run the Gradio demo locally with:

```bash
python app.py
```

Demo Video

https://github.com/showlab/Show-1/assets/55792387/32242135-25a5-4757-b494-91bf314581e8

Citation

If you find our work useful, please cite our paper.

```bibtex
@article{zhang2023show,
  title={Show-1: Marrying Pixel and Latent Diffusion Models for Text-to-Video Generation},
  author={Zhang, David Junhao and Wu, Jay Zhangjie and Liu, Jia-Wei and Zhao, Rui and Ran, Lingmin and Gu, Yuchao and Gao, Difei and Shou, Mike Zheng},
  journal={arXiv preprint arXiv:2309.15818},
  year={2023}
}
```

Commercial Use

We are working with the university (NUS) to determine the exact paperwork needed to approve commercial use requests. In the meantime, to speed up the process, we would like to solicit expressions of interest from the community, and we will later handle these requests with high priority. If you are keen, please email us at mike.zheng.shou@gmail.com and junhao.zhang@u.nus.edu and, if possible, answer the following questions:

Shoutouts