
FreeNoise: Tuning-Free Longer Video Diffusion via Noise Rescheduling

🔥🔥🔥 The LongerCrafter for longer high-quality video generation is now released!

<div align="center"> <p style="font-weight: bold"> ✅ totally <span style="color: red; font-weight: bold">no</span> tuning &nbsp;&nbsp;&nbsp;&nbsp; ✅ less than <span style="color: red; font-weight: bold">20%</span> extra time &nbsp;&nbsp;&nbsp;&nbsp; ✅ support <span style="color: red; font-weight: bold">512</span> frames &nbsp;&nbsp;&nbsp;&nbsp; </p>

<a href='https://arxiv.org/abs/2310.15169'><img src='https://img.shields.io/badge/arXiv-2310.15169-b31b1b.svg'></a> &nbsp;&nbsp; <a href='http://haonanqiu.com/projects/FreeNoise.html'><img src='https://img.shields.io/badge/Project-Page-Green'></a> &nbsp;&nbsp; Hugging Face Spaces &nbsp;&nbsp; Replicate

Haonan Qiu, Menghan Xia*, Yong Zhang, Yingqing He, <br> Xintao Wang, Ying Shan, and Ziwei Liu* <br><br> (* corresponding author)

From Tencent AI Lab and Nanyang Technological University.

<img src=assets/t2v/hd01.gif>

<p>Input: "A chihuahua in astronaut suit floating in space, cinematic lighting, glow effect"; <br> Resolution: 1024 x 576; Frames: 64.</p> <img src=assets/t2v/hd02.gif> <p>Input: "Campfire at night in a snowy forest with starry sky in the background"; <br> Resolution: 1024 x 576; Frames: 64.</p> </div>

🔆 Introduction

🤗🤗🤗 LongerCrafter (FreeNoise) is a tuning-free and time-efficient paradigm for longer video generation based on pretrained video diffusion models.

1. Longer Single-Prompt Text-to-video Generation

<div align="center"> <img src=assets/t2v/sp512.gif> <p>Longer single-prompt results. Resolution: 256 x 256; Frames: 512. (Compressed)</p> </div>

2. Longer Multi-Prompt Text-to-video Generation

<div align="center"> <img src=assets/t2v/mp256.gif> <p>Longer multi-prompt results. Resolution: 256 x 256; Frames: 256. (Compressed)</p> </div>

📝 Changelog

🧰 Models

| Model | Resolution | Checkpoint | Description |
|---|---|---|---|
| VideoCrafter (Text2Video) | 576x1024 | Hugging Face | Supports 64 frames on NVIDIA A100 (40GB) |
| VideoCrafter (Text2Video) | 256x256 | Hugging Face | Supports 512 frames on NVIDIA A100 (40GB) |
| VideoCrafter2 (Text2Video) | 320x512 | Hugging Face | Supports 128 frames on NVIDIA A100 (40GB) |

(Reduce the number of frames if you have a smaller GPU, e.g. 256x256 resolution with 64 frames.)

⚙️ Setup

Install Environment via Anaconda (Recommended)

```bash
conda create -n freenoise python=3.8.5
conda activate freenoise
pip install -r requirements.txt
```

💫 Inference

1. Longer Text-to-Video

  1. Download the pretrained T2V model via Hugging Face, and put `model.ckpt` at `checkpoints/base_1024_v1/model.ckpt`.
  2. Run the following command in the terminal:
  ```bash
  sh scripts/run_text2video_freenoise_1024.sh
  ```

2. Longer Multi-Prompt Text-to-Video

  1. Download the pretrained T2V model via Hugging Face, and put `model.ckpt` at `checkpoints/base_256_v1/model.ckpt`.
  2. Run the following command in the terminal:
  ```bash
  sh scripts/run_text2video_freenoise_mp_256.sh
  ```

🧲 Support For Other Models

FreeNoise should work with other similar frameworks. An easy way to test compatibility is to shuffle the noise and check whether a new, similar video is generated (with eta set to 0). If you have any questions about applying FreeNoise to other frameworks, feel free to contact Haonan Qiu.
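The rescheduling idea behind FreeNoise can be illustrated with a minimal sketch: instead of sampling fresh noise for every frame, the initial window of noise frames is reused, with each extension chunk being a temporal shuffle of an earlier chunk, so distant frames stay correlated with the start. The function name `reschedule_noise` and the `window`/`stride` values below are illustrative assumptions, not the repository's actual API:

```python
import numpy as np

def reschedule_noise(base_noise, total_frames, window=16, stride=4, seed=0):
    """Simplified FreeNoise-style noise rescheduling.

    base_noise: array of shape (window, ...) holding the initial noise frames.
    Each new chunk of `stride` frames reuses the noise from `window`
    frames back, shuffled along the time axis.
    """
    rng = np.random.default_rng(seed)
    frames = list(base_noise)
    while len(frames) < total_frames:
        # take the chunk starting `window` frames before the current end
        chunk = [frames[len(frames) - window + i] for i in range(stride)]
        rng.shuffle(chunk)  # local temporal shuffle keeps the noise set intact
        frames.extend(chunk)
    return np.stack(frames[:total_frames])

# Example: extend 16 latent noise frames (4 channels, 8x8 latents) to 64.
base = np.random.default_rng(1).standard_normal((16, 4, 8, 8))
long_noise = reschedule_noise(base, total_frames=64)
print(long_noise.shape)  # (64, 4, 8, 8)
```

Because every extended frame is a copy of one of the initial noise frames, the long sequence shares the same noise statistics as the first window, which is what lets a pretrained model denoise it without tuning.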

Current official implementation: FreeNoise-VideoCrafter, FreeNoise-AnimateDiff, FreeNoise-LaVie

👨‍👩‍👧‍👦 Crafter Family

VideoCrafter: Framework for high-quality video generation.

ScaleCrafter: Tuning-free method for high-resolution image/video generation.

TaleCrafter: An interactive story visualization tool that supports multiple characters.

😉 Citation

```bibtex
@misc{qiu2023freenoise,
      title={FreeNoise: Tuning-Free Longer Video Diffusion via Noise Rescheduling},
      author={Haonan Qiu and Menghan Xia and Yong Zhang and Yingqing He and Xintao Wang and Ying Shan and Ziwei Liu},
      year={2023},
      eprint={2310.15169},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

📢 Disclaimer

We developed this repository for RESEARCH purposes, so it may only be used for personal, research, or other non-commercial purposes.