📷 EasyAnimate | An End-to-End Solution for High-Resolution and Long Video Generation

😊 EasyAnimate is an end-to-end solution for generating high-resolution, long videos. It supports training transformer-based diffusion generators, training VAEs for processing long videos, and preprocessing metadata.

😊 Based on a Sora-like structure and DiT, we use a transformer as the diffuser for video generation. EasyAnimate is built on the motion module, U-ViT, and Slice VAE. In the future, we will explore more training schemes to improve the results.

😊 Welcome!

Arxiv Page Project Page Modelscope Studio Hugging Face Spaces Discord Page

English | 简体中文

Introduction

EasyAnimate is a pipeline based on the transformer architecture that can be used to generate AI photos and videos, and to train baseline and LoRA models for the Diffusion Transformer. We support making predictions directly from the pre-trained EasyAnimate model to generate videos at various resolutions, about 6 seconds long at 24 fps (1 to 144 frames; longer videos will be supported in the future). Users can also train their own baseline and LoRA models to perform certain style transformations.

We support quick launches from different platforms; refer to Quick Start.

What's New:

Function:

These are our generated results (see the GALLERY; click the image below to watch the video):

Watch the video

Our UI is as follows: ui

Quick Start

1. Cloud usage: AliyunDSW/Docker

a. From AliyunDSW

DSW offers free GPU time, which each user can apply for once; it is valid for 3 months after application.

Aliyun provides free GPU time in Free Tier; claim it and use it in Aliyun PAI-DSW to start EasyAnimate within 5 minutes!

DSW Notebook

b. From ComfyUI

Our ComfyUI workflow is shown below; please refer to the ComfyUI README for details. workflow graph

c. From Docker

If you are using Docker, please make sure that the GPU driver and the CUDA environment are correctly installed on your machine.

Then execute the following commands:

EasyAnimateV4:

# pull image
docker pull mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:easyanimate

# enter image
docker run -it -p 7860:7860 --network host --gpus all --security-opt seccomp:unconfined --shm-size 200g mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:easyanimate

# clone code
git clone https://github.com/aigc-apps/EasyAnimate.git

# enter EasyAnimate's dir
cd EasyAnimate

# download weights
mkdir models/Diffusion_Transformer
mkdir models/Motion_Module
mkdir models/Personalized_Model

wget https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Diffusion_Transformer/EasyAnimateV4-XL-2-InP.tar.gz -O models/Diffusion_Transformer/EasyAnimateV4-XL-2-InP.tar.gz

cd models/Diffusion_Transformer/
tar -zxvf EasyAnimateV4-XL-2-InP.tar.gz
cd ../../
<details> <summary>(Obsolete) EasyAnimateV3:</summary>
# pull image
docker pull mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:easyanimate

# enter image
docker run -it -p 7860:7860 --network host --gpus all --security-opt seccomp:unconfined --shm-size 200g mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:easyanimate

# clone code
git clone https://github.com/aigc-apps/EasyAnimate.git

# enter EasyAnimate's dir
cd EasyAnimate

# download weights
mkdir models/Diffusion_Transformer
mkdir models/Motion_Module
mkdir models/Personalized_Model

wget https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Diffusion_Transformer/EasyAnimateV3-XL-2-InP-512x512.tar -O models/Diffusion_Transformer/EasyAnimateV3-XL-2-InP-512x512.tar

cd models/Diffusion_Transformer/
tar -xvf EasyAnimateV3-XL-2-InP-512x512.tar
cd ../../
</details> <details> <summary>(Obsolete) EasyAnimateV2:</summary>
# pull image
docker pull mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:easyanimate

# enter image
docker run -it -p 7860:7860 --network host --gpus all --security-opt seccomp:unconfined --shm-size 200g mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:easyanimate

# clone code
git clone https://github.com/aigc-apps/EasyAnimate.git

# enter EasyAnimate's dir
cd EasyAnimate

# download weights
mkdir models/Diffusion_Transformer
mkdir models/Motion_Module
mkdir models/Personalized_Model

wget https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Diffusion_Transformer/EasyAnimateV2-XL-2-512x512.tar -O models/Diffusion_Transformer/EasyAnimateV2-XL-2-512x512.tar

cd models/Diffusion_Transformer/
tar -xvf EasyAnimateV2-XL-2-512x512.tar
cd ../../
</details> <details> <summary>(Obsolete) EasyAnimateV1:</summary>
# pull image
docker pull mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:easyanimate

# enter image
docker run -it -p 7860:7860 --network host --gpus all --security-opt seccomp:unconfined --shm-size 200g mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:easyanimate

# clone code
git clone https://github.com/aigc-apps/EasyAnimate.git

# enter EasyAnimate's dir
cd EasyAnimate

# download weights
mkdir models/Diffusion_Transformer
mkdir models/Motion_Module
mkdir models/Personalized_Model

wget https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Diffusion_Transformer/PixArt-XL-2-512x512.tar -O models/Diffusion_Transformer/PixArt-XL-2-512x512.tar
wget https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Motion_Module/easyanimate_v1_mm.safetensors -O models/Motion_Module/easyanimate_v1_mm.safetensors
wget https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Personalized_Model/easyanimate_portrait.safetensors -O models/Personalized_Model/easyanimate_portrait.safetensors
wget https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Personalized_Model/easyanimate_portrait_lora.safetensors -O models/Personalized_Model/easyanimate_portrait_lora.safetensors

cd models/Diffusion_Transformer/
tar -xvf PixArt-XL-2-512x512.tar
cd ../../
</details>

2. Local install: Environment Check/Downloading/Installation

a. Environment Check

We have verified EasyAnimate execution on the following environment:

The detailed environment for Windows:

The detailed environment for Linux:

We need about 60 GB of available disk space to save the weights, so please check!
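
You can quickly verify the available space with:

# show free disk space for the filesystem containing the current directory
df -h .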

The video sizes that EasyAnimateV4 can generate with different amounts of GPU memory are:

| GPU memory | 384x672x72 | 384x672x144 | 576x1008x72 | 576x1008x144 | 768x1344x72 | 768x1344x144 | 960x1680x96 |
|--|--|--|--|--|--|--|--|
| 12GB | ⭕️ | ⭕️ | ⭕️ | ⭕️ | ❌ | ❌ | ❌ |
| 16GB | ✅ | ✅ | ⭕️ | ⭕️ | ⭕️ | ❌ | ❌ |
| 24GB | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ |
| 40GB | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| 80GB | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |

✅ indicates it can run with low_gpu_memory_mode=False, ⭕️ indicates it can run with low_gpu_memory_mode=True, and ❌ indicates it cannot run. Note that running with low_gpu_memory_mode=True will be slower.

Some graphics cards, such as the 2080 Ti and V100, do not support torch.bfloat16. If you are using one of these cards, you will need to change weight_dtype to torch.float16 in both app.py and the predict files to run the program.
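
As a rough sketch, the switch can be applied with a one-line substitution (assuming the text-to-video predict script is named predict_t2v.py; adjust the file list to your checkout):

# replace every bfloat16 weight dtype with float16 for GPUs without bf16 support
sed -i 's/torch\.bfloat16/torch.float16/g' app.py predict_t2v.py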

The generation time of different GPUs at 25 steps is as follows:

| GPU | 384x672x72 | 384x672x144 | 576x1008x72 | 576x1008x144 | 768x1344x72 | 768x1344x144 | 960x1680x96 |
|--|--|--|--|--|--|--|--|
| A10 24GB | ~180s | ~370s | ~480s | ~1800s (⭕️) | ~1000s | ❌ | ❌ |
| A100 80GB | ~60s | ~180s | ~200s | ~600s | ~500s | ~1800s | ~1800s |

(โญ•๏ธ) indicates that it can run with low_gpu_memory_mode=True, but at a slower speed, while โŒ indicates that it cannot run.

<details> <summary>(Obsolete) EasyAnimateV3:</summary>

The video sizes that EasyAnimateV3 can generate with different amounts of GPU memory are:

| GPU memory | 384x672x72 | 384x672x144 | 576x1008x72 | 576x1008x144 | 720x1280x72 | 720x1280x144 |
|--|--|--|--|--|--|--|
| 12GB | ⭕️ | ⭕️ | ⭕️ | ⭕️ | ❌ | ❌ |
| 16GB | ✅ | ✅ | ⭕️ | ⭕️ | ⭕️ | ❌ |
| 24GB | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
| 40GB | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| 80GB | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
</details>

b. Weights

The weights should be placed along the specified paths:

EasyAnimateV4:

📦 models/
├── 📂 Diffusion_Transformer/
│   └── 📂 EasyAnimateV4-XL-2-InP/
├── 📂 Personalized_Model/
│   └── your trained transformer model / your trained LoRA model (for UI load)
<details> <summary>(Obsolete) EasyAnimateV3:</summary>
📦 models/
├── 📂 Diffusion_Transformer/
│   └── 📂 EasyAnimateV3-XL-2-InP-512x512/
├── 📂 Personalized_Model/
│   └── your trained transformer model / your trained LoRA model (for UI load)
</details> <details> <summary>(Obsolete) EasyAnimateV2:</summary>
📦 models/
├── 📂 Diffusion_Transformer/
│   └── 📂 EasyAnimateV2-XL-2-512x512/
├── 📂 Personalized_Model/
│   └── your trained transformer model / your trained LoRA model (for UI load)
</details> <details> <summary>(Obsolete) EasyAnimateV1:</summary>
📦 models/
├── 📂 Diffusion_Transformer/
│   └── 📂 PixArt-XL-2-512x512/
├── 📂 Motion_Module/
│   └── 📄 easyanimate_v1_mm.safetensors
├── 📂 Personalized_Model/
│   ├── 📄 easyanimate_portrait.safetensors
│   └── 📄 easyanimate_portrait_lora.safetensors
</details>

How to use

<h3 id="video-gen">1. Inference </h3>

a. Using Python Code
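
As a minimal sketch (assuming the text-to-video predict script is named predict_t2v.py and the weights are placed as in section 2.b), edit the prompt and sampling parameters inside the script, then run it:

# generate a video with the settings configured inside the predict script
python predict_t2v.py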

b. Using webui
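
A minimal sketch, assuming the Gradio entry point is app.py (the file referenced in the environment notes above); the UI serves on port 7860, the port mapped in the Docker commands:

# start the web UI, then open http://localhost:7860 in a browser
python app.py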

c. From ComfyUI

Please refer to ComfyUI README for details.

2. Model Training

A complete EasyAnimate training pipeline should include data preprocessing, Video VAE training, and Video DiT training. Among these, Video VAE training is optional because we have already provided a pre-trained Video VAE.

<h4 id="data-preprocess">a. data preprocessing</h4>

We have provided a simple demo of training a LoRA model on image data; see the wiki for details.

A complete data preprocessing pipeline for long-video segmentation, cleaning, and captioning is described in the README in the video captions section.

If you want to train a text-to-image and video generation model, you need to arrange the dataset in the following format:

📦 project/
├── 📂 datasets/
│   ├── 📂 internal_datasets/
│       ├── 📂 train/
│       │   ├── 📄 00000001.mp4
│       │   ├── 📄 00000002.jpg
│       │   └── 📄 .....
│       └── 📄 json_of_internal_datasets.json

The json_of_internal_datasets.json is a standard JSON file. The file_path in the JSON can be set as a relative path, as shown below:

[
    {
      "file_path": "train/00000001.mp4",
      "text": "A group of young men in suits and sunglasses are walking down a city street.",
      "type": "video"
    },
    {
      "file_path": "train/00000002.jpg",
      "text": "A group of young men in suits and sunglasses are walking down a city street.",
      "type": "image"
    },
    .....
]

You can also set the path as an absolute path, as follows:

[
    {
      "file_path": "/mnt/data/videos/00000001.mp4",
      "text": "A group of young men in suits and sunglasses are walking down a city street.",
      "type": "video"
    },
    {
      "file_path": "/mnt/data/train/00000001.jpg",
      "text": "A group of young men in suits and sunglasses are walking down a city street.",
      "type": "image"
    },
    .....
]
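
A quick sanity check that the metadata file parses and that every entry carries the expected keys (a sketch; adjust the path to your dataset):

# validate the dataset metadata JSON before launching training
python -c "import json; d = json.load(open('datasets/internal_datasets/json_of_internal_datasets.json')); assert all({'file_path', 'text', 'type'} <= set(e) for e in d); print(len(d), 'entries OK')"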
<h4 id="vae-train">b. Video VAE training (optional)</h4>

Video VAE training is optional, since we have already provided pre-trained Video VAEs. If you want to train a video VAE, refer to the README in the video VAE section.

<h4 id="dit-train">c. Video DiT training </h4>

If the data format was set to relative paths during data preprocessing, set scripts/train.sh as follows:

export DATASET_NAME="datasets/internal_datasets/"
export DATASET_META_NAME="datasets/internal_datasets/json_of_internal_datasets.json"

If the data format was set to absolute paths during data preprocessing, set scripts/train.sh as follows:

export DATASET_NAME=""
export DATASET_META_NAME="/mnt/data/json_of_internal_datasets.json"

Then, run scripts/train.sh:

sh scripts/train.sh

For details on setting some of the parameters, please refer to Readme Train and Readme Lora.
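
LoRA fine-tuning is launched the same way through a separate script (the name scripts/train_lora.sh is an assumption; confirm it against Readme Lora):

# launch LoRA training with the same DATASET_* variables exported above
sh scripts/train_lora.sh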

<details> <summary>(Obsolete) EasyAnimateV1:</summary> If you want to train EasyAnimateV1, please switch to the git branch v1. </details>

Model zoo

EasyAnimateV4:

We attempted to implement EasyAnimate using 3D full attention, but this structure performed only moderately with the Slice VAE and incurred considerable training costs. As a result, version V4 did not significantly surpass version V3. Due to limited resources, we are migrating EasyAnimate to a retrained 16-channel MagViT to pursue better model performance.

| Name | Type | Storage Space | Url | Hugging Face | Description |
|--|--|--|--|--|--|
| EasyAnimateV4-XL-2-InP.tar.gz | EasyAnimateV4 | Before extraction: 8.9 GB / After extraction: 14.0 GB | Download | 🤗Link | Our official image-to-video model can predict videos at multiple resolutions (512, 768, 1024, 1280) and was trained on 144 frames at 24 frames per second. |

EasyAnimateV3:

| Name | Type | Storage Space | Url | Hugging Face | Description |
|--|--|--|--|--|--|
| EasyAnimateV3-XL-2-InP-512x512.tar | EasyAnimateV3 | 18.2GB | Download | 🤗Link | EasyAnimateV3 official weights for 512x512 text-and-image-to-video generation. Trained with 144 frames at 24 fps |
| EasyAnimateV3-XL-2-InP-768x768.tar | EasyAnimateV3 | 18.2GB | Download | 🤗Link | EasyAnimateV3 official weights for 768x768 text-and-image-to-video generation. Trained with 144 frames at 24 fps |
| EasyAnimateV3-XL-2-InP-960x960.tar | EasyAnimateV3 | 18.2GB | Download | 🤗Link | EasyAnimateV3 official weights for 960x960 text-and-image-to-video generation. Trained with 144 frames at 24 fps |
<details> <summary>(Obsolete) EasyAnimateV2:</summary>

| Name | Type | Storage Space | Url | Hugging Face | Description |
|--|--|--|--|--|--|
| EasyAnimateV2-XL-2-512x512.tar | EasyAnimateV2 | 16.2GB | [Download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Diffusion_Transformer/EasyAnimateV2-XL-2-512x512.tar) | [🤗Link](https://huggingface.co/alibaba-pai/EasyAnimateV2-XL-2-512x512) | EasyAnimateV2 official weights for 512x512 resolution. Trained with 144 frames at 24 fps |
| EasyAnimateV2-XL-2-768x768.tar | EasyAnimateV2 | 16.2GB | [Download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Diffusion_Transformer/EasyAnimateV2-XL-2-768x768.tar) | [🤗Link](https://huggingface.co/alibaba-pai/EasyAnimateV2-XL-2-768x768) | EasyAnimateV2 official weights for 768x768 resolution. Trained with 144 frames at 24 fps |
| easyanimatev2_minimalism_lora.safetensors | Lora of Pixart | 485.1MB | [Download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Personalized_Model/easyanimatev2_minimalism_lora.safetensors) | - | A LoRA trained on a special type of images. Images can be downloaded from [Url](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/v2/Minimalism.zip). |

</details> <details> <summary>(Obsolete) EasyAnimateV1:</summary>

1ใ€Motion Weights

NameTypeStorage SpaceUrlDescription
easyanimate_v1_mm.safetensorsMotion Module4.1GBdownloadTraining with 80 frames and fps 12

2ใ€Other Weights

NameTypeStorage SpaceUrlDescription
PixArt-XL-2-512x512.tarPixart11.4GBdownloadPixart-Alpha official weights
easyanimate_portrait.safetensorsCheckpoint of Pixart2.3GBdownloadTraining with internal portrait datasets
easyanimate_portrait_lora.safetensorsLora of Pixart654.0MBdownloadTraining with internal portrait datasets
</details>

Algorithm Details

1. Data Preprocessing

Video Cut

For long video cutting, EasyAnimate uses PySceneDetect to identify scene changes within the video and cuts it at those points, based on certain thresholds, to keep each segment thematically consistent. After cutting, we keep only segments between 3 and 10 seconds long for model training.
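
For illustration, a comparable cut can be produced with PySceneDetect's command-line interface (this sketch omits the project's exact thresholds and the 3-10 second length filter, and splitting requires ffmpeg to be installed):

# detect content-based scene changes and split the source video into one clip per scene
scenedetect -i long_video.mp4 detect-content split-video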

Video Cleaning and Description

Following SVD's data preparation process, EasyAnimate provides a simple yet effective data processing pipeline for high-quality data filtering and labeling. It also supports distributed processing to accelerate data preprocessing. The overall process is as follows:

2. Model Architecture

EasyAnimateV4:

We used Hunyuan-DiT as the underlying framework, and modified the VAE and DiT model structures on this basis to better support video generation. Please refer to the original resource page and follow the corresponding license.

The overall structure of EasyAnimateV4 is as follows:

EasyAnimateV4 comprises two text encoders, a Video VAE (video encoder and decoder), and a Diffusion Transformer (DiT). The mT5 encoder and a multi-modal CLIP are used as the text encoders. EasyAnimateV4 employs 3D global attention for video reconstruction, eliminating the separation between motion modules and base models seen in V3; the global attention ensures coherent frame generation and seamless motion transitions.

The pipeline structure of EasyAnimateV4 is as follows:

<img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/framework_v4.jpg" alt="ui" style="zoom:50%;" />

The foundational model structure of EasyAnimateV4 is as follows:

<img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/pipeline_v4.jpg" alt="ui" style="zoom:50%;" />

The Slice VAE exhibits some stuttering during scene changes because the later latents cannot fully access information from the preceding blocks during decoding.

Referring to MagViT, we cache the results of each block's convolution. Except for the initial video block, each subsequent video block's convolution accesses only the features of the preceding video blocks, not the following ones. After this modification, the decoder's reconstructions are smoother than those of the original Slice VAE.

<details> <summary>(Obsolete) EasyAnimateV3:</summary> We have adopted [PixArt-alpha](https://github.com/PixArt-alpha/PixArt-alpha) as the base model and modified the VAE and DiT model structures on this basis to better support video generation. The overall structure of EasyAnimate is as follows:

The diagram below outlines the pipeline of EasyAnimate. It includes the Text Encoder, Video VAE (video encoder and decoder), and Diffusion Transformer (DiT). The T5 Encoder is used as the text encoder. Other components are detailed in the sections below.

<img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/pipeline_v3.jpg" alt="ui" style="zoom:50%;" />

We have expanded the DiT framework, originally designed for 2D image synthesis, to accommodate the complexities of 3D video generation by incorporating a special motion module block named the Hybrid Motion Module.

In the motion module, we employ a combination of temporal attention and global attention to ensure the generation of coherent frames and seamless motion transitions.

Additionally, referencing U-ViT, we introduce a skip-connection structure into EasyAnimate to further optimize deeper features by incorporating shallow features. A fully connected layer is zero-initialized for each skip connection, allowing it to be applied as a plug-in module to previously trained, well-performing DiTs.

Moreover, we propose Slice VAE, which addresses the memory difficulties MagViT encounters when dealing with long, large videos, while also achieving greater compression in the temporal dimension during video encoding and decoding compared to MagViT.

For more details, please refer to arxiv.

</details>

TODO List

Contact Us

  1. Use DingTalk to search group 77450006752 or scan the QR code to join.
  2. Scan the image to join the WeChat group; if the QR code has expired, first add this student as a friend, and they will invite you.
<img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/group/dd.png" alt="ding group" width="30%"/> <img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/group/wechat.jpg" alt="Wechat group" width="30%"/> <img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/group/person.jpg" alt="Person" width="30%"/>

Reference

License

This project is licensed under the Apache License (Version 2.0).