
<div align="center"> <img src="https://github.com/PKU-YuanGroup/ConsisID/blob/main/asserts/ConsisID_logo.png?raw=true" width="150px"> </div> <h2 align="center"> <a href="https://arxiv.org/abs/2411.17440">Identity-Preserving Text-to-Video Generation by Frequency Decomposition</a></h2> <h5 align="center"> If you like our project, please give us a star ⭐ on GitHub for the latest update. </h5>


<div align="center"> This repository is the official implementation of ConsisID, a tuning-free DiT-based controllable IPT2V model that keeps human identity consistent in the generated video. The approach draws inspiration from previous studies on frequency analysis of vision/diffusion transformers. </div> <br> <details open><summary>💡 We also have other video generation projects that may interest you ✨.</summary><p>

Open-Sora Plan: Open-Source Large Video Generation Model <br> Bin Lin, Yunyang Ge, Xinhua Cheng, et al. <br>

MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators <br> Shenghai Yuan, Jinfa Huang, Yujun Shi, et al. <br>

ChronoMagic-Bench: A Benchmark for Metamorphic Evaluation of Text-to-Time-lapse Video Generation <br> Shenghai Yuan, Jinfa Huang, Yongqi Xu, et al. <br>

</p></details>

📣 News

😍 Gallery

Identity-Preserving Text-to-Video Generation.

Demo video of ConsisID; you can click <a href="https://github.com/SHYuanBest/shyuanbest_media/raw/refs/heads/main/ConsisID/showcase_videos.mp4">here</a> to watch it.

🤗 Demo

Diffusers API

import torch
from diffusers import ConsisIDPipeline
from diffusers.pipelines.consisid.consisid_utils import prepare_face_models, process_face_embeddings_infer
from diffusers.utils import export_to_video
from huggingface_hub import snapshot_download

# Download the ConsisID checkpoint and prepare the face models
snapshot_download(repo_id="BestWishYsh/ConsisID-preview", local_dir="BestWishYsh/ConsisID-preview")
face_helper_1, face_helper_2, face_clip_model, face_main_model, eva_transform_mean, eva_transform_std = (
    prepare_face_models("BestWishYsh/ConsisID-preview", device="cuda", dtype=torch.bfloat16)
)

# Load the pipeline in bfloat16 and move it to the GPU
pipe = ConsisIDPipeline.from_pretrained("BestWishYsh/ConsisID-preview", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# ConsisID works well with long and well-described prompts. Make sure the face in the image is clearly visible (e.g., preferably half-body or full-body).
prompt = "The video captures a boy walking along a city street, filmed in black and white on a classic 35mm camera. His expression is thoughtful, his brow slightly furrowed as if he's lost in contemplation. The film grain adds a textured, timeless quality to the image, evoking a sense of nostalgia. Around him, the cityscape is filled with vintage buildings, cobblestone sidewalks, and softly blurred figures passing by, their outlines faint and indistinct. Streetlights cast a gentle glow, while shadows play across the boy's path, adding depth to the scene. The lighting highlights the boy's subtle smile, hinting at a fleeting moment of curiosity. The overall cinematic atmosphere, complete with classic film still aesthetics and dramatic contrasts, gives the scene an evocative and introspective feel."
image = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/refs%2Fpr%2F406/diffusers/consisid/consisid_image_3.png?download=true"

# Extract identity embeddings and facial keypoints from the reference image
id_cond, id_vit_hidden, image, face_kps = process_face_embeddings_infer(
    face_helper_1,
    face_clip_model,
    face_helper_2,
    eva_transform_mean,
    eva_transform_std,
    face_main_model,
    "cuda",
    torch.bfloat16,
    image,
    is_align_face=True,
)

# Generate the identity-preserving video
video = pipe(
    image=image,
    prompt=prompt,
    num_inference_steps=50,
    guidance_scale=6.0,
    use_dynamic_cfg=False,
    id_vit_hidden=id_vit_hidden,
    id_cond=id_cond,
    kps_cond=face_kps,
    generator=torch.Generator("cuda").manual_seed(42),
)
export_to_video(video.frames[0], "output.mp4", fps=8)

Gradio Web UI

We highly recommend trying our web demo with the following command, which incorporates all features currently supported by ConsisID. We also provide an online demo on Hugging Face Spaces.

python app.py

CLI Inference

python infer.py --model_path BestWishYsh/ConsisID-preview

Warning: even with the same seed and prompt, the results will differ across machines.
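Within a single machine you can at least reduce run-to-run variation by fixing the seeds and disabling nondeterministic kernels. A minimal sketch using standard PyTorch settings (these are not flags exposed by infer.py):

# Reduce run-to-run variation on one machine; results across different machines will still differ.
import random

import numpy as np
import torch

seed = 42
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False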

Prompt Refiner

ConsisID has high requirements for prompt quality. You can use GPT-4o to refine the input text prompt; an example is shown below (original prompt: "a man is playing guitar.").

a man is playing guitar.

Change the sentence above to something like this (add some facial changes, even if they are minor. Don't make the sentence too long): 

The video features a man standing next to an airplane, engaged in a conversation on his cell phone. He is wearing sunglasses and a black top, and he appears to be talking seriously. The airplane has a green stripe running along its side, and there is a large engine visible behind him. The man seems to be standing near the entrance of the airplane, possibly preparing to board or just having disembarked. The setting suggests that he might be at an airport or a private airfield. The overall atmosphere of the video is professional and focused, with the man's attire and the presence of the airplane indicating a business or travel context.
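If you want to script this refinement step, a minimal sketch using the OpenAI Python client is shown below; the model name, instruction text, and helper function are illustrative assumptions, not part of the ConsisID codebase.

# Hypothetical helper: refine a short prompt with GPT-4o before passing it to ConsisID.
# Requires the openai package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def refine_prompt(short_prompt: str) -> str:
    instruction = (
        "Rewrite the following video prompt into a longer, well-described sentence. "
        "Add some facial changes, even if they are minor. Don't make the sentence too long."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": short_prompt},
        ],
    )
    return response.choices[0].message.content

print(refine_prompt("a man is playing guitar."))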

Some sample prompts are available here.

GPU Memory Optimization

ConsisID requires about 44 GB of GPU memory to decode 49 frames (6 seconds of video at 8 FPS) at an output resolution of 720x480 (W x H), which makes it impossible to run on consumer GPUs or the free-tier T4 Colab. The following memory optimizations can be used to reduce the memory footprint. For replication, you can refer to this script.

| Feature (applied cumulatively, on top of the previous row) | Max Memory Allocated | Max Memory Reserved |
| :-- | :-- | :-- |
| - | 37 GB | 44 GB |
| enable_model_cpu_offload | 22 GB | 25 GB |
| enable_sequential_cpu_offload | 16 GB | 22 GB |
| vae.enable_slicing | 16 GB | 22 GB |
| vae.enable_tiling | 5 GB | 7 GB |
# Turn these on if you don't have multiple GPUs or a GPU with enough memory (such as an H100)
pipe.enable_model_cpu_offload()
pipe.enable_sequential_cpu_offload()
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()

Warning: these optimizations increase inference time and may also reduce output quality.
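To check the effect of each optimization on your own hardware, you can read PyTorch's peak-memory counters around a generation run. A minimal sketch (illustrative only, not the replication script linked above):

# Report peak GPU memory after a ConsisID generation run.
import torch

torch.cuda.reset_peak_memory_stats()

# ... run the pipeline here, e.g. the Diffusers API example above ...

max_allocated = torch.cuda.max_memory_allocated() / 1024**3
max_reserved = torch.cuda.max_memory_reserved() / 1024**3
print(f"Max memory allocated: {max_allocated:.2f} GB")
print(f"Max memory reserved:  {max_reserved:.2f} GB")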

⚙️ Requirements and Installation

We recommend the following requirements.

Environment

# 0. Clone the repo
git clone --depth=1 https://github.com/PKU-YuanGroup/ConsisID.git
cd ConsisID

# 1. Create conda environment
conda create -n consisid python=3.11.0
conda activate consisid

# 2. Install PyTorch and other dependencies using conda
# CUDA 11.8
conda install pytorch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 pytorch-cuda=11.8 -c pytorch -c nvidia
# CUDA 12.1
conda install pytorch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 pytorch-cuda=12.1 -c pytorch -c nvidia

# 3. Install pip dependencies
pip install -r requirements.txt

Download ConsisID

The weights are available at 🤗 HuggingFace and 🟣 WiseModel, and will be downloaded automatically when running app.py or infer.py, or you can download them with the following commands.

# way 1
# if you are in china mainland, run this first: export HF_ENDPOINT=https://hf-mirror.com
cd util
python download_weights.py

# way 2
# if you are in china mainland, run this first: export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download --repo-type model \
BestWishYsh/ConsisID-preview \
--local-dir ckpts

# way 3
git lfs install
git clone https://www.wisemodel.cn/SHYuanBest/ConsisID-Preview.git

Once ready, the weights will be organized in this format:

📦 ckpts/
├── 📂 data_process/
├── 📂 face_encoder/
├── 📂 scheduler/
├── 📂 text_encoder/
├── 📂 tokenizer/
├── 📂 transformer/
├── 📂 vae/
├── 📄 configuration.json
├── 📄 model_index.json

🗝️ Training

Data preprocessing

Please refer to this guide for how to obtain the training data required by ConsisID. If you want to train a text-to-image-and-video generation model, you need to arrange the dataset in the following format (a small layout check is sketched after the tree below):

📦 datasets/
├── 📂 captions/
│   ├── 📄 dataname_1.json
│   ├── 📄 dataname_2.json
├── 📂 dataname_1/
│   ├── 📂 refine_bbox_jsons/
│   ├── 📂 track_masks_data/
│   ├── 📂 videos/
├── 📂 dataname_2/
│   ├── 📂 refine_bbox_jsons/
│   ├── 📂 track_masks_data/
│   ├── 📂 videos/
├── ...
├── 📄 total_train_data.txt
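Before training, you may want to confirm the folders match this layout. A minimal sketch of such a check (the helper below is a hypothetical convenience, not part of the ConsisID codebase; it only verifies the structure shown above):

# Hypothetical check: verify each dataset folder matches the expected layout.
from pathlib import Path

def check_dataset_layout(root: str = "datasets") -> None:
    root_path = Path(root)
    if not (root_path / "total_train_data.txt").exists():
        print("[warn] missing total_train_data.txt")
    for caption_json in sorted((root_path / "captions").glob("*.json")):
        data_dir = root_path / caption_json.stem
        for sub in ("refine_bbox_jsons", "track_masks_data", "videos"):
            if not (data_dir / sub).is_dir():
                print(f"[warn] {data_dir / sub} is missing")
    print("layout check finished")

check_dataset_layout("datasets")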

Video DiT training

First, set the hyperparameters:

Then, run the following bash scripts to start training:

# For single rank
bash train_single_rank.sh
# For multi rank
bash train_multi_rank.sh

🙌 Friendly Links

We found some plugins created by community developers. Thanks for their efforts:

If you find related work, please let us know.

🐳 Dataset

We release a subset of the data used to train ConsisID. The dataset is available at HuggingFace, or you can download it with the following command. Some samples can be found on our Project Page.

huggingface-cli download --repo-type dataset \
BestWishYsh/ConsisID-preview-Data \
--local-dir BestWishYsh/ConsisID-preview-Data
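Alternatively, a minimal Python sketch that downloads the same dataset with huggingface_hub (equivalent to the CLI command above):

# Download the ConsisID training-data subset via the huggingface_hub Python API.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="BestWishYsh/ConsisID-preview-Data",
    repo_type="dataset",
    local_dir="BestWishYsh/ConsisID-preview-Data",
)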

🛠️ Evaluation

We release the data used for evaluation in ConsisID, which is available at HuggingFace. Please refer to this guide for how to evaluate a customized model.

👍 Acknowledgement

🔒 License

✏️ Citation

If you find our paper and code useful in your research, please consider giving us a star :star: and a citation :pencil:.

@article{yuan2024identity,
  title={Identity-Preserving Text-to-Video Generation by Frequency Decomposition},
  author={Yuan, Shenghai and Huang, Jinfa and He, Xianyi and Ge, Yunyuan and Shi, Yujun and Chen, Liuhan and Luo, Jiebo and Yuan, Li},
  journal={arXiv preprint arXiv:2411.17440},
  year={2024}
}

🀝 Contributors

<a href="https://github.com/PKU-YuanGroup/ConsisID/graphs/contributors"> <img src="https://contrib.rocks/image?repo=PKU-YuanGroup/ConsisID&anon=true" /> </a>