DiffSynth Studio

Documentation: https://diffsynth-studio.readthedocs.io/zh-cn/latest/index.html

Introduction

DiffSynth Studio is a diffusion engine. We have restructured architectures, including the text encoder, UNet, and VAE, among others, maintaining compatibility with models from the open-source community while enhancing computational performance. We provide many interesting features. Enjoy the magic of diffusion models!

To date, DiffSynth Studio supports the following models:

News

Installation

Install from source code (recommended):

git clone https://github.com/modelscope/DiffSynth-Studio.git
cd DiffSynth-Studio
pip install -e .

Or install from PyPI:

pip install diffsynth

Usage (in Python code)

The Python examples are in the examples directory. We provide an overview here.

Download Models

Download the pre-set models. Model IDs can be found in the config file.

from diffsynth import download_models

download_models(["FLUX.1-dev", "Kolors"])

Download your own models.

from diffsynth.models.downloader import download_from_huggingface, download_from_modelscope

# From ModelScope (recommended)
download_from_modelscope("Kwai-Kolors/Kolors", "vae/diffusion_pytorch_model.fp16.bin", "models/kolors/Kolors/vae")
# From Hugging Face
download_from_huggingface("Kwai-Kolors/Kolors", "vae/diffusion_pytorch_model.fp16.safetensors", "models/kolors/Kolors/vae")

Video Synthesis

Text-to-video using CogVideoX-5B

CogVideoX-5B was released by Zhipu AI. We provide an improved pipeline supporting text-to-video, video editing, self-upscaling, and video interpolation. See examples/video_synthesis.

The video on the left is generated using the original text-to-video pipeline, while the video on the right is the result after editing and frame interpolation.

https://github.com/user-attachments/assets/26b044c1-4a60-44a4-842f-627ff289d006
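Below is a minimal text-to-video sketch following the ModelManager loading pattern used throughout this repository. The pre-set model ID, file paths, and call parameters are assumptions; the scripts in examples/video_synthesis are the authoritative reference.

import torch
from diffsynth import ModelManager, CogVideoPipeline, download_models, save_video

# Download the pre-set CogVideoX-5B weights (model ID assumed; see the config file)
download_models(["CogVideoX-5B"])

# Load the pipeline components; the file paths below are illustrative
model_manager = ModelManager(torch_dtype=torch.bfloat16, device="cuda")
model_manager.load_models([
    "models/CogVideo/CogVideoX-5b/text_encoder",
    "models/CogVideo/CogVideoX-5b/transformer",
    "models/CogVideo/CogVideoX-5b/vae/diffusion_pytorch_model.safetensors",
])

# Run text-to-video and save the result
pipe = CogVideoPipeline.from_model_manager(model_manager)
video = pipe(prompt="an astronaut riding a horse on Mars", num_inference_steps=50)
save_video(video, "video.mp4", fps=8)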

Long Video Synthesis

We trained extended video synthesis models that can generate up to 128 frames. See examples/ExVideo.

https://github.com/modelscope/DiffSynth-Studio/assets/35051019/d97f6aa9-8064-4b5b-9d49-ed6001bb9acc

https://github.com/user-attachments/assets/321ee04b-8c17-479e-8a95-8cbcf21f8d7e
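A rough image-to-video sketch, assuming the SVD-based ExVideo checkpoint and the same ModelManager pattern; the class name, file paths, and parameters below are all assumptions, so check examples/ExVideo for the exact scripts.

import torch
from PIL import Image
from diffsynth import ModelManager, SVDVideoPipeline, save_video

# File paths are placeholders; download the Stable Video Diffusion base
# model and the ExVideo-extended weights first (see examples/ExVideo)
model_manager = ModelManager(torch_dtype=torch.float16, device="cuda")
model_manager.load_models([
    "models/stable_video_diffusion/svd_xt.safetensors",        # base model (assumed path)
    "models/stable_video_diffusion/exvideo_128f.safetensors",  # extended checkpoint (assumed path)
])

# Condition on a still image and generate 128 frames
pipe = SVDVideoPipeline.from_model_manager(model_manager)
image = Image.open("input_image.png")
video = pipe(input_image=image, num_frames=128, num_inference_steps=50)
save_video(video, "video.mp4", fps=30)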

Toon Shading

Render realistic videos in a flat, cartoon-like style and enable video editing features. See examples/Diffutoon.

https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/b54c05c5-d747-4709-be5e-b39af82404dd

https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/20528af5-5100-474a-8cdc-440b9efdd86c

Video Stylization

Video stylization without video models. See examples/diffsynth.

https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/59fb2f7b-8de0-4481-b79f-0c3a7361a1ea

Image Synthesis

Generate high-resolution images by breaking the resolution limitations of diffusion models! See examples/image_synthesis.

LoRA fine-tuning is supported in examples/train.

(Sample 1024×1024 images generated with FLUX, Stable Diffusion 3, Kolors, Hunyuan-DiT, Stable Diffusion, and Stable Diffusion XL.)
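As an illustration, a minimal FLUX.1-dev text-to-image sketch might look like the following; the file paths and call parameters are assumptions, and examples/image_synthesis has the exact scripts.

import torch
from diffsynth import ModelManager, FluxImagePipeline, download_models

# Download the pre-set FLUX.1-dev weights (model ID from the config file)
download_models(["FLUX.1-dev"])

# Load the components; the file paths below are illustrative
model_manager = ModelManager(torch_dtype=torch.bfloat16, device="cuda")
model_manager.load_models([
    "models/FLUX/FLUX.1-dev/text_encoder/model.safetensors",
    "models/FLUX/FLUX.1-dev/text_encoder_2",
    "models/FLUX/FLUX.1-dev/ae.safetensors",
    "models/FLUX/FLUX.1-dev/flux1-dev.safetensors",
])

# Generate a 1024x1024 image
pipe = FluxImagePipeline.from_model_manager(model_manager)
torch.manual_seed(0)  # seed the global RNG for reproducibility
image = pipe(prompt="a cat sitting on a windowsill at golden hour", height=1024, width=1024)
image.save("image.png")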

Usage (in WebUI)

Create stunning images using the painter, with assistance from AI!

https://github.com/user-attachments/assets/95265d21-cdd6-4125-a7cb-9fbcf6ceb7b0

This video is not rendered in real time.

Before launching the WebUI, please download models to the ./models folder. See here.

For the Gradio-based WebUI:

pip install gradio
python apps/gradio/DiffSynth_Studio.py


For the Streamlit-based WebUI:

pip install streamlit streamlit-drawable-canvas
python -m streamlit run apps/streamlit/DiffSynth_Studio.py

https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/93085557-73f3-4eee-a205-9829591ef954