LivePortrait for Stable Diffusion WebUI

This extension for AUTOMATIC1111's Stable Diffusion web UI adds a LivePortrait tab to the original Stable Diffusion WebUI, letting you use LivePortrait features from within it.


Installation

> [!NOTE]
> Make sure your system has FFmpeg installed. For details on FFmpeg installation, see how to install FFmpeg.
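As a quick sanity check for the note above, the following sketch verifies that FFmpeg is reachable on your PATH. The function names are illustrative and not part of the extension:

```python
import shutil
import subprocess

def ffmpeg_available():
    """Return True if an `ffmpeg` executable is found on PATH."""
    return shutil.which("ffmpeg") is not None

def ffmpeg_version():
    """Return the first line of `ffmpeg -version`, or None if unavailable."""
    if not ffmpeg_available():
        return None
    out = subprocess.run(["ffmpeg", "-version"], capture_output=True, text=True)
    return out.stdout.splitlines()[0]
```

If `ffmpeg_version()` returns None, install FFmpeg before proceeding.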

  1. Open the "Extensions" tab.
  2. Open the "Install from URL" tab within it.
  3. Enter https://github.com/dimitribarbot/sd-webui-live-portrait.git in "URL for extension's git repository".
  4. Press the "Install" button.
  5. Installation may take a few minutes, as XPose may need to be compiled. At the end, you will see the message "Installed into stable-diffusion-webui\extensions\sd-webui-live-portrait. Use Installed tab to restart".
  6. Go to the "Installed" tab, click "Check for updates", and then click "Apply and restart UI". (You can also use these buttons to update this extension later.)

/!\ Important notes /!\

XPose, the face detector model used for animal mode, does not currently work on macOS or with non-NVIDIA graphics cards. To allow animal mode to work correctly, follow the instructions described here.

Output

By default, generated files are saved in the stable-diffusion-webui/outputs/live-portrait folder. This location can be overridden in Automatic1111's SD WebUI settings (see the Settings section below).

Settings

In the Automatic1111's SD WebUI settings tab, under the Live Portrait section, you can find the following configuration settings:


Models

LivePortrait

Model files go here (automatically downloaded if the folder is not present during first run): stable-diffusion-webui/models/liveportrait (human) and stable-diffusion-webui/models/liveportrait_animals (animals).

Pickle files have all been converted to safetensors by Kijai. If necessary, they can be downloaded from: https://huggingface.co/Kijai/LivePortrait_safetensors/tree/main (thank you Kijai).

Face detectors

For human mode, you can use the original default Insightface, Google's MediaPipe, or Face Alignment (see the Settings section above or the API section below).

The biggest difference is the license: Insightface is strictly for NON-COMMERCIAL use. MediaPipe is a bit worse at detection and cannot run on GPU on Windows, though it is much faster on CPU than Insightface. Face Alignment can use the blazeface back camera model (or SFD or RetinaFace) and is far better for smaller faces than MediaPipe, which can only use the blazeface short model. Face Alignment's warmup on the first run can take a long time, but subsequent runs are quick.

Insightface models go here (automatically downloaded if the folder is not present during first run): stable-diffusion-webui/models/insightface/models/buffalo_l. If necessary, they can be downloaded from: https://github.com/deepinsight/insightface/releases/download/v0.7/buffalo_l.zip.

For animal mode, this extension uses XPose, which is also strictly for NON-COMMERCIAL use and is not compatible with macOS. The XPose model goes here (automatically downloaded if not present during first run): stable-diffusion-webui/models/liveportrait_animals.

If necessary, it can be downloaded from: https://huggingface.co/KwaiVGI/LivePortrait/resolve/main/liveportrait_animals/xpose.pth.
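To see at a glance which of the model folders listed above are already in place, here is a small sketch. The WebUI root path is an assumption; adjust it to your actual install location:

```python
from pathlib import Path

# Assumed WebUI root -- adjust to your actual install directory.
WEBUI_ROOT = Path("stable-diffusion-webui")

# Model folders named in the sections above.
MODEL_DIRS = {
    "liveportrait (human)": WEBUI_ROOT / "models" / "liveportrait",
    "liveportrait_animals": WEBUI_ROOT / "models" / "liveportrait_animals",
    "insightface buffalo_l": WEBUI_ROOT / "models" / "insightface" / "models" / "buffalo_l",
}

def missing_model_dirs(dirs):
    """Return the names of model folders that are not yet present on disk."""
    return sorted(name for name, path in dirs.items() if not path.is_dir())
```

Anything reported as missing is normally downloaded automatically on the extension's first run, so an empty result is not required before starting.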

API

Routes have been added to Automatic1111's SD WebUI API:

Parameters are the same as LivePortrait's (see the output of the command python inference.py --help in the LivePortrait repository), except for:

Additional parameters for the /live-portrait/human/retargeting/image endpoint are:

Additional parameters for the /live-portrait/human/retargeting/image/init endpoint are:

Additional parameters for the /live-portrait/human/retargeting/video endpoint are:
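As a minimal sketch of calling one of these endpoints with only the Python standard library: the payload field names (e.g. source_image) and the flag_do_crop parameter are assumptions for illustration — check your WebUI's interactive /docs page for the exact schema.

```python
import base64
import json
import urllib.request

# Assumed WebUI address; requires the WebUI to be started with the --api flag.
BASE_URL = "http://127.0.0.1:7860"

def build_payload(image_path, **params):
    """Base64-encode a source image and merge extra LivePortrait parameters."""
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    # "source_image" is a hypothetical field name for illustration.
    return {"source_image": encoded, **params}

def post_json(route, payload):
    """POST a JSON payload to an API route and return the parsed response."""
    req = urllib.request.Request(
        BASE_URL + route,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (uncomment with a running WebUI):
# result = post_json("/live-portrait/human/retargeting/image",
#                    build_payload("face.png"))
```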

Thanks

Original author's link: https://liveportrait.github.io/

This project was inspired by, and uses models converted by, kijai: https://github.com/kijai/ComfyUI-LivePortraitKJ