<div align="center"> <img src='https://user-images.githubusercontent.com/4397546/229094115-862c747e-7397-4b54-ba4a-bd368bfe2e0f.png' width='500px'/> <!--<h2> 😭 SadTalker: <span style="font-size:12px">Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation </span> </h2> -->

<a href='https://arxiv.org/abs/2211.12194'><img src='https://img.shields.io/badge/ArXiv-PDF-red'></a>   <a href='https://sadtalker.github.io'><img src='https://img.shields.io/badge/Project-Page-Green'></a>   Open In Colab   Hugging Face Spaces   sd webui-colab   <br> Replicate Discord

<div> <a target='_blank'>Wenxuan Zhang <sup>*,1,2</sup> </a>&emsp; <a href='https://vinthony.github.io/' target='_blank'>Xiaodong Cun <sup>*,2</sup></a>&emsp; <a href='https://xuanwangvc.github.io/' target='_blank'>Xuan Wang <sup>3</sup></a>&emsp; <a href='https://yzhang2016.github.io/' target='_blank'>Yong Zhang <sup>2</sup></a>&emsp; <a href='https://xishen0220.github.io/' target='_blank'>Xi Shen <sup>2</sup></a>&emsp; <br> <a href='https://yuguo-xjtu.github.io/' target='_blank'>Yu Guo<sup>1</sup> </a>&emsp; <a href='https://scholar.google.com/citations?hl=zh-CN&user=4oXBp9UAAAAJ' target='_blank'>Ying Shan <sup>2</sup> </a>&emsp; <a target='_blank'>Fei Wang <sup>1</sup> </a>&emsp; </div> <br> <div> <sup>1</sup> Xi'an Jiaotong University &emsp; <sup>2</sup> Tencent AI Lab &emsp; <sup>3</sup> Ant Group &emsp; </div> <br> <i><strong><a href='https://arxiv.org/abs/2211.12194' target='_blank'>CVPR 2023</a></strong></i> <br> <br>


<b>TL;DR:       single portrait image 🙎‍♂️      +       audio 🎤       =       talking head video 🎞.</b>

<br> </div>

Highlights

| still + enhancer in v0.0.1 | still + enhancer in v0.0.2 | input image @bagbag1815 |
| :---: | :---: | :---: |
| <video src="https://user-images.githubusercontent.com/48216707/229484996-5d7be64f-2553-4c9e-a452-c5cf0b8ebafe.mp4" type="video/mp4"> </video> | <video src="https://user-images.githubusercontent.com/4397546/230717873-355b7bf3-d3de-49f9-a439-9220e623fce7.mp4" type="video/mp4"> </video> | <img src='./examples/source_image/full_body_2.png' width='380'> |

Changelog

The previous changelog can be found here.

To-Do

We're tracking new updates in issue #280.

Troubleshooting

If you have any problems, please read our FAQs before opening an issue.

1. Installation.

Community tutorials: 中文Windows教程 (Chinese Windows tutorial) | 日本語コース (Japanese tutorial).

Linux/Unix

  1. Install Anaconda, Python and git.

  2. Create the environment and install the requirements:

```bash
git clone https://github.com/OpenTalker/SadTalker.git

cd SadTalker

conda create -n sadtalker python=3.8

conda activate sadtalker

pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113

conda install ffmpeg

pip install -r requirements.txt

### Coqui TTS is optional for the gradio demo.
### pip install TTS
```
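
After installing, a quick sanity check can confirm that the CUDA build of PyTorch and ffmpeg installed above are actually usable. This is a minimal sketch, not part of the official instructions:

```bash
# Hedged sanity check (not from the official docs): verify PyTorch sees the GPU
# and that the ffmpeg binary is on the PATH inside the sadtalker environment.
conda activate sadtalker
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
ffmpeg -version | head -n 1
```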

Windows

A video tutorial in Chinese is available here. You can also follow these instructions (a consolidated command sketch follows the list):

  1. Install Python 3.8 and check "Add Python to PATH".
  2. Install git manually, or using Scoop: `scoop install git`.
  3. Install ffmpeg, following this tutorial or using Scoop: `scoop install ffmpeg`.
  4. Download the SadTalker repository by running `git clone https://github.com/OpenTalker/SadTalker.git`.
  5. Download the checkpoints and gfpgan models in the downloads section.
  6. Run start.bat from Windows Explorer as a normal (non-administrator) user; a Gradio-powered WebUI demo will start.
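
For reference, the Scoop-based steps above roughly collapse into the following commands. This is only a sketch, assuming Scoop is already installed; run it in PowerShell:

```
# Sketch only (assumes Scoop is already installed); run these in PowerShell.
scoop install git
scoop install ffmpeg
git clone https://github.com/OpenTalker/SadTalker.git
cd SadTalker
```

Then download the checkpoints as in step 5 and launch start.bat as in step 6.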

macOS

A tutorial on installing SadTalker on macOS can be found here.

Docker, WSL, etc.

Please check out additional tutorials here.

2. Download Models

You can run the following script on Linux/macOS to automatically download all the models:

```bash
bash scripts/download_models.sh
```

We also provide an offline patch (gfpgan/), so no models need to be downloaded at generation time.
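
If you prefer to fetch the checkpoints by hand (for example on a machine without bash), the script boils down to something like the sketch below. Here `<RELEASE_URL>` is a placeholder; the authoritative download URLs are the ones listed in scripts/download_models.sh.

```bash
# Hedged sketch of what scripts/download_models.sh does; <RELEASE_URL> is a
# placeholder -- copy the real asset URLs from the script itself.
mkdir -p ./checkpoints ./gfpgan/weights
wget -nc <RELEASE_URL>/SadTalker_V0.0.2_256.safetensors -P ./checkpoints
wget -nc <RELEASE_URL>/SadTalker_V0.0.2_512.safetensors -P ./checkpoints
wget -nc <RELEASE_URL>/mapping_00109-model.pth.tar -P ./checkpoints
wget -nc <RELEASE_URL>/mapping_00229-model.pth.tar -P ./checkpoints
```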

Pre-Trained Models

<!-- TODO add Hugging Face links -->

GFPGAN Offline Patch

<!-- TODO add Hugging Face links --> <details><summary>Model Details</summary>

Model details:

New version

| Model | Description |
| :--- | :--- |
| checkpoints/mapping_00229-model.pth.tar | Pre-trained MappingNet in SadTalker. |
| checkpoints/mapping_00109-model.pth.tar | Pre-trained MappingNet in SadTalker. |
| checkpoints/SadTalker_V0.0.2_256.safetensors | Packaged SadTalker checkpoints of the old version (256 face render). |
| checkpoints/SadTalker_V0.0.2_512.safetensors | Packaged SadTalker checkpoints of the old version (512 face render). |
| gfpgan/weights | Face detection and enhancement models used in facexlib and gfpgan. |

Old version

| Model | Description |
| :--- | :--- |
| checkpoints/auido2exp_00300-model.pth | Pre-trained ExpNet in SadTalker. |
| checkpoints/auido2pose_00140-model.pth | Pre-trained PoseVAE in SadTalker. |
| checkpoints/mapping_00229-model.pth.tar | Pre-trained MappingNet in SadTalker. |
| checkpoints/mapping_00109-model.pth.tar | Pre-trained MappingNet in SadTalker. |
| checkpoints/facevid2vid_00189-model.pth.tar | Pre-trained face-vid2vid model from zhanglonghao's reproduction of face-vid2vid. |
| checkpoints/epoch_20.pth | Pre-trained 3DMM extractor in Deep3DFaceReconstruction. |
| checkpoints/wav2lip.pth | Highly accurate lip-sync model from Wav2Lip. |
| checkpoints/shape_predictor_68_face_landmarks.dat | Face landmark model used in dlib. |
| checkpoints/BFM | 3DMM library file. |
| checkpoints/hub | Face detection models used in face alignment. |
| gfpgan/weights | Face detection and enhancement models used in facexlib and gfpgan. |

The final folder structure will look like this:

<img width="331" alt="image" src="https://user-images.githubusercontent.com/4397546/232511411-4ca75cbf-a434-48c5-9ae0-9009e8316484.png"> </details>

3. Quick Start

Please read our documentation on best practices and configuration tips.

WebUI Demos

Online Demo: HuggingFace | SDWebUI-Colab | Colab

Local WebUI extension: Please refer to WebUI docs.

Local Gradio demo (recommended): A Gradio instance similar to our Hugging Face demo can be run locally:

```bash
## You need to manually install TTS (https://github.com/coqui-ai/TTS) via `pip install TTS` in advance.
python app_sadtalker.py
```

You can also start it more easily:

CLI usage

Animating a portrait image with the default config:

```bash
python inference.py --driven_audio <audio.wav> \
                    --source_image <video.mp4 or picture.png> \
                    --enhancer gfpgan
```

The results will be saved in `results/$SOME_TIMESTAMP/*.mp4`.
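
Since each run writes into a new timestamped folder, a small helper like the one below (not part of the SadTalker CLI itself) can be handy for locating the most recent output:

```bash
# Hedged helper sketch: find the newest results folder and list its videos.
latest=$(ls -td results/*/ | head -n 1)
ls "$latest"*.mp4
```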

Full body/image Generation:

Use `--still` to generate a natural full-body video. You can add `--enhancer` to improve the quality of the generated video.

```bash
python inference.py --driven_audio <audio.wav> \
                    --source_image <video.mp4 or picture.png> \
                    --result_dir <a folder to store results> \
                    --still \
                    --preprocess full \
                    --enhancer gfpgan
```
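
As a concrete (hypothetical) invocation, using the full-body example image shipped in this repository and your own WAV file:

```bash
# Example invocation; ./my_audio.wav is a placeholder for your own audio file.
python inference.py --driven_audio ./my_audio.wav \
                    --source_image ./examples/source_image/full_body_2.png \
                    --result_dir ./results \
                    --still --preprocess full --enhancer gfpgan
```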

More examples, configuration options, and tips can be found in the >>> best practice documents <<<.

Citation

If you find our work useful in your research, please consider citing:

@article{zhang2022sadtalker,
  title={SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation},
  author={Zhang, Wenxuan and Cun, Xiaodong and Wang, Xuan and Zhang, Yong and Shen, Xi and Guo, Yu and Shan, Ying and Wang, Fei},
  journal={arXiv preprint arXiv:2211.12194},
  year={2022}
}

Acknowledgements

Facerender code borrows heavily from zhanglonghao's reproduction of face-vid2vid and PIRender. We thank the authors for sharing their wonderful code. In the training process, we also used models from Deep3DFaceReconstruction and Wav2Lip. We thank the authors for their wonderful work.

We also use the following 3rd-party libraries:

Extensions:

Related Works

Disclaimer

This is not an official product of Tencent.

1. Please carefully read and comply with the open-source license applicable to this code before using it. 
2. Please carefully read and comply with the intellectual property declaration applicable to this code before using it.
3. This open-source code runs completely offline and does not collect any personal information or other data. If you use this code to provide services to end-users and collect related data, please take necessary compliance measures according to applicable laws and regulations (such as publishing privacy policies, adopting necessary data security strategies, etc.). If the collected data involves personal information, user consent must be obtained (if applicable). Any legal liabilities arising from this are unrelated to Tencent.
4. Without Tencent's written permission, you are not authorized to use the names or logos legally owned by Tencent, such as "Tencent." Otherwise, you may be liable for legal responsibilities.
5. This open-source code does not have the ability to directly provide services to end-users. If you need to use this code for further model training or demos, as part of your product to provide services to end-users, or for similar use, please comply with applicable laws and regulations for your product or service. Any legal liabilities arising from this are unrelated to Tencent.
6. It is prohibited to use this open-source code for activities that harm the legitimate rights and interests of others (including but not limited to fraud, deception, infringement of others' portrait rights, reputation rights, etc.), or other behaviors that violate applicable laws and regulations or go against social ethics and good customs (including providing incorrect or false information, spreading pornographic, terrorist, and violent information, etc.). Otherwise, you may be liable for legal responsibilities.

LOGO: color and font suggestion: ChatGPT; logo font: Montserrat Alternates.

The demo images and audio are either from community users or generated by Stable Diffusion. Feel free to contact us if you would like us to remove them.

<!-- Spelling fixed on Tuesday, September 12, 2023 by @fakerybakery (https://github.com/fakerybakery). These changes are licensed under the Apache 2.0 license. -->