# :book: Depth-Aware Generative Adversarial Network for Talking Head Video Generation (CVPR 2022)
<p align="center"> <small>:fire: If DaGAN is helpful in your photos/projects, please help to :star: it or recommend it to your friends. Thanks:fire:</small> </p> <p align="center"> <small>:fire: Seeking for the collaboration and internship opportunities. :fire:</small> </p><!-- > [Fa-Ting Hong](https://harlanhong.github.io), [Longhao Zhang](https://dblp.org/pid/236/7382.html), [Li Shen](https://scholar.google.co.uk/citations?user=ABbCaxsAAAAJ&hl=en), [Dan Xu](https://www.danxurgb.net) <br> --> <!-- > The Hong Kong University of Science and Technology, Alibaba Cloud -->[Paper] [Project Page] [Demo] [Poster Video]<br>
Fa-Ting Hong, Longhao Zhang, Li Shen, Dan Xu <br> The Hong Kong University of Science and Technology<br> Alibaba Cloud
**Cartoon Sample** | **Human Sample**

**Voxceleb1 Dataset**

<p align="center"> <img src="assets/visual_vox1.png"> </p>

## :triangular_flag_on_post: Updates
- :fire::fire::white_check_mark: July 20, 2023: Our new talking-head work **MCNet** was accepted by ICCV 2023. It does not require training a facial depth network, which makes it more convenient to test and fine-tune.
- :fire::fire::white_check_mark: July 26, 2022: The plain DataParallel training script was released, since some researchers reported running into DistributedDataParallel problems. Please try training your own model with this command. We also removed the line `with torch.autograd.set_detect_anomaly(True)` to boost the training speed.
- :fire::fire::white_check_mark: June 26, 2022: The repo of our face depth network is released; please refer to Face-Depth-Network, and feel free to email me if you run into any problem.
- :fire::fire::white_check_mark: June 21, 2022: [Digression] I am looking for research intern/research assistant opportunities in Europe next year. Please contact me if you think I am qualified for your position.
- :fire::fire::white_check_mark: May 19, 2022: The face depth model (50 layers) trained on VoxCeleb2 is released! (The corresponding DaGAN checkpoint will be released soon.) Click the LINK.
- :fire::fire::white_check_mark: April 25, 2022: Integrated into Hugging Face Spaces 🤗 using Gradio. Try out the web demo. (GPU version will come soon!)
- :fire::fire::white_check_mark: Added the SPADE model, which produces more natural results.
## :wrench: Dependencies and Installation
- Python >= 3.7 (Anaconda or Miniconda recommended)
- PyTorch >= 1.7
- Optional: NVIDIA GPU + CUDA
- Optional: Linux
### Installation
We now provide a clean version of DaGAN, which does not require customized CUDA extensions. <br>
1. Clone the repo

    ```bash
    git clone https://github.com/harlanhong/CVPR2022-DaGAN.git
    cd CVPR2022-DaGAN
    ```
2. Install dependent packages

    ```bash
    pip install -r requirements.txt

    # Install the face alignment library
    cd face-alignment
    pip install -r requirements.txt
    python setup.py install
    ```
## :zap: Quick Inference
We take the paper version as an example. More models can be found here.
**YAML configs.** See `config/vox-adv-256.yaml` for a description of each parameter.
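If you want to peek at a config programmatically, here is a minimal sketch using PyYAML (assuming it is available in your environment; `train_params` and `dataset_params` are the sections referenced in the training instructions below):

```python
# Minimal sketch: load a config and inspect its sections.
# Assumes the PyYAML package: pip install pyyaml
import yaml

with open("config/vox-adv-256.yaml") as f:
    config = yaml.safe_load(f)

# train_params (batch size, epochs, ...) and dataset_params (root_dir, ...)
# are the sections referenced in the training instructions of this README.
print(config["train_params"])
print(config["dataset_params"])
```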
**Pre-trained checkpoints.** The pre-trained checkpoints of the face depth network and DaGAN can be found at the following link: OneDrive.
**Inference!** To run a demo, download a checkpoint and run the following command:

```bash
CUDA_VISIBLE_DEVICES=0 python demo.py --config config/vox-adv-256.yaml --driving_video path/to/driving --source_image path/to/source --checkpoint path/to/checkpoint --relative --adapt_scale --kp_num 15 --generator DepthAwareGenerator
```

The result will be stored in `result.mp4`. Driving videos and source images should be cropped before they can be used with our method. To obtain semi-automatic crop suggestions, you can run `python crop-video.py --inp some_youtube_video.mp4`; it will generate ffmpeg commands for the crops, as illustrated below.
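Purely for illustration, a sketch of running one such generated crop via Python's `subprocess` (the crop window here is a hypothetical placeholder; use the coordinates that `crop-video.py` actually prints):

```python
# Illustrative only: crop-video.py prints ready-to-run ffmpeg commands;
# the crop window below (width:height:x:y) is a made-up placeholder.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "some_youtube_video.mp4",
    "-filter:v", "crop=360:360:120:60,scale=256:256",  # crop, then resize to 256x256
    "crop.mp4",
], check=True)
```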
## :computer: Training
### Datasets
- **VoxCeleb.** Please follow the instructions from https://github.com/AliaksandrSiarohin/video-preprocessing.
### Train on VoxCeleb
To train a model on a specific dataset, run:
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --master_addr="0.0.0.0" --master_port=12348 run.py --config config/vox-adv-256.yaml --name DaGAN --rgbd --batchsize 12 --kp_num 15 --generator DepthAwareGenerator
```
<div id="dataparallel" >Or</div>
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 python run_dataparallel.py --config config/vox-adv-256.yaml --device_ids 0,1,2,3 --name DaGAN_voxceleb2_depth --rgbd --batchsize 48 --kp_num 15 --generator DepthAwareGenerator
```
The code will create a folder in the log directory (each run creates a new name-specific directory). Checkpoints will be saved to this folder. To check the loss values during training, see `log.txt`. By default, the batch size is tuned to run on 8 GeForce RTX 3090 GPUs (you can obtain the best performance after about 150 epochs). You can change the batch size in the `train_params` section of the `.yaml` file.
You can also watch the training loss by running the following command:

```bash
tensorboard --logdir log/DaGAN/log
```
If you kill your process in the middle of training, a zombie process may remain; you can kill it using our provided tool:

```bash
python kill_port.py PORT
```
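For readers curious about what such a tool does, a minimal sketch of the idea (hypothetical; the shipped `kill_port.py` may be implemented differently, and this sketch assumes the third-party `psutil` package):

```python
# Hypothetical sketch of the idea behind kill_port.py (the shipped tool
# may differ): terminate processes bound to the given port.
# Assumes the third-party psutil package: pip install psutil
import sys

import psutil

port = int(sys.argv[1])
for conn in psutil.net_connections(kind="inet"):
    if conn.laddr and conn.laddr.port == port and conn.pid:
        psutil.Process(conn.pid).terminate()
        print(f"Terminated PID {conn.pid} on port {port}")
```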
### Training on your own dataset
1. Resize all the videos to the same size, e.g. 256x256. The videos can be in `.gif`, `.mp4`, or a folder with images. We recommend the latter: for each video, make a separate folder with all the frames in `.png` format. This format is lossless and has better I/O performance (see the sketch after this list for one way to unpack videos into frame folders).
2. Create a folder `data/dataset_name` with two subfolders, `train` and `test`. Put the training videos in `train` and the testing videos in `test`.
3. Create a config `config/dataset_name.yaml`; in `dataset_params`, specify the root directory: `root_dir: data/dataset_name`. Also adjust the number of epochs in `train_params`.
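As referenced in step 1, a minimal sketch of unpacking a video into a per-video frame folder (the paths are hypothetical placeholders; assumes the `imageio` package with its ffmpeg plugin):

```python
# Hypothetical helper for step 1: unpack a (pre-resized) video into a
# per-video folder of lossless .png frames.
# Assumes: pip install imageio imageio-ffmpeg
import os

import imageio

def video_to_frames(video_path, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    for i, frame in enumerate(imageio.get_reader(video_path)):
        imageio.imwrite(os.path.join(out_dir, f"{i:07d}.png"), frame)

# Placeholder paths: one folder per video under train/ or test/.
video_to_frames("raw/clip_0001.mp4", "data/dataset_name/train/clip_0001")
```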
## :scroll: Acknowledgement
Our DaGAN implementation is inspired by FOMM. We appreciate the authors of FOMM for making their code publicly available.
## :scroll: BibTeX
```bibtex
@inproceedings{hong2022depth,
  title={Depth-Aware Generative Adversarial Network for Talking Head Video Generation},
  author={Hong, Fa-Ting and Zhang, Longhao and Shen, Li and Xu, Dan},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}

@article{hong2023dagan,
  title={DaGAN++: Depth-Aware Generative Adversarial Network for Talking Head Video Generation},
  author={Hong, Fa-Ting and Shen, Li and Xu, Dan},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
  year={2023}
}
```
## :e-mail: Contact
If you have any questions or collaboration needs (research or commercial), please email `fhongac@cse.ust.hk`.