TalkLip net

This repo is the official implementation of 'Seeing What You Said: Talking Face Generation Guided by a Lip Reading Expert' (CVPR 2023).

arXiv | Paper

🔥 News

  1. We uploaded a Talking_face_demo.pptx to this repository, which contains some demo videos.
  2. Fixed a GPU out-of-memory error in train.py. Running train.py with a batch_size of 8 requires approximately 24 GB of GPU memory, but in rare cases it can need more than 24 GB and trigger an error. We resolved this with a try-except mechanism (a sketch of the pattern follows this list). -- 19/July/2023
  3. We uploaded a checkpoint of the discriminator, as requested in the issue.
  4. We uploaded an eval_lrs.sh to the evaluation folder, which lets you evaluate all metrics on LRS2 with a single command.
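
For reference, a minimal sketch of the try-except workaround mentioned in item 2, assuming a standard PyTorch training loop (safe_forward is an illustrative name, not the exact code in train.py):

import torch

def safe_forward(model, batch):
    """Run a forward pass; on CUDA OOM, free cached memory and skip the batch.

    In torch 1.12, a CUDA OOM surfaces as a RuntimeError whose message
    contains "out of memory", so we match on that string.
    """
    try:
        return model(batch)
    except RuntimeError as e:
        if "out of memory" in str(e):
            torch.cuda.empty_cache()  # release cached blocks held by the allocator
            return None               # signal the caller to skip this batch
        raise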

Prerequisites

  1. pip install torch==1.12.0+cu113 torchvision==0.13.0+cu113 -f https://download.pytorch.org/whl/torch_stable.html
  2. Install AV-HuBERT by following its installation instructions.
  3. Install supplementary packages via pip install -r requirement.txt
  4. Install ffmpeg. We adopt version 4.3.2. Please double-check the waveforms extracted from mp4 files: they should not start with a run of zero samples (a quick sanity check follows this list). If you use Anaconda, you can refer to conda install -c conda-forge ffmpeg==4.2.3
  5. Download the pre-trained checkpoint of the face detector and put it at face_detection/detection/sfd/s3fd.pth. Alternative link.
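
As a way to perform the waveform check in item 4, the sketch below counts leading zero samples in an extracted file (a minimal example assuming scipy and numpy are installed; the file name is a placeholder):

import numpy as np
from scipy.io import wavfile

def leading_zeros(wav_path):
    """Return the number of zero-valued samples at the start of a waveform."""
    _, data = wavfile.read(wav_path)
    if data.ndim > 1:  # multi-channel file: inspect the first channel
        data = data[:, 0]
    nonzero = np.flatnonzero(data)
    return int(nonzero[0]) if nonzero.size else len(data)

print(leading_zeros("example.wav"))  # placeholder path; a large count suggests zero-padding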

Dataset and pre-processing

  1. Download LRS2 for training and evaluation. Note that we do not use the pretrain set.
  2. Download LRW for evaluation.
  3. Extract waveforms from mp4 files:
python preparation/audio_extract.py --filelist $filelist --video_root $video_root --audio_root $audio_root
  4. Detect bounding boxes in videos and save them:
python preparation/bbx_extract.py --filelist $filelist --video_root $video_root --bbx_root $bbx_root --gpu $gpu
You can also run the two steps above with a single command:
sh preprocess.sh

Checkpoints

| Model | Description | Link |
| --- | --- | --- |
| TalkLip (g) | TalkLip net with the global audio encoder | Link |
| TalkLip (g+c) | TalkLip net with the global audio encoder and contrastive learning | Link |
| Lip reading observer 1 | AV-HuBERT (large) fine-tuned on LRS2 | Link |
| Lip reading observer 2 | Conformer lip-reading network | Link |
| Lip reading expert | Lip-reading network for training of talking face generation | Link |
| Discriminator | Discriminator of the GAN | Link |
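
To sanity-check a downloaded checkpoint, a small sketch using PyTorch (the file name is a placeholder, and the dictionary layout is an assumption about typical .pth files):

import torch

ckpt = torch.load("talklip_g.pth", map_location="cpu")  # placeholder file name
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])  # top-level keys, e.g. model/optimizer state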

Train

Some AV-HuBERT files need to be modified (xxx below denotes the directory in which you installed av_hubert):

rm xxx/av_hubert/avhubert/hubert_asr.py
cp avhubert_modification/hubert_asr_wav2lip.py xxx/av_hubert/avhubert/hubert_asr.py

rm xxx/av_hubert/fairseq/fairseq/criterions/label_smoothed_cross_entropy.py
cp avhubert_modification/label_smoothed_cross_entropy_wav2lip.py xxx/av_hubert/fairseq/fairseq/criterions/label_smoothed_cross_entropy.py

You can train with the following command.

python train.py --file_dir $file_list_dir --video_root $video_root --audio_root $audio_root \
--bbx_root $bbx_root --word_root $word_root --avhubert_root $avhubert_root --avhubert_path $avhubert_path \
--checkpoint_dir $checkpoint_dir --log_name $log_name --cont_w $cont_w --lip_w $lip_w --perp_w $perp_w \
--gen_checkpoint_path $gen_checkpoint_path --disc_checkpoint_path $disc_checkpoint_path

Note: the discriminator loss may occasionally diverge during training (approaching 100). If this happens, stop training and resume from a reliable checkpoint; a sketch of a simple guard follows.
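
A minimal sketch of such a guard, assuming you track the scalar discriminator loss each step (the function name and threshold handling are ours, not code from this repo):

def check_disc_divergence(disc_loss, threshold=100.0):
    """Stop training when the discriminator loss diverges so it can be
    resumed from the last reliable checkpoint."""
    if disc_loss > threshold:
        raise RuntimeError(
            f"Discriminator loss {disc_loss:.1f} exceeded {threshold}; "
            "resume from a reliable checkpoint."
        )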

Test

The command below synthesizes videos for the quantitative evaluation in our paper.

python inf_test.py --filelist $filelist --video_root $video_root --audio_root $audio_root \
--bbx_root $bbx_root --save_root $syn_video_root --ckpt_path $talklip_ckpt --avhubert_root $avhubert_root

Demo

We updated inf_demo.py on 4 April: it previously assumed that the height and width of output videos were equal when configuring cv2.VideoWriter(). Please ensure the sampling rate of the input audio file is 16000 Hz.

If you want to reenact the lip movements of a video with a different speech track, you can use the following command.

python inf_demo.py --video_path $video_file --wav_path $audio_file --ckpt_path $talklip_ckpt --avhubert_root $avhubert_root

Please also ensure that the input audio has only one channel (mono).
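
If the input audio does not satisfy these two constraints, ffmpeg can resample and downmix it, for example (input.wav and output.wav are placeholder names; -ar sets the sample rate, -ac the channel count):

ffmpeg -i input.wav -ar 16000 -ac 1 output.wav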

Evaluation

Please follow the README.md in the evaluation directory.

Citation

@inproceedings{wang2023seeing,
  title={Seeing What You Said: Talking Face Generation Guided by a Lip Reading Expert},
  author={Wang, Jiadong and Qian, Xinyuan and Zhang, Malu and Tan, Robby T and Li, Haizhou},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={14653--14662},
  year={2023}
}