EmoFace: Audio-driven Emotional 3D Face Animation

PyTorch implementation for the paper EmoFace: Audio-driven Emotional 3D Face Animation.

Environment

Data

As we cannot publish the full dataset yet, we provide one sample from the evaluation set.

Training and Testing

First, the dataloaders for training and testing need to be generated with data.py. The path to the dataset needs to be assigned inside the script.
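For example, assuming data.py takes no command-line arguments and reads the dataset path that was set inside the script:

    python data.py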

Training and testing are combined in main.py. To run the model with default settings, you only need to set the maximum number of epochs:

    python main.py --max_epoch 1000

During training, the model weights are saved in the weight/ directory.
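To inspect a saved checkpoint, a minimal sketch (the checkpoint filename below is an assumption and depends on how main.py names its outputs):

    import torch

    # Load the checkpoint on CPU and list a few of its entries.
    # "weight/checkpoint.pth" is a placeholder filename, not a file shipped with the repository.
    state_dict = torch.load("weight/checkpoint.pth", map_location="cpu")
    print(list(state_dict.keys())[:5])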

Blink

The directory blink contains files related to blinks.

Demo

demo.py uses the trained model to output the corresponding controller rig values for audio clips. The model weight path PATH, the path to the audio files audio_path, and the output path for predictions pred_path need to be assigned inside the script.
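For example (the values below are placeholders, not paths shipped with the repository):

    # Hypothetical values; edit the corresponding variables inside demo.py.
    PATH = "weight/model.pth"    # trained model weights (filename is an assumption)
    audio_path = "audio/"        # directory containing the input audio clips
    pred_path = "pred/"          # directory where the predicted controller values are written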

Visualization

The output of the model consists of .txt files containing controller values, where each row corresponds to one frame. To visualize the output, you need a MetaHuman model with all the controller rigs listed in valid_attr_names.txt.
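A minimal sketch for reading one prediction file back into name/value pairs, assuming whitespace-separated values whose column order matches valid_attr_names.txt (both are assumptions), with a placeholder filename:

    import numpy as np

    # Controller rig names, one per line; the column order is assumed to match the output.
    with open("valid_attr_names.txt") as f:
        attr_names = [line.strip() for line in f if line.strip()]

    # Each row of a prediction file is one frame of controller values.
    # "pred/example.txt" is a placeholder; the delimiter is assumed to be whitespace.
    frames = np.atleast_2d(np.loadtxt("pred/example.txt"))

    assert frames.shape[1] == len(attr_names)
    for name, value in zip(attr_names, frames[0]):
        print(f"{name}: {value:.4f}")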