DECA: Detailed Expression Capture and Animation (SIGGRAPH 2021)

<p align="center"> <img src="TestSamples/teaser/results/teaser.gif"> </p> <p align="center">input image, aligned reconstruction, animation with various poses & expressions</p>


This is the official PyTorch implementation of DECA.

DECA reconstructs a 3D head model with detailed facial geometry from a single input image. The resulting 3D head model can be easily animated. Please refer to the arXiv paper for more details.
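For a quick feel of the API, here is a minimal reconstruction sketch based on demos/demo_reconstruct.py; exact class and method signatures may differ between versions, so treat it as a sketch rather than a reference:

```python
# Minimal reconstruction sketch based on demos/demo_reconstruct.py; exact
# signatures may differ between versions, so treat this as a sketch.
import torch
from decalib.deca import DECA
from decalib.datasets import datasets
from decalib.utils.config import cfg as deca_cfg

device = 'cuda' if torch.cuda.is_available() else 'cpu'
deca = DECA(config=deca_cfg, device=device)

# crop/align a test image the same way the demos do
testdata = datasets.TestData('TestSamples/examples', iscrop=True)
images = testdata[0]['image'].to(device)[None, ...]  # [1, 3, 224, 224]

codedict = deca.encode(images)           # FLAME, camera, light, and detail codes
opdict, visdict = deca.decode(codedict)  # meshes, renderings, visualizations
```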

The main features:

Getting Started

Clone the repo:

git clone https://github.com/YadiraF/DECA
cd DECA

Requirements

Usage

  1. Prepare data
    Run the script:

    bash fetch_data.sh
    
    <!-- or manually download data from [FLAME 2020 model](https://flame.is.tue.mpg.de/download.php) and [DECA trained model](https://drive.google.com/file/d/1rp8kdyLPvErw2dTmqtjISRVvQLj6Yzje/view?usp=sharing), and put them in ./data -->

    (Optional, for albedo)
    Follow the instructions for the Albedo model to get 'FLAME_albedo_from_BFM.npz' and put it into ./data.
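    To sanity-check the download, you can verify that the expected files exist (the file names below are assumptions; compare against what fetch_data.sh actually fetches):

    ```python
    # Quick check that the downloaded assets are in place.
    # File names are assumptions; compare against fetch_data.sh.
    import os

    required = ['data/generic_model.pkl', 'data/deca_model.tar']
    optional = ['data/FLAME_albedo_from_BFM.npz']

    for path in required:
        print(path, 'ok' if os.path.exists(path) else 'MISSING')
    for path in optional:
        if not os.path.exists(path):
            print(path, 'missing (optional, only needed for albedo)')
    ```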

  2. Run demos
    a. reconstruction

    python demos/demo_reconstruct.py -i TestSamples/examples --saveDepth True --saveObj True
    

    to visualize the predicted 2D landmarks, 3D landmarks (red means non-visible points), coarse geometry, detailed geometry, and depth.

    <p align="center"> <img src="Doc/images/id04657-PPHljWCZ53c-000565_inputs_inputs_vis.jpg"> </p> <p align="center"> <img src="Doc/images/IMG_0392_inputs_vis.jpg"> </p> You can also generate an obj file (which can be opened with MeshLab) that includes extracted texture from the input image.

    Please run python demos/demo_reconstruct.py --help for more details.
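    The same can be done from Python; continuing the sketch from the introduction (deca.save_obj mirrors the demo code, so verify the exact signature in decalib/deca.py):

    ```python
    # Continuing the earlier sketch: write the reconstruction to disk.
    # deca.save_obj mirrors the demo code; check decalib/deca.py for the signature.
    import os

    os.makedirs('results', exist_ok=True)
    deca.save_obj('results/face.obj', opdict)  # mesh with texture and detail displacement
    ```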

    b. expression transfer

    python demos/demo_transfer.py
    

    Given an image, you can reconstruct its 3D face and then animate it by transferring expressions from other images. Opening the detailed mesh obj file in MeshLab, you will see something like this:

    <p align="center"> <img src="Doc/images/soubhik.gif"> </p> (Thanks to Soubhik for allowing us to use his face ^_^)

    Note that you need to set '--useTex True' to get the full texture.
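    Under the hood, the transfer swaps the FLAME expression and jaw-pose codes between two encodings; a sketch (dictionary keys follow demos/demo_transfer.py, verify against your version):

    ```python
    # Sketch of expression transfer: re-decode the identity image using the
    # expression and jaw pose of another image. Key names follow demo_transfer.py.
    # id_image / exp_image: [1, 3, 224, 224] tensors prepared as in the earlier sketch.
    id_codedict = deca.encode(id_image)    # image whose identity we keep
    exp_codedict = deca.encode(exp_image)  # image whose expression we borrow

    id_codedict['exp'] = exp_codedict['exp']                  # expression parameters
    id_codedict['pose'][:, 3:] = exp_codedict['pose'][:, 3:]  # jaw pose only
    opdict, visdict = deca.decode(id_codedict)
    ```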

    c. for the teaser gif (reposing and animation)

    python demos/demo_teaser.py 
    

    More demos are coming soon; training code is described in the Training section below.

Evaluation

DECA (ours) achieves a 9% lower mean shape reconstruction error on the NoW Challenge dataset than the previous state-of-the-art method.
The left figure compares the cumulative error of our approach and other recent methods (RingNet and Deng et al. have nearly identical performance, so their curves overlap). Here we use the point-to-surface distance as the error metric, following the NoW Challenge.

<p align="left"> <img src="Doc/images/DECA_evaluation_github.png"> </p>
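For intuition, the point-to-surface distance can be sketched with an off-the-shelf mesh library; the snippet below uses trimesh and hypothetical file paths, and is not the official NoW evaluation code (which also requires rigid alignment to the ground-truth scan first):

```python
# Illustrative point-to-surface error with trimesh (not the NoW evaluation code;
# predictions must be rigidly aligned to the ground-truth scan beforehand).
import numpy as np
import trimesh

scan = trimesh.load('ground_truth_scan.ply')  # hypothetical paths
pred = trimesh.load('predicted_mesh.obj')

# distance from each ground-truth scan vertex to the predicted surface
_, distances, _ = trimesh.proximity.closest_point(pred, scan.vertices)
print('mean point-to-surface error:', np.mean(distances))
```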

For more details of the evaluation, please check our arXiv paper.

Training

  1. Prepare Training Data

    a. Download image data
    In DECA, we use VGGFace2, BUPT-Balancedface, and VoxCeleb2.

    b. Prepare labels
    Use FAN to predict 68 2D landmarks.
    Use face_segmentation to get the skin mask.
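    A landmark-extraction sketch using the face_alignment package that implements FAN (the enum name varies by version, and the image path is hypothetical):

    ```python
    # Sketch: 68-point 2D landmarks with FAN via the face_alignment package.
    # LandmarksType._2D is the name in older releases; newer ones use TWO_D.
    import face_alignment
    from skimage import io

    fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, flip_input=False)
    image = io.imread('path/to/image.jpg')  # hypothetical path
    landmarks = fa.get_landmarks(image)     # list of (68, 2) arrays, one per detected face
    ```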

    c. Modify dataloader
    Dataloaders for different datasets are in decalib/datasets; set the correct paths to the prepared images and labels.
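    As a rough template, a dataset item could bundle the image with its labels like this (file layout and key names are assumptions; match them to the loaders in decalib/datasets):

    ```python
    # Minimal training-data sketch: image + 68 landmarks + skin mask.
    # File layout and key names are assumptions; align them with decalib/datasets.
    import numpy as np
    import torch
    from skimage import io
    from torch.utils.data import Dataset

    class FaceDataset(Dataset):
        def __init__(self, image_paths):
            self.image_paths = image_paths

        def __len__(self):
            return len(self.image_paths)

        def __getitem__(self, idx):
            path = self.image_paths[idx]
            image = io.imread(path) / 255.0                        # HxWx3, [0, 1]
            landmarks = np.load(path.replace('.jpg', '_kpt.npy'))  # (68, 2), from FAN
            mask = io.imread(path.replace('.jpg', '_mask.png'))    # skin mask
            return {
                'image': torch.tensor(image, dtype=torch.float32).permute(2, 0, 1),
                'landmark': torch.tensor(landmarks, dtype=torch.float32),
                'mask': torch.tensor(mask, dtype=torch.float32),
            }
    ```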

  2. Download the pretrained face recognition model
    We use the model from VGGFace2-pytorch for calculating the identity loss. Download resnet50_ft and put it into ./data.
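    The identity loss itself is a cosine distance between face-recognition embeddings of the rendered and input images; a sketch (the embedder below stands in for the frozen resnet50_ft network):

    ```python
    # Sketch of the identity loss: cosine distance between embeddings of the
    # rendered and input images. `embedder` stands in for the frozen
    # resnet50_ft network from VGGFace2-pytorch.
    import torch.nn.functional as F

    def identity_loss(embedder, rendered, real):
        e_r = F.normalize(embedder(rendered), dim=-1)
        e_i = F.normalize(embedder(real), dim=-1)
        return (1.0 - (e_r * e_i).sum(dim=-1)).mean()
    ```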

  3. Start training

    Train from scratch:

    python main_train.py --cfg configs/release_version/deca_pretrain.yml 
    python main_train.py --cfg configs/release_version/deca_coarse.yml 
    python main_train.py --cfg configs/release_version/deca_detail.yml 
    

    In the yml files, set the correct paths for 'output_dir' and 'pretrained_modelpath'.
    You can also use the released model as the pretrained model and skip the pretraining step.

Related works:

Citation

If you find our work useful to your research, please consider citing:

@article{DECA:Siggraph2021,
  title={Learning an Animatable Detailed {3D} Face Model from In-The-Wild Images},
  author={Feng, Yao and Feng, Haiwen and Black, Michael J. and Bolkart, Timo},
  journal = {ACM Transactions on Graphics, (Proc. SIGGRAPH)},
  volume = {40},
  number = {8},
  year = {2021},
  url = {https://doi.org/10.1145/3450626.3459936}
}

License

This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms in the LICENSE.

Acknowledgements

For functions or scripts that are based on external sources, we acknowledge the origin individually in each file.
Here are some great resources we benefit from:

We would also like to thank other recent public 3D face reconstruction works that allow us to easily perform quantitative and qualitative comparisons :)
RingNet, Deep3DFaceReconstruction, Nonlinear_Face_3DMM, 3DDFA-v2, extreme_3d_faces, facescape
