

Towards Fast, Accurate and Stable 3D Dense Face Alignment


By Jianzhu Guo, Xiangyu Zhu, Yang Yang, Fan Yang, Zhen Lei and Stan Z. Li. The code repo is owned and maintained by Jianzhu Guo.

<p align="center"> <img src="docs/images/webcam.gif" alt="demo" width="512px"> </p>

[Updates]

Introduction

This work, named 3DDFA_V2 and titled Towards Fast, Accurate and Stable 3D Dense Face Alignment, extends 3DDFA and was accepted by ECCV 2020. The supplementary material is here. The GIF above shows a webcam demo of the tracking result, recorded in my lab. This repo is the official implementation of 3DDFA_V2.

Compared to 3DDFA, 3DDFA_V2 achieves better performance and stability. In addition, 3DDFA_V2 incorporates the fast face detector FaceBoxes instead of Dlib. A simple 3D renderer written in C++ and Cython is also included. This repo supports onnxruntime, and the latency of regressing 3DMM parameters with the default backbone is about 1.35ms per image on CPU with a single image as input. If you are interested in this repo, just try it on this Google Colab! Issues, PRs and discussions are welcome 😄


Getting started

Requirements

See requirements.txt; the code is tested on macOS and Linux. Windows users may refer to the FAQ for building issues. Note that this repo requires Python 3. The major dependencies are PyTorch, numpy, opencv-python and onnxruntime. If you run the demos with the --onnx flag for acceleration, you may need to install libomp first, e.g., brew install libomp on macOS.
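For reference, a minimal environment setup might look like the sketch below. It assumes only the requirements.txt shipped with this repo and the optional libomp dependency mentioned above; the use of a virtual environment is just a suggestion.

```shell
# optional: keep the dependencies in an isolated virtual environment
python3 -m venv venv
source venv/bin/activate

# install the dependencies listed in requirements.txt (PyTorch, numpy, opencv-python, onnxruntime, ...)
pip3 install -r requirements.txt

# macOS only, and only if you plan to pass --onnx to the demos
brew install libomp
```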

Usage

1. Clone this repo

```shell
git clone https://github.com/cleardusk/3DDFA_V2.git
cd 3DDFA_V2
```
2. Build the Cython version of NMS, Sim3DR, and the faster mesh renderer
```shell
sh ./build.sh
```
3. Run demos

```shell
# 1. running on a still image; the -o options include: 2d_sparse, 2d_dense, 3d, depth, pncc, pose, uv_tex, ply, obj
python3 demo.py -f examples/inputs/emma.jpg --onnx  # -o [2d_sparse, 2d_dense, 3d, depth, pncc, pose, uv_tex, ply, obj]

# 2. running on videos
python3 demo_video.py -f examples/inputs/videos/214.avi --onnx

# 3. running on videos smoothly by looking ahead by `n_next` frames
python3 demo_video_smooth.py -f examples/inputs/videos/214.avi --onnx

# 4. running on the webcam
python3 demo_webcam_smooth.py --onnx
```

Tracking is implemented simply by alignment: if the head pose exceeds 90° or the motion is too fast, the alignment may fail. A threshold is used as a rough check of the tracking state, but it is unstable.
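As a rough illustration (this is a conceptual sketch, not the repo's actual tracking code), the loop below shows the idea: the detector runs only when tracking is lost, the alignment result provides the ROI for the next frame, and a quality threshold decides when to fall back to detection. All of the callables, their signatures and the threshold value are hypothetical placeholders.

```python
from typing import Callable, Iterable, Iterator, Optional

def track_by_alignment(
    frames: Iterable,
    detect: Callable,      # hypothetical face detector, e.g. a FaceBoxes wrapper
    regress: Callable,     # hypothetical 3DMM regression + landmark reconstruction
    roi_of: Callable,      # hypothetical helper: landmarks -> bounding boxes
    quality: Callable,     # hypothetical tracking-state score in [0, 1]
    threshold: float = 0.5,
) -> Iterator:
    boxes: Optional[list] = None
    for frame in frames:
        if boxes is None:
            boxes = detect(frame)            # detect only when tracking is lost
        landmarks = regress(frame, boxes)    # fast per-frame alignment
        if quality(landmarks) < threshold:   # rough (and unstable) tracking check
            boxes = None                     # fall back to detection on the next frame
        else:
            boxes = roi_of(landmarks)        # reuse the alignment result as the next ROI
        yield landmarks
```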

You can refer to demo.ipynb or the Google Colab for a step-by-step tutorial on running on a still image.

For example, running `python3 demo.py -f examples/inputs/emma.jpg -o 3d` will give the result below:

<p align="center"> <img src="docs/images/emma_3d.jpg" alt="demo" width="640px"> </p>

Another example:

<p align="center"> <img src="docs/images/trump_biden_3d.jpg" alt="demo" width="640px"> </p>

Running on a video will give:

<p align="center"> <img src="docs/images/out.gif" alt="demo" width="512px"> </p>

More results and demos: Hathaway.


Features (up to now)

<table> <tr> <th>2D sparse</th> <th>2D dense</th> <th>3D</th> </tr> <tr> <td><img src="docs/images/trump_hillary_2d_sparse.jpg" width="360" alt="2d sparse"></td> <td><img src="docs/images/trump_hillary_2d_dense.jpg" width="360" alt="2d dense"></td> <td><img src="docs/images/trump_hillary_3d.jpg" width="360" alt="3d"></td> </tr> <tr> <th>Depth</th> <th>PNCC</th> <th>UV texture</th> </tr> <tr> <td><img src="docs/images/trump_hillary_depth.jpg" width="360" alt="depth"></td> <td><img src="docs/images/trump_hillary_pncc.jpg" width="360" alt="pncc"></td> <td><img src="docs/images/trump_hillary_uv_tex.jpg" width="360" alt="uv_tex"></td> </tr> <tr> <th>Pose</th> <th>Serialization to .ply</th> <th>Serialization to .obj</th> </tr> <tr> <td><img src="docs/images/trump_hillary_pose.jpg" width="360" alt="pose"></td> <td><img src="docs/images/ply.jpg" width="360" alt="ply"></td> <td><img src="docs/images/obj.jpg" width="360" alt="obj"></td> </tr> </table>
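To produce the .ply or .obj serialization shown above, pass the corresponding option to the still-image demo (see the -o options listed in the Usage section), for example:

```shell
python3 demo.py -f examples/inputs/emma.jpg -o obj
```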

Configs

The default backbone is MobileNet_V1 with an input size of 120x120, and the default pre-trained weight is weights/mb1_120x120.pth, as shown in configs/mb1_120x120.yml. This repo provides another config, configs/mb05_120x120.yml, with a widen factor of 0.5, which is smaller and faster. You can specify the config with the -c or --config option (an example follows the table below). The released models are shown in the table below. Note that the inference time on CPU reported in the paper is evaluated using TensorFlow.

| Model | Input | #Params | #Macs | Inference (TF) |
| :-: | :-: | :-: | :-: | :-: |
| MobileNet | 120x120 | 3.27M | 183.5M | ~6.2ms |
| MobileNet x0.5 | 120x120 | 0.85M | 49.5M | ~2.9ms |
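For example, a run with the smaller MobileNet x0.5 config might look like this (the combination of flags here is illustrative):

```shell
python3 demo.py -f examples/inputs/emma.jpg -c configs/mb05_120x120.yml
```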

Surprisingly, the latency with onnxruntime is much lower. The inference time on CPU with different thread counts is shown below. The results are measured on my 13-inch MacBook Pro (i5-8259U CPU @ 2.30GHz) with onnxruntime 1.5.1. The thread number is set via os.environ["OMP_NUM_THREADS"], as sketched after the table below; see speed_cpu.py for more details.

| Model | THREAD=1 | THREAD=2 | THREAD=4 |
| :-: | :-: | :-: | :-: |
| MobileNet | 4.4ms | 2.25ms | 1.35ms |
| MobileNet x0.5 | 1.37ms | 0.7ms | 0.5ms |
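The thread count used in the table above is controlled by the environment variable, roughly as in the sketch below; see speed_cpu.py in the repo for the actual benchmark script.

```python
import os

# Set the OpenMP thread count before importing libraries that use it,
# so the setting takes effect (this mirrors what is described above).
os.environ["OMP_NUM_THREADS"] = "4"

import onnxruntime  # imported after the environment variable is set
```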

Latency

The onnx option greatly reduces the overall CPU latency, but face detection still accounts for most of it, e.g., about 15ms for a 720p image. Regressing the 3DMM parameters takes about 1~2ms per face, and the dense reconstruction (more than 30,000 points, 38,365 to be exact) takes about 1ms per face. Tracking applications can benefit from the fast 3DMM regression, since detection is not needed for every frame. The latency is measured on my 13-inch MacBook Pro (i5-8259U CPU @ 2.30GHz).

The default OMP_NUM_THREADS is 4. You can change it by setting os.environ['OMP_NUM_THREADS'] = '$NUM' in Python or by running export OMP_NUM_THREADS=$NUM before launching the script.
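For example, to run a demo with a single CPU thread (the value 1 here is just illustrative):

```shell
export OMP_NUM_THREADS=1
python3 demo.py -f examples/inputs/emma.jpg --onnx
```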

<p align="center"> <img src="docs/images/latency.gif" alt="demo" width="640px"> </p>

FAQ

  1. What is the training data?

    We use 300W-LP for training. You can refer to our paper for more details about the training. Since few images in the 300W-LP training data have closed eyes, the eye landmarks are not accurate when the eyes are closed. The eye regions in the webcam demo are also not very good.

  2. Running on Windows.

    You can refer to this comment for building NMS on Windows.

Acknowledgement

Other implementations or applications

Citation

If your work or research benefits from this repo, please cite the two BibTeX entries below : ) and 🌟 this repo.

@inproceedings{guo2020towards,
    title =        {Towards Fast, Accurate and Stable 3D Dense Face Alignment},
    author =       {Guo, Jianzhu and Zhu, Xiangyu and Yang, Yang and Yang, Fan and Lei, Zhen and Li, Stan Z},
    booktitle =    {Proceedings of the European Conference on Computer Vision (ECCV)},
    year =         {2020}
}

@misc{3ddfa_cleardusk,
    author =       {Guo, Jianzhu and Zhu, Xiangyu and Lei, Zhen},
    title =        {3DDFA},
    howpublished = {\url{https://github.com/cleardusk/3DDFA}},
    year =         {2018}
}

Contact

Jianzhu Guo (郭建珠) [Homepage, Google Scholar]: guojianzhu1994@foxmail.com or guojianzhu1994@gmail.com or jianzhu.guo@nlpr.ia.ac.cn (this email will be invalid soon).