# Occlusion Robust 3D Face Reconstruction
Code for the occlusion-robust 3D face reconstruction presented in "Complete Face Recovery GAN: Unsupervised Joint Face Rotation and De-Occlusion from a Single-View Image" (WACV 2022).
<img src="./data/figure1_3d.png" style="zoom:60%;" />

Yeong-Joon Ju, Gun-Hee Lee, Jung-Ho Hong, and Seong-Whan Lee
## Abstract
We propose a novel two-stage fine-tuning strategy for occlusion-robust 3D face reconstruction. Training is split into two stages because training directly on extreme occlusions is difficult: in the first stage we fine-tune the baseline on our newly created occlusion datasets, and in the second stage we apply a teacher-student learning method.
Our baseline is Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set, and our implementation builds on its code. Note that we focus on alignment and colors to guide CFR-GAN on occluded facial images.
## Blended results
<img src="./data/figure2_blend.png" style="zoom:60%;" />

The first row shows the baseline results; the second row shows ours.
## Requirements
- Python 3.7 or 3.8
- `pip install -r requirements.txt`
- PyTorch3D 0.2.5 (`pytorch3d==0.2.5`)
- Basel Face Model 2009 (BFM09) and the Expression Basis (transferred from FaceWarehouse by Guo et al.). The original BFM09 model does not handle expression variations, so an extra expression basis is needed.
  - However, we provide `BFM_model_80.mat` (the identity and texture coefficients are 80-dimensional). Download it and move it to the `mmRegressor/BFM` folder.
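To make the role of the identity and expression bases concrete, here is a minimal NumPy sketch of the linear 3DMM used by BFM-style models. The 80-dimensional identity/texture coefficients match the model above; the 64-dimensional expression coefficient (the split used by the Deng et al. baseline) and all array contents are illustrative assumptions.

```python
import numpy as np

# Toy dimensions: N vertices; 80-dim identity and 64-dim expression
# coefficients (64 is assumed from the Deng et al. baseline split).
N = 1000
rng = np.random.default_rng(0)

mean_shape = rng.normal(size=3 * N)        # mean face geometry
id_basis = rng.normal(size=(3 * N, 80))    # identity basis (BFM09)
exp_basis = rng.normal(size=(3 * N, 64))   # expression basis (FaceWarehouse transfer)

def reconstruct_shape(id_coef, exp_coef):
    """Linear 3DMM: shape = mean + B_id @ alpha + B_exp @ beta."""
    return mean_shape + id_basis @ id_coef + exp_basis @ exp_coef

# Zero coefficients recover the mean face.
shape = reconstruct_shape(np.zeros(80), np.zeros(64))
```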
## Inference
- Download our trained weights to the `saved_models` folder.
For alignment, you can use MTCNN or RetinaFace, but we recommend RetinaFace.
```
git clone https://github.com/biubug6/Pytorch_Retinaface.git
```

Download the weights.
Estimate 3D faces from your images:

```
python inference.py --img_path [your image path] --save_path [your save path] --model_path [WEIGHT PATH]
```
## Training with your dataset
<img src="data/app_occ_ex.png" style="zoom:60%;" />

Preprocessing:
Prepare your own dataset for data augmentation. The datasets used in this paper can be downloaded from their respective project pages.
Unless a dataset already provides facial landmark labels, you must predict the landmarks yourself. We recommend 3DDFA_v2. If you want to reduce error propagation from the facial alignment networks, prepend a flag to the filename (e.g., "pred" + [filename]).
Training an occlusion-robust 3D face model requires occluded face image datasets, but none are publicly available, so we create our own by synthesizing hand-shaped masks onto face images:

```
python create_train_stage1.py --img_path [your image folder] --lmk_path [your landmarks folder] --save_path [path to save]
```
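At its core, this augmentation composites an occluder cutout over the face via alpha blending; a minimal NumPy sketch of that operation (the actual script also randomizes the mask's position, scale, and rotation; the arrays here are toy stand-ins):

```python
import numpy as np

def composite_occlusion(face, occluder, alpha):
    """Alpha-blend an occluder (e.g. a hand-shaped cutout) over a face image.

    face, occluder: float arrays of shape (H, W, 3) with values in [0, 1]
    alpha: float array of shape (H, W, 1), 1 where the occluder is opaque
    """
    return alpha * occluder + (1.0 - alpha) * face

H, W = 64, 64
face = np.full((H, W, 3), 0.8)       # toy "face": flat gray image
occluder = np.zeros((H, W, 3))       # toy occluder: black patch
alpha = np.zeros((H, W, 1))
alpha[20:40, 20:40] = 1.0            # occlude a 20x20 region

out = composite_occlusion(face, occluder, alpha)
```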
For the first training stage, prepare the folders `occluded` (augmented images), `ori_img` (original images), and `landmarks` (3D landmarks), or modify the folder names in `train_stage1.py`.
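For reference, a plausible directory layout under these conventions (all filenames here are hypothetical):

```
train_data/
├── occluded/     # augmented (occluded) images
│   └── 0001.jpg
├── ori_img/      # original images
│   └── 0001.jpg
└── landmarks/    # 3D landmarks
    └── 0001.npy
```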
**You must align the images with `align.py`.**
The meta file format is:

```
[filename] [left eye x] [left eye y] [right eye x] [right eye y] [nose x] [nose y] [left mouth x] [left mouth y] ...
```
You can use MTCNN or RetinaFace to detect these landmarks.
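A minimal sketch of writing one meta line in this format, assuming you already have the five 2D landmarks (left eye, right eye, nose, left/right mouth corners) from MTCNN or RetinaFace; the function name and coordinate values are hypothetical.

```python
def format_meta_line(filename, landmarks):
    """landmarks: iterable of five (x, y) pairs in the order
    left eye, right eye, nose, left mouth corner, right mouth corner."""
    coords = " ".join(f"{x:.2f} {y:.2f}" for x, y in landmarks)
    return f"{filename} {coords}"

line = format_meta_line("0001.jpg",
                        [(88.0, 110.5), (152.3, 109.8), (120.1, 150.0),
                         (95.4, 185.2), (146.7, 184.9)])
print(line)
# → 0001.jpg 88.00 110.50 152.30 109.80 120.10 150.00 95.40 185.20 146.70 184.90
```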
First Fine-tuning Stage:
Instead of a skin mask, we use BiSeNet, a face parsing network. The code and weights were modified and re-trained from this code. <u>If you want sharper textures, use the baseline's skin detector.</u>
- Download the face parsing network weights to the `faceParsing` folder.
- Download the baseline 3D network weights to the `mmRegressor/network` folder.
- Download the face recognition network weights to the `saved_models` folder. These weights were trained specifically for the stage-1 training.
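A face parsing network such as BiSeNet outputs a per-pixel class map, which can be thresholded into a binary skin mask. A minimal NumPy sketch; which label IDs count as "skin" depends on the checkpoint's label set, so `SKIN_LABELS` below is an assumption you should adapt to your weights.

```python
import numpy as np

# Hypothetical label IDs for "facial skin" regions; adapt to your checkpoint.
SKIN_LABELS = {1, 2, 3, 10, 12, 13}

def skin_mask(parsing_map):
    """parsing_map: (H, W) int array of per-pixel class IDs -> boolean mask."""
    return np.isin(parsing_map, list(SKIN_LABELS))

toy = np.array([[0, 1],      # 0 = e.g. background, 1 = e.g. skin
                [10, 18]])   # 10 = e.g. nose, 18 = e.g. hat
mask = skin_mask(toy)
# mask == [[False, True], [True, False]]
```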
Train the occlusion-robust 3D face model:

```
python train_stage1.py
```
To show logs:

```
tensorboard --logdir=logs_stage1 --bind_all --reload_multifile True
```
Second Fine-tuning Stage:
Train:

```
python train_stage2.py
```

To show logs:

```
tensorboard --logdir=logs_stage2 --bind_all --reload_multifile True
```
## Evaluation

```
python evaluation/benchmark_nme_aflw_2000.py
```

If you would like to evaluate your own results, please refer to `estimate_aflw2000.py`.
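For reference, NME on AFLW2000-3D is typically the mean per-landmark L2 error normalized by the ground-truth bounding-box size sqrt(w·h). A NumPy sketch of that metric (the exact normalization in the provided script may differ, so treat this as an assumption):

```python
import numpy as np

def nme(pred, gt):
    """pred, gt: (68, 2) landmark arrays. Mean per-landmark L2 error,
    normalized by the ground-truth landmark bounding box sqrt(w * h)."""
    w, h = gt.max(axis=0) - gt.min(axis=0)
    per_point = np.linalg.norm(pred - gt, axis=1)
    return per_point.mean() / np.sqrt(w * h)

# Toy example: landmarks along a diagonal, predictions offset by (1, 1),
# so each per-point error is sqrt(2) and the bbox normalizer is 100.
gt = np.stack([np.linspace(0, 100, 68), np.linspace(0, 100, 68)], axis=1)
pred = gt + 1.0
print(f"{nme(pred, gt):.4f}")  # → 0.0141
```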
## Citation
```
@InProceedings{Ju_2022_WACV,
    author    = {Ju, Yeong-Joon and Lee, Gun-Hee and Hong, Jung-Ho and Lee, Seong-Whan},
    title     = {Complete Face Recovery GAN: Unsupervised Joint Face Rotation and De-Occlusion From a Single-View Image},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2022},
    pages     = {3711-3721}
}

@inproceedings{deng2019accurate,
    title     = {Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set},
    author    = {Yu Deng and Jiaolong Yang and Sicheng Xu and Dong Chen and Yunde Jia and Xin Tong},
    booktitle = {IEEE Computer Vision and Pattern Recognition Workshops},
    year      = {2019}
}
```
## Acknowledgement

This work was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00079, Artificial Intelligence Graduate School Program (Korea University)).