
[Teaser image]

Unsupervised High-Fidelity Facial Texture Generation and Reconstruction
Ron Slossberg, Ibrahim Jubran, Ron Kimmel
European Conference on Computer Vision (ECCV) 2022
https://arxiv.org/abs/2110.04760

Abstract: Many methods have been proposed over the years to tackle the task of recovering facial 3D geometry and texture from a single image. Such methods often fail to provide high-fidelity texture without relying on 3D facial scans during training. In contrast, the complementary task of 3D facial generation has not received as much attention. As opposed to the 2D texture domain, where GANs have been shown to produce highly realistic facial images, the more challenging 3D domain has not yet caught up to the same levels of realism and diversity.

In this paper, we propose a novel unified pipeline for both tasks: generation of texture with coupled geometry, and reconstruction of high-fidelity texture. Our texture model is learned, in an unsupervised fashion, from natural images rather than scanned textures. To our knowledge, this is the first such unified framework that is independent of scanned textures.

Our novel training pipeline incorporates a pre-trained 2D facial generator coupled with a deep feature manipulation methodology. By applying our two-step geometry fitting process, we seamlessly integrate our modeled textures into synthetically generated background images, forming a realistic composition of our textured model with background, hair, teeth, and body. This enables us to apply transfer learning from the 2D image domain, thus leveraging the high-quality results obtained in that domain.

We provide a comprehensive study comparing our model against several recent methods on both the generation and reconstruction tasks. As our extensive qualitative and quantitative analyses demonstrate, we achieve state-of-the-art results on both tasks.

Result Summary

The main contributions of our work are as follows:

Example Results

Trained generator weights can be downloaded from Google Drive.
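Since this repo builds on StyleGAN2-ADA, the downloaded checkpoint should be usable with that codebase's standard generation script. The command below is a hypothetical sketch only: it assumes the repo retains StyleGAN2-ADA's `generate.py` interface, and `weights.pkl` stands in for the checkpoint downloaded from Google Drive.

```shell
# Hypothetical usage, assuming StyleGAN2-ADA's generate.py interface is kept.
# weights.pkl is a placeholder name for the downloaded generator checkpoint.
python generate.py --outdir=out --seeds=0-3 --network=weights.pkl
```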

Additional details, results, and ablation study are provided within our paper.

Citation

Our repo is based on StyleGAN2-ADA. If you use this code, please cite both the StyleGAN2-ADA paper and ours:

@inproceedings{Karras2020ada,
  title     = {Training Generative Adversarial Networks with Limited Data},
  author    = {Tero Karras and Miika Aittala and Janne Hellsten and Samuli Laine and Jaakko Lehtinen and Timo Aila},
  booktitle = {Proc. NeurIPS},
  year      = {2020}
}
@inproceedings{slossberg2022unsupervised,
  title     = {Unsupervised High-Fidelity Facial Texture Generation and Reconstruction},
  author    = {Slossberg, Ron and Jubran, Ibrahim and Kimmel, Ron},
  booktitle = {Computer Vision -- ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23--27, 2022, Proceedings, Part XIII},
  pages     = {212--229},
  year      = {2022},
  organization = {Springer}
}