# TRUST: Towards Racially Unbiased Skin Tone Estimation via Scene Disambiguation (ECCV 2022)
<p align="center"> <img src="teaser_final_v5_font_change.JPG"> </p>

This is the official PyTorch implementation of TRUST.
- We identify, analyze and quantify the problem of biased facial albedo estimation.
- We propose the FAIR Challenge, a new synthetic benchmark with a novel evaluation protocol that measures albedo estimation in terms of skin tone accuracy and diversity.
- We propose TRUST, a new network that estimates facial albedo from a single image with higher accuracy and less skin tone bias, so that the reconstructed 3D head avatar is both faithful and inclusive.
Please refer to the arXiv paper for more details.
## Getting Started
Clone the repo:

```bash
git clone https://github.com/HavenFeng/TRUST/
cd TRUST
```
### Requirements
- Python 3.8 (numpy, skimage, scipy, opencv)
- PyTorch >= 1.7 (pytorch3d compatible)
You can install the dependencies with:

```bash
pip install -r requirements.txt
```

If you encounter errors when installing PyTorch3D, please follow the official installation guide to reinstall the library.
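Before running the demo, it can help to verify that the environment matches the requirements above. A minimal sketch (not part of the repo; the `meets_min` helper is our own illustration):

```python
import sys

def meets_min(version_str, minimum):
    """Return True if a dotted version string (e.g. '1.7.1' or
    '1.13.0+cu117') is at least the given minimum version tuple."""
    parts = []
    for tok in version_str.split("."):
        digits = "".join(ch for ch in tok if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts) >= minimum

def check_env():
    """Best-effort sanity check of the TRUST requirements listed above."""
    assert sys.version_info[:2] >= (3, 8), "Python 3.8+ required"
    import torch
    assert meets_min(torch.__version__, (1, 7)), "PyTorch >= 1.7 required"
    import pytorch3d  # noqa: F401 -- raises ImportError if PyTorch3D is missing
```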
## Usage
1. Prepare data and models

   Please check our project website to download the FAIR benchmark dataset and our released pretrained models. After downloading the pretrained models, put them in `./data`.

2. Run test

   a. FAIR benchmark

   ```bash
   python test.py --test_folder '/path/to/trust_models' --test_split val
   ```

   Change the `--test_split` flag to run on the test or validation set.
## Evaluation
On the FAIR Challenge, TRUST (ours) achieves a 57% lower error on the total score than the previous state-of-the-art method (35% lower Average ITA error, 77% lower Bias error).
For more details on the evaluation, please check our arXiv paper.
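The Average ITA error above is based on the Individual Typology Angle (ITA), a standard dermatology measure of skin tone computed from CIELAB values as arctan((L* − 50) / b*) in degrees. The sketch below is our own illustration, not the official evaluation script: the benchmark's skin masking and averaging details are omitted, and `atan2` is used for robustness when b* is zero.

```python
import math

def ita_degrees(L, b):
    """Individual Typology Angle (ITA) from CIELAB lightness L* and b*.

    ITA = arctan((L* - 50) / b*) * 180 / pi; higher ITA means lighter skin.
    Uses atan2 so b* = 0 does not divide by zero.
    """
    return math.degrees(math.atan2(L - 50.0, b))

def ita_error(pred_lab, gt_lab):
    """Absolute ITA difference between a predicted and a ground-truth
    albedo color, each given as an (L*, a*, b*) tuple."""
    pred_ita = ita_degrees(pred_lab[0], pred_lab[2])
    gt_ita = ita_degrees(gt_lab[0], gt_lab[2])
    return abs(pred_ita - gt_ita)
```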
## Citation
If you find our work useful to your research, please consider citing:
```bibtex
@inproceedings{Feng:TRUST:ECCV2022,
  title = {Towards Racially Unbiased Skin Tone Estimation via Scene Disambiguation},
  author = {Feng, Haiwen and Bolkart, Timo and Tesch, Joachim and Black, Michael J. and Abrevaya, Victoria},
  booktitle = {European Conference on Computer Vision},
  year = {2022}
}
```
## Notes
Training code will also be released in the future.
## License
This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms in the LICENSE.
## Acknowledgements
For functions or scripts that are based on external sources, we acknowledge the origin individually in each file.
Here are some great resources we benefit from:
- DECA for the general framework of 3D face reconstruction
- FLAME_PyTorch and TF_FLAME for the FLAME model
- Pytorch3D, neural_renderer, SoftRas for rendering
- kornia for image/rotation processing
- face-alignment for cropping
- FAN for landmark detection
- face_segmentation for skin mask
We would also like to thank other recent public 3D face reconstruction works that allow us to easily perform quantitative and qualitative comparisons :)
DECA, Deep3DFaceReconstruction, GANFit, INORig, MGCNet
This work was partly supported by the German Federal Ministry of Education and Research (BMBF): Tuebingen AI Center, FKZ: 01IS18039B.