TransFace

This is the official PyTorch implementation of [ICCV-2023] TransFace: Calibrating Transformer Training for Face Recognition from a Data-Centric Perspective.

[arXiv Version]

News

FaceChain-FACT project page: https://facechain-fact.github.io/ · YouTube

The entire framework of FaceChain-FACT is shown in the figure below.

(Figure: overview of the FaceChain-FACT framework)

ModelScope

You can quickly experience and invoke our TransFace model on ModelScope.

# Usage: Input aligned facial images (112x112) to obtain a 512-dimensional facial feature vector.
# For convenience, the model integrates the RetinaFace model for face detection and keypoint estimation.
# Provide two images as input, and for each image, the model will independently perform face detection,
# select the largest face, align it, and extract the corresponding facial features.
# Finally, the model will return a similarity score indicating the resemblance between the two faces.

from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks
from modelscope.outputs import OutputKeys
import numpy as np

face_recognition_func = pipeline(Tasks.face_recognition, 'damo/cv_vit_face-recognition')
img1 = 'https://modelscope.oss-cn-beijing.aliyuncs.com/test/images/face_recognition_1.png'
img2 = 'https://modelscope.oss-cn-beijing.aliyuncs.com/test/images/face_recognition_2.png'
emb1 = face_recognition_func(img1)[OutputKeys.IMG_EMBEDDING]
emb2 = face_recognition_func(img2)[OutputKeys.IMG_EMBEDDING]
# assuming L2-normalized embeddings, the dot product equals the cosine similarity
sim = np.dot(emb1[0], emb2[0])
print(f'Face cosine similarity={sim:.3f}, img1:{img1}  img2:{img2}')
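Note that np.dot above equals cosine similarity only when the returned embeddings are L2-normalized. If you want to be defensive about that assumption, a small helper (hypothetical, not part of the ModelScope API) makes the normalization explicit:

import numpy as np

def cosine_similarity(a, b):
    # flatten and normalize defensively; the epsilon guards against zero vectors
    a = np.asarray(a, dtype=np.float32).ravel()
    b = np.asarray(b, dtype=np.float32).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# e.g. sim = cosine_similarity(emb1[0], emb2[0])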

Requirements

Datasets

You can download the training datasets MS1MV2 and Glint360K.

You can download the IJB-C test dataset.
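
Both MS1MV2 and Glint360K are typically distributed in insightface's binary RecordIO format (train.rec / train.idx). As a quick sanity check that a download is readable (a minimal sketch; assumes the mxnet package is installed):

import mxnet as mx

# assumes the dataset directory contains train.rec / train.idx (insightface RecordIO format)
record = mx.recordio.MXIndexedRecordIO('train.idx', 'train.rec', 'r')
header, img_bytes = mx.recordio.unpack(record.read_idx(1))
print('label of first sample:', header.label)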

How to Train Models

  1. Modify the training-data path in every configuration file in the configs folder (see the hypothetical config excerpt after the launch command below).

  2. To run on a machine with 8 GPUs:

python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=12581 train.py 
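
Step 1 above refers to edits like the following. This is a hypothetical excerpt in the EasyDict style of insightface's arcface_torch, on which TransFace builds; check your checkout for the exact file and field names:

# configs/ms1mv2_vit_s.py -- hypothetical excerpt
from easydict import EasyDict as edict

config = edict()
config.rec = '/data/ms1mv2'    # point this at your local copy of the training set
config.num_classes = 85742     # identities in MS1MV2
config.num_image = 5822653     # images in MS1MV2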

How to Test Models

  1. Modify the path of the IJB-C dataset in eval_ijbc.py.

  2. Run:

python eval_ijbc.py --model-prefix work_dirs/glint360k_vit_s/model.pt --result-dir work_dirs/glint360k_vit_s --network vit_s_dp005_mask_0 > ijbc_glint360k_vit_s.log 2>&1 &

TransFace Pretrained Models

You can download the TransFace models reported in our paper as follows:

| Training Data | Model | IJB-C (1e-6) | IJB-C (1e-5) | IJB-C (1e-4) | IJB-C (1e-3) | IJB-C (1e-2) | IJB-C (1e-1) |
|---|---|---|---|---|---|---|---|
| MS1MV2 | TransFace-S | 86.75 | 93.87 | 96.45 | 97.51 | 98.34 | 98.99 |
| MS1MV2 | TransFace-B | 86.73 | 94.15 | 96.55 | 97.73 | 98.47 | 99.11 |
| MS1MV2 | TransFace-L | 86.90 | 94.55 | 96.59 | 97.80 | 98.45 | 99.04 |

| Training Data | Model | IJB-C (1e-6) | IJB-C (1e-5) | IJB-C (1e-4) | IJB-C (1e-3) | IJB-C (1e-2) | IJB-C (1e-1) |
|---|---|---|---|---|---|---|---|
| Glint360K | TransFace-S | 89.93 | 96.06 | 97.33 | 98.00 | 98.49 | 99.11 |
| Glint360K | TransFace-B | 88.64 | 96.18 | 97.45 | 98.17 | 98.66 | 99.23 |
| Glint360K | TransFace-L | 89.71 | 96.29 | 97.61 | 98.26 | 98.64 | 99.19 |
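
Each IJB-C column reports TAR (%) at the stated FAR under the 1:1 verification protocol. For intuition, here is a minimal, self-contained sketch of how TAR@FAR is computed from similarity scores and match labels (illustrative only; it is not the repo's evaluation code):

import numpy as np

def tar_at_far(scores, labels, far):
    # threshold is set so that roughly `far` of impostor pairs are accepted,
    # then TAR is the fraction of genuine pairs at or above that threshold
    genuine = scores[labels == 1]
    impostor = np.sort(scores[labels == 0])[::-1]  # descending
    k = max(int(far * len(impostor)), 1)           # impostors allowed to pass
    threshold = impostor[k - 1]
    return float(np.mean(genuine >= threshold))

# toy example with synthetic genuine/impostor score distributions
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.6, 0.1, 1000), rng.normal(0.1, 0.1, 100000)])
labels = np.concatenate([np.ones(1000, dtype=int), np.zeros(100000, dtype=int)])
print(f'TAR@FAR=1e-4: {tar_at_far(scores, labels, 1e-4):.4f}')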

You can test the accuracy of these models (e.g., Glint360K TransFace-L):

python eval_ijbc.py --model-prefix work_dirs/glint360k_vit_l/glint360k_model_TransFace_L.pt --result-dir work_dirs/glint360k_vit_l --network vit_l_dp005_mask_005 > ijbc_glint360k_vit_l.log 2>&1 &
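
If you prefer to use a downloaded checkpoint for feature extraction directly, the following sketch shows the general idea, assuming the repo exposes the same get_model backbone factory as insightface's arcface_torch (verify the import path and network names against your checkout):

import torch
from backbones import get_model  # assumption: TransFace keeps arcface_torch's backbone factory

# build the ViT-L backbone and load the downloaded weights
net = get_model('vit_l_dp005_mask_005')
net.load_state_dict(torch.load('work_dirs/glint360k_vit_l/glint360k_model_TransFace_L.pt', map_location='cpu'))
net.eval()

img = torch.randn(1, 3, 112, 112)  # stand-in for an aligned 112x112 face, normalized to [-1, 1]
with torch.no_grad():
    feat = net(img)                # (1, 512) facial feature vector
print(feat.shape)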

Citation

@inproceedings{dan2023transface,
  title={TransFace: Calibrating Transformer Training for Face Recognition from a Data-Centric Perspective},
  author={Dan, Jun and Liu, Yang and Xie, Haoyu and Deng, Jiankang and Xie, Haoran and Xie, Xuansong and Sun, Baigui},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={20642--20653},
  year={2023}
}

Acknowledgments

We thank InsightFace for the excellent code base.