DeepFace-EMD: Re-ranking Using Patch-wise Earth Mover’s Distance Improves Out-Of-Distribution Face Identification

TL;DR: Improve OOD face identification (e.g. on masked faces) by harnessing pre-trained face models for patch-wise, similarity-based re-ranking. Accuracy improves without any further training and without synthetic or augmented data.

Official Implementation for the paper DeepFace-EMD: Re-ranking Using Patch-wise Earth Mover’s Distance Improves Out-Of-Distribution Face Identification (2022) by Hai Phan and Anh Nguyen.

:star2: Online web demo: https://aub.ie/face (video)

If you use this software, please consider citing:

@inproceedings{hai2022deepface,
  title={DeepFace-EMD: Re-ranking Using Patch-wise Earth Mover’s Distance Improves Out-Of-Distribution Face Identification},
  author={Hai Phan and Anh Nguyen},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2022}
}

1. Requirements

Python >= 3.5
PyTorch > 1.0
OpenCV >= 3.4.4
pip install tqdm
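
If you prefer a one-shot install, the following pip command is only a sketch (the PyPI package names torch, opencv-python, and tqdm are assumptions; match the version requirements above and your CUDA setup):

pip install torch opencv-python tqdm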

2. Download datasets and pretrained models

  1. Download LFW, out-of-distribution (OOD) LFW test sets, and pretrained models: Google Drive

  2. Create the following folders:

mkdir data
mkdir pretrained

  3. Extract the LFW datasets (e.g. lfw_crop_96x112.tar.gz) to data/
  4. Copy the models (e.g. resnet18_110.pth) to pretrained/ (see the example below)
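
For example (a sketch only; the actual archive and model filenames depend on what you downloaded from the Google Drive link):

tar -xzf lfw_crop_96x112.tar.gz -C data/
cp resnet18_110.pth pretrained/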

3. How to run

3.1 Run examples

3.2 Reproduce results

3.3 Run visualization with two images

python visualize_faces.py -method [method] -fm [face model] -model_path [model dir] -in1 [1st image] -in2 [2nd image] -weight [1/0: show weight heatmaps]

The results are saved in results/flow and results/heatmap (if the -weight flag is on).
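
For example (a hypothetical invocation; the values apc and arcface and the image paths are assumptions, so check the script's argument parser for the accepted choices):

python visualize_faces.py -method apc -fm arcface -model_path pretrained/resnet18_110.pth -in1 data/person1.jpg -in2 data/person2.jpg -weight 1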

3.4 Use your own images

  1. Facial alignment. See align_face.py for details; a minimal sketch is also shown after this list.

pip install scikit-image
pip install face-alignment

ref_pts = [[61.4356, 54.6963], [118.5318, 54.6963], [93.5252, 90.7366], [68.5493, 122.3655], [110.7299, 122.3641]]
crop_size = (160, 160)

  2. Create a folder containing all persons (one sub-folder per person, named after that person) and put it in data/.
  3. Create a txt file with the format [image_path],[label] for that folder (see the LFW file for details).
  4. Modify the face loader: add your txt file in the function get_face_dataloader.
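
The following is a minimal alignment sketch, not the repo's align_face.py: the 68-to-5 landmark mapping and the face-alignment API usage are assumptions (older face-alignment releases use LandmarksType._2D instead of LandmarksType.TWO_D). It warps a detected face onto the five reference points above:

import cv2
import numpy as np
import face_alignment
from skimage import transform as trans

# Five reference points (eye centers, nose tip, mouth corners) and crop size from above.
ref_pts = np.array([[61.4356, 54.6963], [118.5318, 54.6963], [93.5252, 90.7366],
                    [68.5493, 122.3655], [110.7299, 122.3641]], dtype=np.float32)
crop_size = (160, 160)

# 68-point landmark detector from the face-alignment package.
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType.TWO_D, device='cpu')

def align(img_path, out_path):
    img = cv2.imread(img_path)
    landmarks = fa.get_landmarks(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    if not landmarks:
        raise ValueError('No face found in %s' % img_path)
    lm = landmarks[0]
    # Reduce the 68 landmarks to 5 points: eye centers, nose tip, mouth corners.
    src = np.array([lm[36:42].mean(axis=0), lm[42:48].mean(axis=0),
                    lm[30], lm[48], lm[54]], dtype=np.float32)
    # Estimate the similarity transform mapping the detected points onto the reference points.
    tform = trans.SimilarityTransform()
    tform.estimate(src, ref_pts)
    aligned = cv2.warpAffine(img, tform.params[:2], crop_size)
    cv2.imwrite(out_path, aligned)

align('data/my_person/photo.jpg', 'data/my_person/photo_aligned.jpg')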

3.5 Training on masked images

See this folder for the data and instructions for training on masked images, following the experiments in Sec. 4.3 (Re-ranking rivals finetuning on masked images) of the paper.

4. Demo

Based on StyleGAN2, we built a Streamlit-powered demo (shown in the video above) that allows users to first choose a real image, manipulate it (e.g. by adding masks or sunglasses, or changing facial expression via StyleGAN2), and then feed it into our DeepFace-EMD system. The demo lets one interactively probe the standard MIPS search as well as our search system and compare their robustness and accuracy qualitatively.

All the code to run this Streamlit app on your own server (GPU required) is here.

5. License

MIT

6. References

  1. W. Zhao, Y. Rao, Z. Wang, J. Lu, J. Zhou. Towards interpretable deep metric learning with structural matching, ICCV 2021. DIML
  2. J. Deng, J. Guo, N. Xue, S. Zafeiriou. ArcFace: Additive angular margin loss for deep face recognition, CVPR 2019. ArcFace PyTorch
  3. H. Wang, Y. Wang, Z. Zhou, X. Ji, D. Gong, J. Zhou, Z. Li, W. Liu. CosFace: Large margin cosine loss for deep face recognition, CVPR 2018. CosFace PyTorch
  4. F. Schroff, D. Kalenichenko, J. Philbin. FaceNet: A unified embedding for face recognition and clustering, CVPR 2015. FaceNet PyTorch
  5. W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, L. Song. SphereFace: Deep hypersphere embedding for face recognition, CVPR 2017. SphereFace, SphereFace PyTorch
  6. C. Zhang, Y. Cai, G. Lin, C. Shen. DeepEMD: Differentiable earth mover's distance for few-shot learning, CVPR 2020. paper