<div align="center"> <img src="assets/opensphere_logo2.png" width="600"/> </div> <div align="center">

License: MIT

OpenSphere is a hyperspherical face recognition library based on PyTorch. Check out the project homepage.

</div> &nbsp; <p align="center"> <img src="assets/teaser.gif" width="580"/> </p>

Introduction

OpenSphere provides a consistent, unified training and evaluation framework for hyperspherical face recognition research. The framework decouples the loss function from the other varying components, such as network architecture, optimizer, and data augmentation, enabling fair comparisons of different loss functions on popular benchmarks and serving as a transparent platform for reproducing published results.
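To make the "hyperspherical" idea concrete, here is a minimal NumPy sketch (illustrative only, not OpenSphere's actual implementation; the function names are ours): features and class weights are L2-normalized, so each logit is the cosine of the angle between an embedding and a class prototype, and SphereFace-style losses then apply an angular margin to the target class. SphereFace's real ψ(θ) is a piecewise monotonic variant; `cos(mθ)` below is the simplified form.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    """Project rows of x onto the unit hypersphere."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def cosine_logits(features, weights):
    """Cosine similarity between embeddings (N, d) and class weights (C, d)."""
    return l2_normalize(features) @ l2_normalize(weights).T

def sphereface_target_logit(cos_theta, m=4):
    """Simplified SphereFace-style margin: cos(theta) -> cos(m * theta)."""
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    return np.cos(m * theta)

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))    # 4 embeddings of dimension 8
protos = rng.normal(size=(3, 8))   # 3 class prototypes
logits = cosine_logits(feats, protos)
```

Because all vectors live on the unit sphere, every logit is bounded in [-1, 1]; the margin only shrinks the target-class logit, which forces the network to learn more angularly discriminative embeddings.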

<!-- TABLE OF CONTENTS -->

Table of Contents:

- <a href="#key-features">Key features</a>
- <a href="#setup">Setup</a>
- <a href="#get-started">Get started</a>
- <a href="#log-and-pretrained-models">Pretrained models</a>
- <a href="#reproduce-published-results">Reproducible results</a>
- <a href="#citation">Citation</a>

<details open> <summary>Supported Projects</summary> </details>

Update

Key Features

Setup

  1. Clone the OpenSphere repository. We will refer to the directory into which you cloned OpenSphere as $OPENSPHERE_ROOT.

    git clone https://github.com/ydwen/opensphere.git
    
  2. Create the virtual environment with Anaconda:

    conda env create -f environment.yml
    

Get started

Throughout this part, we assume you are in the directory $OPENSPHERE_ROOT. After completing the Setup, you are ready to run the following experiments.

  1. Download and process the datasets.

  2. Train a model (see the training config file for the detailed setup). We give a few examples for training on different datasets with different backbone architectures.

  3. Test a model (see the testing config file for the detailed setup).

For more information about how to use training and testing config files, please see here.
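As a rough illustration of the workflow (the exact script names, flags, and config paths below are assumptions; consult the repository for the actual commands), a typical train-then-test run might look like:

```shell
# Hypothetical invocation: script names and config paths may differ.
CUDA_VISIBLE_DEVICES=0,1,2,3 python train.py --config config/train/vggface2_sfnet20_sphereface.yml

# Evaluate the trained model on the benchmark suite.
CUDA_VISIBLE_DEVICES=0 python test.py --config config/test/combined.yml
```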

Results and pretrained models

| Loss | Architecture | Dataset | Config, Training Log & Pretrained Model |
| --- | --- | --- | --- |
| SphereFace | SFNet-20 (w/o BN) | VGGFace2 | Google Drive |
| SphereFace+ | SFNet-20 (w/o BN) | VGGFace2 | Google Drive |
| SphereFace-R (HFN, v2) | SFNet-20 (w/o BN) | VGGFace2 | Google Drive |
| SphereFace-R (SFN, v2) | SFNet-20 (w/o BN) | VGGFace2 | To be added |
| SphereFace2 | SFNet-20 (w/o BN) | VGGFace2 | Google Drive |
| SphereFace | SFNet-64 (w/ BN) | MS1M | Google Drive |
| SphereFace+ | SFNet-64 (w/ BN) | MS1M | Google Drive |
| SphereFace-R (HFN, v2) | SFNet-64 (w/ BN) | MS1M | To be added |
| SphereFace2 | SFNet-64 (w/ BN) | MS1M | To be added |
| SphereFace | IResNet-100 | MS1M | Google Drive |
| SphereFace+ | IResNet-100 | MS1M | Google Drive |
| SphereFace-R (HFN, v2) | IResNet-100 | MS1M | Google Drive |
| SphereFace2 | IResNet-100 | MS1M | To be added |
| SphereFace | SFNet-64 (w/ BN) | Glint360K | To be added |
| SphereFace+ | SFNet-64 (w/ BN) | Glint360K | To be added |
| SphereFace-R (HFN, v2) | SFNet-64 (w/ BN) | Glint360K | To be added |
| SphereFace2 | SFNet-64 (w/ BN) | Glint360K | To be added |
| SphereFace | IResNet-100 | Glint360K | To be added |
| SphereFace+ | IResNet-100 | Glint360K | To be added |
| SphereFace-R (HFN, v2) | IResNet-100 | Glint360K | To be added |
| SphereFace2 | IResNet-100 | Glint360K | To be added |

Reproduce published results

We provide an additional folder, config/papers, containing detailed config files for reproducing the results in published papers. Currently, we provide config files for the following papers:

Citation

If you find OpenSphere useful in your research, please consider citing:

For SphereFace:

@article{Liu2022SphereFaceR,
  title={SphereFace Revived: Unifying Hyperspherical Face Recognition},
  author={Liu, Weiyang and Wen, Yandong and Raj, Bhiksha and Singh, Rita and Weller, Adrian},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2022}
}

@inproceedings{Liu2017SphereFace,
  title={SphereFace: Deep Hypersphere Embedding for Face Recognition},
  author={Liu, Weiyang and Wen, Yandong and Yu, Zhiding and Li, Ming and Raj, Bhiksha and Song, Le},
  booktitle={CVPR},
  year={2017}
}

@inproceedings{Liu2016lsoftmax,
  title={Large-Margin Softmax Loss for Convolutional Neural Networks},
  author={Liu, Weiyang and Wen, Yandong and Yu, Zhiding and Yang, Meng},
  booktitle={ICML},
  year={2016}
}

For SphereFace+:

@inproceedings{Liu2018MHE,
  title={Learning towards Minimum Hyperspherical Energy},
  author={Liu, Weiyang and Lin, Rongmei and Liu, Zhen and Liu, Lixin and Yu, Zhiding and Dai, Bo and Song, Le},
  booktitle={NeurIPS},
  year={2018}
}

For SphereFace2:

@inproceedings{wen2021sphereface2,
  title={SphereFace2: Binary Classification is All You Need for Deep Face Recognition},
  author={Wen, Yandong and Liu, Weiyang and Weller, Adrian and Raj, Bhiksha and Singh, Rita},
  booktitle={ICLR},
  year={2022}
}

Contact

Yandong Wen and Weiyang Liu

Questions can also be left as issues in the repository. We will be happy to answer them.