Gluon FR Toolkit

GluonFR is a toolkit based on MXNet Gluon that provides SOTA deep learning algorithms and models for face recognition.

Installation

GluonFR supports Python 3.5 or later. To install this package, you need to install GluonCV and MXNet first:

pip install gluoncv --pre
pip install mxnet-mkl --pre --upgrade
# if CUDA version XX is installed
pip install mxnet-cuXXmkl --pre --upgrade

Then install gluonfr:

# latest development version from the master branch
pip install git+https://github.com/THUFutureLab/gluon-face.git@master
# or the stable release from PyPI
pip install gluonfr
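
After installation, a quick sanity check confirms the whole stack imports cleanly:

# Verify that MXNet, GluonCV and GluonFR all import without errors.
import mxnet as mx
import gluoncv
import gluonfr

print(mx.__version__)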

GluonFR Introduction:

GluonFR is based on MXNet Gluon; if you are new to it, please check out the dmlc 60-minute crash course.

Data:

This part provides the input pipeline for training and validation. All datasets are aligned by MTCNN and cropped to (112, 112) by DeepInsight; they converted the images to train.rec, train.idx and val_data.bin files. Please check out [insightface/Dataset-Zoo] for more information. In data/dali_utils.py there is a simple example of NVIDIA DALI; it is worth trying when data augmentation on the CPU cannot keep up with the speed of GPU training.

The files should be prepared like:

face/
    emore/
        train.rec
        train.idx
        property
    ms1m/
        train.rec
        train.idx
        property
    lfw.bin
    agedb_30.bin
    ...
    vgg2_fp.bin

We use ~/.mxnet/datasets as the default dataset root to match the MXNet setting.
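
For example, the train.rec of a prepared dataset can be read with Gluon's standard record dataset. This is a minimal sketch; the emore path simply follows the layout above:

import os
from mxnet.gluon.data.vision import ImageRecordDataset

# Path follows the layout above under the default dataset root.
root = os.path.expanduser("~/.mxnet/datasets/face/emore")
train_set = ImageRecordDataset(os.path.join(root, "train.rec"))

img, label = train_set[0]  # img: (112, 112, 3) uint8 NDArray
print(img.shape, label)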

Model_Zoo:

mobile_facenet, res_attention_net, se_resnet...
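
As a sketch of intended usage, assuming a GluonCV-style get_model entry point (check gluonfr.model_zoo for the exact model names and signatures):

from gluonfr.model_zoo import get_model  # assumed GluonCV-style entry point

# "mobile_facenet" is taken from the list above; the exact registered
# name may differ, so consult gluonfr.model_zoo.
net = get_model("mobile_facenet")
net.initialize()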

Loss:

GluonFR provides implementations of recent losses, including SoftmaxCrossEntropyLoss, ArcLoss, TripletLoss, RingLoss, CosLoss, L2Softmax, ASoftmax, CenterLoss, ContrastiveLoss, ..., and we will keep updating them in the future.
If there is any method we overlooked, please open an issue.
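
A minimal sketch of how such a loss plugs into a Gluon training step, using ArcLoss; the argument names below (classes, m, s) follow the ArcFace paper and are assumptions, so check gluonfr.loss for the exact signature:

import mxnet as mx
from gluonfr.loss import ArcLoss  # class name from the list above

# Margin m and scale s as in the ArcFace paper; argument names are assumed.
loss_fn = ArcLoss(classes=10, m=0.5, s=64.0)

logits = mx.nd.random.normal(shape=(8, 10))    # cosine logits from the head
labels = mx.nd.array([0, 1, 2, 3, 4, 5, 6, 7])
loss = loss_fn(logits, labels)                 # per-sample loss values
print(loss.mean())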

Example:

examples/ shows how to use GluonFR to train a face recognition model, and how to visualize 2-D feature embeddings on MNIST.

Losses in GluonFR:

The last column of this table is the best LFW accuracy reported in the corresponding paper; the results were obtained with different data and networks. Later we will provide our own results for these methods with the same training data and network.

Method            | Paper                 | Visualization of MNIST                                   | LFW
Contrastive Loss  | ContrastiveLoss       | -                                                        | -
Triplet           | 1503.03832            | -                                                        | 99.63±0.09
Center Loss       | CenterLoss            | resources/mnist-euclidean/center-train-epoch100.png      | 99.28
L2-Softmax        | 1703.09507            | -                                                        | 99.33
A-Softmax         | 1704.08063            | -                                                        | 99.42
CosLoss/AMSoftmax | 1801.09414/1801.05599 | resources/minst-angular/cosloss-train-epoch95.png        | 99.17
Arcloss           | 1801.07698            | resources/minst-angular/arcloss-train-epoch100.png       | 99.82
Ring loss         | 1803.00130            | resources/mnist-euclidean/ringloss-train-epoch95-0.1.png | 99.52
LGM Loss          | 1803.02988            | resources/mnist-euclidean/LGMloss-train-epoch100.png     | 99.20±0.03

Pretrained Models

See the Model Zoo in the docs.

Todo

Docs

Please check out the link.
For the Chinese version: link

Authors

{ haoxintong Yangxv Haoyadong Sunhao }

Discussion

Chinese community: Gluon-Forum. Feel free to use English there :D.

References

  1. MXNet Documentation and Tutorials: https://zh.diveintodeeplearning.org/

  2. NVIDIA DALI documentation

  3. DeepInsight insightface