InsightFace-REST

WARNING: The latest update may cause problems with previously compiled Numba functions. If you encounter any "module not found" errors, run the following command in the repo root to remove `__pycache__` directories:

```
find . | grep -E "(__pycache__|\.pyc$)" | sudo xargs rm -rf
```

This repository aims to provide a convenient, easily deployable and scalable REST API for the InsightFace face detection and recognition pipeline, using FastAPI for serving and NVIDIA TensorRT for optimized inference.

The code is heavily based on the API code in the official DeepInsight InsightFace repository.

This repository provides source code for building a face recognition REST API and for converting models to ONNX and TensorRT using Docker.

Draw detections example

Key features:

Contents

List of supported models

Prerequisites

Running with Docker

API usage

Work in progress

Known issues

Changelog

List of supported models:

Detection:

| Model | Auto download | Batch inference | Detection (ms) | Inference (ms) | GPU-Util (%) | Source | ONNX File |
|---|---|---|---|---|---|---|---|
| retinaface_r50_v1 | Yes* | | 12.3 | 8.4 | 26 | official package | link |
| retinaface_mnet025_v1 | Yes* | | 8.6 | 4.6 | 17 | official package | link |
| retinaface_mnet025_v2 | Yes* | | 8.8 | 4.9 | 17 | official package | link |
| mnet_cov2 | Yes* | | 8.7 | 4.6 | 18 | mnet_cov2 | link |
| centerface | Yes | | 10.6 | 3.5 | 19 | Star-Clouds/CenterFace | link |
| scrfd_10g_bnkps | Yes* | Yes | 3.3 | 2 | 16 | SCRFD | link |
| scrfd_2.5g_bnkps | Yes* | Yes | 2.2 | 1.1 | 13 | SCRFD | link |
| scrfd_500m_bnkps | Yes* | Yes | 1.9 | 0.8 | 13 | SCRFD | link |
| scrfd_10g_gnkps | Yes* | Yes | 3.3 | 2.2 | 17 | SCRFD** | link |
| scrfd_2.5g_gnkps | Yes* | Yes | 2.3 | 1.2 | 14 | SCRFD** | link |
| scrfd_500m_gnkps | Yes* | Yes | 2.1 | 1.3 | 14 | SCRFD** | link |
| yolov5s-face | Yes* | Yes | | | | yolov5-face | link |
| yolov5m-face | Yes* | Yes | | | | yolov5-face | link |
| yolov5l-face | Yes* | Yes | | | | yolov5-face | link |

Note: Performance metrics were measured on an NVIDIA RTX 2080 Super + Intel Core i7-5820K (3.3 GHz, 6 cores) for api/src/test_images/lumia.jpg with force_fp16=True, det_batch_size=1 and max_size=640,640.

Detection time includes inference and pre- and postprocessing, but does not include image reading, decoding and resizing.

Note 2: SCRFD family models require an input image shape divisible by 32, e.g. 640x640 or 1024x768.
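
The divisible-by-32 constraint can be satisfied by rounding each dimension up before resizing or padding. A minimal sketch (the helper name `round_up_to_32` is illustrative, not part of this repo's API):

```python
def round_up_to_32(x: int) -> int:
    """Round x up to the nearest multiple of 32 (SCRFD input-shape constraint)."""
    return ((x + 31) // 32) * 32

# 640 is already a valid size; 650 would need to be padded/resized up to 672.
print(round_up_to_32(640), round_up_to_32(650))
```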

Recognition:

| Model | Auto download | Batch inference | Inference b=1 (ms) | Inference b=64 (ms) | Source | ONNX File |
|---|---|---|---|---|---|---|
| arcface_r100_v1 | Yes* | Yes | 2.6 | 54.8 | official package | link |
| r100-arcface-msfdrop75 | No | Yes | - | - | SubCenter-ArcFace | None |
| r50-arcface-msfdrop75 | No | Yes | - | - | SubCenter-ArcFace | None |
| glint360k_r100FC_1.0 | No | Yes | - | - | Partial-FC | None |
| glint360k_r100FC_0.1 | No | Yes | - | - | Partial-FC | None |
| glintr100 | Yes* | Yes | 2.6 | 54.7 | official package | link |
| w600k_r50 | Yes* | Yes | 1.9 | 33.2 | official package | link |
| w600k_mbf | Yes* | Yes | 0.7 | 9.9 | official package | link |
| adaface_ir101_webface12m | Yes* | Yes | - | - | AdaFace repo | link |

Other:

| Model | Auto download | Inference code | Source | ONNX File |
|---|---|---|---|---|
| genderage_v1 | Yes* | Yes | official package | link |
| mask_detector | Yes* | Yes | Face-Mask-Detection | link |
| mask_detector112 | Yes* | Yes | Face-Mask-Detection*** | link |
| 2d106det | No | No | coordinateReg | None |

* - Models will be downloaded from Google Drive, which might be inaccessible in some regions like China.

** - Custom models retrained for this repo. Original SCRFD models have a bug (deepinsight/insightface#1518) detecting large faces that occupy more than 40% of the image. These models were retrained with Group Normalization instead of Batch Normalization, which fixes the bug, though at the cost of some accuracy.

Model accuracy on the WiderFace benchmark:

| Model | Easy | Medium | Hard |
|---|---|---|---|
| scrfd_10g_gnkps | 95.51 | 94.12 | 82.14 |
| scrfd_2.5g_gnkps | 93.57 | 91.70 | 76.08 |
| scrfd_500m_gnkps | 88.70 | 86.11 | 63.57 |

*** - Custom model retrained for a 112x112 input size to remove excessive resize operations and improve performance.

Requirements:

  1. Docker
  2. NVIDIA Container Toolkit
  3. NVIDIA GPU drivers (470.x.x and above)

Running with Docker:

  1. Clone repo.
  2. Execute deploy_trt.sh from repo's root, edit settings if needed.
  3. Go to http://localhost:18081 to access documentation and try API

If you have multiple GPUs with enough GPU memory, you can try running multiple containers by editing the n_gpu and n_workers parameters in deploy_trt.sh.

By default the container is configured to build TRT engines without FP16 support; to enable it, change the value of force_fp16 to True in deploy_trt.sh. Keep in mind that your GPU should support fast FP16 inference (NVIDIA GPUs of the RTX 20xx series and above, or server GPUs like the Tesla P100, T4, etc.).
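
For reference, the relevant settings block in deploy_trt.sh might look something like this. The variable names n_gpu, n_workers and force_fp16 come from the text above; the values shown are purely illustrative, so check the actual script before editing:

```shell
# Illustrative deploy_trt.sh settings -- adjust to your hardware.
n_gpu=1          # number of GPUs to spread containers across
n_workers=6      # worker processes per container
force_fp16=True  # requires a GPU with fast FP16 (RTX 20xx+, Tesla P100, T4, ...)
```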

Also, if you want to test the API in a non-GPU environment, you can run the service with the deploy_cpu.sh script. In this case ONNX Runtime will be used as the inference backend.

For a pure MXNet-based version without TensorRT support, you can check the deprecated v0.5.0 branch.

API usage:

For an example of API usage, please refer to the demo_client.py code.
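
As a minimal client sketch, assuming the service accepts base64-encoded images in a JSON body: the /extract endpoint path and the exact payload layout here are assumptions, so check demo_client.py and the Swagger docs at http://localhost:18081 for the actual schema.

```python
import base64
import json
from urllib import request

def build_payload(image_paths):
    """Encode local image files as base64 strings in a JSON-serializable payload.
    The {"images": {"data": [...]}} layout is an assumption -- verify against
    the live API docs."""
    data = []
    for path in image_paths:
        with open(path, "rb") as f:
            data.append(base64.b64encode(f.read()).decode("ascii"))
    return {"images": {"data": data}}

def extract(image_paths, url="http://localhost:18081/extract"):
    """POST the payload to the (assumed) /extract endpoint and return parsed JSON."""
    body = json.dumps(build_payload(image_paths)).encode("utf-8")
    req = request.Request(url, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)
```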

Work in progress:

Known issues:

Changelog:

2021-11-06 v0.7.0.0

Since a lot of updates have happened since the last release, the version number jumps straight to v0.7.0.0.

Compared to the previous release (v0.6.2.0), this release brings improved performance for SCRFD-based detectors.

Here is a performance comparison on an Nvidia RTX 2080 Super GPU for the scrfd_10g_gnkps detector paired with the glintr100 recognition model (all tests use src/api_trt/test_images/Stallone.jpg, 1 face per image):

| Num workers | Client threads | FPS v0.6.2.0 | FPS v0.7.0.0 | Speed-up |
|---|---|---|---|---|
| 1 | 1 | 56 | 103 | 83.9% |
| 1 | 30 | 72 | 128 | 77.7% |
| 6 | 30 | 145 | 179 | 23.4% |

Additions:

Model Zoo:

Improvements:

Fixes:

2021-09-09 v0.6.2.0

REST-API

2021-08-07 v0.6.1.0

REST-API

2021-06-16 v0.6.0.0

REST-API

2021-05-08 v0.5.9.9

REST-API

2021-03-27 v0.5.9.8

REST-API

REST-API & conversion scripts:

2021-03-01 v0.5.9.7

REST-API & conversion scripts:

2021-03-01 v0.5.9.6

REST-API:

2021-02-13

REST-API:

2020-12-26

REST-API & conversion scripts:

2020-12-26

REST-API:

Conversion scripts:

2020-12-06

REST-API:

Conversion scripts:

2020-11-20

REST API:

Conversion scripts:

2020-11-07

Conversion scripts:

REST API:

2020-10-22

Conversion scripts:

REST API:

TensorRT version contains MXNet and ONNXRuntime compiled for CPU for testing and conversion purposes.

2020-10-16

Conversion scripts:

REST API:

2020-09-28