Gestalt Engine's Face Cropper

Running the detect_pipe.py script loops through all the images in a given directory and returns a cropped version of the detected face in each image. Using the default settings from the Pytorch_Retinaface repository by @biubug6 produces sufficiently good cropped faces.
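In outline, the pipeline behaves roughly like the sketch below (illustrative only, not the repository's actual code; `detector` is a hypothetical callable returning the most confident bounding box):

```python
from pathlib import Path
import cv2

def crop_faces(images_dir, save_dir, detector):
    """Loop over a directory, detect one face per image, save the crop."""
    save_dir = Path(save_dir)
    save_dir.mkdir(parents=True, exist_ok=True)
    for img_path in sorted(Path(images_dir).iterdir()):
        image = cv2.imread(str(img_path))
        if image is None:              # skip files that are not images
            continue
        box = detector(image)          # e.g. (x1, y1, x2, y2) or None
        if box is not None:
            x1, y1, x2, y2 = box
            cv2.imwrite(str(save_dir / img_path.name), image[y1:y2, x1:x2])
```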

With the following command-line arguments you can select the most relevant settings (more can be found in the code):

| argument | options | description |
| --- | --- | --- |
| `--images_dir` | string with directory path | Location of the directory with the images to run the detector on. |
| `--save_dir` | string with directory path | Desired location of the directory where the cropped images will be saved. |
| `--crop_size` | int | Desired width and height of the resulting cropped face; if 0, crops to the actual bounding box without resizing (default = 0). |
| `--use_subdirectories` | flag, set on use | When set, the given images_dir is expected to contain subdirectories (e.g. `../examples/{1,2,..,n}`). |
| `--multiple_per_image` | flag, set on use | When set, allows multiple face detections per image; otherwise only the most confident detection is kept. (Not extensively tested; not recommended for unsupervised use.) |
| `--result_type` | `crop` or `coords` | Desired result type from the pipeline; the default is cropped images. With `coords`, the bounding-box coordinates of the rotated faces are stored in `face_coords.csv`. |
| `--fill_color` | float | `fill_color * 256` will be used as the color to fill expanded images (e.g. due to rotation). |
| `--cpu` | flag, set on use | When set, uses the CPU instead of the GPU (only use this when no GPU is available, as it is much slower). |
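For example, to run the detector on a CPU-only machine over a directory that contains subdirectories, storing bounding-box coordinates instead of cropped images (the paths are illustrative):

```
python detect_pipe.py --images_dir ../data/examples/ --save_dir ../data/examples_out/ --use_subdirectories --result_type coords --cpu
```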

Running the code with the following settings will look for the images_dir inside the data directory and detect a single face in each image directly inside images_dir, not including subdirectories:

```
python detect_pipe.py --images_dir ../data/GestaltMatcherDB/images/ --save_dir ../data/GestaltMatcherDB/images_cropped/ --crop_size 100
```

The resulting detections are square-cropped to 100x100 pixels and saved in the save_dir directory.
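As a rough illustration of what `--crop_size` does (a minimal sketch, not the repository's actual implementation; the helper name and the simplified border handling are assumptions):

```python
import cv2  # pip install opencv-python

def square_crop(image, box, crop_size=100):
    """Crop a detected box as a square and optionally resize it."""
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    side = max(w, h)                    # expand the shorter side to a square
    cx, cy = x1 + w // 2, y1 + h // 2   # box center
    xs = max(cx - side // 2, 0)         # clamp at the image border (simplified)
    ys = max(cy - side // 2, 0)
    crop = image[ys:ys + side, xs:xs + side]
    if crop_size > 0:                   # crop_size == 0 keeps the raw box size
        crop = cv2.resize(crop, (crop_size, crop_size))
    return crop
```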

In the future, this repository will be updated with new branches as different face detectors are used.

The face cropper requires the model weights "Resnet50_Final.pth". Remember to download them from Google Docs (password: fstq).
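Once downloaded, a quick sanity check is to load the checkpoint directly (a minimal sketch; the `./weights/` location follows the layout shown further below):

```python
import torch

# Load the checkpoint on the CPU just to verify the file is intact.
state_dict = torch.load("./weights/Resnet50_Final.pth", map_location="cpu")
print(f"Loaded {len(state_dict)} parameter tensors")
```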

Below you can find more information from the original repository regarding installation, performance, etc. of the model.


RetinaFace in PyTorch

A PyTorch implementation of RetinaFace: Single-stage Dense Face Localisation in the Wild. The model size is only 1.7M when RetinaFace uses mobilenet0.25 as the backbone net. We also provide resnet50 as a backbone net to get better results. The official MXNet code can be found here.

Mobile or edge device deployment

We also provide a set of face detectors for edge devices here, covering Python training through C++ inference.

WiderFace val performance in single scale when using ResNet50 as the backbone net.

| Style | easy | medium | hard |
| --- | --- | --- | --- |
| PyTorch (same parameters as MXNet) | 94.82% | 93.84% | 89.60% |
| PyTorch (original image scale) | 95.48% | 94.04% | 84.43% |
| MXNet | 94.86% | 93.87% | 88.33% |
| MXNet (original image scale) | 94.97% | 93.89% | 82.27% |

WiderFace val performance in single scale when using Mobilenet0.25 as the backbone net.

| Style | easy | medium | hard |
| --- | --- | --- | --- |
| PyTorch (same parameters as MXNet) | 88.67% | 87.09% | 80.99% |
| PyTorch (original image scale) | 90.70% | 88.16% | 73.82% |
| MXNet | 88.72% | 86.97% | 79.19% |
| MXNet (original image scale) | 89.58% | 87.11% | 69.12% |
<p align="center"><img src="curve/Widerface.jpg" width="640"/></p>

FDDB Performance.

| FDDB (PyTorch) | performance |
| --- | --- |
| Mobilenet0.25 | 98.64% |
| Resnet50 | 99.22% |
<p align="center"><img src="curve/FDDB.png" width="640"/></p>

Contents

- Installation
- Training
- Evaluation
- TensorRT
- References

Installation

Clone and install

  1. Clone the repository:

```
git clone https://github.com/biubug6/Pytorch_Retinaface.git
```

  2. PyTorch 1.1.0+ and torchvision 0.3.0+ are required.

  3. The code is based on Python 3.
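One way to satisfy the version requirements in step 2 is via pip (the exact command depends on your platform and CUDA setup; see pytorch.org for the recommended install):

```
pip install "torch>=1.1.0" "torchvision>=0.3.0"
```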

Data

  1. Download the WIDERFACE dataset.

  2. Download the annotations (face bounding boxes & five facial landmarks) from baidu cloud or dropbox.

  3. Organise the dataset directory as follows:

```
./data/widerface/
  train/
    images/
    label.txt
  val/
    images/
    wider_val.txt
```

PS: wider_val.txt only includes the val file names, not the label information.
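A quick way to verify the layout before training (a minimal sketch based on the structure above):

```python
from pathlib import Path

root = Path("./data/widerface")
expected = ["train/images", "train/label.txt", "val/images", "val/wider_val.txt"]
for rel in expected:
    path = root / rel
    # Report whether each required file or directory is in place.
    print(f"{path}: {'OK' if path.exists() else 'MISSING'}")
```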

Data1

We also provide the organized dataset we used, laid out in the above directory structure.

Link: google cloud or baidu cloud (password: ruck)

Training

We provide resnet50 and mobilenet0.25 as backbone networks to train the model. We trained Mobilenet0.25 on the ImageNet dataset and got 46.58% top-1 accuracy. If you do not wish to train the model, we also provide trained models. The pretrained model and the trained models are available on google cloud and baidu cloud (password: fstq). The models should be placed as follows:

```
./weights/
    mobilenet0.25_Final.pth
    mobilenetV1X0.25_pretrain.tar
    Resnet50_Final.pth
```
  1. Before training, you can check the network configuration (e.g. batch_size, min_sizes, steps, etc.) in data/config.py and train.py.

  2. Train the model using WIDER FACE:

```
CUDA_VISIBLE_DEVICES=0,1,2,3 python train.py --network resnet50
```
or
```
CUDA_VISIBLE_DEVICES=0 python train.py --network mobile0.25
```
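To see what you would be changing in step 1, the configuration can be inspected from Python (a minimal sketch; it assumes, as in the upstream code, that the `data` package exposes `cfg_re50` as a plain dict):

```python
# Run from the repository root so the `data` package is importable.
from data import cfg_re50  # assumed export, as in the upstream repo

for key in ("batch_size", "ngpu", "epoch", "min_sizes", "steps"):
    print(key, "=", cfg_re50[key])
```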

Evaluation

Evaluation on widerface val

  1. Generate the txt files:

```
python test_widerface.py --trained_model weight_file --network mobile0.25
```
(use `--network resnet50` for the ResNet50 model)

  2. Evaluate the txt results (the demo comes from here):

```
cd ./widerface_evaluate
python setup.py build_ext --inplace
python evaluation.py
```

  3. You can also use the official WiderFace Matlab evaluation demo here.

Evaluation FDDB

  1. Download the FDDB images to:

```
./data/FDDB/images/
```

  2. Evaluate the trained model using:

```
python test_fddb.py --trained_model weight_file --network mobile0.25
```
(use `--network resnet50` for the ResNet50 model)

  3. Download eval_tool to evaluate the performance.
<p align="center"><img src="curve/1.jpg" width="640"/></p>

TensorRT

- TensorRT

References

```
@inproceedings{deng2019retinaface,
  title={RetinaFace: Single-stage Dense Face Localisation in the Wild},
  author={Deng, Jiankang and Guo, Jia and Zhou, Yuxiang and Yu, Jinke and Kotsia, Irene and Zafeiriou, Stefanos},
  booktitle={arxiv},
  year={2019}
}
```