Introduction

I was looking for a cat face landmark predictor to use in one of my projects. After a quick search I realized there was no out-of-the-box solution with high enough accuracy. The closest I found was cat_hipsterizer. I tested the predictor included in that repo and decided its accuracy needed improvement. This is how frederic was born.

The predictor consists of two models stacked on top of each other. The first one predicts the ROI (bounding box) and the second one detects landmarks inside the ROI. The detected landmarks are then translated back to the coordinates of the original image. I train the models on an improved version of cat-dataset. The predictor assumes that there is exactly one cat in the picture.
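
A rough sketch of how the two stages compose, assuming a Pillow image and two trained Keras models (illustrative only, not frederic's actual implementation; input preprocessing is omitted):

```python
import numpy as np
from PIL import Image

def predict_landmarks(img, bbox_model, landmarks_model, size=224):
    """Two-stage prediction: bounding box first, then landmarks inside it."""
    w, h = img.size
    # Stage 1: predict the bounding box on the resized full image,
    # then map it back to the original image coordinates.
    x = np.asarray(img.resize((size, size)), dtype=np.float32)[None, ...]
    x0, y0, x1, y1 = bbox_model.predict(x)[0] * np.array([w, h, w, h]) / size
    # Stage 2: predict landmarks inside the cropped and resized ROI.
    roi = img.crop((int(x0), int(y0), int(x1), int(y1))).resize((size, size))
    r = np.asarray(roi, dtype=np.float32)[None, ...]
    pts = landmarks_model.predict(r)[0].reshape(-1, 2)
    # Translate landmarks from ROI coordinates back to the original image.
    return pts * np.array([x1 - x0, y1 - y0]) / size + np.array([x0, y0])
```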

<img src="./graphics/frederic.jpg" width="300"> <img src="./graphics/frederic_predicted.jpg" width="300">

Fryderyk - the cat behind the name of this project. Predicted landmarks are presented as yellow dots (0 - right eye, 1 - left eye, 2 - mouth, 3 - right ear, 4 - left ear).

Installation

frederic hasn't been distributed as a package, so the only way to use it (at least for now) is to get a copy of this repo, either by git clone or as a download.

frederic requires keras and Pillow in order to run. You can use the provided convenience scripts: winstall.bat on Windows or install.sh on Linux. They will create a new conda environment with all the necessary packages installed. You may want to adjust the keras variant (GPU vs CPU) and the CUDA version to match your system. You can get Miniconda from here.

Then you need to make sure that frederic can be found by your interpreter. One way to achieve this is to set the PYTHONPATH environment variable: set "PYTHONPATH=C:\path\to\frederic" on Windows or export PYTHONPATH="/path/to/frederic" on Linux.

Usage

Once frederic is successfully installed you can run this example:

```python
import frederic

img_path = '/path/to/an/image.jpg'
img = frederic.load_image(img_path)
predictor = frederic.Predictor()
landmarks = predictor.predict(img)
frederic.save_landmarks(landmarks, img_path + '.cat')
frederic.draw_landmarks(img, landmarks)
frederic.save_image(img, img_path + '.landmarks.png')
```

The above code will load an image, predict landmarks, save them in a '.cat' file and save a copy of the image with the landmarks drawn on it. Running the predictor for the first time will also download the necessary trained models and store them in keras' cache.

Please note that loading the models is a relatively expensive operation. In the above example the models are loaded only when the predict method is called, not when the predictor is created. You can control this behavior with the lazy parameter of the constructor. With lazy=False the models are loaded upon predictor creation, so the subsequent call to the predict method happens without delay.
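
For example, to load the models eagerly at construction time:

```python
# Load the trained models immediately instead of on the first call to predict.
predictor = frederic.Predictor(lazy=False)
```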

Performance

The average error of each landmark along either axis is 2.07 pixels, or 1.6% of the cat face size.

The first table presents the performance of the whole pipeline. MAE and RMSE values are expressed in pixels, MSE in pixels squared, and MAPE and RMSPE in percent. Percentage values were obtained by dividing each error by the length of the longer edge of the bounding box of the given face.
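
For one predicted coordinate, that normalization amounts to something like the following (a hypothetical helper, not the repo's evaluation code):

```python
def percentage_error(predicted, ground_truth, bbox):
    """Absolute error as a percentage of the face size (longer bounding box edge)."""
    x0, y0, x1, y1 = bbox
    face_size = max(x1 - x0, y1 - y0)
    return 100.0 * abs(predicted - ground_truth) / face_size
```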

| Whole pipeline | train | validation | test |
| --- | --- | --- | --- |
| MAE | 1.59 | 2.09 | 2.07 |
| MSE | 13.24 | 17.87 | 76.23 |
| RMSE | 3.64 | 4.23 | 8.73 |
| RMSPE | 8.50 | 3.22 | 20.32 |
| MAPE | 0.81 | 1.11 | 1.60 |
| MAPE eyes | 0.71 | 0.78 | 1.28 |
| MAPE mouth | 0.74 | 0.95 | 1.37 |
| MAPE ears | 0.96 | 1.52 | 2.04 |

The images below show examples of predicted landmarks with MAPE close to 1.6 (the average test-set error of the whole pipeline).

<img src="./graphics/1.582818924_00001472_019_s.jpg" height="190"> <img src="./graphics/1.611532178_00001395_015_s.jpg" height="190"> <img src="./graphics/1.611642589_00001425_023_s.jpg" height="190"> <img src="./graphics/1.646175463_00001385_011_s.jpg" height="190">

Examples of predicted landmarks with MAPE close to 1.6. Ground truth landmarks are presented in red and predicted landmarks in yellow. Landmarks are labeled: 0 - right eye, 1 - left eye, 2 - mouth, 3 - right ear, 4 - left ear.

We can see that RMSPE is much higher than MAPE. This happens because RMSPE punishes large errors incomparably more than small ones: the errors are squared before averaging, so a single huge outlier dominates the metric. The image below presents the worst prediction in the test set. Indeed, MAPE in this case equals 720% (the second worst is 36%). Due to a terribly wrong ROI prediction the landmarks are off by a huge margin. Interestingly, this single image has a huge influence on the total metrics: if we removed it from the test set, MAPE would drop to 1.05 (from 1.60) and RMSPE to 2.62 (from 20.32).

<img src="./graphics/719.892645700_CAT_06_00001440_008_s.jpg" height="190">

The worst prediction in the test set

The table below presents the results of the bounding box (ROI) prediction model. MSE, RMSE, MAE and MAPE were computed for two points: the top-left and bottom-right corners of the bounding box. The intersection over union (IoU) of the predicted and ground truth boxes is expressed in percent.

| Bounding box | train | validation | test |
| --- | --- | --- | --- |
| IoU | 94.38 | 93.72 | 93.88 |
| MAE | 0.91 | 1.59 | 1.68 |
| MAPE | 1.50 | 1.62 | 2.06 |
| MSE | 2.51 | 9.15 | 20.29 |
| RMSE | 1.58 | 3.02 | 4.50 |

The last table shows the performance of the landmarks-inside-ROI predictor. The scores were computed using the ground truth bounding boxes.

| Landmarks | train | validation | test |
| --- | --- | --- | --- |
| MAE | 1.29 | 1.59 | 1.55 |
| MAPE | 0.69 | 0.86 | 0.84 |
| MSE | 2.97 | 6.79 | 7.71 |
| RMSE | 1.72 | 2.61 | 2.78 |

Training

To train the models yourself, first get a copy of the augmented cat-dataset. Move the data into the parent directory of your frederic copy or adjust the data path in the script. Then run hpsearch.bat on Windows or hpsearch.sh on Linux. This will train new models and place them in the tools/models directory.

To use your own models for prediction, pass their paths to the Predictor constructor, e.g.

```python
predictor = frederic.Predictor(bbox_model_path='/path/to/models/2019-10-07_11.35.58_iou.h5',
                               landmarks_model_path='/path/to/models/2019-10-04_12.56.47_mse.h5')
```

Structure

The predictor consists of two models: a ROI predictor and a landmarks-inside-ROI predictor. Both models have a similar structure. They accept 224x224 RGB images as input. These are fed into a keras implementation of MobileNetV2 pretrained on ImageNet. The fully-connected layer on top is discarded and global max pooling is applied. The resulting vector of 1280 numbers is then passed to two stacked fully-connected layers with relu nonlinearity and 128 units each. The final dense output layer applies no nonlinearity. The only difference between the models is the size of the output: 4 or 14 for bounding box prediction and 10 for landmark prediction.
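
A minimal sketch of this architecture, assuming tf.keras (the actual training code lives in the repo's tools scripts and may differ in details such as preprocessing and weight initialization):

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

def build_model(n_outputs):
    # n_outputs: 4 or 14 for the bounding box model, 10 for the landmarks model.
    backbone = MobileNetV2(input_shape=(224, 224, 3), include_top=False,
                           weights='imagenet', pooling='max')  # 1280-dim vector after global max pooling
    x = layers.Dense(128, activation='relu')(backbone.output)
    x = layers.Dense(128, activation='relu')(x)
    out = layers.Dense(n_outputs)(x)  # linear output layer, no nonlinearity
    return models.Model(backbone.input, out)
```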

Training the bounding box predictor with MSE loss renders decent results. Using an IoU loss (defined as 1 - IoU) increases the performance, but is unstable at the beginning of training. To mitigate this issue I first train the model with MSE loss for a couple of batches and then switch to IoU. I also tried training with a joint loss of IoU and MSE of the landmarks, in the hope of achieving higher accuracy and stabilizing the process, but neither actually happened (this, by the way, explains the output size of 14 = 4 for the bounding box + 10 for the landmarks).
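
A possible form of such an IoU loss for [x0, y0, x1, y1] regression, written with the Keras backend (a sketch assuming tf.keras; frederic's actual loss function may differ):

```python
from tensorflow.keras import backend as K

def iou_loss(y_true, y_pred):
    # Intersection rectangle of the true and predicted boxes.
    ix0 = K.maximum(y_true[:, 0], y_pred[:, 0])
    iy0 = K.maximum(y_true[:, 1], y_pred[:, 1])
    ix1 = K.minimum(y_true[:, 2], y_pred[:, 2])
    iy1 = K.minimum(y_true[:, 3], y_pred[:, 3])
    inter = K.maximum(ix1 - ix0, 0) * K.maximum(iy1 - iy0, 0)
    area_true = (y_true[:, 2] - y_true[:, 0]) * (y_true[:, 3] - y_true[:, 1])
    area_pred = (y_pred[:, 2] - y_pred[:, 0]) * (y_pred[:, 3] - y_pred[:, 1])
    union = area_true + area_pred - inter
    # Loss is 1 - IoU; epsilon avoids division by zero.
    return 1.0 - inter / (union + K.epsilon())
```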

The landmarks predictor is trained with MSE loss. Both models are trained with the Adam optimizer, which does a great job in this setting.
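
With the sketches above, compiling the models could look roughly like this (model and iou_loss refer to the hypothetical definitions above; default Adam settings are assumed):

```python
# Landmarks model: MSE loss with the Adam optimizer.
model.compile(optimizer='adam', loss='mse')
# Bounding box model: start with MSE for a few batches, then switch to the IoU loss.
model.compile(optimizer='adam', loss=iou_loss)
```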

During training the data is augmented with horizontal flipping, rotation and cropping. The last part turned out to be the most beneficial, as it allowed feeding the network scale-balanced images. I define scale as the ratio between the longer bounding box edge and the longer picture edge, as sketched below.
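
In code, that definition of scale might look like this (a hypothetical helper, not part of the repo's API):

```python
def scale(bbox, image_size):
    """Ratio of the longer bounding box edge to the longer picture edge."""
    x0, y0, x1, y1 = bbox
    w, h = image_size
    return max(x1 - x0, y1 - y0) / max(w, h)
```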

The figures below show the distributions of scales during training for different cropping strategies. Random rotation (±15 degrees) was also applied in all cases. You can see that the dataset is highly imbalanced, with most pictures scaled between 0.1 and 0.5. Random crop without extending the image is still imbalanced. Forcing a uniform distribution and extending the image with a black background does not improve the performance much. The last solution is the most interesting. It is more of a hack than a proper technique and it works only with this particular data. However, it brought a high accuracy gain.

<img src="./graphics/rotate_15_no_crop.png" width="320">

No crop

<img src="./graphics/rotate_15_crop.png" width="320">

Random crop, not extending the image

<img src="./graphics/rotate_15_crop_scale_balanced_black.png" width="320">

Random crop with uniform distribution of scales, extending the image with black background if necessary

<img src="./graphics/rotate_15_crop_scale_balanced.png" width="320">

Random crop with close to uniform distribution of scales, not extending the image

I tried applying a couple of ideas to improve the results, but they didn't bring any performance gains. I describe them below.

The authors of this paper report that including landmark information in training improves face detection. I tried including the landmarks in the loss function together with the bounding box data, but it failed to boost the accuracy.

In the same paper the authors normalize the landmarks based on the center of the anchor. I tried normalizing the landmarks around the center of the face, but it didn't improve the performance.

The output of the network is a number between 0 and 224 (in most cases; some landmarks lie outside the image). I tried scaling the output to different ranges ([0;1], [-1;1], [-0.5;0.5], [0.25;0.75]) with linear or sigmoid/tanh functions, but I was unable to achieve a higher score.

I tried including this implementation of Lookahead and this implementation of Rectified Adam. Neither of them was able to improve the results.

Utilizing DenseNet121 instead of the much smaller MobileNetV2 did not increase the accuracy; it rendered similar results.

Using float outputs instead of integers resulted in a slightly worse performance.

Training with RMSprop, SGD or Adam with differential learning rates led to a lower performance.