Head pose estimation

Realtime human head pose estimation with ONNX Runtime and OpenCV.

How it works

There are three major steps:

  1. Face detection. A face detector is introduced to provide a face bounding box containing a human face. Then the face box is expanded and transformed to a square to suit the needs of later steps.
  2. Facial landmark detection. A pre-trained deep learning model takes the face image as input and outputs 68 facial landmarks.
  3. Pose estimation. Once the 68 facial landmarks are available, the head pose is recovered by solving a Perspective-n-Point (PnP) problem that matches the 2D landmarks to a 3D face model (see the sketch after this list).
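
As a rough illustration of step 3, the sketch below recovers a pose with OpenCV's solvePnP. It uses randomly generated stand-in data instead of the repository's actual 3D face model and landmark network, so treat it as an outline of the idea rather than the project's implementation:

import cv2
import numpy as np

# Stand-in data so the sketch runs on its own: in the real pipeline the 3D points
# come from a canonical 68-point face model shipped with the project, and the 2D
# points come from the landmark network in step 2.
rng = np.random.default_rng(0)
model_points = rng.uniform(-8.0, 8.0, size=(68, 3))   # 3D face model points (arbitrary units)

true_rvec = np.array([[0.1], [0.2], [0.05]])           # a known pose, used only to fabricate 2D points
true_tvec = np.array([[0.0], [0.0], [100.0]])

# Approximate camera intrinsics from the frame size, assuming no lens distortion.
frame_w, frame_h = 1280, 720
camera_matrix = np.array([[frame_w, 0.0, frame_w / 2],
                          [0.0, frame_w, frame_h / 2],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros((4, 1))

# In the real code these 2D landmarks come from the network; here they are projections.
image_points, _ = cv2.projectPoints(model_points, true_rvec, true_tvec,
                                    camera_matrix, dist_coeffs)

# Solve the Perspective-n-Point problem: recover the rotation and translation that
# map the 3D model points onto the observed 2D landmarks.
success, rotation_vec, translation_vec = cv2.solvePnP(
    model_points, image_points, camera_matrix, dist_coeffs)

# rotation_vec and translation_vec describe the head pose relative to the camera;
# cv2.Rodrigues(rotation_vec) converts the rotation vector into a 3x3 matrix.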

Getting Started

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.

Prerequisites

The code was tested on Ubuntu 22.04 with ONNX Runtime and OpenCV.

Installing

Clone the repo:

git clone https://github.com/yinguobing/head-pose-estimation.git

Install dependencies with pip:

pip install -r requirements.txt

Pre-trained models are provided in the assets directory. Download them with Git LFS:

git lfs pull

Or, download manually from the release page.

Running

A video file or a webcam index should be assigned through command-line arguments. If no source is provided, the built-in webcam will be used by default.

Video file

For any video format that OpenCV supports (mp4, avi etc.):

python3 main.py --video /path/to/video.mp4

Webcam

The webcam index should be provided:

python3 main.py --cam 0
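
Either way the source ends up as an OpenCV capture. A minimal, hypothetical sketch of that loop (main.py's actual argument handling may differ):

import cv2

# A video path or an integer webcam index both work as a VideoCapture source.
video_src = 0  # e.g. "/path/to/video.mp4", or the webcam index 0

cap = cv2.VideoCapture(video_src)
while True:
    frame_ok, frame = cap.read()
    if not frame_ok:
        break
    # ... run face detection, landmark detection and pose estimation on `frame` ...
cap.release()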

Retrain the model

Tutorials: https://yinguobing.com/deeplearning/

Training code: https://github.com/yinguobing/cnn-facial-landmark

Note: PyTorch version coming soon!

License

This project is licensed under the MIT License - see the LICENSE file for details.

Meanwhile, the bundled pre-trained models and the 3D face model come from third-party projects and carry their own licenses. Please refer to them for details.

Authors

Yin Guobing (尹国冰) - yinguobing

Acknowledgments

Thanks to the providers of all datasets used in the training process.

The 3D face model is from OpenFace; refer to that project for the original file.

The built-in face detector is SCRFD, from InsightFace.