Fast_Stacked_Hourglass_Network_OpenVino
A fast stacked hourglass network for human pose estimation on OpenVino. The stacked hourglass network proposed in Stacked Hourglass Networks for Human Pose Estimation is a very good network for single-person pose estimation in terms of both speed and accuracy. This repo contains a demo that shows how to deploy a model trained with Keras: it converts the Keras model to IR and shows how to use the generated IR for inference. Have fun with OpenVino!
Installation
- Python3
- Install OpenVino 2018 R5
- Install Python dependencies:
keras==2.1.5
scipy==1.2.0
tensorflow==1.12.0
opencv-python==3.4.3.18
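These can be installed with pip, for example:
pip3 install keras==2.1.5 scipy==1.2.0 tensorflow==1.12.0 opencv-python==3.4.3.18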
[Keras] Convert pre-trained Keras models
Download pre-trained hourglass models
- Download models from Google Drive and save them to the models directory. You are going to download two files per model: a JSON file with the network configuration and an H5 file with the weights.
- hg_s2_b1_mobile, inputs: 256x256x3, channel number: 256, PCKh 78.86% @MPII.
- hg_s2_b1_tiny, inputs: 192x192x3, channel number: 128, PCKh 75.11% @MPII.
Convert Keras models to a TensorFlow frozen pb
- Convert the Keras models to a TF frozen pb:
python3 tools/keras_to_tfpb.py --input_model_json ./models/path/to/network/json --input_model_weights ./models/path/to/network/weight/h5 --out_tfpb ./models/hg_s2_b1_tf.pb
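For reference, a minimal sketch of what such a Keras-to-frozen-pb conversion does, assuming the pinned Keras 2.1.5 / TensorFlow 1.12 stack; the paths and file names below are placeholders, see tools/keras_to_tfpb.py for the actual script:

```python
# Minimal sketch: load a Keras model from its JSON/H5 pair and freeze it to a .pb.
# Paths below are placeholders, not the exact arguments of tools/keras_to_tfpb.py.
from tensorflow.python.framework import graph_io, graph_util
from keras import backend as K
from keras.models import model_from_json

K.set_learning_phase(0)                                   # inference mode
with open('./models/net.json') as f:                      # network configuration
    model = model_from_json(f.read())
model.load_weights('./models/weights.h5')                 # pre-trained weights

sess = K.get_session()
output_names = [out.op.name for out in model.outputs]     # output node names
frozen = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_names)
graph_io.write_graph(frozen, './models/', 'hg_s2_b1_tf.pb', as_text=False)
```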
Use the OpenVino Model Optimizer to convert the TensorFlow pb to IR.
- For CPU, please use the mobile version hg_s2_b1_mobile and FP32:
~/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py -w ./models/hg_s2_b1_tf.pb --input_shape [1,256,256,3] --data_type FP32 --output_dir ./models/ --model_name hg_s2_mobile
- For NCS2 (MYRIAD), please use the tiny version hg_s2_b1_tiny and FP16:
~/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py -w ./models/hg_s2_b1_tf.pb --input_shape [1,192,192,3] --data_type FP16 --output_dir ./models/ --model_name hg_s2_tiny
.xml and .bin files will be generated.
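The generated IR can then be loaded with the Inference Engine Python API. A rough sketch follows, assuming the 2018 R5 API (IENetwork/IEPlugin) and the file names from the commands above; the demo scripts in src/ remain the reference implementation:

```python
# Rough sketch: load the generated IR and run one inference (OpenVINO 2018 R5 API).
import cv2
import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin

net = IENetwork(model='./models/hg_s2_mobile.xml', weights='./models/hg_s2_mobile.bin')
plugin = IEPlugin(device='CPU')                        # or 'MYRIAD' for the tiny/FP16 IR
exec_net = plugin.load(network=net)

input_blob = next(iter(net.inputs))
n, c, h, w = net.inputs[input_blob].shape              # IR input layout is NCHW
img = cv2.imread('../models/sample.jpg')
blob = cv2.resize(img, (w, h)).transpose((2, 0, 1)).reshape(n, c, h, w).astype(np.float32)
# NOTE: any scaling/normalization must match what the demo scripts in src/ expect.
res = exec_net.infer(inputs={input_blob: blob})        # dict of output blobs (heatmaps)
```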
[PyTorch] Convert pre-trained ONNX models
Download the model trained with PyTorch
Download the model_best.onnx model from the table below that fits your accuracy and speed requirements. hg_s2_b1_mobile_fpd is a model trained using the knowledge distillation method proposed in the paper Fast Human Pose Estimation. Details can be found in Fast_Human_Pose_Estimation_Pytorch.
Model | in_res | features | # of Weights | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Mean | Link |
---|---|---|---|---|---|---|---|---|---|---|---|---|
hg_s2_b1 | 256 | 128 | 6.73m | 95.74 | 94.51 | 87.68 | 81.70 | 87.81 | 80.88 | 76.83 | 86.58 | GoogleDrive |
hg_s2_b1_mobile | 256 | 128 | 2.31m | 95.80 | 93.61 | 85.50 | 79.63 | 86.13 | 77.82 | 73.62 | 84.69 | GoogleDrive |
hg_s2_b1_mobile_fpd | 256 | 128 | 2.31m | 95.67 | 94.07 | 86.31 | 79.68 | 86.00 | 79.67 | 75.51 | 85.41 | GoogleDrive |
hg_s2_b1_tiny | 192 | 128 | 2.31m | 94.95 | 92.87 | 84.59 | 78.19 | 84.68 | 77.70 | 73.07 | 83.88 | GoogleDrive |
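If you retrain a model yourself with Fast_Human_Pose_Estimation_Pytorch, a model_best.onnx can be produced with a standard torch.onnx.export call. The sketch below uses a stand-in module as a placeholder for the real hourglass model, which lives in that repo rather than this one:

```python
# Sketch: export a trained PyTorch model to ONNX. The Sequential below is only a
# placeholder; substitute the real hourglass model and its model_best checkpoint
# from Fast_Human_Pose_Estimation_Pytorch.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1))   # placeholder network
model.eval()

dummy_input = torch.randn(1, 3, 256, 256)               # NCHW, matches in_res in the table
torch.onnx.export(model, dummy_input, './models/model_best.onnx')
```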
Convert ONNX to IR
Use the Model Optimizer to convert the ONNX model to IR: FP32 for CPU, FP16 for MYRIAD.
~/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo.py -w ./models/model_best.onnx --data_type FP32 --output_dir ./models/ --model_name hg_s2_mobile_onnx
Run demo
- Run single image demo on CPU
cd src
python3 stacked_hourglass.py -i ../models/sample.jpg -m ../models/hg_s2_mobile.xml -d CPU -l /path/to/cpu/extension/library
- Run single image demo on NCS2 (MYRIAD)
cd src
python3 stacked_hourglass.py -i ../models/sample.jpg -m ../models/hg_s2_tiny.xml -d MYRIAD
- Run async demo with camera input on CPU
cd src
python3 stacked_hourglass_camera_async.py -i cam -m ../models/hg_s2_mobile.xml -d CPU -l /path/to/cpu/extension/library
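For reference, the network outputs one heatmap per joint, and the demos turn them into keypoints by taking the peak of each channel and scaling it back to image coordinates. A simplified sketch of that decoding step (names are illustrative, not copied from src/):

```python
# Sketch: decode per-joint heatmaps of shape (1, num_joints, hm_h, hm_w)
# into (x, y, confidence) keypoints in original-image coordinates.
import numpy as np

def decode_heatmaps(heatmaps, img_w, img_h):
    _, num_joints, hm_h, hm_w = heatmaps.shape
    keypoints = []
    for j in range(num_joints):
        hm = heatmaps[0, j]
        y, x = np.unravel_index(np.argmax(hm), hm.shape)   # peak location
        keypoints.append((x * img_w / hm_w,                 # scale back to image width
                          y * img_h / hm_h,                 # scale back to image height
                          float(hm[y, x])))                 # peak value as confidence
    return keypoints
```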
Reference
- OpenVino: https://github.com/opencv/dldt
- OpenCV OpenModelZoo: https://github.com/opencv/open_model_zoo
- Keras implementation for stacked hourglass: https://github.com/yuanyuanli85/Stacked_Hourglass_Network_Keras
- Pytorch-pose: https://github.com/yuanyuanli85/pytorch-pose
- Fast Human Pose Estimation: https://github.com/yuanyuanli85/Fast_Human_Pose_Estimation_Pytorch