
PoseEstimation-CoreML

This project is Pose Estimation on iOS with Core ML. If you are interested in iOS + Machine Learning, visit here to see various demos.

Korean README (한국어)

| Jointed Keypoints | Heatmaps | Still Image |
| --- | --- | --- |
| <img src="https://user-images.githubusercontent.com/37643248/69563016-acdf3400-0ff3-11ea-8cd0-12df4a86e104.gif" width=240px> | <img src="https://user-images.githubusercontent.com/37643248/69562897-77d2e180-0ff3-11ea-83a7-ee557111b633.gif" width=240px> | <img src="resource/190629-poseestimation-stillimage-demo.gif" width=240px> |

Video source:

| Pose Capturing | Pose Matching |
| --- | --- |
| <img src="resource/demo-pose-capturing.gif"> | <img src="resource/demo-pose-matching.gif"> |

Features

How it works

(diagram: how it works)

Requirements

Model

Get PoseEstimationForMobile's model

Download these temporary models from the following link.

Or

☞ Download the Core ML model: model_cpm.mlmodel or hourglass.mlmodel.

input_name_shape_dict = {"image:0": [1, 192, 192, 3]}
image_input_names = ["image:0"]
output_feature_names = ['Convolutional_Pose_Machine/stage_5_out:0']

(from https://github.com/tucan9389/pose-estimation-for-mobile)

Model Size, Minimum iOS Version, Input/Output Shape

| Model | Size (MB) | Minimum iOS Version | Input Shape | Output Shape |
| --- | --- | --- | --- | --- |
| cpm | 2.6 | iOS 11 | [1, 192, 192, 3] | [1, 96, 96, 14] |
| hourglass | 2 | iOS 11 | [1, 192, 192, 3] | [1, 48, 48, 14] |

Inference Time (ms)

| Model vs. Device | 11 Pro | XS Max | XR | X | 8 | 8+ | 7 | 7+ | 6S+ | 6+ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| cpm | 5 | 27 | 27 | 32 | 31 | 31 | 39 | 37 | 44 | 115 |
| hourglass | 3 | 6 | 7 | 29 | 31 | 32 | 37 | 42 | 48 | 94 |


Total Time (ms)

| Model vs. Device | 11 Pro | XS Max | XR | X | 8 | 8+ | 7 | 7+ | 6S+ | 6+ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| cpm | 23 | 39 | 40 | 46 | 47 | 45 | 55 | 58 | 56 | 139 |
| hourglass | 23 | 15 | 15 | 38 | 40 | 40 | 48 | 55 | 58 | 106 |

FPS

| Model vs. Device | 11 Pro | XS Max | XR | X | 8 | 8+ | 7 | 7+ | 6S+ | 6+ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| cpm | 15 | 23 | 23 | 20 | 20 | 21 | 17 | 16 | 16 | 6 |
| hourglass | 15 | 23 | 23 | 24 | 23 | 23 | 19 | 16 | 15 | 8 |

Get your own model

Or you can use your own pose estimation model.

Build & Run

1. Prerequisites

1.1 Import pose estimation model

(screenshot: importing the model)

Once you import the model, the compiler automatically generates a model helper class on the build path. You access the model through this helper class by creating an instance of it, not through the build path.
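For example, a minimal sketch, assuming the imported file is model_cpm.mlmodel (so the generated class is named model_cpm):

// Xcode generates a class named after the .mlmodel file
let helper = model_cpm()     // instance of the generated helper class
let mlModel = helper.model   // the underlying MLModel, usable with Vision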

1.2 Add a permission for camera access (the NSCameraUsageDescription key) in Info.plist

(screenshot: camera-usage permission entry in Info.plist)

2. Dependencies

No external library yet.

3. Code

3.1 Import Vision framework

import Vision

3.2 Define properties for Core ML

// properties on ViewController
typealias EstimationModel = model_cpm // the model name (model_cpm) must match the .mlmodel file name
var request: VNCoreMLRequest!
var visionModel: VNCoreMLModel!

3.3 Configure and prepare the model

override func viewDidLoad() {
    super.viewDidLoad()

    // wrap the Core ML model for use with the Vision framework
    visionModel = try? VNCoreMLModel(for: EstimationModel().model)
    request = VNCoreMLRequest(model: visionModel, completionHandler: visionRequestDidComplete)
    request.imageCropAndScaleOption = .scaleFill
}

func visionRequestDidComplete(request: VNRequest, error: Error?) {
    // post-process the inference results here (e.g. parse heatmaps into joint keypoints)
}
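As a rough sketch of what that post-processing could look like, assuming the cpm model (output shape [1, 96, 96, 14]); parseHeatmaps(_:) is a hypothetical helper, not part of this project:

func visionRequestDidComplete(request: VNRequest, error: Error?) {
    // Vision wraps the model's multi-array output in feature-value observations
    guard let observations = request.results as? [VNCoreMLFeatureValueObservation],
          let heatmaps = observations.first?.featureValue.multiArrayValue else { return }
    // heatmaps holds one 96x96 confidence map per joint (14 joints for cpm);
    // a typical next step picks each map's argmax as that joint's keypoint:
    // let keypoints = parseHeatmaps(heatmaps)
}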

3.4 Inference 🏃‍♂️

// at the inference point (pixelBuffer is a CVPixelBuffer, e.g. from the camera)
let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
try? handler.perform([request])
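For example, a sketch assuming the pixel buffers come from an AVCaptureVideoDataOutput delegate (the capture-session setup is omitted here):

import AVFoundation

extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // run inference on each frame delivered by the camera
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
        try? handler.perform([request])
    }
}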

Performance Test

1. Import the model

You can download the cpm or hourglass model for Core ML from the tucan9389/pose-estimation-for-mobile repo.

2. Fix the model name in PoseEstimation_CoreMLTests.swift

<img src="resource/fix-model-name-for-testing.png" width="660px">

3. Run the test

Hit ⌘U or click the Build for Testing icon.

<img src="resource/build-for-testing.png" width="320px">
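For reference, a minimal sketch of such a measurement, assuming an XCTest target that can see the generated model class (the blank 192x192 buffer is a stand-in for a real camera frame; the project's actual test lives in PoseEstimation_CoreMLTests.swift):

import XCTest
import Vision
import CoreVideo

class InferenceTimeSketch: XCTestCase {
    func testInferenceTime() throws {
        let visionModel = try VNCoreMLModel(for: model_cpm().model)
        let request = VNCoreMLRequest(model: visionModel)

        // create an empty 192x192 BGRA pixel buffer as a stand-in input
        var pixelBuffer: CVPixelBuffer?
        CVPixelBufferCreate(kCFAllocatorDefault, 192, 192,
                            kCVPixelFormatType_32BGRA, nil, &pixelBuffer)
        guard let buffer = pixelBuffer else { return XCTFail("buffer creation failed") }

        // measure {} runs the block repeatedly and reports the average time
        measure {
            try? VNImageRequestHandler(cvPixelBuffer: buffer).perform([request])
        }
    }
}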

See also