# ImageClassification-CoreML

## Requirements

## Model

| Model | Size<br>(MB) | Minimum<br>iOS Version | Download Link |
| ----- | :----------: | :--------------------: | ------------- |
| MobileNet | 17.1 | iOS11 | Machine Learning - Models - Apple Developer |
| MobileNetV2 | 24.7 | iOS11 | Machine Learning - Models - Apple Developer |
| MobileNetV2FP16 | 12.4 | iOS11.2 | Machine Learning - Models - Apple Developer |
| MobileNetV2Int8LUT | 6.3 | iOS12 | Machine Learning - Models - Apple Developer |
| Resnet50 | 102.6 | iOS11 | Machine Learning - Models - Apple Developer |
| Resnet50FP16 | 51.3 | iOS11.2 | Machine Learning - Models - Apple Developer |
| Resnet50Int8LUT | 25.7 | iOS12 | Machine Learning - Models - Apple Developer |
| Resnet50Headless | 94.4 | iOS11 | Machine Learning - Models - Apple Developer |
| SqueezeNet | 5 | iOS11 | Machine Learning - Models - Apple Developer |
| SqueezeNetFP16 | 2.5 | iOS11.2 | Machine Learning - Models - Apple Developer |
| SqueezeNetInt8LUT | 1.3 | iOS12 | Machine Learning - Models - Apple Developer |

## Inference Time (ms)

| Model vs. Device | 12<br>Pro | 12 | 12<br>Mini | 11<br>Pro | XS | XS<br>Max | XR | X | 7+ | 7 |
| ---------------- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| MobileNet | 17 | 17 | 14 | 13 | 16 | 18 | 19 | 33 | 43 | 35 |
| MobileNetV2 | 15 | 15 | 17 | 14 | 21 | 18 | 21 | 46 | 64 | 53 |
| MobileNetV2FP16 | 81 | 7 | 14 | 14 | 20 | 19 | 20 | 48 | 65 | 57 |
| MobileNetV2Int8LUT | 18 | 16 | 16 | 14 | 21 | 21 | 20 | 53 | 64 | 53 |
| Resnet50 | 21 | 18 | 24 | 20 | 27 | 25 | 26 | 61 | 78 | 63 |
| Resnet50FP16 | 19 | 18 | 19 | 20 | 26 | 26 | 27 | 64 | 75 | 74 |
| Resnet50Int8LUT | 19 | 20 | 20 | 20 | 27 | 25 | 26 | 60 | 77 | 75 |
| Resnet50Headless | 11 | 11 | 11 | 13 | 18 | 13 | 18 | 36 | 54 | 53 |
| SqueezeNet | 14 | 15 | 17 | 12 | 17 | 17 | 18 | 24 | 35 | 29 |
| SqueezeNetFP16 | 13 | 16 | 10 | 13 | 17 | 17 | 18 | 24 | 36 | 29 |
| SqueezeNetInt8LUT | 16 | 17 | 15 | 13 | 18 | 19 | 18 | 27 | 34 | 30 |

## Total Time (ms)

| Model vs. Device | 12<br>Pro | 12 | 12<br>Mini | 11<br>Pro | XS | XS<br>Max | XR | X | 7+ | 7 |
| ---------------- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| MobileNet | 19 | 18 | 15 | 15 | 18 | 20 | 21 | 35 | 46 | 37 |
| MobileNetV2 | 16 | 18 | 19 | 16 | 23 | 21 | 23 | 48 | 67 | 55 |
| MobileNetV2FP16 | 81 | 8 | 18 | 15 | 24 | 21 | 23 | 50 | 69 | 60 |
| MobileNetV2Int8LUT | 19 | 18 | 17 | 15 | 23 | 23 | 22 | 55 | 67 | 56 |
| Resnet50 | 22 | 20 | 25 | 22 | 30 | 28 | 29 | 64 | 82 | 66 |
| Resnet50FP16 | 20 | 19 | 20 | 22 | 28 | 28 | 30 | 66 | 78 | 76 |
| Resnet50Int8LUT | 21 | 21 | 23 | 22 | 29 | 28 | 28 | 63 | 80 | 78 |
| Resnet50Headless | 11 | 11 | 12 | 14 | 19 | 13 | 18 | 36 | 54 | 54 |
| SqueezeNet | 15 | 16 | 18 | 14 | 18 | 18 | 20 | 25 | 37 | 31 |
| SqueezeNetFP16 | 14 | 17 | 11 | 13 | 18 | 18 | 19 | 26 | 38 | 31 |
| SqueezeNetInt8LUT | 18 | 17 | 17 | 14 | 20 | 20 | 19 | 29 | 37 | 32 |

## FPS

| Model vs. Device | 12<br>Pro | 12 | 12<br>Mini | 11<br>Pro | XS | XS<br>Max | XR | X | 7+ | 7 |
| ---------------- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| MobileNet | 22 | 24 | 24 | 29 | 23 | 23 | 23 | 23 | 20 | 23 |
| MobileNetV2 | 25 | 24 | 24 | 29 | 23 | 23 | 23 | 20 | 13 | 17 |
| MobileNetV2FP16 | 12 | 24 | 24 | 29 | 23 | 23 | 23 | 18 | 13 | 15 |
| MobileNetV2Int8LUT | 23 | 23 | 23 | 29 | 23 | 23 | 23 | 16 | 13 | 16 |
| Resnet50 | 23 | 23 | 24 | 29 | 23 | 23 | 23 | 14 | 11 | 14 |
| Resnet50FP16 | 23 | 24 | 24 | 29 | 23 | 23 | 23 | 14 | 11 | 12 |
| Resnet50Int8LUT | 23 | 24 | 23 | 29 | 23 | 23 | 23 | 15 | 11 | 12 |
| Resnet50Headless | 21 | 24 | 23 | 29 | 23 | 23 | 23 | 23 | 16 | 17 |
| SqueezeNet | 36 | 24 | 24 | 29 | 23 | 23 | 23 | 23 | 23 | 23 |
| SqueezeNetFP16 | 25 | 23 | 24 | 29 | 23 | 23 | 23 | 23 | 22 | 23 |
| SqueezeNetInt8LUT | 22 | 23 | 23 | 29 | 23 | 23 | 23 | 23 | 23 | 23 |
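
The numbers above are per-frame latencies measured on each device. If you want to collect similar measurements yourself, one simple approach is to time the Vision request around `perform(_:)`. The sketch below illustrates that approach; the `measureInferenceTime` helper is illustrative and does not claim to reproduce the exact methodology behind these tables.

```swift
import CoreVideo
import QuartzCore
import Vision

/// Runs one Vision classification request on a pixel buffer and returns
/// the elapsed wall-clock time in milliseconds.
func measureInferenceTime(request: VNCoreMLRequest,
                          pixelBuffer: CVPixelBuffer) -> Double {
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
    let start = CACurrentMediaTime()
    try? handler.perform([request])
    return (CACurrentMediaTime() - start) * 1000.0
}
```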

## Build & Run

### 1. Prerequisites

#### 1.1 Import the Core ML model

(Screenshot: importing the Core ML model into the Xcode project)

Once you import the model, the compiler automatically generates a model helper class at build time. Access the model by creating an instance of this helper class rather than referencing the model file on the build path.
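
For example, if the imported file is `MobileNet.mlmodel`, Xcode generates a `MobileNet` class whose `model` property exposes the underlying `MLModel`. A minimal sketch, assuming the MobileNet model from the table above:

```swift
import CoreML

// `MobileNet` is the helper class Xcode generates from MobileNet.mlmodel.
let classifier = MobileNet()

// The generated wrapper exposes the raw MLModel, which Vision consumes below.
let mlModel: MLModel = classifier.model
```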

#### 1.2 Add a camera-usage permission (`NSCameraUsageDescription`) to Info.plist

(Screenshot: Info.plist with the camera usage description entry)

### 2. Dependencies

No external libraries are used yet.

### 3. Code

#### 3.1 Import Vision framework

```swift
import Vision
```

#### 3.2 Define properties for Core ML

```swift
// MARK: - Core ML Model
typealias ClassificationModel = MobileNet
var coremlModel: ClassificationModel? = nil

// MARK: - Vision Properties
var request: VNCoreMLRequest?
var visionModel: VNCoreMLModel?
```

#### 3.3 Configure and prepare the model

```swift
override func viewDidLoad() {
    super.viewDidLoad()

    if let visionModel = try? VNCoreMLModel(for: ClassificationModel().model) {
        self.visionModel = visionModel
        request = VNCoreMLRequest(model: visionModel, completionHandler: visionRequestDidComplete)
        request?.imageCropAndScaleOption = .scaleFill
    } else {
        fatalError()
    }
}

func visionRequestDidComplete(request: VNRequest, error: Error?) {
    // Post-process the classification results here (see the sketch below).
}
```
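
As a concrete example, the completion handler could read the top classification result as sketched below. The result handling (top-1 label, printing from the main queue) is an illustrative assumption, not this project's exact post-processing:

```swift
func visionRequestDidComplete(request: VNRequest, error: Error?) {
    // Vision returns image-classification results as VNClassificationObservation.
    guard let observations = request.results as? [VNClassificationObservation],
          let top = observations.first else { return }

    // Hand the label off to the main thread before touching any UI.
    DispatchQueue.main.async {
        print("\(top.identifier): \(Int(top.confidence * 100))%")
    }
}
```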

#### 3.4 Inference 🏃‍♂️

```swift
guard let request = request else { fatalError() }
let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
try? handler.perform([request])
```
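
Here `pixelBuffer` is a camera frame. In a live-camera setup it typically comes from an `AVCaptureVideoDataOutputSampleBufferDelegate` callback, roughly as sketched below; the `ViewController` extension and the capture-session wiring around it are assumptions, not this project's exact code:

```swift
import AVFoundation
import Vision

extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Extract the pixel buffer from the captured frame and run the
        // request prepared in viewDidLoad.
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
              let request = request else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
        try? handler.perform([request])
    }
}
```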