
ObjectDetection-CoreML


This project demonstrates object detection on iOS with Core ML.
If you are interested in iOS + machine learning, visit here to see various demos.

(demo: ObjectDetection-CoreML_DEMO001)

How it works

(preparing...)

Requirements

  • Xcode 10.3+
  • iOS 12.0+
  • Swift 4.2

Model

Model Metadata

(screenshot: model metadata)

Model Size, Minimum iOS Version, Download Link

| Model | Size (MB) | Minimum iOS Version | Download Link |
| --- | --- | --- | --- |
| YOLOv3 | 248.4 | iOS 12 | Machine Learning - Models - Apple Developer |
| YOLOv3FP16 | 124.2 | iOS 12 | Machine Learning - Models - Apple Developer |
| YOLOv3Int8LUT | 62.2 | iOS 12 | Machine Learning - Models - Apple Developer |
| YOLOv3Tiny | 35.5 | iOS 12 | Machine Learning - Models - Apple Developer |
| YOLOv3TinyFP16 | 17.8 | iOS 12 | Machine Learning - Models - Apple Developer |
| YOLOv3TinyInt8LUT | 8.9 | iOS 12 | Machine Learning - Models - Apple Developer |
| MobileNetV2_SSDLite | 9.3 | iOS 12 | SSDMobileNet - Core ML Survival Guide |
| ObjectDetector | 63.7 | iOS 12 | Recognizing Objects in Live Capture \| Apple Developer Documentation |

Inference Time (ms)

| Model vs. Device | 11 Pro | XS | XS Max | XR | X | 7+ | 7 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| YOLOv3 | 83 | 108 | 93 | 100 | 356 | 569 | 561 |
| YOLOv3FP16 | 84 | 104 | 89 | 101 | 348 | 572 | 565 |
| YOLOv3Int8LUT | 86 | 101 | 92 | 100 | 337 | 575 | 572 |
| YOLOv3Tiny | 44 | 46 | 41 | 47 | 106 | 165 | 168 |
| YOLOv3TinyFP16 | 44 | 51 | 41 | 44 | 103 | 165 | 167 |
| YOLOv3TinyInt8LUT | 44 | 45 | 39 | 39 | 106 | 160 | 161 |
| MobileNetV2_SSDLite | 18 | 31 | 31 | 31 | 109 | 141 | 134 |
| ObjectDetector | 18 | 24 | 26 | 23 | 63 | 86 | 84 |

Total Time (ms)

| Model vs. Device | 11 Pro | XS | XS Max | XR | X | 7+ | 7 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| YOLOv3 | 84 | 108 | 93 | 100 | 357 | 569 | 561 |
| YOLOv3FP16 | 85 | 104 | 89 | 101 | 348 | 572 | 565 |
| YOLOv3Int8LUT | 86 | 102 | 92 | 102 | 338 | 576 | 573 |
| YOLOv3Tiny | 45 | 46 | 42 | 48 | 106 | 166 | 169 |
| YOLOv3TinyFP16 | 45 | 51 | 41 | 44 | 104 | 165 | 167 |
| YOLOv3TinyInt8LUT | 45 | 45 | 39 | 40 | 107 | 160 | 161 |
| MobileNetV2_SSDLite | 19 | 32 | 31 | 32 | 109 | 142 | 134 |
| ObjectDetector | 18 | 25 | 26 | 23 | 64 | 87 | 85 |

FPS

| Model vs. Device | 11 Pro | XS | XS Max | XR | X | 7+ | 7 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| YOLOv3 | 9 | 8 | 10 | 9 | 2 | 1 | 1 |
| YOLOv3FP16 | 9 | 9 | 10 | 8 | 2 | 1 | 1 |
| YOLOv3Int8LUT | 9 | 9 | 10 | 9 | 2 | 1 | 1 |
| YOLOv3Tiny | 14 | 21 | 22 | 20 | 8 | 5 | 5 |
| YOLOv3TinyFP16 | 14 | 19 | 23 | 21 | 9 | 5 | 5 |
| YOLOv3TinyInt8LUT | 14 | 21 | 24 | 23 | 8 | 5 | 5 |
| MobileNetV2_SSDLite | 29 | 23 | 23 | 23 | 8 | 6 | 6 |
| ObjectDetector | 29 | 23 | 23 | 24 | 14 | 10 | 11 |

Get your own model

Alternatively, you can use your own object detection model.

Build & Run

1. Prerequisites

1.1 Import object detection model

(screenshot: importing the model)

Once you import the model, the compiler automatically generates a model helper class on the build path. You access the model by creating an instance of that helper class, not by loading it from the build path directly.
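For example, assuming the imported file is MobileNetV2_SSDLite.mlmodel (any of the models above works the same way), Xcode generates a class of the same name. A minimal sketch of creating the instance that the snippets below reference as objectDectectionModel:

// Sketch: `MobileNetV2_SSDLite` is the class Xcode generates from a file
// named MobileNetV2_SSDLite.mlmodel; substitute the class for your model.
// The property name matches the one used in the snippets further down.
let objectDectectionModel = MobileNetV2_SSDLite()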

1.2 Add a permission to Info.plist for camera access

(screenshot: prerequest_001_plist)
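The key to add is NSCameraUsageDescription, with a short string explaining why the app needs the camera. Optionally, you can also request camera authorization in code before starting the capture session; a minimal sketch:

import AVFoundation

// Prompt for camera access; the system shows the NSCameraUsageDescription
// string from Info.plist the first time this runs.
AVCaptureDevice.requestAccess(for: .video) { granted in
    print("camera access granted:", granted)
}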

2. Dependencies

No external libraries are used yet.

3. Code

3.1 Import Vision framework

import Vision

3.2 Define properties for Core ML

class ViewController: UIViewController {
    //
    // ...
    //

    // MARK: - Vision Properties
    var request: VNCoreMLRequest?
    var visionModel: VNCoreMLModel?

    //
    // ...
    //
}

3.3 Configure and prepare the model

// MARK: - Setup Core ML
extension ViewController {
    func setupModel() {
        if let visionModel = try? VNCoreMLModel(for: objectDectectionModel.model) {
            self.visionModel = visionModel
            request = VNCoreMLRequest(model: visionModel, completionHandler: visionRequestDidComplete)
            // .scaleFill stretches the camera frame to the model's input size.
            request?.imageCropAndScaleOption = .scaleFill
        } else {
            fatalError("failed to create the Vision model")
        }
    }
}

// MARK: - Post-processing
extension ViewController {
    func visionRequestDidComplete(request: VNRequest, error: Error?) {
        if let predictions = request.results as? [VNRecognizedObjectObservation] {
            // <# TODO #> handle the detections (see the sketch below)
        }
    }
}
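The TODO above is where the detections get consumed. As an illustrative sketch (not the project's actual box-drawing code), each VNRecognizedObjectObservation carries ranked labels and a normalized bounding box:

// Illustrative only: log each detection's top label, confidence, and box.
extension ViewController {
    func handle(predictions: [VNRecognizedObjectObservation]) {
        for prediction in predictions {
            let topLabel = prediction.labels.first?.identifier ?? "unknown"
            let confidence = prediction.labels.first?.confidence ?? 0
            // boundingBox is normalized to 0...1 with a lower-left origin.
            print(topLabel, confidence, prediction.boundingBox)
        }
    }
}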

3.4 Inference 🏃‍♂

// MARK: - Inference!
extension ViewController {
    func predictUsingVision(pixelBuffer: CVPixelBuffer) {
        // Safely unwrap the optional request created in setupModel().
        guard let request = request else { return }
        // Vision resizes the pixel buffer to the model's input size automatically.
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
        try? handler.perform([request])
    }
}
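predictUsingVision(pixelBuffer:) needs a stream of camera frames. A minimal sketch of feeding it from an AVCaptureVideoDataOutput delegate, assuming a capture session is configured elsewhere with self as the sample-buffer delegate:

import AVFoundation

// Forward each camera frame to Vision. The AVCaptureSession and
// AVCaptureVideoDataOutput setup is assumed to live elsewhere.
extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        predictUsingVision(pixelBuffer: pixelBuffer)
    }
}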

Performance Test

(preparing...)

See also
