MobileNetApp for iOS

An example app that runs MobileNet.mlmodel using Core ML.

(Demo GIF: real-time classification with Core ML)

Requirements

  • Xcode 9.2+
  • iOS 11.0+
  • Swift 4

Download model

  • MobileNet model for Core ML (MobileNet.mlmodel) ☞ Download the Core ML model from the Apple Developer page.

Source Link

https://github.com/tensorflow/models/blob/master/slim/nets/mobilenet_v1.md

Caffe Version

The Core ML model was converted from the Caffe version of the original MobileNet model: https://github.com/shicai/MobileNet-Caffe

Authors

Original paper: MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Authors: Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam

Caffe version: Shicai Yang

License

Apache 2.0 http://www.apache.org/licenses/LICENSE-2.0

Build & Run

1. Prerequisites

1.1 Import the Core ML model

(Screenshot: importing the Core ML model into the Xcode project)

Once you import the model, Xcode automatically generates a model helper class on the build path. Access the model by creating an instance of this helper class, not by referencing the build path directly.
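
For example, once MobileNet.mlmodel is in the project, the generated class can be instantiated directly. A minimal sketch (the class name MobileNet mirrors the model file name):

import CoreML

// Xcode generates the `MobileNet` class from MobileNet.mlmodel at build time.
let classifier = MobileNet()
let model: MLModel = classifier.model  // the underlying MLModel, usable with Vision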

1.2 Add a camera-access permission entry to Info.plist

(Screenshot: Info.plist camera permission entry)
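
The entry to add is the NSCameraUsageDescription key; without it, iOS terminates the app on its first camera access. Access can also be requested explicitly at runtime, as in this minimal sketch:

import AVFoundation

// Ask for camera permission before configuring the capture session.
AVCaptureDevice.requestAccess(for: .video) { granted in
    guard granted else { return }
    // Safe to set up and start the AVCaptureSession from here.
}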

2. Dependencies

No external libraries yet.

3. Code

3.1 Import Vision framework

import Vision

3.2 Define properties for Core ML

// MARK: - Core ML Model
typealias ClassifierModel = MobileNet  // the class Xcode generates from MobileNet.mlmodel
var coremlModel: ClassifierModel? = nil

// MARK: - Vision Properties
var request: VNCoreMLRequest?
var visionModel: VNCoreMLModel?

3.3 Configure and prepare the model

override func viewDidLoad() {
    super.viewDidLoad()

    // Wrap the generated Core ML model for use with Vision.
    if let visionModel = try? VNCoreMLModel(for: ClassifierModel().model) {
        self.visionModel = visionModel
        request = VNCoreMLRequest(model: visionModel, completionHandler: visionRequestDidComplete)
        // Scale the camera frame to fill the model's expected input size.
        request?.imageCropAndScaleOption = .scaleFill
    } else {
        fatalError("Could not create VNCoreMLModel")
    }
}

func visionRequestDidComplete(request: VNRequest, error: Error?) {
    // Post-process the inference results here
    // (e.g. read `request.results`; see the sketch below).
}
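
For a classifier such as MobileNet, Vision delivers the results as VNClassificationObservation values. A minimal sketch of that post-processing (the print statement is illustrative):

func visionRequestDidComplete(request: VNRequest, error: Error?) {
    // Classification observations are sorted by confidence, highest first.
    guard let observations = request.results as? [VNClassificationObservation],
          let top = observations.first else { return }
    print("\(top.identifier): \(top.confidence)")
}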

3.4 Inference 🏃‍♂

guard let request = request else { fatalError() }
// Run the Vision request on a single camera frame (CVPixelBuffer).
let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
try? handler.perform([request])
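
In this app the pixelBuffer comes from the live camera feed. A minimal sketch of the wiring, assuming the view controller (called ViewController here, an assumption) acts as the capture delegate:

import AVFoundation
import Vision

extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Extract a CVPixelBuffer from each frame and run inference on it.
        guard let request = request,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
        try? handler.perform([request])
    }
}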