An example of using the Vision framework for face landmark detection in iOS 11

VisionFaceDetection

An example of using the Vision framework for face landmark detection

Landmark detection needs to be divided into two steps.

The first step is face rectangle detection, using VNDetectFaceRectanglesRequest on the pixelBuffer provided by the captureOutput delegate method.
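A minimal sketch of this first step (the `FaceDetector` class name and the `.leftMirrored` orientation are my assumptions, not taken from the project):

```swift
import AVFoundation
import Vision

// Hypothetical delegate class; the project's actual types may differ.
final class FaceDetector: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    // Step one: detect face rectangles.
    let faceDetectionRequest = VNDetectFaceRectanglesRequest { request, error in
        guard error == nil,
              let faces = request.results as? [VNFaceObservation] else { return }
        // Each observation carries a boundingBox in normalized (0...1) coordinates.
        print("Detected \(faces.count) face(s)")
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // Run the request against this frame's pixel buffer.
        // .leftMirrored is a typical orientation for the front camera.
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                            orientation: .leftMirrored,
                                            options: [:])
        try? handler.perform([faceDetectionRequest])
    }
}
```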

Next, we need to set the inputFaceObservations property of the VNDetectFaceLandmarksRequest object to provide the input. Now we are ready to start landmark detection.
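A sketch of this second step, assuming the pixel buffer and the observations from the rectangle request are already in hand (`detectLandmarks` is an illustrative name, not the project's):

```swift
import Vision

func detectLandmarks(on pixelBuffer: CVPixelBuffer, faces: [VNFaceObservation]) {
    let landmarksRequest = VNDetectFaceLandmarksRequest { request, error in
        guard error == nil,
              let observations = request.results as? [VNFaceObservation] else { return }
        for face in observations {
            // The landmarks property is populated once this request completes.
            if let leftEye = face.landmarks?.leftEye {
                print("Left eye: \(leftEye.pointCount) points")
            }
        }
    }
    // Provide the input: restrict landmark detection to the faces
    // found by VNDetectFaceRectanglesRequest in step one.
    landmarksRequest.inputFaceObservations = faces

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([landmarksRequest])
}
```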

It's possible to detect landmarks such as faceContour, leftEye, rightEye, nose, noseCrest, lips, outerLips, leftEyebrow, and rightEyebrow.
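These regions live on the observation's landmarks property (note that Vision exposes the inner lips as innerLips). A hypothetical helper, not part of the project, that collects them by name might look like:

```swift
import Vision

// Illustrative helper: gathers every available landmark region by name.
func allRegions(of face: VNFaceObservation) -> [String: [CGPoint]] {
    guard let landmarks = face.landmarks else { return [:] }
    let regions: [(name: String, region: VNFaceLandmarkRegion2D?)] = [
        ("faceContour", landmarks.faceContour),
        ("leftEye", landmarks.leftEye),
        ("rightEye", landmarks.rightEye),
        ("nose", landmarks.nose),
        ("noseCrest", landmarks.noseCrest),
        ("lips", landmarks.innerLips),  // Vision's property for "lips" is innerLips
        ("outerLips", landmarks.outerLips),
        ("leftEyebrow", landmarks.leftEyebrow),
        ("rightEyebrow", landmarks.rightEyebrow),
    ]
    var result: [String: [CGPoint]] = [:]
    for entry in regions {
        if let region = entry.region {
            // Points are normalized to the face's bounding box.
            result[entry.name] = region.normalizedPoints
        }
    }
    return result
}
```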

To display the results I'm using multiple CAShapeLayers with UIBezierPaths. Landmark detection works on a live front camera preview.
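A sketch of the drawing step. The project uses UIBezierPath on iOS; this version builds the equivalent Core Graphics CGMutablePath for a CAShapeLayer, and the coordinate flip (Vision's origin is bottom-left, a view's is top-left) is my assumption about the conversion. The `points` array stands in for a region's normalizedPoints:

```swift
import QuartzCore

// Hypothetical helper: builds a CAShapeLayer outlining one landmark region.
// `normalizedPoints` are relative to the face's bounding box, which is
// itself normalized to the image; `viewSize` is the preview layer's size.
func shapeLayer(points normalizedPoints: [CGPoint],
                faceBoundingBox: CGRect,
                in viewSize: CGSize) -> CAShapeLayer {
    // Convert the normalized, bottom-left-origin bounding box
    // to top-left-origin view coordinates.
    let faceRect = CGRect(
        x: faceBoundingBox.origin.x * viewSize.width,
        y: (1 - faceBoundingBox.origin.y - faceBoundingBox.height) * viewSize.height,
        width: faceBoundingBox.width * viewSize.width,
        height: faceBoundingBox.height * viewSize.height)

    let path = CGMutablePath()
    for (index, p) in normalizedPoints.enumerated() {
        // Scale each point into the face rectangle, flipping the y axis.
        let point = CGPoint(x: faceRect.origin.x + p.x * faceRect.width,
                            y: faceRect.origin.y + (1 - p.y) * faceRect.height)
        if index == 0 {
            path.move(to: point)
        } else {
            path.addLine(to: point)
        }
    }
    path.closeSubpath()

    let layer = CAShapeLayer()
    layer.path = path
    layer.strokeColor = CGColor(colorSpace: CGColorSpaceCreateDeviceRGB(),
                                components: [1, 0, 0, 1])
    layer.fillColor = nil
    layer.lineWidth = 2
    return layer
}
```

One such layer per landmark region can then be added as a sublayer of the camera preview layer.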

Example of detected landmarks

If you want more details about using the Vision framework, check out my blog post about it.