FaceRecognition in ARKit

This is a simple showcase project that detects faces using the Vision API and runs each extracted face through a Core ML model to identify specific people.
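The two-stage pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the model class name `FaceClassificationModel` is a placeholder for whatever Core ML model you generate, and error handling is kept to a minimum.

```swift
import UIKit
import Vision

// Sketch of the pipeline: Vision detects face rectangles, then each face
// crop is classified by a Core ML model (here a hypothetical
// `FaceClassificationModel`) wrapped in a VNCoreMLRequest.
func recognizeFaces(in image: CGImage, completion: @escaping (String) -> Void) {
    let faceRequest = VNDetectFaceRectanglesRequest { request, _ in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // Vision returns a normalized bounding box with a bottom-left
            // origin; convert it to pixel coordinates for cropping.
            let box = face.boundingBox
            let rect = CGRect(x: box.minX * CGFloat(image.width),
                              y: (1 - box.maxY) * CGFloat(image.height),
                              width: box.width * CGFloat(image.width),
                              height: box.height * CGFloat(image.height))
            guard let crop = image.cropping(to: rect) else { continue }

            // Run the extracted face through the (assumed) Core ML model.
            guard let model = try? VNCoreMLModel(for: FaceClassificationModel().model) else { return }
            let mlRequest = VNCoreMLRequest(model: model) { mlRequest, _ in
                if let best = (mlRequest.results as? [VNClassificationObservation])?.first {
                    completion(best.identifier)  // a person's label, or "unknown"
                }
            }
            try? VNImageRequestHandler(cgImage: crop, options: [:]).perform([mlRequest])
        }
    }
    try? VNImageRequestHandler(cgImage: image, options: [:]).perform([faceRequest])
}
```

In the real app the input frames come from the ARKit camera feed rather than a single `CGImage`, but the detect-then-classify structure is the same.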

Demo: scene with face recognition (demo.gif)

Requirements

  • Xcode 9
  • iPhone 6s or newer
  • Machine-Learning model
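The iPhone 6s minimum exists because ARKit's world tracking requires an A9 chip or newer. If you adapt the project, a runtime check along these lines (a sketch, using ARKit's standard availability API) avoids crashing on unsupported hardware:

```swift
import ARKit

// ARKit world tracking is only available on devices with an A9 chip or
// newer, which is why iPhone 6s is the minimum supported device.
func deviceSupportsARKit() -> Bool {
    return ARWorldTrackingConfiguration.isSupported
}
```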

Machine-Learning model

To create your own machine-learning model, you can read our blog post "How we created our Face-Recognition model".

The short version is:

  • Trained a model on AWS using NVIDIA DIGITS
  • Took a couple of hundred pictures of each person and extracted the faces
  • Added an "unknown" category containing a variety of other faces
  • Used a pretrained model and fine-tuned it for face recognition
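The "unknown" category above matters at inference time too: when no known person scores confidently, the app should fall back to "unknown" rather than guess. A small sketch of that logic (the threshold and label names are illustrative, not the project's actual values):

```swift
// Picks the most probable label from classifier output, falling back to
// "unknown" when confidence is low or "unknown" itself wins.
func bestLabel(from probabilities: [String: Double],
               threshold: Double = 0.8) -> String {
    guard let best = probabilities.max(by: { $0.value < $1.value }),
          best.value >= threshold,
          best.key != "unknown" else {
        return "unknown"
    }
    return best.key
}
```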

Acknowledgements