
Read this in other languages: Chinese.

Use Watson Visual Recognition and Core ML to create an Augmented Reality based résumé

The easiest way to find and connect with people around the world is through social media apps like Facebook, Twitter, and LinkedIn. These, however, provide only text-based search capabilities. With the release of the iOS ARKit toolkit, search using facial recognition is now possible. By combining iOS face detection using the Vision API, classification using IBM Visual Recognition, and person identification using the classified image and data, you can build an app that searches for faces and identifies them. One such use case is an augmented reality based résumé built with visual recognition.

The main purpose of this code pattern is to demonstrate how to identify a person and their details using augmented reality and visual recognition. The iOS app recognizes a face and presents you with an AR view that displays a résumé of the person in the camera view. The app classifies a person's face with Watson Visual Recognition and Core ML. The images are classified offline using a deep neural network that is trained by Visual Recognition.

After completing this code pattern, a user will know how to:

  • Configure ARKit
  • Use the iOS Vision module
  • Create a Swift iOS application that uses the Watson Swift SDK
  • Classify images with Watson Visual Recognition and Core ML


ARResume Architecture

  1. User opens the app on their mobile device
  2. A face is detected using the iOS Vision module
  3. An image of the face is sent to Watson Visual Recognition to be classified
  4. Additional information about the person is retrieved from a Cloudant database based on the classification from Watson Visual Recognition
  5. The information from the database is placed in front of the original person's face in the mobile camera view
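The flow above can be sketched as a chain of steps. The types and names below are illustrative stand-ins, not the project's actual classes; the real app uses the Vision framework, the Watson Swift SDK, and ARKit for these stages.

```swift
import Foundation

// Hypothetical stand-ins for the app's real components.
protocol FaceClassifier {
    // Returns a Visual Recognition classification ID for a detected face image.
    func classify(faceImage: Data) -> String
}

protocol PersonStore {
    // Looks up the résumé document stored in Cloudant for a classification ID.
    func person(for classificationId: String) -> [String: String]?
}

// Steps 2-5 of the flow: detect -> classify -> look up -> present.
func resumeOverlay(faceImage: Data,
                   classifier: FaceClassifier,
                   store: PersonStore) -> String {
    let id = classifier.classify(faceImage: faceImage)   // Watson VR / Core ML
    guard let person = store.person(for: id) else {      // Cloudant lookup
        return "Training in progress"
    }
    return person["fullname"] ?? "Unknown"               // text for the AR overlay
}

// Canned implementations so the sketch runs without any services.
struct StubClassifier: FaceClassifier {
    func classify(faceImage: Data) -> String { "classifier_001" }
}
struct StubStore: PersonStore {
    func person(for classificationId: String) -> [String: String]? {
        classificationId == "classifier_001" ? ["fullname": "Joe Smith"] : nil
    }
}
```

The stubs make the data flow testable in isolation; in the app, the classifier stage runs offline against the bundled Core ML models when the network is unavailable.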

Included Components

  • ARKit: ARKit is an augmented reality framework for iOS applications.
  • Watson Visual Recognition: Visual Recognition understands the contents of images: it tags images with visual concepts, finds human faces, approximates age and gender, and finds similar images in a collection.
  • Core ML: With Core ML, you can integrate trained machine learning models into your app.
  • Cloudant NoSQL DB: A fully managed data layer designed for modern web and mobile applications that leverages a flexible JSON schema.


Featured Technologies

  • Artificial Intelligence: Artificial intelligence can be applied to disparate solution spaces to deliver disruptive technologies.
  • Mobile: Systems of engagement are increasingly using mobile technology as the platform for delivery.

Watch the Video


Steps

  1. At a command line, clone this repo:
git clone
  2. Log into your IBM Cloud account and create a Watson Visual Recognition service. Create a set of credentials and note your API key.

  3. When the app loads, it also loads three Core ML models that are bundled with the app. The models were trained using the IBM Watson Visual Recognition tool and downloaded as Core ML models.

To create a new classifier, use the Watson Visual Recognition tool. A classifier trains the Visual Recognition service so that it can recognize different images of the same person. Use at least ten images of your own headshot, and create a negative data set with headshots that are not your own.

  4. Create an IBM Cloudant NoSQL database and save the credentials. Each JSON document in this database represents one person. The JSON schema can be found in schema.json. When the app loads, it also creates three documents for the three Core ML models that are bundled with the app, as mentioned in step 3.

To create new documents in the same database, use the provided schema.json to fill out the details. Replace the classificationId in the schema with the classificationId you receive once the Watson Visual Recognition classifier has been successfully trained. This ID is used to retrieve details about the classified person.
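A per-person document can be modeled in Swift with Codable. The field names below follow the sample data later in this README; consult ResumeAR/schema.json for the authoritative schema.

```swift
import Foundation

// One person's résumé document as stored in Cloudant.
struct Person: Codable {
    let classificationId: String   // ID returned by the trained VR classifier
    let fullname: String
    let linkedin: String
    let twitter: String
    let facebook: String
    let phone: String
    let location: String
}

// Sample document matching the schema.
let json = """
{"classificationId": "Watson_VR_Classifier_ID", "fullname": "Joe Smith",
 "linkedin": "jsmith", "twitter": "jsmith", "facebook": "jsmith",
 "phone": "512-555-1234", "location": "San Francisco"}
""".data(using: .utf8)!

let person = try! JSONDecoder().decode(Person.self, from: json)
```

The classificationId field is the join key: the app feeds the classifier's result into a Cloudant query and renders the matching document in the AR view.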

  5. Go to the ios_swift directory and open the project using Xcode.

  6. Create BMSCredentials.plist in the project and fill in your credentials. The plist file looks like the following:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- your Watson Visual Recognition and Cloudant credentials -->
</dict>
</plist>
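The app reads such a plist at launch. A minimal sketch of parsing it, assuming hypothetical key names (use whatever keys your BMSCredentials.plist actually defines):

```swift
import Foundation

// Inline plist content; in the app you would load the bundled file instead:
// let url = Bundle.main.url(forResource: "BMSCredentials", withExtension: "plist")!
let plistXML = """
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>visualRecognitionApiKey</key>
    <string>YOUR_VR_API_KEY</string>
    <key>cloudantUrl</key>
    <string>YOUR_CLOUDANT_URL</string>
</dict>
</plist>
""".data(using: .utf8)!

// Parse the XML plist into a dictionary of credential strings.
let credentials = try! PropertyListSerialization.propertyList(
    from: plistXML, options: [], format: nil) as! [String: String]
```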
  7. At a command line, run pod install to install the dependencies.

  8. Run carthage bootstrap --platform iOS to install the Watson-related dependencies.

  9. Once the previous steps are complete, go back to Xcode and run the application by clicking Build and Run.

NOTE: The training in Watson Visual Recognition might take a couple of minutes. If the classifier is still training, the AR view will show Training in progress. You can check the status of your classifier by using the following curl command:

curl "{API_KEY}&verbose=true&version=2016-05-20"

Replace API_KEY with your Watson Visual Recognition API key.
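The status check can also be done in Swift by decoding the endpoint's JSON response. The response shape below is an assumption based on the Visual Recognition v3 classifiers endpoint; verify it against your actual output.

```swift
import Foundation

// Assumed shape of the classifiers list returned by the service.
struct ClassifierList: Codable {
    struct Classifier: Codable {
        let classifier_id: String
        let status: String   // e.g. "training" or "ready"
    }
    let classifiers: [Classifier]
}

// Sample response; in practice this would come from the curl call above.
let response = """
{"classifiers": [{"classifier_id": "classifier_001", "status": "ready"}]}
""".data(using: .utf8)!

let list = try! JSONDecoder().decode(ClassifierList.self, from: response)
// The app can start classifying once every classifier has finished training.
let ready = list.classifiers.allSatisfy { $0.status == "ready" }
```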

  10. To test, you can use the test images provided in the images/TestImages folder.

Adding to the database

To create a new entry in the database perform the following steps:

  1. Create a new Watson Visual Recognition classifier using the online tool for each person you want to be able to identify; use at least ten images of that person.

  2. Update the Cloudant database using the classifier ID from the previous step. To update the database perform a POST command like the following:

data='{"classificationId":"Watson_VR_Classifier_ID","fullname":"Joe Smith","linkedin":"jsmith","twitter":"jsmith","facebook":"jsmith","phone":"512-555-1234","location":"San Francisco"}'

curl -H "Content-Type: application/json" -X POST -d "$data" https://$ACCOUNT/$DATABASE

The $ACCOUNT variable is the URL which can be found in the credentials that you created when setting up Cloudant.

The $DATABASE variable is the database name you created in IBM Cloudant.

See ResumeAR/schema.json for additional information about the Cloudant database configuration.

  3. Run the app and point the camera at your image.

Sample Output

Learn more

  • Artificial Intelligence Code Patterns: Enjoyed this Code Pattern? Check out our other AI Code Patterns.
  • AI and Data Code Pattern Playlist: Bookmark our playlist with all of our Code Pattern videos.
  • With Watson: Want to take your Watson app to the next level? Looking to utilize Watson Brand assets? Join the With Watson program to leverage exclusive brand, marketing, and tech resources to amplify and accelerate your Watson embedded commercial solution.
  • Visual Recognition Example: Offline image classification using Watson Visual Recognition and Core ML.


Troubleshooting

  • To start from scratch, delete the Watson Visual Recognition trained models, delete the data from the Cloudant database, and delete the app to remove the downloaded models.



License

Apache 2.0