This repository has been archived by the owner on Dec 15, 2020. It is now read-only.
Steve Martinelli edited this page Mar 29, 2018 · 15 revisions

Short Name

Create an augmented reality application with facial detection

Short Description

Use Watson Visual Recognition to create an augmented reality application that displays details about a person.

Offering Type

Cognitive

Introduction

Augmented Reality provides an enhanced version of reality by superimposing virtual objects over a user’s view of the real world. ARKit blends digital objects and information with the environment around you, taking apps far beyond the screen and freeing them to interact with the real world in entirely new ways. This pattern combines ARKit with Watson Visual Recognition and a Cloudant database to give you a complete Augmented Reality experience.

Author

Code

Demo

  • N/A

Video

Overview

The easiest way to find and connect with people around the world is through social media apps like Facebook, Twitter, and LinkedIn. These, however, provide only text-based search. With the release of Apple's ARKit toolkit for iOS, search using facial recognition becomes possible. By combining on-device face detection with the iOS Vision API, image classification with IBM Watson Visual Recognition, and person identification based on the classified image and stored data, you can build an app that searches for faces and identifies the people behind them. One such use case is an augmented reality résumé built on visual recognition.

In this code pattern, we will create augmented reality based résumés with Visual Recognition. The iOS app recognizes a face and presents an AR view that displays a résumé of the person in the camera view. The app uses IBM Watson Visual Recognition to classify the image and uses that classification to retrieve details about the person from an IBM Cloudant NoSQL database.

After completing this code pattern, a user will know how to:

  • Configure ARKit
  • Use the iOS Vision module
  • Create a Swift iOS application that uses the Watson Swift SDK
  • Use the face classifier of Watson Visual Recognition
  • Classify images with Watson Visual Recognition and Core ML
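As a taste of the first two items, a minimal view controller that configures an ARKit session and runs a Vision face-detection request might look like the sketch below. This is illustrative only and is not the pattern's actual code; the class name and `detectFaces` helper are assumptions.

```swift
import UIKit
import ARKit
import Vision

class ARFaceSearchViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Configure ARKit: track the device's position and orientation in the real world
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }

    // Detect faces in a captured camera frame using the iOS Vision module
    func detectFaces(in pixelBuffer: CVPixelBuffer) {
        let request = VNDetectFaceRectanglesRequest { request, error in
            guard let faces = request.results as? [VNFaceObservation] else { return }
            // Each observation's boundingBox is in normalized image coordinates;
            // the app would crop the face region here before classification
            print("Detected \(faces.count) face(s)")
        }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
        try? handler.perform([request])
    }
}
```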

Flow

  1. The user opens the app on their mobile device
  2. A face is detected using the iOS Vision module
  3. An image of the face is sent to Watson Visual Recognition to be classified
  4. Additional information about the person is retrieved from a Cloudant database based on the classification from Watson Visual Recognition
  5. The information from the database is placed in front of the person's face in the mobile camera view
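Steps 3 and 4 of the flow above can be sketched with the Watson Swift SDK as follows. Exact initializer and method signatures vary by SDK release, and the classifier ID, API key, and `fetchResume` helper are placeholders, not values from this pattern.

```swift
import UIKit
import VisualRecognitionV3  // Watson Swift SDK

let visualRecognition = VisualRecognition(version: "2018-03-19", apiKey: "YOUR_API_KEY")

func classifyAndLookUp(faceImage: UIImage) {
    // Step 3: send the cropped face image to a custom Watson Visual Recognition classifier
    visualRecognition.classify(image: faceImage, classifierIDs: ["YOUR_CLASSIFIER_ID"]) { classified in
        guard let best = classified.images.first?.classifiers.first?.classes.first else { return }
        // Step 4: use the winning class name as the key into the Cloudant database
        // (fetchResume is a hypothetical helper that queries Cloudant over HTTP)
        fetchResume(forPerson: best.className)
    }
}
```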

Included Components

  • Watson Visual Recognition: Understands the contents of images; tags images with visual concepts, finds human faces, approximates age and gender, and finds similar images in a collection.
  • Cloudant NoSQL DB: A fully managed data layer designed for modern web and mobile applications that leverages a flexible JSON schema.

Featured technologies

  • ARKit: An augmented reality development framework for iOS applications.
  • Artificial Intelligence: Artificial intelligence can be applied to disparate solution spaces to deliver disruptive technologies.
  • Mobile: Systems of engagement are increasingly using mobile technology as the platform for delivery.

Blog

Links