SignAR

HoloLens app for detecting, reading, and displaying text in the environment

Requirements

This application is based on the MS HoloToolkit; in particular, it uses its spatial mapping capabilities. Basic familiarity with developing for and deploying to the MS HoloLens is assumed.

To get started with HoloLens, follow the tutorials on the HoloAcademy website (Holograms 101, 212, and 230 in particular).

For ease of use, the necessary assets from the HoloToolkit are included.

This application uses the Google Vision API. It does not run as-is: you need an account for the Google Vision API. For more information, see Google Vision.

Software

Make sure the project is set up to work with GitHub.

Hardware

  • MS HoloLens with developer mode enabled
  • Bluetooth clicker (toggles augmentation on and off)

Deploying the app

  1. Clone repo to your computer
  2. Open project in Unity (make sure that the SignAR scene is loaded)
  3. Open ApiManager.cs and add your Google Vision API account information on line 26 (see the sketch after this list)
  4. Build and Deploy (see HoloAcademy for tutorials)
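
For reference, the edit in step 3 might look roughly like the sketch below. The class structure, field names, and use of UnityWebRequest are assumptions for illustration only; check ApiManager.cs itself for the exact line to change. Only the Google Cloud Vision endpoint and the need for your own key are taken from the setup above.

```csharp
// Illustrative sketch only: the real ApiManager.cs in the repo will differ.
// Replace the placeholder key with your own Google Vision API credentials.
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class ApiManagerSketch : MonoBehaviour
{
    // Roughly what line 26 of ApiManager.cs expects: your own API key.
    private const string visionApiKey = "YOUR_GOOGLE_VISION_API_KEY";

    // Google Cloud Vision REST endpoint for image annotation.
    private const string visionUrl =
        "https://vision.googleapis.com/v1/images:annotate?key=" + visionApiKey;

    // Sends a TEXT_DETECTION request for a base64-encoded image.
    public IEnumerator RequestTextDetection(string base64Image)
    {
        string body =
            "{\"requests\":[{\"image\":{\"content\":\"" + base64Image + "\"}," +
            "\"features\":[{\"type\":\"TEXT_DETECTION\"}]}]}";

        var request = new UnityWebRequest(visionUrl, "POST");
        request.uploadHandler = new UploadHandlerRaw(System.Text.Encoding.UTF8.GetBytes(body));
        request.downloadHandler = new DownloadHandlerBuffer();
        request.SetRequestHeader("Content-Type", "application/json");

        yield return request.SendWebRequest();
        Debug.Log(request.downloadHandler.text); // JSON with detected text annotations
    }
}
```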

Using the app

General App description

The SignAR application detects text in the direction the user is looking and places spherical icons wherever text exists. The number of icons displayed is limited to 5 (adjustable in Unity).

Every icon represents a sign containing text in the real world. These icons are color-coded (green for confident, orange for semi-confident, and red for doubtful) and can be selected. Once selected, the application will read and display the text stored at that icon, and the icon will disappear. See below for a list of modes and voice commands.
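
A minimal sketch of how the icon cap and the confidence color coding could be wired up is shown below. The class and field names, and the 0.8/0.5 thresholds, are assumptions for illustration; only the limit of 5 icons and the green/orange/red scheme come from the description above.

```csharp
// Hypothetical sketch; the actual component in the project may differ.
using UnityEngine;

public class IconSpawner : MonoBehaviour
{
    // Exposed in the Unity Inspector, matching the adjustable limit of 5 icons.
    [SerializeField] private int maxIcons = 5;
    [SerializeField] private GameObject iconPrefab;

    private int spawnedIcons = 0;

    // Spawns a sphere icon at the detected text position, colored by OCR confidence.
    public void SpawnIcon(Vector3 position, float confidence)
    {
        if (spawnedIcons >= maxIcons)
        {
            return; // Respect the display limit.
        }

        GameObject icon = Instantiate(iconPrefab, position, Quaternion.identity);
        icon.GetComponent<Renderer>().material.color = ColorForConfidence(confidence);
        spawnedIcons++;
    }

    // Green for confident, orange for semi-confident, red for doubtful.
    // The 0.8 / 0.5 thresholds are illustrative, not taken from the project.
    private Color ColorForConfidence(float confidence)
    {
        if (confidence > 0.8f) return Color.green;
        if (confidence > 0.5f) return new Color(1f, 0.5f, 0f); // orange
        return Color.red;
    }
}
```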

The clicker can be used with the application. If the user is gazing at an icon, a click will select it. If the user is not gazing at an icon and previously selected one, the click will deselect that icon. Otherwise, a click will tell the HoloLens to detect text if it is in manual mode.
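
The click routing described above could be structured roughly as follows; all names here are illustrative rather than the project's actual API.

```csharp
// Hypothetical sketch of the click routing; class, tag, and method names are illustrative.
using UnityEngine;

public class ClickHandler : MonoBehaviour
{
    private GameObject selectedIcon;
    public bool manualMode = false;

    // Called when the Bluetooth clicker (or an air tap) fires.
    public void OnClick()
    {
        GameObject gazedIcon = GetGazedIcon();

        if (gazedIcon != null)
        {
            // Gazing at an icon: select it so its text is read and displayed.
            selectedIcon = gazedIcon;
            // ... read/display the text stored on the icon ...
        }
        else if (selectedIcon != null)
        {
            // Not gazing at anything, but an icon was selected before: deselect it.
            selectedIcon = null;
        }
        else if (manualMode)
        {
            // Nothing gazed at, nothing selected: trigger a new text detection pass.
            // ... start text detection ...
        }
    }

    // Raycasts along the user's gaze and returns the icon hit, if any.
    private GameObject GetGazedIcon()
    {
        RaycastHit hit;
        Transform cam = Camera.main.transform;
        if (Physics.Raycast(cam.position, cam.forward, out hit) &&
            hit.collider.CompareTag("TextIcon"))
        {
            return hit.collider.gameObject;
        }
        return null;
    }
}
```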

Modes

  • Audio Only Mode (default): when the application detects text, it reads all of the text in front of the user aloud without showing any icons.
  • Icon Mode: when the application detects text, it first shows icons in the scene. The user can tap any icon, and the application will read/display its text (see the sketch below).
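
A sketch of how the two modes might branch on a detection result follows; the enum and method names are assumptions for illustration, not the project's actual code.

```csharp
// Hypothetical sketch of the two modes described above.
using UnityEngine;

public enum DisplayMode
{
    AudioOnly, // default: read all detected text aloud, no icons
    Icon       // place selectable icons, read text on tap
}

public class ModeController : MonoBehaviour
{
    public DisplayMode mode = DisplayMode.AudioOnly;

    // Called once text detection results come back from the Vision API.
    public void OnTextDetected(string[] detectedTexts, Vector3[] positions)
    {
        if (mode == DisplayMode.AudioOnly)
        {
            foreach (string text in detectedTexts)
            {
                // ... hand the text to text-to-speech ...
                Debug.Log("Reading aloud: " + text);
            }
        }
        else
        {
            for (int i = 0; i < positions.Length; i++)
            {
                // ... spawn an icon at positions[i] that stores detectedTexts[i] ...
                Debug.Log("Placing icon at " + positions[i]);
            }
        }
    }
}
```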

Voice Commands

  • What's here: Tells the application to search for text in the scene
  • Icon mode: Switches to Icon mode (icons are shown)
  • Audio only mode: Switches to Audio Only mode
  • Clear Icons: Deletes all icons in the scene (Icon mode only)
  • Show me: Reads and displays the text of the icon currently gazed at (Icon mode only)
  • Read all here: Reads the text of all icons currently in front of the user
  • Hide words: Hides the text of the currently selected icon (Icon mode only)
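
On HoloLens, voice commands like these are typically registered through Unity's KeywordRecognizer (UnityEngine.Windows.Speech). The sketch below shows one possible wiring; the handler bodies are placeholders, not the project's actual implementation.

```csharp
// Sketch of registering the voice commands above with Unity's KeywordRecognizer.
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Windows.Speech;

public class VoiceCommandManager : MonoBehaviour
{
    private KeywordRecognizer keywordRecognizer;
    private Dictionary<string, System.Action> commands = new Dictionary<string, System.Action>();

    private void Start()
    {
        // Placeholder handlers: the real app would trigger detection, mode switches, etc.
        commands.Add("What's here", () => Debug.Log("Search for text in the scene"));
        commands.Add("Icon mode", () => Debug.Log("Switch to Icon mode"));
        commands.Add("Audio only mode", () => Debug.Log("Switch to Audio Only mode"));
        commands.Add("Clear icons", () => Debug.Log("Delete all icons"));
        commands.Add("Show me", () => Debug.Log("Read and display the gazed icon"));
        commands.Add("Read all here", () => Debug.Log("Read all icons in view"));
        commands.Add("Hide words", () => Debug.Log("Hide text of the selected icon"));

        keywordRecognizer = new KeywordRecognizer(new List<string>(commands.Keys).ToArray());
        keywordRecognizer.OnPhraseRecognized += OnPhraseRecognized;
        keywordRecognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        System.Action action;
        if (commands.TryGetValue(args.text, out action))
        {
            action();
        }
    }
}
```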