Winner of "Best in Communication and Mobility" at Reality Virtually 2019. Speech-to-text and sign language-to-text augmented reality application for the hearing impaired developed for the Magic Leap One Mixed Reality Headset.
heAR - Richard Gao, Mustafa Eyceoz, James Ma, Joanna Liu, and Devanshi Udeshi

Revolutionizing human communication for the Hearing Impaired.
App developed for the Magic Leap One Mixed Reality Headset.

Features

- Intuitive interface for speech-to-text conversion.
- Create speech bubbles for reading text through simple controller input.
- Usage instructions included on the UI.
- A speech log (a history of what has been said) is recorded and displayed on the UI.
- The UI can be hidden on demand for minimal intrusion.
- Captures and translates sign language gestures instantly at the press of a button.
- Uses a machine-learning model trained to convert ASL (American Sign Language) gestures to text.

How It Was Built

- Magic Leap SDK for Unity and the Lumin SDK to build the AR experience, including space scanning, object overlay for the speech bubbles, and ray casting for human recognition.
- IBM Watson API and Watson's Unity SDK for speech-to-text recognition, plus a custom CNN for gesture / sign language recognition.
- The CNN is hosted on Google Cloud, and the app communicates with the server over a RESTful API.
- After Effects, Illustrator, Cinema 4D, and Blender to build custom assets and animations.
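As a rough sketch of the gesture-recognition round trip described above: the app captures a frame, POSTs it to the hosted CNN, and reads back a predicted label. The endpoint URL, route, and JSON response shape below are illustrative assumptions, not the actual server contract.

```python
import json
import urllib.request

# Hypothetical endpoint for the Google Cloud-hosted CNN; the real URL
# and payload format are not documented in this README.
PREDICT_URL = "https://example-cnn-host.appspot.com/predict"

def build_request(frame_bytes: bytes) -> urllib.request.Request:
    """Wrap a captured gesture frame (e.g. JPEG bytes) in a POST request."""
    return urllib.request.Request(
        PREDICT_URL,
        data=frame_bytes,
        headers={"Content-Type": "image/jpeg"},
        method="POST",
    )

def parse_prediction(body: bytes) -> str:
    """Pick the highest-confidence label from an assumed JSON response of
    the form {"predictions": [{"label": ..., "confidence": ...}, ...]}."""
    data = json.loads(body)
    best = max(data["predictions"], key=lambda p: p["confidence"])
    return best["label"]
```

For example, a response body of `{"predictions": [{"label": "hello", "confidence": 0.92}, {"label": "thanks", "confidence": 0.08}]}` would yield `"hello"`.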
