# Sign-language

A sign-language-to-speech application that uses the LeapMotion to track gestures and the WekiMini to map gestures to sentences.

The project was originally created in the Hack2Wear hackathon (2014), together with Ilai Giloh and Mel.

## How to run it?

  1. First, install the WekiMini.

  2. In addition, I use leap_python3 as a git submodule, so make sure to pull the submodule and compile it (see the instructions in the linked repo) before running.

  3. Install the rest of the dependencies with `pip install -r requirements.txt` (preferably into a virtual env).

  4. Make sure you have a Voice RSS key to convert text to speech. Get one and export it with `export VOICE_RSS_KEY=<YOUR KEY>`.

If you are using autoenv, you can write the above line to a secrets file, and it will be sourced when you change into the project directory.
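The setup steps above boil down to something like the following (a sketch, assuming a fresh clone; the virtual-env location and key placeholder are illustrative):

```shell
# Pull and initialize the leap_python3 submodule
git submodule update --init --recursive

# Install the Python dependencies, preferably into a virtual env
python3 -m venv venv && . venv/bin/activate
pip install -r requirements.txt

# Export your Voice RSS key for text-to-speech (replace the placeholder)
export VOICE_RSS_KEY=<YOUR KEY>
```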

  5. Start the WekiMini. In the input section choose 30 inputs; in the output section choose the type "All dynamic time warping" with X gesture types, where X is the number of sentences you want to work with. Hit next.

  6. Run `./sign_language.py`. It will start passing data from the LeapMotion to the WekiMini and getting data back, converting gesture IDs to text and playing the sentences.

Some WekiMini functions are still not fully automated; for example, you will have to find the sweet spot of the threshold slider manually in the WekiMini GUI.

  7. Enjoy!
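The gesture-ID-to-speech step in 6 can be sketched roughly as below. This is not the project's actual code: the sentence mapping and function names are made up for illustration, and the Voice RSS request is shown only as a URL builder (the real app would fetch and play the returned audio).

```python
# Sketch: translate a WekiMini gesture ID into a sentence and build a
# Voice RSS text-to-speech request URL. All names here are illustrative.
import os
from urllib.parse import urlencode

# Hypothetical mapping from gesture ID (WekiMini output) to a sentence.
SENTENCES = {
    1: "Hello, nice to meet you.",
    2: "Thank you very much.",
    3: "See you later.",
}

def gesture_to_sentence(gesture_id):
    """Return the sentence for a recognized gesture ID, or None."""
    return SENTENCES.get(gesture_id)

def tts_request_url(text, key, language="en-us"):
    """Build a Voice RSS GET URL for converting `text` to speech."""
    params = urlencode({"key": key, "hl": language, "src": text})
    return "https://api.voicerss.org/?" + params

sentence = gesture_to_sentence(2)
url = tts_request_url(sentence, os.environ.get("VOICE_RSS_KEY", "demo"))
```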