The project was originally created in the Hack2Wear hackathon (2014), together with Ilai Giloh and Mel.
How to run it?
First, you will have to install the WekiMini.
In addition, I use leap_python3 as a git submodule, so make sure to pull the submodule and compile it (see the instructions in the linked repository) before running.
Install the rest of the dependencies with
pip install -r requirements (preferably into a virtual env).
Make sure you have a VoiceRSS key to convert text to speech. Get one and export it with
export VOICE_RSS_KEY=<YOUR KEY>.
If you are using autoenv, you can write the above line to a secrets file and it will be sourced when you change into the project directory.
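As a sanity check before starting everything, the key can be read from the environment at startup. This is only a sketch of a hypothetical helper (the actual script may handle the key differently); it fails fast with a clear message when the variable is missing:

```python
import os
import sys

def get_voice_rss_key():
    # Read the VoiceRSS key exported above; exit early with a clear
    # message if it is missing. Hypothetical helper for illustration.
    key = os.environ.get("VOICE_RSS_KEY")
    if not key:
        sys.exit("VOICE_RSS_KEY is not set; export it before running.")
    return key
```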
Start the WekiMini. In the input section, choose 30 inputs; in the output section, choose the type "All dynamic time warping" with X gesture types, where X is the number of sentences you want to work with. Hit next.
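To give a feel for where the 30 inputs could come from, here is one plausible layout: the 3D fingertip positions of 5 fingers on up to 2 hands (5 × 3 × 2 = 30 floats), padded with zeros when a hand is missing. This is an assumption for illustration only; the actual features the script extracts from the LeapMotion may differ.

```python
def hands_to_inputs(hands, n_inputs=30):
    # hands: a list of hands, each hand a list of 5 fingertip
    # (x, y, z) tuples. Flatten into a fixed-size vector of
    # n_inputs floats, zero-padded if fewer than two hands are seen.
    values = []
    for hand in hands[:2]:          # use at most two hands
        for tip in hand:            # 5 fingertips per hand
            values.extend(tip)
    values.extend([0.0] * (n_inputs - len(values)))  # pad missing data
    return values[:n_inputs]
```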
Run ./sign_language.py. It will start passing data from the LeapMotion to the WekiMini and getting data back, converting gesture IDs to text and playing the sentences.
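The gesture-ID-to-speech step can be sketched as building a VoiceRSS request URL from a recognized gesture. The sentence mapping below is a made-up example, and the exact request parameters the script uses may differ; the `key`/`hl`/`src` query parameters shown here follow the public VoiceRSS HTTP API.

```python
from urllib.parse import urlencode

# Hypothetical mapping from WekiMini gesture IDs to sentences.
SENTENCES = {1: "Hello", 2: "Thank you"}

def tts_url(gesture_id, key, lang="en-us"):
    # Build a VoiceRSS text-to-speech request URL for the sentence
    # matching the recognized gesture. The key comes from the
    # VOICE_RSS_KEY environment variable exported earlier.
    text = SENTENCES[gesture_id]
    params = urlencode({"key": key, "hl": lang, "src": text})
    return "https://api.voicerss.org/?" + params
```

Fetching that URL returns audio data, which the script can then play back.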
Some WekiMini functions are still not fully automated; for example, you will have to find the sweet spot of the threshold slider manually using the WekiMini GUI.