This project was made during HackGSU 2016 and was awarded "Hackathon Winner - Organizer's Choice"
Link to Devpost: https://devpost.com/software/magichand
MagicHand translates sign language (captured as images) to voice in any language using the K-Nearest Neighbors (KNN) algorithm in OpenCV
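The actual classifier runs through OpenCV's KNN implementation; as a hedged illustration of the underlying idea (not the project's real code), here is a minimal pure-Python KNN sketch that classifies a feature vector by majority vote among its k nearest training samples:

```python
from collections import Counter

def knn_classify(train, sample, k=3):
    # train: list of (feature_vector, label) pairs; sample: a feature vector.
    # Rank training points by squared Euclidean distance (no sqrt needed
    # for ranking), then take a majority vote over the k closest labels.
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(vec, sample)), label)
        for vec, label in train
    )
    top_labels = [label for _, label in dists[:k]]
    return Counter(top_labels).most_common(1)[0][0]

# Toy usage with two clusters of 2-D "gesture features":
train = [((0, 0), 0), ((0, 1), 0), ((5, 5), 1), ((5, 6), 1)]
print(knn_classify(train, (0, 0.5)))  # closest neighbors are label 0
```

In MagicHand itself, the feature vectors come from hand-gesture images and the labels are the gesture numbers produced by train.py.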
MagicHand requires OpenCV 2.x and Python 2.7
First of all, we'll use the package manager Homebrew to simplify things. You can get it here: http://brew.sh/
Once you have brew installed, add the homebrew/science tap, which is where OpenCV is located:
$ brew tap homebrew/science
Now install OpenCV:
brew install opencv
You’re done! You can find OpenCV at
cd /usr/local/Cellar/opencv/2.x.x/
Python Setup:
Navigate to your Python path. You can find it in your .bash_profile or with:
cat ~/.bash_profile | grep PYTHONPATH
Your .bash_profile might not set PYTHONPATH. In that case, the path varies by machine. On my Mac, it was:
cd /Library/Python/2.7/site-packages/
Once there, we need to link our compiled OpenCV files. Create symlinks using:
$ ln -s /usr/local/Cellar/opencv/2.x.x/lib/python2.x/site-packages/cv.py cv.py
$ ln -s /usr/local/Cellar/opencv/2.x.x/lib/python2.x/site-packages/cv2.so cv2.so
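To sanity-check that the symlinked bindings are on your path, you can run a small script like this (a hedged check, not part of the project itself):

```python
# Quick sanity check that the symlinked cv2 module is importable.
try:
    import cv2
    ok = True
    version = cv2.__version__
except ImportError:
    ok = False
    version = None

print("cv2 importable:", ok, "version:", version)
```

If `ok` comes back False, revisit the symlinks above and your PYTHONPATH.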
- This program has been tested only on Mac. os.say() will not work on Windows; you will need to replace os.say() with an equivalent Windows command for it to work
- Run train.py to train the data set for your hand gestures (you can only map numbers to the gestures, NOT characters)
- Map each number to its associated word/phrase by modifying the words dict in main.py
- Run main.py to detect and speak the gestures you've already mapped
words = {0: 'Hello', 1: 'I', 2: 'Love', 3: 'You', 5: 'Good Bye', 7: 'Hack GSU 2016'}
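As a rough sketch of how main.py can turn a detected gesture number into speech, here is a hypothetical helper (the `speak` function and its structure are illustrative assumptions, not the project's actual code; the macOS `say` call is commented out so the sketch has no side effects):

```python
import os

# Mapping from gesture class labels (the numbers trained in train.py)
# to the phrases that should be spoken.
words = {0: 'Hello', 1: 'I', 2: 'Love', 3: 'You', 5: 'Good Bye', 7: 'Hack GSU 2016'}

def speak(label):
    # Hypothetical helper: look up the phrase for a detected gesture label.
    phrase = words.get(label)
    if phrase is None:
        return None  # unmapped gesture: stay silent
    # On macOS, the phrase could be voiced with the built-in `say` command:
    # os.system('say "%s"' % phrase)
    return phrase
```

On Windows, the commented-out `say` call would need to be swapped for a Windows text-to-speech command, as noted above.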
This project was made by VNBuzz - 4 Georgia Tech students: