We help deaf and hard-of-hearing people communicate with hearing people through hand-gesture-to-speech conversion. The code uses depth maps from the Kinect camera and techniques such as convex hull and contour mapping to recognise five hand signs.
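The convex-hull step mentioned above can be illustrated without the Kinect pipeline. Below is a minimal pure-Python sketch (Andrew's monotone chain algorithm) that computes the hull of a set of 2D points, the same geometric primitive a depth-map contour would be fed through; the point data is purely illustrative.

```python
def cross(o, a, b):
    """2D cross product of vectors OA and OB; positive if OAB turns counter-clockwise."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return the convex hull of a list of (x, y) tuples in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Concatenate, dropping the duplicated endpoints.
    return lower[:-1] + upper[:-1]

# Interior points are discarded; only the boundary remains.
hull = convex_hull([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)])
```

In an OpenCV-based pipeline the equivalent call would be `cv2.convexHull` on a contour returned by `cv2.findContours`; the hull's defects (gaps between hull and contour) are what make fingertip counting possible.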
This project investigates the use of various machine learning techniques with several gesture-recognition devices to recognise gestures from the South African Sign Language alphabet. Three devices are used: the Leap Motion Controller, the Microsoft Kinect, and the Myo. This project is subject to the intellectual copyright terms stipulate…
An algorithm that facilitates communication between a speech-impaired person and someone who does not understand sign language, using convolutional neural networks.
This is a demonstration of hand-pose recognition implemented with a Flask backend and the Indian Sign Language Translator API. It is hosted on an AWS instance at http://18.236.194.220:5000/
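A Flask backend for a recognition service typically exposes a single prediction endpoint. The sketch below is an assumption-laden skeleton, not the project's actual API: the `/predict` route name, the JSON payload shape, and the stubbed-out model call are all hypothetical placeholders for illustration.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])  # hypothetical endpoint name
def predict():
    """Accept hand-pose data as JSON and return a predicted sign label."""
    data = request.get_json(force=True)
    landmarks = data.get("landmarks", [])
    # A real service would run the pose-recognition model here;
    # this stub just echoes a fixed label and the landmark count.
    return jsonify({"label": "A", "num_landmarks": len(landmarks)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Serving on port 5000 matches the address above, but the request/response schema would need to follow whatever the deployed API actually defines.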
The project explores techniques, algorithms, and implementations for Sign Language Recognition. In particular, we work with AUSLAN, Australia's sign language.
This project is a sign language alphabet recognizer built with Python, OpenCV, and TensorFlow, which trains InceptionV3, a convolutional neural network model, for classification.
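The usual way to train InceptionV3 for a new alphabet classifier is transfer learning: freeze the pretrained backbone and attach a fresh softmax head. The sketch below shows that pattern with `tf.keras`; the 26-class head, the 150x150 input size, and `weights=None` (to avoid a download here) are assumptions, not the project's actual configuration.

```python
import tensorflow as tf

NUM_CLASSES = 26  # assumption: one class per alphabet letter

# Load InceptionV3 without its original 1000-class ImageNet head.
base = tf.keras.applications.InceptionV3(
    weights=None,          # use "imagenet" in practice to start from pretrained weights
    include_top=False,
    input_shape=(150, 150, 3),
)
base.trainable = False  # freeze the convolutional backbone for transfer learning

# Attach a pooling layer and a new softmax classification head.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

After training on labelled hand-sign images (e.g. via `model.fit`), only the small head's weights change, which keeps training fast on modest datasets.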