In this project, I built a system that recognizes words signed in American Sign Language (ASL). I was provided a preprocessed dataset of hand and nose positions tracked from video. My goal was to train a set of Hidden Markov Models (HMMs) on part of this dataset and use them to identify individual words in test sequences.
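A minimal sketch of that recognition scheme: train one Gaussian HMM per word on its feature sequences, then score a test sequence against every model and pick the word with the highest log-likelihood. The `hmmlearn` library, the state count, and the feature layout are assumptions for illustration, not necessarily what this project used.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_word_models(training_data, n_states=3):
    """training_data maps word -> list of (T_i, d) feature arrays.

    n_states is a guess; real projects often select it per word.
    """
    models = {}
    for word, sequences in training_data.items():
        X = np.vstack(sequences)                   # stack all frames
        lengths = [len(seq) for seq in sequences]  # per-sequence lengths
        model = GaussianHMM(n_components=n_states, n_iter=100)
        model.fit(X, lengths)
        models[word] = model
    return models

def recognize(models, test_sequence):
    """Return the word whose HMM scores the test sequence highest."""
    return max(models, key=lambda w: models[w].score(test_sequence))
```

In practice a model-selection criterion such as BIC is often used to pick the number of hidden states per word instead of fixing it globally.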
This project helps deaf and hard-of-hearing people communicate with hearing people through hand-gesture-to-speech conversion. The code uses depth maps from the Kinect camera together with convex hull and contour-mapping techniques to recognize five hand signs.
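A rough sketch of the convex hull and contour step on a binarized depth map is below. Counting deep convexity defects (the valleys between extended fingers) is the standard trick for separating a small set of hand signs; the segmentation, the depth threshold, and the Kinect capture itself are assumptions here.

```python
import cv2

def count_extended_fingers(hand_mask, min_defect_depth=20.0):
    """hand_mask: 8-bit binary image with the hand segmented in white."""
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)        # largest blob = hand
    hull = cv2.convexHull(hand, returnPoints=False)  # hull as point indices
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    valleys = 0
    for start, end, far, depth in defects[:, 0]:
        # depth is fixed-point with 8 fractional bits; deep valleys
        # between hull points indicate gaps between extended fingers
        if depth / 256.0 > min_defect_depth:
            valleys += 1
    return valleys + 1 if valleys else 0             # N valleys ~ N+1 fingers
```

The finger count can then be mapped to one of the five signs, typically with an extra angle check on each defect to reject valleys at the wrist.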
This repo is the continuation of the Machine-Learning repo. Here I will upload the examples and exercises I work through while learning deep learning techniques, along with the problems I solve using them.
A Sign Language Alphabet Recognition System that automatically detects American Sign Language and converts gestures from a live webcam into text and speech.
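A minimal sketch of the live pipeline such a system implies: grab webcam frames, classify a cropped hand region with a trained CNN, overlay the predicted letter as text, and speak it aloud. The model file `asl_alphabet.h5`, the fixed crop box, the 64x64 input size, and the label list are placeholders, not artifacts from this repository.

```python
import cv2
import numpy as np
import pyttsx3
from tensorflow.keras.models import load_model

LABELS = [chr(c) for c in range(ord('A'), ord('Z') + 1)]  # placeholder labels
model = load_model('asl_alphabet.h5')                      # hypothetical weights
tts = pyttsx3.init()

cap = cv2.VideoCapture(0)
last_letter = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:300, 100:300]                 # fixed hand region (assumption)
    x = cv2.resize(roi, (64, 64)).astype('float32') / 255.0
    probs = model.predict(x[np.newaxis], verbose=0)[0]
    letter = LABELS[int(np.argmax(probs))]
    cv2.rectangle(frame, (100, 100), (300, 300), (0, 255, 0), 2)
    cv2.putText(frame, letter, (100, 90),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow('ASL', frame)
    if letter != last_letter:                     # speak only on a new prediction
        tts.say(letter)
        tts.runAndWait()
        last_letter = letter
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```

Speaking only when the prediction changes keeps the text-to-speech output from repeating the same letter on every frame.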