
Sign-Language-Recognition

We developed a model to detect sign language using MediaPipe Holistic keypoints and an LSTM model in this project. People who are unable to speak communicate using hand signs, and hearing people often have difficulty recognising their language. As a result, systems that recognise various signs and convey the information to other people are required.

The goal of our project is to create a virtual talking system without sensors for people in need, achieved through image processing of human hand gesture input. This primarily benefits people who are unable to verbally communicate with others.

The following are the steps for implementation:

Install and Import Dependencies

Detect Face, Hand and Pose Landmarks
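MediaPipe Holistic returns 33 pose landmarks, 468 face landmarks and 21 landmarks per hand; a common way to feed them to an LSTM is to flatten everything into one fixed-length vector, substituting zeros when a part is not detected. A minimal sketch of that extraction (the `extract_keypoints` name is our own, not taken from this repository):

```python
import numpy as np

def extract_keypoints(results):
    """Flatten a MediaPipe Holistic result into a single 1662-value vector.

    33 pose landmarks x (x, y, z, visibility) + 468 face landmarks x (x, y, z)
    + 21 landmarks x (x, y, z) per hand; undetected parts become zero vectors.
    """
    pose = (np.array([[p.x, p.y, p.z, p.visibility]
                      for p in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    face = (np.array([[p.x, p.y, p.z]
                      for p in results.face_landmarks.landmark]).flatten()
            if results.face_landmarks else np.zeros(468 * 3))
    lh = (np.array([[p.x, p.y, p.z]
                    for p in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[p.x, p.y, p.z]
                    for p in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, face, lh, rh])
```

In the capture loop, each webcam frame would be passed through `mediapipe.solutions.holistic.Holistic(...).process(...)` and the resulting vector saved per frame.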

Set Up Folders for Data Collection
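One way to organise the collected keypoints is one folder per sign, with a numbered subfolder per recorded sequence. A sketch, assuming three example signs and 30 sequences each (the labels, counts and `MP_Data` path are placeholders, not taken from this repository):

```python
import os

DATA_PATH = 'MP_Data'                      # root folder for collected keypoints
actions = ['hello', 'thanks', 'iloveyou']  # example sign labels (placeholders)
no_sequences = 30                          # videos recorded per sign
sequence_length = 30                       # frames kept per video

for action in actions:
    for sequence in range(no_sequences):
        # e.g. MP_Data/hello/0, MP_Data/hello/1, ...
        os.makedirs(os.path.join(DATA_PATH, action, str(sequence)), exist_ok=True)
```

Each frame's keypoint vector is then saved as a `.npy` file inside its sequence folder during collection.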

Preprocess Data and Create Labels
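Preprocessing stacks each sequence's per-frame vectors into a fixed-size window and one-hot encodes the sign label. A sketch with random placeholder arrays standing in for the saved per-frame `.npy` files (label names and counts are assumptions):

```python
import numpy as np

actions = ['hello', 'thanks', 'iloveyou']          # placeholder sign labels
label_map = {label: idx for idx, label in enumerate(actions)}

sequences, labels = [], []
for action in actions:
    for sequence in range(30):                     # 30 recorded sequences per sign
        # in the real pipeline each frame would be loaded from disk with np.load
        window = [np.random.rand(1662) for _ in range(30)]
        sequences.append(window)
        labels.append(label_map[action])

X = np.array(sequences)                # shape (90, 30, 1662)
y = np.eye(len(actions))[labels]       # one-hot labels, shape (90, 3)
```

`X` and `y` can then be split into training and test sets, e.g. with `sklearn.model_selection.train_test_split`.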

Build and Train an LSTM Deep Learning Model
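A plausible stacked-LSTM architecture for 30-frame windows of 1662 keypoint values, ending in a softmax over the signs (the layer sizes here are illustrative choices, not taken from this repository):

```python
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential([
    Input(shape=(30, 1662)),                 # 30 frames x 1662 keypoint values
    LSTM(64, return_sequences=True, activation='relu'),
    LSTM(128, return_sequences=True, activation='relu'),
    LSTM(64, return_sequences=False, activation='relu'),  # final LSTM emits one vector
    Dense(64, activation='relu'),
    Dense(32, activation='relu'),
    Dense(3, activation='softmax'),          # one output per sign
])
model.compile(optimizer='Adam', loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])
```

Training is then a call like `model.fit(X_train, y_train, epochs=...)` on the preprocessed windows.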

Make Sign Language Predictions
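`model.predict` returns one softmax row per input window; the predicted sign is the label at the argmax. A minimal illustration with a hypothetical probability row in place of a real model output:

```python
import numpy as np

actions = ['hello', 'thanks', 'iloveyou']   # placeholder sign labels
probs = np.array([0.05, 0.85, 0.10])        # one hypothetical row from model.predict
predicted_sign = actions[int(np.argmax(probs))]
print(predicted_sign)                       # → thanks
```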

Save Model Weights

Evaluate Using a Confusion Matrix
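With scikit-learn, per-class confusion matrices and overall accuracy can be computed from the argmaxed one-hot predictions. A sketch with hypothetical true/predicted class indices:

```python
import numpy as np
from sklearn.metrics import multilabel_confusion_matrix, accuracy_score

# hypothetical class indices, i.e. np.argmax over the one-hot rows
y_true = [0, 1, 2, 1, 0]
y_pred = [0, 2, 2, 1, 0]

cm = multilabel_confusion_matrix(y_true, y_pred)  # one 2x2 matrix per class
acc = accuracy_score(y_true, y_pred)              # fraction of correct predictions
print(acc)                                        # → 0.8
```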

Test in Real Time
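In the real-time test, each webcam frame (e.g. from `cv2.VideoCapture(0)`) is converted to a keypoint vector and pushed into a rolling 30-frame window; a prediction is emitted only once the window is full and the top probability clears a confidence threshold. A camera-free sketch of that windowing logic (the function name and threshold are our own):

```python
from collections import deque
import numpy as np

def update_and_predict(window, keypoints, predict_fn, threshold=0.5):
    """Push one frame's keypoints; predict once 30 frames have accumulated."""
    window.append(keypoints)
    if len(window) < 30:
        return None                                   # still filling the window
    probs = predict_fn(np.expand_dims(np.array(window), axis=0))[0]
    best = int(np.argmax(probs))
    return best if probs[best] > threshold else None  # drop low-confidence output

window = deque(maxlen=30)                             # keeps only the latest 30 frames
```

In the OpenCV loop, `predict_fn` would be `model.predict` and the returned index maps back into the list of sign labels.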
