JoelJJoseph/Sign-Language_oneApi

Inspiration

Human interaction depends on communication, something many people take for granted. But communication can be extremely difficult for people who are deaf or have speech impairments. Sign language is one means of communication for them, yet it can be challenging for those who do not know the language. Machine learning models can help in this situation. In this project, I develop a machine learning model for sign language recognition to help deaf and speech-impaired people communicate.

What it does

The deaf and speech-impaired communities frequently face significant communication challenges. When a speech-impaired person converses with someone who does not understand sign language, misunderstandings are common. Deaf people face similar difficulties when communicating with those who do not know sign language, which can lead to social exclusion.

Despite their potential value, conventional assistive solutions such as text-to-speech software are not always the best option. A machine learning model that can accurately recognise and interpret sign language gestures can help close the communication gap between the deaf community and hearing people. Sign language is a vital means of communication for the deaf community. This sign language model uses a deep learning architecture to detect the gestures produced by the signer and translate them into English. It predicts the corresponding English words, organises them into grammatically correct sentences, and displays the resulting sentence on a screen.
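As a minimal sketch of the post-processing step described above, the snippet below maps per-gesture class predictions to letters and joins them into text for display. The label set (static ASL fingerspelling letters, excluding the motion-based J and Z) and the helper name are illustrative assumptions, not taken from the repository.

```python
# Map predicted class indices to fingerspelled letters and join them
# into a displayable string. Static ASL letters only: J and Z require
# motion, so they are omitted (an assumption borrowed from the common
# Sign Language MNIST label set).
LABELS = list("ABCDEFGHIKLMNOPQRSTUVWXY")  # 24 static letters

def indices_to_text(pred_indices):
    """Turn a sequence of predicted class indices into a string."""
    return "".join(LABELS[i] for i in pred_indices)

print(indices_to_text([7, 4, 10, 10, 13]))  # prints "HELLO"
```

A real pipeline would also need word segmentation and grammar correction before showing a full sentence on screen; this sketch covers only the letter-assembly step.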

How I built it

✅ Import libraries such as TensorFlow and Keras

✅ Understand the data

✅ Apply data preprocessing

✅ Build the CNN model and train it

✅ Train the model using Intel oneAPI to get better performance

✅ Save the model
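The steps above can be sketched end to end as follows. This is a hypothetical outline, not the repository's actual code: the input shape (28×28 grayscale), the 24-class label set, and the layer sizes are assumptions. The `TF_ENABLE_ONEDNN_OPTS` environment variable is one documented way to enable Intel's oneDNN (a oneAPI library) optimizations inside TensorFlow; it must be set before TensorFlow is imported.

```python
import os
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"  # enable oneDNN (oneAPI) optimizations

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 24  # assumed: static ASL letters, excluding J and Z

def preprocess(images):
    """Scale pixel values to [0, 1] and add a channel axis."""
    images = images.astype("float32") / 255.0
    return images[..., np.newaxis]

def build_model():
    """Small CNN for gesture classification (illustrative architecture)."""
    model = keras.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Tiny random stand-in data, just to demonstrate the API flow.
    x = preprocess(np.random.randint(0, 256, size=(64, 28, 28)))
    y = np.random.randint(0, NUM_CLASSES, size=(64,))
    model = build_model()
    model.fit(x, y, epochs=1, batch_size=16, verbose=0)
    model.save("sign_language_cnn.keras")  # save the trained model
```

With real data, the random arrays would be replaced by a labelled gesture dataset, and training would run for many more epochs with a validation split.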

What I learned

✅Machine learning for sign language recognition and interpretation.

✅Training and fine-tuning convolutional neural networks for image and video classification.

✅Preprocessing and cleaning large datasets of sign language gestures.

✅Developing real-time applications for sign language recognition and interpretation.

✅Understanding the needs of the speech-impaired and deaf communities and the importance of accessibility in technology.

✅Implementing natural language processing techniques for sentence generation and translation.

✅Collaborating and communicating effectively in a team to deliver a complex project.
