ketpals changed the title to "[GSSOC-23] PROJECT PROPOSAL- Indian Sign Language Recognition in Real-time" on Jul 23, 2023.
Project Request
Indian Sign Language (ISL) is in its rudimentary stages of development and presents unique challenges compared to American Sign Language (ASL). While ASL benefits from extensive research and abundant data, ISL lacks standardized datasets because of India's regional and cultural diversity, which introduces variation and noise. This project focuses on classifying Indian Sign Language gestures in real time using machine learning models.
Most ISL gestures involve both hands, which complicates feature recognition and has hampered the language's computational study. To address this challenge, the project combines two datasets: images from an existing ISL dataset on Kaggle with a black background, and alphabet and digit signs captured via webcam on the Google Teachable Machine platform against a white background. The segmented data undergoes image pre-processing, and features are extracted with a Bag of Visual Words model. A visual-word histogram is generated for each image and mapped to its alphabet label, and a Support Vector Machine (SVM) performs the classification.
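The pipeline above (cluster local descriptors into a visual vocabulary, represent each image as a histogram of visual words, classify the histograms with an SVM) can be sketched as follows. This is an illustrative outline, not the project's actual code: real descriptors would come from a keypoint detector such as SIFT or ORB on the hand images, whereas here random vectors with class-dependent means stand in so the sketch is self-contained and runnable.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def extract_descriptors(image_id):
    # Placeholder for SIFT/ORB extraction: returns a (keypoints x 128)
    # descriptor array whose mean depends on the (pretend) sign class.
    return rng.normal(loc=image_id % 3, size=(40, 128))

train_images = list(range(30))
labels = [i % 3 for i in train_images]  # three pretend sign classes

# 1. Build the visual vocabulary by clustering all training descriptors.
all_desc = np.vstack([extract_descriptors(i) for i in train_images])
vocab = KMeans(n_clusters=50, n_init=4, random_state=0).fit(all_desc)

# 2. Represent each image as a normalized histogram of visual words.
def bovw_histogram(desc):
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=50).astype(float)
    return hist / hist.sum()

X = np.array([bovw_histogram(extract_descriptors(i)) for i in train_images])

# 3. Train the SVM classifier on the histograms.
clf = SVC(kernel="rbf").fit(X, labels)
```

At inference time, a new image goes through the same descriptor-extraction and histogram steps before `clf.predict` returns its class.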
I am a GSSOC Participant
I am a Contributor
Indian Sign Language Recognition in Real Time
Description
Indian Sign Language (ISL) plays a vital role in facilitating communication for deaf and mute individuals in India. This thesis presents a detailed implementation of Indian Sign Language recognition using the Bag of Visual Words model with a Support Vector Machine (SVM) classifier. The model was trained on a combined dataset: images from an existing ISL dataset on Kaggle with a black background, and images captured via webcam on the Google Teachable Machine platform against a white background. Incorporating this diversity of backgrounds significantly improved the model, which surpassed similar state-of-the-art approaches with an accuracy of 94.14%. Notably, the project goes beyond static image recognition to real-time gesture recognition: the model recognises and classifies signs from a live feed and outputs the corresponding alphabet or digit.
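A figure like the reported 94.14% comes from scoring the trained SVM on a held-out split of the histogram features. The snippet below sketches that evaluation step only; scikit-learn's bundled digits dataset stands in for the ISL visual-word histograms, so the number it prints is illustrative, not the project's result.

```python
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in features: in the project these would be BoVW histograms
# of the ISL alphabet/digit images, with their sign labels.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Fit on the training split, score on the unseen test split.
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"held-out accuracy: {acc:.2%}")
```

Keeping the test split strictly unseen during vocabulary building and SVM training is what makes such an accuracy figure meaningful.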
Furthermore, the project could be extended beyond alphabets and numbers to recognise simple expressions and words in ISL.