Title: Sign Language Recognition using a Long Short-Term Memory (LSTM) Model

Description: This project develops a Sign Language Recognition system that identifies hand signs and gestures in real time using a Long Short-Term Memory (LSTM) model. The primary objective is to interpret the meaning of a sign or gesture made by a user through a live camera feed. The system combines data from two sources: the live camera feed and a Kaggle dataset of Indian Sign Language (ISL) images.

Technical Overview:

Live Camera Dataset: Video is captured from the camera in real time, processed, and saved to a dedicated folder. This dataset serves as the primary input for the LSTM model, allowing it to recognize signs and gestures made during live interactions.
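
A minimal sketch of this capture step using OpenCV is shown below; the output folder `data/live` and the frame count are illustrative choices, not taken from this repository.

```python
# Sketch: capture frames from the default webcam and save them to a folder.
# The folder name and frame count are placeholders for illustration.
import os
import cv2

SAVE_DIR = "data/live"          # hypothetical output folder
os.makedirs(SAVE_DIR, exist_ok=True)

cap = cv2.VideoCapture(0)       # default webcam
frame_idx = 0
while frame_idx < 100:          # capture a short clip of 100 frames
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(os.path.join(SAVE_DIR, f"frame_{frame_idx:04d}.jpg"), frame)
    frame_idx += 1

cap.release()
```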

Kaggle Dataset - Indian Sign Language (ISL): The Kaggle dataset is a valuable resource for training and testing the LSTM model. It contains 1200 high-quality images for each letter of the Indian Sign Language alphabet, which is crucial for training the model to recognize and classify individual signs accurately.
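
Assuming the Kaggle dataset is organized as one subfolder per letter, it could be loaded with a Keras utility along these lines; the path `datasets/ISL` and the image size are placeholders, not repository facts.

```python
# Sketch: load a folder-per-letter image dataset with a Keras utility.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "datasets/ISL",             # hypothetical path: one subfolder per letter
    image_size=(64, 64),        # resize images on load
    batch_size=32,
)
class_names = train_ds.class_names  # inferred from subfolder names, e.g. "A".."Z"
```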

LSTM Model: The Long Short-Term Memory (LSTM) model is a type of recurrent neural network (RNN) that excels at capturing long-range dependencies and sequential patterns. In this project, the LSTM model is the core component responsible for interpreting sign language gestures; its ability to retain and learn from past information is crucial for understanding the context of a sign as it unfolds over time.
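
The sketch below shows a representative Keras LSTM classifier for gesture sequences. The sequence length, feature size, layer widths, and class count are assumptions for illustration, not the repository's actual architecture.

```python
# Sketch: a stacked-LSTM classifier over gesture sequences.
# Shape (30, 63) assumes 30 timesteps of 63 features per frame
# (e.g. 21 hand landmarks x 3 coordinates) — an assumption, not a repo fact.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26  # one class per ISL letter (assumed)

model = models.Sequential([
    layers.Input(shape=(30, 63)),                 # (timesteps, features)
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(128, return_sequences=True),
    layers.LSTM(64),                              # final LSTM returns last state only
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```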

Data Preprocessing: Before being fed to the LSTM model, the data undergoes preprocessing, including resizing, normalization, and data augmentation, to improve the model's robustness and generalization.
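
For example, the preprocessing steps named above might look like this with OpenCV and NumPy; the target size and the flip-based augmentation are illustrative choices.

```python
# Sketch: resize, normalize, and (optionally) augment a single frame.
import cv2
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    frame = cv2.resize(frame, (64, 64))           # resize to a fixed input size
    return frame.astype(np.float32) / 255.0       # normalize pixels to [0, 1]

def augment(frame: np.ndarray) -> np.ndarray:
    # A horizontal flip is one simple example of data augmentation.
    if np.random.rand() < 0.5:
        frame = cv2.flip(frame, 1)
    return frame
```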

Training and Validation: The LSTM model is trained on the combined dataset, which includes both the live camera data and the Kaggle ISL dataset. A portion of the data is set aside for validation to monitor the model's accuracy and guard against overfitting.
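
A minimal sketch of such a split and training run, reusing the `model` defined in the LSTM sketch above; the arrays below are random placeholders standing in for real feature sequences and one-hot labels, and the split fraction and epoch count are illustrative.

```python
# Sketch: hold out 20% of the data for validation.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(200, 30, 63).astype("float32")   # placeholder sequences
y = np.eye(26)[np.random.randint(0, 26, size=200)]  # placeholder one-hot labels

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42)

history = model.fit(                                 # `model` from the sketch above
    X_train, y_train,
    validation_data=(X_val, y_val),
    epochs=50,
    batch_size=32,
)
```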

Real-time Inference: Once trained, the LSTM model is deployed to perform real-time inference on the live camera feed, continuously recognizing and interpreting signs and gestures as they happen and making the system a practical communication aid for sign language users.
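
A sliding-window inference loop might look as follows; `extract_features` is a hypothetical per-frame feature extractor, and `model` and `class_names` are assumed from the earlier sketches.

```python
# Sketch: classify a sliding window of the last 30 frames on each new frame.
from collections import deque
import cv2
import numpy as np

window = deque(maxlen=30)                            # rolling buffer of features
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    window.append(extract_features(frame))           # hypothetical feature extractor
    if len(window) == 30:
        probs = model.predict(np.expand_dims(np.array(window), axis=0),
                              verbose=0)[0]
        label = class_names[int(np.argmax(probs))]
        cv2.putText(frame, label, (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("Sign Language Recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):            # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```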

Dependencies:

  • Python
  • TensorFlow or PyTorch (whichever is used for the LSTM model)
  • OpenCV for camera feed processing
  • NumPy for data manipulation
  • Data augmentation libraries (e.g., Augmentor) if used
  • Jupyter Notebook for model development and evaluation

Usage:

  1. Clone this repository.
  2. Install the required dependencies.
  3. Run the inference script to start real-time sign language recognition.

Future Enhancements:

  • Expand the dataset to include more diverse sign gestures.
  • Optimize the model for real-time performance on different hardware platforms.
  • Implement a user-friendly interface for improved accessibility.

Contributing: Contributions and feedback are welcome. Feel free to submit pull requests or open issues.

Acknowledgments: We would like to thank the Kaggle community for providing the Indian Sign Language (ISL) dataset and all contributors to the project.
