


Realtime Sign Language Detection Using LSTM Model

(Image: MediaPipe detection overlay)

The Realtime Sign Language Detection Using LSTM Model is a deep-learning project that recognizes and interprets sign language gestures in real time. A Long Short-Term Memory (LSTM) neural network learns to classify gestures captured from a video feed: the user performs a gesture in front of a camera, and the system detects and interprets it instantly, making it useful as an assistive technology for individuals with hearing impairments.

Key features include real-time gesture detection, high recognition accuracy, and the ability to add and train new sign language gestures. The system is built with Python, TensorFlow, OpenCV, and NumPy, making it accessible and easy to customize. With this project, we aim to bridge the communication gap and empower individuals with hearing impairments.
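As the MediaPipe detection image above suggests, each video frame is first reduced to a fixed-length vector of pose, face, and hand landmarks before it reaches the LSTM. The sketch below shows one plausible version of that step, assuming MediaPipe Holistic's documented landmark counts (33 pose, 468 face, 21 per hand); the exact feature layout used in the notebook may differ.

```python
import numpy as np

def extract_keypoints(results):
    """Flatten one frame's MediaPipe Holistic landmarks into a single vector.

    Missing landmark groups are zero-filled so every frame has the same
    length: 33*4 (pose) + 468*3 (face) + 21*3 (each hand) = 1662 values.
    """
    pose = (np.array([[lm.x, lm.y, lm.z, lm.visibility]
                      for lm in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    face = (np.array([[lm.x, lm.y, lm.z]
                      for lm in results.face_landmarks.landmark]).flatten()
            if results.face_landmarks else np.zeros(468 * 3))
    lh = (np.array([[lm.x, lm.y, lm.z]
                    for lm in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[lm.x, lm.y, lm.z]
                    for lm in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, face, lh, rh])
```

In a live pipeline, `results` would come from calling `mediapipe.solutions.holistic.Holistic().process()` on each captured frame.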

Table of Contents

  • About the Project
  • Demo
  • Features
  • Getting Started
  • Usage
  • Contributing
  • License
  • Contact

About the Project

The project's goal is to accurately detect and interpret sign language gestures in real time. It uses LSTM (Long Short-Term Memory) models for this task, with the broader aim of improving communication accessibility for the deaf and hard-of-hearing community.

Demo

The demo video below shows the system in action.

(Video: Test.mp4)

It lets viewers see how the system interprets sign language gestures and returns results in real time.

Features


  • Real-time sign language detection: The system can detect and interpret sign language gestures in real time, providing immediate results.
  • High accuracy: The LSTM (Long Short-Term Memory) model used in the project ensures accurate recognition of a wide range of sign language gestures.
  • Multi-gesture support: The system can recognize and interpret various sign language gestures, allowing for effective communication.
  • Easy integration: The project provides code snippets and examples for seamless integration into other applications or projects.
  • Accessibility improvement: The Realtime Sign Language Detection Using LSTM Model project contributes to enhancing communication accessibility for the deaf and hard of hearing community.
  • Customization options: The system supports customization of gestures, allowing users to adapt it to their specific needs.
  • Language flexibility: The model can be trained to recognize sign language gestures from different languages, making it adaptable to various communication contexts.
  • User-friendly interface: The project includes a user-friendly interface that simplifies the interaction with the system, ensuring a smooth user experience.
  • Open-source: The Realtime Sign Language Detection Using LSTM Model is an open-source project, encouraging contributions and fostering collaboration in the development community.

(Image: neural network architecture)
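A stacked-LSTM classifier of the kind pictured above can be sketched in Keras as follows. The layer sizes, the 30-frame window, the 1662-feature keypoint vectors, and the three-gesture output are illustrative assumptions, not the repository's exact configuration:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

SEQ_LEN, N_FEATURES, N_CLASSES = 30, 1662, 3  # assumed shapes

def build_model():
    """Stacked LSTM over keypoint sequences, softmax over gesture classes."""
    model = Sequential([
        Input(shape=(SEQ_LEN, N_FEATURES)),
        LSTM(64, return_sequences=True, activation='relu'),
        LSTM(128, return_sequences=True, activation='relu'),
        LSTM(64, return_sequences=False, activation='relu'),  # last state only
        Dense(64, activation='relu'),
        Dense(32, activation='relu'),
        Dense(N_CLASSES, activation='softmax'),  # one probability per gesture
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['categorical_accuracy'])
    return model
```

Training would then call `model.fit()` on windows of recorded keypoint sequences with one-hot gesture labels, and `model.save('model.h5')` to persist the weights.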

Getting Started

To get started with the Realtime Sign Language Detection Using LSTM Model, follow these steps:

Prerequisites

  • Python
  • TensorFlow
  • OpenCV
  • NumPy

Installation

  1. Clone the repository:

     git clone https://github.com/AvhishekAdhikary/Realtime-Sign-Language-Detection-Using-LSTM-Model.git

  2. Install the dependencies:

     pip install tensorflow opencv-python mediapipe numpy notebook

  3. Run Jupyter Notebook:

     jupyter notebook

Usage

Simply run all the cells inside the 'RealTimeSignLanguageDetection.ipynb' file.
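Conceptually, the notebook's real-time loop keeps a sliding window of the most recent keypoint frames and classifies it once the window is full. Below is a minimal, testable sketch of that logic; the gesture labels, window length, and confidence threshold are placeholder assumptions, and `model` is any object with a `predict()` method:

```python
from collections import deque
import numpy as np

ACTIONS = ['hello', 'thanks', 'iloveyou']  # placeholder gesture labels
SEQ_LEN = 30       # assumed number of frames per classified window
THRESHOLD = 0.5    # minimum confidence before reporting a gesture

def classify_stream(frames, model, threshold=THRESHOLD):
    """Yield a predicted label (or None) for each incoming keypoint vector."""
    window = deque(maxlen=SEQ_LEN)  # automatically drops the oldest frame
    for keypoints in frames:
        window.append(keypoints)
        if len(window) < SEQ_LEN:
            yield None  # not enough temporal context yet
            continue
        probs = model.predict(np.expand_dims(np.array(window), axis=0))[0]
        best = int(np.argmax(probs))
        yield ACTIONS[best] if probs[best] > threshold else None
```

In the notebook itself, a loop like this would be driven by `cv2.VideoCapture(0)`, with each captured frame run through MediaPipe before its keypoint vector is appended to the window.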

Contributing

Contributions are welcome! If you have any ideas, suggestions, or bug fixes, please open an issue or submit a pull request.

License

The Realtime Sign Language Detection Using LSTM Model project is released under the MIT License, which permits use, modification, and distribution of the project with appropriate attribution. See the LICENSE file in the repository for the full text.

Contact

For any questions or inquiries, feel free to contact me at avhishe.adhikary11@gmail.com.