This repository contains the implementation of a Sign Language Detector Model using TensorFlow and Keras. The model is designed to recognize and classify different sign language gestures from images, making it a valuable tool for facilitating communication with the deaf and hard-of-hearing community.
- Image Classification: Classifies sign language gestures into 27 different classes, including all 26 letters of the English alphabet and a 'Blank' gesture.
- Deep Learning Model: Utilizes Convolutional Neural Networks (CNNs) for feature extraction and classification.
- Data Augmentation: Implements `ImageDataGenerator` for image preprocessing and augmentation to improve model robustness.
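As a minimal sketch of this feature, the augmentation pipeline can be configured along these lines. The specific transform parameters (rotation, zoom, shifts) are illustrative assumptions, not the repository's exact settings:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical augmentation settings; the repository's actual
# parameters may differ.
datagen = ImageDataGenerator(
    rescale=1.0 / 255,       # normalize pixel values to [0, 1]
    rotation_range=10,       # small random rotations (degrees)
    zoom_range=0.1,          # random zoom in/out
    width_shift_range=0.1,   # random horizontal shifts
    height_shift_range=0.1,  # random vertical shifts
)

# Demonstrate on a single random "image" batch.
batch = (np.random.rand(1, 128, 128, 3) * 255).astype("float32")
augmented = next(datagen.flow(batch, batch_size=1))
print(augmented.shape)  # (1, 128, 128, 3)
```

Randomly perturbing each training image this way exposes the network to gestures at slightly different angles and positions, which helps it generalize to real-world camera input.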
- Python 3.x
- TensorFlow 2.x
- Keras
- NumPy
- Pandas
- Matplotlib
- Seaborn
- Clone the repository to your local machine.
- Install the required Python packages:
```bash
pip install numpy pandas matplotlib seaborn tensorflow keras
```
- Download the sign language dataset (see the dataset description below).
The model is trained and validated on a dataset containing images of various sign language gestures. The dataset should be organized into three folders: Train_Alphabet, Test_Alphabet, and Validation_Alphabet.
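With that folder layout (one subfolder per class inside each split), the data can be loaded with `flow_from_directory`. The snippet below is a runnable sketch: it first builds a tiny dummy `Train_Alphabet` folder so the call succeeds end to end; with the real dataset, point it at the downloaded folders instead. The 128x128 target size and batch size are assumptions:

```python
import os
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.utils import save_img

# Build a tiny dummy dataset so this sketch runs end to end;
# with the real data, "Train_Alphabet" already contains one
# subfolder per class (A-Z plus Blank).
for label in ["A", "B", "Blank"]:
    os.makedirs(os.path.join("Train_Alphabet", label), exist_ok=True)
    save_img(os.path.join("Train_Alphabet", label, "sample.png"),
             np.random.rand(128, 128, 3))

train_datagen = ImageDataGenerator(rescale=1.0 / 255)
train_generator = train_datagen.flow_from_directory(
    "Train_Alphabet",
    target_size=(128, 128),   # resize all images to a fixed shape
    class_mode="categorical", # one-hot labels for crossentropy loss
    batch_size=32,
)
print(train_generator.num_classes)  # 3 dummy classes here; 27 with the real dataset
```

The same pattern applies to `Validation_Alphabet` and `Test_Alphabet`; class labels are inferred from the subfolder names.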
- The model architecture includes multiple `Conv2D` and `MaxPooling2D` layers, followed by `Flatten`, `Dense`, and `Dropout` layers.
- The model is compiled with the Adam optimizer and the categorical crossentropy loss function.
- Train the model using the `fit` method on the training and validation data.
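The stack described above can be sketched as follows. Filter counts, dense width, dropout rate, and the input size are assumptions for illustration, not the repository's exact values:

```python
from tensorflow.keras import layers, models

NUM_CLASSES = 27  # 26 letters of the alphabet plus 'Blank'

# A minimal sketch of the described architecture; layer sizes
# are assumptions, not the repository's exact configuration.
model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),  # regularization against overfitting
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
print(model.output_shape)  # (None, 27)
```

Training then reduces to `model.fit(train_generator, validation_data=val_generator, epochs=...)` with the generators built from the dataset folders.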
- Load the model using Keras.
- Preprocess the input image to the required format.
- Use the model to predict the sign language gesture.
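The three inference steps can be sketched as a small helper. The model filename and the class-name list are placeholders; the class order must match the order used during training:

```python
import numpy as np
from tensorflow.keras.models import load_model

def predict_gesture(model, image, class_names):
    """Classify one preprocessed image array shaped (H, W, 3) in [0, 1]."""
    batch = np.expand_dims(image.astype("float32"), axis=0)  # add batch dim
    probs = model.predict(batch, verbose=0)[0]               # class probabilities
    return class_names[int(np.argmax(probs))]                # most likely class

# Usage sketch -- "sign_model.h5" and the class list are placeholders;
# the class order must match the training generator's class indices.
# model = load_model("sign_model.h5")
# label = predict_gesture(model, image, class_names)
```

The input image must be resized and rescaled exactly as during training (e.g. 128x128 pixels, values in [0, 1]) before being passed to the helper.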
Complete setting up the web app for the model.
Special thanks to all the contributors and researchers in the field of sign language recognition for their valuable insights and datasets.
