Demo video: https://github.com/user-attachments/assets/93a6b392-f9a2-4c27-b3cf-bdace8b68a92
This project recognizes American Sign Language gestures using a deep learning model. The model was trained in Google Colab on a dataset sourced from Kaggle, and the accompanying Python script (sing_language.py) then uses it for real-time prediction from webcam input.
The project leverages the following libraries:

- MediaPipe for hand tracking and landmark detection
- TensorFlow for building and training the model
- OpenCV for real-time video capture and visualization
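As a rough illustration of how these pieces fit together, the sketch below shows one common way to turn MediaPipe hand landmarks (21 points, each with x/y/z coordinates) into a flat feature vector for a classifier. The function name and the wrist-relative normalization are illustrative assumptions, not necessarily the exact preprocessing used in sing_language.py.

```python
def landmarks_to_features(landmarks):
    """Flatten 21 MediaPipe-style hand landmarks into a feature vector.

    `landmarks` is a list of 21 (x, y, z) tuples. Coordinates are made
    relative to the wrist (landmark 0) so the vector is translation-
    invariant. This is a hypothetical helper for illustration only.
    """
    wx, wy, wz = landmarks[0]
    features = []
    for x, y, z in landmarks:
        features.extend([x - wx, y - wy, z - wz])
    return features  # length 21 * 3 = 63


# Example: 21 dummy landmarks produce a 63-value feature vector,
# which could then be fed to a trained TensorFlow model for prediction.
dummy = [(0.5, 0.5, 0.0)] * 21
vec = landmarks_to_features(dummy)
print(len(vec))  # 63
```

In the real-time loop, OpenCV captures a webcam frame, MediaPipe extracts the landmarks, and a vector like this would be passed to the trained model for each frame.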