# Hand Sign Language Detection (ASL Gesture Detection) using TensorFlow and OpenCV

## Introduction

This project aims to develop a machine learning model capable of recognizing American Sign Language (ASL) gestures in real time. ASL is a vital means of communication for individuals with hearing impairments, and this project seeks to enhance accessibility through technology.
## Features

- Real-time ASL gesture detection using a webcam or image input.
- Integration with TensorFlow and OpenCV for machine learning and computer vision tasks.
- Convolutional Neural Networks (CNNs) for image classification.
- Seamless interpretation of ASL gestures for communication purposes.

## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/ibitwarvedant/HandSignDetection.git
   ```

2. Navigate to the project directory:

   ```bash
   cd HandSignDetection
   ```

3. Install the dependencies:

   ```bash
   pip install -r requirements.txt
   ```
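Once the dependencies are installed, a real-time detection loop looks roughly like the sketch below. This is an illustration, not the repository's actual code: the model path `model/keras_model.h5`, the placeholder label list, and the 224×224 input size are all assumptions to adjust to your trained model. The preprocessing helper is pure NumPy; the webcam loop itself needs `opencv-python` and `tensorflow` at runtime.

```python
import numpy as np

# Assumed CNN input size; adjust to match the trained model.
IMG_SIZE = 224

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Center-crop a frame to a square, resize with nearest-neighbor
    sampling, and scale pixels to [0, 1] for the classifier."""
    h, w = frame.shape[:2]
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    square = frame[top:top + side, left:left + side]
    # Nearest-neighbor resize via index arithmetic (no OpenCV needed here).
    idx = np.arange(IMG_SIZE) * side // IMG_SIZE
    resized = square[idx][:, idx]
    return (resized.astype(np.float32) / 255.0)[np.newaxis, ...]

if __name__ == "__main__":
    import cv2
    import tensorflow as tf

    model = tf.keras.models.load_model("model/keras_model.h5")  # assumed path
    labels = ["A", "B", "C"]  # placeholder label set; use your own classes
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        probs = model.predict(preprocess(frame), verbose=0)[0]
        label = labels[int(np.argmax(probs))]
        cv2.putText(frame, label, (10, 40), cv2.FONT_HERSHEY_SIMPLEX,
                    1.2, (0, 255, 0), 2)
        cv2.imshow("ASL detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```

Keeping preprocessing separate from capture makes it easy to reuse the same transform for single-image input as well as the webcam stream.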
## Dataset

The dataset used for training and testing the machine learning model is not included in this repository. However, you can use your own dataset or explore publicly available ASL datasets for model training.
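If you assemble your own dataset, one common convention (assumed here, not mandated by this repository) is one sub-directory per gesture class, e.g. `data/A/`, `data/B/`, and so on; utilities such as `tf.keras.utils.image_dataset_from_directory` infer labels from exactly this layout. A small helper to enumerate the classes in a consistent order:

```python
from pathlib import Path

def list_classes(data_dir: str) -> list:
    """Return sorted gesture-class names, assuming one sub-directory
    per class (e.g. data/A, data/B, ...). Non-directories are ignored."""
    root = Path(data_dir)
    return sorted(p.name for p in root.iterdir() if p.is_dir())
```

Using the same sorted order at training and inference time keeps the model's output indices aligned with the class names.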
## Contributing

Contributions are welcome! Feel free to open issues, submit feature requests, or propose improvements by creating pull requests. Please ensure adherence to the project's code of conduct.
## Acknowledgments

We thank the creators of TensorFlow, OpenCV, and MediaPipe for their invaluable contributions to machine learning and computer vision. Special thanks to the ASL community for their inspiration and support in making communication accessible to all.

## Contact

For any inquiries or feedback, please contact [vedantibitwar@gmail.com](mailto:vedantibitwar@gmail.com).
Thank you for your interest in our ASL Gesture Detection project!