Sign2Speech is a project developed by students from United Technical College as part of the BIC V.3.0 Hackathon. The project leverages computer vision and deep learning, using libraries such as MediaPipe, OpenCV, and scikit-learn, to build a sign language detection system. The system translates recognized signs into real-time text and speech output, making communication more inclusive for individuals with hearing impairments.
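As an illustration of the kind of preprocessing such a pipeline relies on (a hypothetical sketch, not the project's actual code): MediaPipe Hands returns 21 landmarks per detected hand, and a classifier works best on features that do not depend on where the hand sits in the frame. A minimal, dependency-free normalization step might look like:

```python
# Hypothetical sketch -- not taken from Sign2Speech's source.
# MediaPipe Hands yields 21 (x, y) landmarks per hand; shifting them so the
# wrist is the origin and scaling by hand size makes the features
# translation- and scale-invariant before they reach the classifier.

def normalize_landmarks(landmarks):
    """landmarks: list of (x, y) tuples, index 0 being the wrist."""
    wrist_x, wrist_y = landmarks[0]
    # Translate so the wrist becomes the origin.
    shifted = [(x - wrist_x, y - wrist_y) for x, y in landmarks]
    # Scale by the largest coordinate magnitude to remove hand-size effects.
    scale = max(max(abs(x), abs(y)) for x, y in shifted) or 1.0
    return [(x / scale, y / scale) for x, y in shifted]
```

After this step, every hand pose is described the same way regardless of its position or distance from the camera, which is what lets a single trained model generalize across frames.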
- Communication Inclusivity: Facilitates effective communication for individuals with hearing impairments.
- Accessibility in Digital Spaces: Integrates sign language detection into digital platforms for inclusivity.
- Education and Learning: Enhances inclusivity in educational settings for deaf or hard-of-hearing students.
- Employment Opportunities: Creates a more inclusive work environment for individuals who use sign language.
- Healthcare Communication: Facilitates communication between healthcare providers and patients with hearing impairments.
- Emergency Situations: Enables real-time communication during emergency situations.
- Social Inclusion: Breaks down communication barriers for participation in social interactions.
- Advancement in Human-Computer Interaction: Contributes to advancements in intuitive and accessible interfaces.
- Legal and Civic Participation: Enhances participation in legal and civic processes for individuals with hearing impairments.
1. Clone the repository:

   ```shell
   git clone https://github.com/your-username/sign2speech.git
   cd sign2speech
   ```

2. Install dependencies:

   ```shell
   pip install -r requirements.txt
   ```

3. Run the application:

   ```shell
   python main.py
   ```
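The recognition step inside `main.py` is not shown here; purely as an illustration, a tiny nearest-centroid classifier (a dependency-free stand-in for the scikit-learn model the project mentions) could map normalized landmark features to sign labels like this:

```python
# Illustrative only: a nearest-centroid classifier standing in for the
# scikit-learn model Sign2Speech mentions. Feature vectors would come from
# normalized MediaPipe hand landmarks; labels are sign names.
import math

def train_centroids(samples):
    """samples: dict mapping label -> list of equal-length feature vectors."""
    centroids = {}
    for label, vectors in samples.items():
        n = len(vectors)
        # Average each feature dimension across the label's training vectors.
        centroids[label] = [sum(v[i] for v in vectors) / n
                            for i in range(len(vectors[0]))]
    return centroids

def predict(centroids, features):
    """Return the label whose centroid is closest to `features`."""
    return min(centroids,
               key=lambda label: math.dist(centroids[label], features))
```

For example, with two-dimensional toy features, `predict(train_centroids({"hello": [[0.0, 1.0], [0.0, 0.8]], "yes": [[1.0, 0.0]]}), [0.1, 0.9])` resolves to `"hello"`. The predicted label would then be rendered on screen and passed to a text-to-speech engine.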
We welcome contributions and feedback. If you'd like to contribute to Sign2Speech, please follow our Contribution Guidelines.
This project is licensed under the MIT License.
We would like to express our gratitude to United Technical College for supporting us in this hackathon.
Note: This README is subject to change as the project progresses. Check back for updates!