Welcome to the Real-Time Sign Language Recognition System project, developed by Ezequiel Bellver. The project aims to build a real-time sign language recognition (SLR) system tailored to British Sign Language (BSL) gestures, targeting a studio setup with controlled lighting. The system uses computer vision and deep learning techniques to interpret BSL gestures from live video input.
The project follows a standard directory structure:
```
project_root/
├── data/
│   └── bsl_corpus/
├── models/
├── src/
│   ├── preprocessing.py
│   ├── feature_extraction.py
│   ├── model.py
│   ├── inference.py
│   └── utils.py
├── notebooks/
├── tests/
├── requirements.txt
├── README.md
└── LICENSE
```
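As a rough illustration of the kind of logic `preprocessing.py` might contain, the sketch below center-crops a video frame to a square and scales pixel values to [0, 1]. This is an assumption about the pipeline, not the project's actual implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def preprocess_frame(frame, size=64):
    """Center-crop a frame to a square and scale pixel values to [0, 1].

    Hypothetical helper: `frame` is an H x W x 3 uint8 array, as produced
    by a typical video-capture library.
    """
    h, w, _ = frame.shape
    side = min(h, w)
    top = (h - side) // 2
    left = (w - side) // 2
    cropped = frame[top:top + side, left:left + side]
    # Nearest-neighbour resize via index sampling (keeps the sketch
    # dependency-free; a real pipeline would likely use OpenCV).
    idx = np.arange(size) * side // size
    resized = cropped[idx][:, idx]
    return resized.astype(np.float32) / 255.0
```

Given a 120×160 camera frame, this yields a 64×64×3 float array ready to feed into a feature extractor or model.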
To get started with the project, follow these steps:

1. Clone the repository:
   ```
   git clone <repository_url>
   ```
2. Install the required dependencies:
   ```
   pip install -r requirements.txt
   ```
3. Explore the source code in the `src/` directory to understand the project structure and functionality.
4. Experiment with the Jupyter notebooks in the `notebooks/` directory for data exploration and algorithm testing.
5. Run the unit tests in the `tests/` directory to ensure the correctness of the code.
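For real-time recognition, per-frame features typically need to be grouped into short clips before classification. The sketch below shows one simple way that could work: a fixed-length sliding window over frame features. The class and its interface are hypothetical, offered only to illustrate the idea; the project's `inference.py` may structure this differently.

```python
from collections import deque

import numpy as np

class FrameBuffer:
    """Fixed-length sliding window of per-frame feature vectors.

    Hypothetical helper: buffers the most recent `window` frames so a
    gesture classifier can run on a short clip rather than single frames.
    """

    def __init__(self, window=16):
        self.window = window
        self.frames = deque(maxlen=window)  # oldest frames drop off automatically

    def push(self, feature_vector):
        self.frames.append(np.asarray(feature_vector, dtype=np.float32))

    def ready(self):
        # True once a full window of frames has been collected.
        return len(self.frames) == self.window

    def as_batch(self):
        # Stack into a (window, feature_dim) array for the model.
        return np.stack(self.frames)
```

In a live loop, each captured frame would be preprocessed, pushed into the buffer, and the classifier invoked whenever `ready()` is true.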
Contributions to the project are welcome! If you find any issues or have suggestions for improvements, please open an issue or submit a pull request. Make sure to follow the project's coding conventions and guidelines.
This project is licensed under the MIT License.