Universal Sign Language Translator

An AI model that responds to sign language using your webcam or camera (WIP).

Introduction

We aim to train our AI model with machine learning and image processing, repeatedly gesturing in front of a camera to teach the system the fundamentals of sign language. The goal is to bridge the differences between the sign languages of various countries and their regional dialects without compromising accuracy, while opening the door to crowd-sourcing.

Intermediate Outputs

The outputs below (recorded as GIFs in docs/gifs) were taken directly from our model at each stage of the pipeline.

RAW Feed (1920x1080 @ 60 fps)
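
A minimal sketch of how a raw feed like this can be captured with opencv-python. The device index and capture settings here are illustrative assumptions; whether the camera actually honours the 1920x1080 @ 60 fps request depends on the hardware.

    import cv2

    # Open the default camera; device index 0 is an assumption.
    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
    cap.set(cv2.CAP_PROP_FPS, 60)

    while True:
        ok, frame = cap.read()  # frame is a BGR numpy array
        if not ok:
            break
        cv2.imshow("RAW Feed", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop
            break

    cap.release()
    cv2.destroyAllWindows()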

Extraction of skin tones (Removal of unwanted entities)
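
One common way to isolate skin tones is thresholding in HSV colour space; the sketch below illustrates the idea. The HSV bounds are assumed values for illustration, not the thresholds this repository actually uses.

    import cv2
    import numpy as np

    def extract_skin(frame):
        """Mask out everything except pixels in an assumed skin-tone range."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Illustrative bounds only; real skin-tone thresholds vary with
        # lighting and complexion.
        lower = np.array([0, 40, 60], dtype=np.uint8)
        upper = np.array([25, 255, 255], dtype=np.uint8)
        mask = cv2.inRange(hsv, lower, upper)
        # Morphological opening removes small speckles (unwanted entities).
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        return cv2.bitwise_and(frame, frame, mask=mask)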

Canny Edge Detection to extract only the perimeter of change
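
Canny edge detection then reduces the skin-masked frame to its outlines. A sketch follows; the blur kernel and the 50/150 hysteresis thresholds are assumptions, not values taken from this repository.

    import cv2

    def edge_map(skin_frame):
        """Collapse the skin-masked frame to the edges where intensity changes."""
        gray = cv2.cvtColor(skin_frame, cv2.COLOR_BGR2GRAY)
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # suppress noise before Canny
        return cv2.Canny(blurred, 50, 150)  # low/high hysteresis thresholds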

Getting Started

Prerequisites

What you need to run the program:

  • Python interpreter (3.7 recommended)
  • A clone of this repository :P
  • Easy way: install all the necessary packages from PyPI with the following command:
    pip install -r requirement.txt
  • If you still prefer the old-fashioned way, use the following commands:
    pip install opencv-python
    pip install matplotlib
    pip install numpy
    pip install tqdm
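
For reference, a requirement.txt consistent with the commands above would contain at least the following; the actual file in this repository may pin versions or list additional packages:

    opencv-python
    matplotlib
    numpy
    tqdm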

Team/Organisation

Authors

Made with ❤️ by Axemhammer
