
Chiromancer [Open Source Sign Language Recognition Platform]

Hand gestures are a natural and intuitive mode of communication, and for people with auditory impairments, gestures in the form of sign language remain one of the primary means of communication. The potential of gesture recognition systems for Human-Computer Interaction and for intuitive communication with robots and other system interfaces makes active research and development in this field essential.

A lot of work has been done on personal AI assistants, resulting in useful technology such as Google Assistant, Siri and Cortana. However, these systems have been limited to speech-based communication. In the spirit of inclusion and equal opportunity for all, it would be valuable to have an AI system that can recognize gestures and hold a conversation with a person with an auditory impairment through sign language.

Advances in machine learning and computer vision make such an application possible, and this project is a step towards that ultimate goal. The primary objective is to use machine learning and computer vision to teach a robotic platform a set of gestures, such that the system can recognize these gestures from a live video feed and replicate them.
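To make the recognition loop concrete, here is a minimal sketch of what "recognize gestures from a live video feed" can look like: read webcam frames, extract hand landmarks, and pass them to a classifier. OpenCV and MediaPipe are assumptions chosen for illustration, and `classify` is a hypothetical placeholder; this is not necessarily the pipeline this repository implements.

```python
# Sketch of a live gesture-recognition loop (illustrative, not this repo's
# actual pipeline). Assumes OpenCV and MediaPipe are installed.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

def classify(landmarks):
    """Hypothetical placeholder: map 21 (x, y, z) hand landmarks to a
    gesture label. A real system would use a trained model here."""
    return "unknown"

cap = cv2.VideoCapture(0)  # live video feed from the default camera
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV delivers BGR frames.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                mp_draw.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)
                label = classify(hand.landmark)
                cv2.putText(frame, label, (10, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow("gesture recognition sketch", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
            break
cap.release()
cv2.destroyAllWindows()
```

The landmark-based approach shown here is one common design choice: classifying a small vector of hand keypoints is far cheaper than classifying raw frames, which matters for running on a robotic platform.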

Contributing

Contributions are welcome and encouraged; open a pull request to submit them.

Author

  • Benedict Quartey

Kindly acknowledge the author when using any part of the code or design. If you have any inquiries or need help, shoot me an email at benedict_quartey@brown.edu.

License

This project is licensed under the MIT License. Refer to the LICENSE file for details.