I am trying to capture live video data from a camera feed in order to plot ASL (American Sign Language) signs. The goal is to extract the wire models from different annotated sources (such as YouTube, interpreting sessions, etc.). The overarching goal is to commit the data somewhere in the future to help train ML models for ASL and other sign languages around the world.
I am currently working with MediaPipe Holistic, provided by Google; you can check it out [here].
To bring it into Python you will need:

- Install: `pip install mediapipe opencv-python` (a capture-loop sketch follows below)
- Documentation: MediaPipe Holistic (Legacy)
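
Here is a rough sketch of that capture loop, adapted from the pattern in Google's Holistic samples. It assumes a local webcam at index 0 and a desktop OpenCV window (`cv2.imshow`), so it may need tweaks in a browser-based notebook:

```python
import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # default webcam; change the index for another camera
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break

        # MediaPipe expects RGB; OpenCV captures BGR
        image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        image.flags.writeable = False  # minor performance hint for process()
        results = holistic.process(image)
        image.flags.writeable = True
        image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)

        # Draw the face, pose, and hand wire models onto the frame
        # (draw_landmarks safely skips any part that was not detected)
        mp_drawing.draw_landmarks(image, results.face_landmarks,
                                  mp_holistic.FACEMESH_TESSELATION)
        mp_drawing.draw_landmarks(image, results.pose_landmarks,
                                  mp_holistic.POSE_CONNECTIONS)
        mp_drawing.draw_landmarks(image, results.left_hand_landmarks,
                                  mp_holistic.HAND_CONNECTIONS)
        mp_drawing.draw_landmarks(image, results.right_hand_landmarks,
                                  mp_holistic.HAND_CONNECTIONS)

        cv2.imshow('MediaPipe Holistic', image)
        if cv2.waitKey(5) & 0xFF == ord('q'):  # press q to quit
            break

cap.release()
cv2.destroyAllWindows()
```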
I haven't learned Python environments yet (not on the agenda at the moment), so this implementation uses a Jupyter Notebook.
Here is a link to the latest JupyterLite lab (browser version): Jupyter Notebook
Just copy and paste the included code into a cell and execute it. If any problems arise, split the code into smaller cells or walk through it one line at a time.
Much of the starter code is derived from Google's samples at the moment. I have added some custom styling and done some debugging, but I don't take credit for the underlying model imports. I will be adding more to this repository over the next several months, with work to capture the data from MediaPipe. All of that programming will be my own, but feel free to use it for whatever you may need. Credit me if you feel like it, but I won't be upset if not.
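
One possible shape for that capture code (a sketch only; the helper name `extract_keypoints` is a placeholder of mine, and zero-padding missing parts is just one design choice): flattening each Holistic result into a fixed-length NumPy vector that can later be stacked into training sequences.

```python
import numpy as np

def extract_keypoints(results):
    """Flatten one Holistic result into a fixed-length vector.

    Sizes follow the model's landmark counts: 33 pose points
    (x, y, z, visibility), 468 face points, and 21 points per
    hand (x, y, z each). Missing parts become zero vectors so
    every frame has the same shape.
    """
    pose = (np.array([[lm.x, lm.y, lm.z, lm.visibility]
                      for lm in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    face = (np.array([[lm.x, lm.y, lm.z]
                      for lm in results.face_landmarks.landmark]).flatten()
            if results.face_landmarks else np.zeros(468 * 3))
    left = (np.array([[lm.x, lm.y, lm.z]
                      for lm in results.left_hand_landmarks.landmark]).flatten()
            if results.left_hand_landmarks else np.zeros(21 * 3))
    right = (np.array([[lm.x, lm.y, lm.z]
                       for lm in results.right_hand_landmarks.landmark]).flatten()
             if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, face, left, right])
```

Calling this once per frame inside the capture loop above and stacking the results gives a `(frames, 1662)` array per clip, which would be a convenient starting point for sequence models.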