hand-parser

Introduction

This is an implementation of hand tracking with MediaPipe in Python that uses finger states to predict hand gestures. You can read the details of the approach this project builds on here. The difference is that the finger states are not handled by a trained network; they are derived with some simple geometric calculations.
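
To make the geometric idea concrete: a finger's open/closed state can be derived from the angle at its middle (PIP) joint using three of MediaPipe's 21 hand landmarks. The sketch below is a minimal illustration of that idea, assuming MediaPipe's standard landmark indices; the 160-degree threshold is an illustrative guess, not necessarily what this repository uses.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at point b (in degrees) between the segments b->a and b->c."""
    a, b, c = np.asarray(a), np.asarray(b), np.asarray(c)
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def index_finger_is_open(landmarks):
    """landmarks: 21 (x, y) points in MediaPipe's hand-landmark order.
    The index finger uses landmarks 5 (MCP), 6 (PIP) and 8 (TIP);
    a nearly straight joint (large angle) means the finger is extended."""
    return joint_angle(landmarks[5], landmarks[6], landmarks[8]) > 160.0  # assumed threshold
```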

Dependencies

opencv-python >= 4.0

TensorFlow 2.0 (GPU is not required)

PyTorch >= 1.1

NumPy

Pillow
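
Assuming the standard PyPI package names for the list above, everything can be installed with one pip command (the version pins are inferred from the list, not from a requirements file in this repo):

```sh
pip install "opencv-python>=4.0" tensorflow "torch>=1.1" numpy pillow
```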

Usage

  1. Install the required dependencies.
  2. Run python app.py.
  3. Run python app-mouse.py to control the mouse with your index finger. Use action "2" to click. Because this project does not use the GPU and therefore runs at a low FPS, I have stopped developing this feature (a rough sketch of the idea follows below).
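
The sketch below shows one way the mouse control from step 3 could work, mapping the index fingertip's normalized coordinates to screen coordinates. pyautogui is not among this repo's listed dependencies; it is assumed here purely for illustration.

```python
import pyautogui

SCREEN_W, SCREEN_H = pyautogui.size()

def move_cursor(index_tip_x, index_tip_y):
    """index_tip_x/y: normalized [0, 1] coordinates of landmark 8
    (the index fingertip) reported by the hand tracker."""
    pyautogui.moveTo(index_tip_x * SCREEN_W, index_tip_y * SCREEN_H)

def on_gesture(label):
    if label == "2":  # action "2" triggers a click, per the usage note above
        pyautogui.click()
```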

Customization

  • You can use the landmark points detected by MediaPipe to predict or configure your own labels.

    e.g., the "CATCH" label is predicted from the distance between the index fingertip and the thumb, while the other labels are predicted from finger angles (a hedged sketch follows this list).

  • Read utils/hand_track_utils.py and the reference project (https://github.com/Prasad9/Classify-HandGesturePose) for more details; things will become clear quickly.
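
A hedged sketch of the distance-based "CATCH" rule described above: if the thumb tip (landmark 4) and the index fingertip (landmark 8) are close together relative to the palm size, predict "CATCH". Normalizing by the wrist-to-middle-MCP distance and the 0.25 threshold are illustrative assumptions, not values taken from utils/hand_track_utils.py.

```python
import math

def predict_catch(landmarks):
    """landmarks: 21 (x, y) points in MediaPipe's hand-landmark order."""
    def dist(i, j):
        return math.hypot(landmarks[i][0] - landmarks[j][0],
                          landmarks[i][1] - landmarks[j][1])
    palm = dist(0, 9)                   # wrist (0) to middle-finger MCP (9), a scale reference
    pinch = dist(4, 8) / (palm + 1e-9)  # thumb tip (4) to index tip (8), normalized
    return "CATCH" if pinch < 0.25 else None  # assumed threshold
```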

Demo

(demo GIF)

License

Apache License 2.0
