Control a PC (or really anything) with hand poses
- mediapipe (0.9.3.0)
- pyautogui (0.9.53)
Hand landmark detection and gesture recognition based on `mediapipe`
| python | tflite | onnx (tf2onnx) | TensorRT |
| ------ | ------ | -------------- | -------- |
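Mediapipe's hand model reports 21 normalized landmarks per hand, and a simple gesture classifier can be built on top of them by checking which fingers are extended. The sketch below is illustrative only: the landmark indices follow mediapipe's 21-point hand layout, but the tip-above-joint heuristic and the gesture names are assumptions, not this project's actual recognition logic.

```python
# Indices into mediapipe's 21-point hand model
# (0 = wrist; 4/8/12/16/20 = fingertips).
TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky fingertips
PIPS = [6, 10, 14, 18]   # the PIP joints below those fingertips

def fingers_extended(landmarks):
    """landmarks: list of 21 (x, y) tuples in normalized image coords
    (y grows downward). A finger counts as extended when its tip sits
    above its PIP joint -- a crude heuristic assuming an upright hand."""
    return [landmarks[t][1] < landmarks[p][1] for t, p in zip(TIPS, PIPS)]

def classify(landmarks):
    """Map the extended-finger pattern to a coarse gesture name."""
    ext = fingers_extended(landmarks)
    if all(ext):
        return "open_palm"
    if not any(ext):
        return "fist"
    if ext == [True, False, False, False]:
        return "point"
    return "unknown"
```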
Operations triggered by hand pose & landmarks
| PC | PC with GPU | Jetson | RPI4 | other devices without Python or Linux |
| -- | ----------- | ------ | ---- | ------------------------------------- |
Mouse and keyboard simulation based on HID (Human Interface Device)
Map Pose to Operation via a JSON config file
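The pose-to-operation mapping can be kept entirely in data, so gestures can be rebound without touching code. The schema below is a guess at what such a JSON file could look like (the gesture names, `action` keys, and values are all hypothetical, not the project's real config format):

```python
import json

# Hypothetical config: gesture name -> operation to perform.
# The real JSON schema used by this project may differ.
CONFIG_JSON = """
{
  "open_palm": {"action": "press_key", "value": "space"},
  "fist":      {"action": "mouse_down", "value": "left"},
  "point":     {"action": "move_cursor"}
}
"""

def load_pose_map(text):
    """Parse the JSON config into a dict of pose -> operation."""
    return json.loads(text)

def operation_for(pose, pose_map):
    """Look up the operation bound to a recognized pose (None if unbound)."""
    return pose_map.get(pose)
```

In the main loop, the recognized gesture name would be fed to `operation_for` and the returned operation dispatched to the HID layer.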
python + mediapipe
- Get hand landmarks and gestures via `mediapipe`
- Control mouse and keyboard via `pyautogui`
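One detail worth noting: `pyautogui` works in absolute screen pixels, while mediapipe landmarks are normalized to [0, 1] in camera-frame coordinates. A small mapping helper bridges the two; the edge-cropping `margin` below is an illustrative choice, not a value taken from this project:

```python
def norm_to_screen(x, y, screen_w, screen_h, margin=0.1):
    """Map a normalized landmark (x, y) in [0, 1] to screen pixels.
    The outer `margin` of the camera frame is cropped so the cursor
    can reach the screen edges before the hand leaves the frame."""
    def remap(v, size):
        v = (v - margin) / (1 - 2 * margin)   # crop the margin and rescale
        v = min(max(v, 0.0), 1.0)             # clamp to the visible frame
        return int(v * (size - 1))
    return remap(x, screen_w), remap(y, screen_h)

# With pyautogui this would drive the cursor, e.g.:
#   import pyautogui
#   px, py = norm_to_screen(lm_x, lm_y, *pyautogui.size())
#   pyautogui.moveTo(px, py)
```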
Follow the instructions in the README / Chinese documentation (中文文档)
# requires conda and a conda env named 'PoseHID'
conda activate PoseHID
pip install -r requirements.txt
python main.py
# requires conda and a conda env named 'PoseHID'
conda activate PoseHID
pip install -r requirements.txt
pytest
WIP