Play Mortal Kombat with the camera as your input device and your body/pose as a controller.
Video Tutorial: LRBY | YouTube
It converts a given image of a pose into a keystroke (according to the given config). This is done broadly in 4 simple steps:
- OpenCV captures an image if any motion is detected (via frame differencing)
- MediaPipe returns the (x, y) coordinates of each body part (if found)
- We use mathematics to determine what the pose is and which key that pose is associated with
- PyAutoGUI inputs the keystroke
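The motion-gating step above can be sketched with plain NumPy. This is an illustrative sketch, not the project's exact OpenCV pipeline: the function name `motion_detected` and the 1% changed-pixel cutoff are assumptions, with `threshold_factor` standing in for the `MOTION_THRESHOLD_FACTOR` setting described later.

```python
import numpy as np

def motion_detected(prev_gray, curr_gray, threshold_factor=64):
    """Frame differencing: flag motion when enough pixels change."""
    # Absolute per-pixel difference between consecutive grayscale frames
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    # A pixel "moved" if its intensity changed by more than the threshold;
    # declare motion when over 1% of all pixels moved (illustrative cutoff)
    return bool((diff > threshold_factor).mean() > 0.01)

still = np.zeros((120, 160), dtype=np.uint8)
moved = still.copy()
moved[40:80, 50:110] = 200  # simulate an arm entering the frame
```

In the real pipeline the two frames come from consecutive `cv2.VideoCapture` reads; only frames that pass this gate are handed to MediaPipe, which keeps the pose model from running on a static scene.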
Let's start with the standard procedure for Python project setup.
- Clone the repository
$ git clone https://github.com/ra101/Pose2Input-MK9.git
- Create the virtualenv and activate it
$ cd Pose2Input-MK9
$ virtualenv venv
$ source ./venv/bin/activate # unix
$ .\venv\Scripts\activate.bat # windows
- Install requirements
$ pip install -r requirements.txt
- Copy the contents of .env.template into .env (one can use dump-env as well)
$ cat .env.template > .env
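Once `.env` exists, python-dotenv's `load_dotenv()` copies its key-value pairs into the process environment; reading them back is then plain `os.getenv`. A minimal sketch, assuming the defaults from the table below (the exact casting the project applies is an assumption):

```python
import os

# load_dotenv() from python-dotenv would normally populate os.environ
# from the .env file first; here we read with the documented defaults.
CAMERA_PORT = int(os.getenv("CAMERA_PORT", "0"))
DELAY_TIME = float(os.getenv("DELAY_TIME", "0"))
LOG_FPS = int(os.getenv("LOG_FPS", "20"))
MOTION_THRESHOLD_FACTOR = int(os.getenv("MOTION_THRESHOLD_FACTOR", "64"))
PYAUTO_PAUSE = float(os.getenv("PYAUTO_PAUSE", "0.1"))
```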
Env Help Guide:
Env Variables | Definition | Default Value |
---|---|---|
Pose2Input Variables: | ||
CAMERA_PORT | Camera Port for OpenCV | 0 |
DELAY_TIME | A Delay before Starting the Program | 0 |
LOG_FPS | FPS Setting for logs | 20 |
MOTION_THRESHOLD_FACTOR | The higher the value, the more motion is captured | 64 |
PyAutoGUI Constants: | ||
PYAUTO_PAUSE | Time (sec) to pause after each PyAuto Function Call | 0.1 |
Input Config: | ||
UP | KeyStroke for UP (used by PyAuto) | up |
DOWN | KeyStroke for DOWN (used by PyAuto) | down |
<move> | KeyStroke for <move> (used by PyAuto) | <key> |
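With the `(x, y)` landmarks from MediaPipe and the key map above, the "mathematics" step can be illustrated with a joint-angle helper. The helper names, the `"FP"` move label, and the 160° straight-arm cutoff are illustrative assumptions, not the project's exact rules; `pyautogui.press(...)` is the real API used for the final keystroke.

```python
import math

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by segments b->a and b->c."""
    ang = math.degrees(
        math.atan2(a[1] - b[1], a[0] - b[0])
        - math.atan2(c[1] - b[1], c[0] - b[0])
    )
    ang = abs(ang)
    return 360 - ang if ang > 180 else ang

def classify_punch(shoulder, elbow, wrist):
    # A nearly straight, extended arm (elbow angle near 180°) reads as a punch
    return "FP" if joint_angle(shoulder, elbow, wrist) > 160 else None

# MediaPipe landmarks are normalized (x, y) in [0, 1]
move = classify_punch((0.3, 0.5), (0.5, 0.5), (0.7, 0.5))
# if move: pyautogui.press(key_map[move])  # the <key> configured for <move>
```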
One can simply run the application with:
$ python run.py
but for calibration, optional arguments are provided
Argument | Alias | Purpose | Default |
---|---|---|---|
--help | --h | Shows the available options | - |
--debug_level <0, 1, 2, 3> | --d <0, 1, 2, 3> | Set Different Levels of Information for Logs or Live feed (explained below this table) | 0 |
--log_flag | --l | Stores the video_log in "logs" folder (.avi) | False |
--live_flag | -L | Displays the Captured Video | False |
Debug:
- Levels:
  - 0: Raw Video Footage
  - 1: `0` + FPS and Output Moves
  - 2: `1` + Virtual Exoskeleton of Body Parts Found
  - 3: `2` + Black Screen if no motion found
- If `debug_level` > 0 and no flag is selected, then `log_flag` is automatically set to `True`
Example of all flags being used:
$ python run.py --debug_level 3 --live_flag --log_flag
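The flag handling above, including the rule that a positive `debug_level` with no output flag turns logging on, can be sketched with argparse. This is a minimal illustration of the documented behaviour, not the project's actual parser; only the long flag spellings from the table are used.

```python
import argparse

parser = argparse.ArgumentParser(description="Pose2Input-MK9 runner (sketch)")
parser.add_argument("--debug_level", type=int, choices=[0, 1, 2, 3], default=0)
parser.add_argument("--log_flag", action="store_true")
parser.add_argument("--live_flag", action="store_true")

args = parser.parse_args(["--debug_level", "3"])  # no output flag given
# Documented rule: debug_level > 0 with neither flag set forces log_flag on
if args.debug_level > 0 and not (args.log_flag or args.live_flag):
    args.log_flag = True
```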
Dependency | Usage |
---|---|
Python-DotENV | Reads the key-value pair from .env file and adds them to environment variable. |
OpenCV-Python | Image Processing library which uses NumPy containers |
MediaPipe | Offers cross-platform, customizable ML solutions for live and streaming media. |
PyAutoGUI | It can control the mouse and keyboard to automate interactions with other applications. |