
Masked Piper: Masking personal identities in visual recordings while preserving multimodal information

This Python notebook walks you through the procedure of taking videos with a single person in frame as input and producing 1) a masked video with facial, hand, and arm kinematics overlaid, and 2) the kinematic time series. The tool is a simple but effective modification of Google's MediaPipe Holistic tracking, turning it into a lightweight, CPU-based tool that masks your video data while maintaining background information and preserving information about body kinematics.
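For readers who want to see the core idea in code, below is a minimal sketch of the masking-plus-overlay step using MediaPipe Holistic and OpenCV. The file names, the 0.5 thresholds, and the choice to paint the segmented person solid black are illustrative assumptions; the notebook and Masked-PiperPY.py contain the actual implementation.

```python
import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic
mp_face_mesh = mp.solutions.face_mesh
mp_drawing = mp.solutions.drawing_utils

# Illustrative paths; the repository scripts process every video in Input_Videos
cap = cv2.VideoCapture("Input_Videos/example_video.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
out = cv2.VideoWriter("Output_MaskedVideos/example_video_masked.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

with mp_holistic.Holistic(enable_segmentation=True,
                          min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Holistic expects RGB input; OpenCV delivers BGR
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

        masked = frame.copy()
        if results.segmentation_mask is not None:
            # Cover the segmented person (assumption: solid fill), keep the background
            masked[results.segmentation_mask > 0.5] = (0, 0, 0)

        # Overlay the tracked face, body, and hand landmarks on the masked frame
        mp_drawing.draw_landmarks(masked, results.face_landmarks,
                                  mp_face_mesh.FACEMESH_TESSELATION)
        mp_drawing.draw_landmarks(masked, results.pose_landmarks,
                                  mp_holistic.POSE_CONNECTIONS)
        mp_drawing.draw_landmarks(masked, results.left_hand_landmarks,
                                  mp_holistic.HAND_CONNECTIONS)
        mp_drawing.draw_landmarks(masked, results.right_hand_landmarks,
                                  mp_holistic.HAND_CONNECTIONS)
        out.write(masked)

cap.release()
out.release()
```

The per-frame landmarks that are drawn here are also what the tool exports as kinematic time series, which is why masking and kinematics can be preserved in a single pass.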

Masked-Piper

Check out the notebook for direct inspection of the code: https://wimpouw.github.io/TowardsMultimodalOpenScience/Index

Quick run of the tool without coding

  • Install Python (e.g., via Anaconda) and pip (e.g., 'conda install pip' in your conda command prompt).
  • Download the repository.
  • Install the dependencies: navigate in your conda/command prompt to the local folder where you stored the repository (e.g., 'cd C:\TowardsMultimodalOpenScience') and run 'pip install -r requirements.txt'.
  • OPTIONAL: Test whether the tool works by clicking Masked-PiperSTART.bat in your folder (this will run the tool on the example videos already present in the input folder).
  • Drop your videos into the Input_Videos folder (and delete the example videos), then start processing by clicking Masked-PiperSTART.bat.
  • Your masked videos will appear in Output_MaskedVideos and your kinematic time series in Output_TimeSeries (see the loading sketch after this list).
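Once processing finishes, the time series can be inspected in Python. Below is a minimal sketch that assumes the kinematics are written as one CSV file per video in Output_TimeSeries; the exact file names and column layout depend on the scripts, so check the notebook for the actual format.

```python
from pathlib import Path
import pandas as pd

# Assumption: one CSV per processed video in Output_TimeSeries
for csv_file in sorted(Path("Output_TimeSeries").glob("*.csv")):
    ts = pd.read_csv(csv_file)
    print(csv_file.name, ts.shape)   # frames x tracked features
    print(ts.columns.tolist()[:10])  # peek at the first few column names
```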

File structure

file Masked-Piper_Notebook.ipynb = the notebook that you can run with Jupyter Notebook
file Masked-PiperPY.py = the Python file that you can run directly from your console
file Masked-PiperSTART.bat = if you just want to run the tool and you have Python installed, you can run this batch file and it will process all the videos in your Input_Videos folder
file requirements.txt = if you want to install all the dependencies in one go, run this file via pip in your terminal like so: pip install -r requirements.txt
folder Output_TimeSeries = the stored kinematic time series for body, hands, and face
folder Input_Videos = drop your videos here to process them
folder Output_MaskedVideos = this is where your masked videos are stored
folder Example_combined = an after-edit demonstration video showing the pre-mask and masked video (not generated by the code; for demonstration purposes only)
folder docs = location of the notebook HTML
folder images = location of images

Citation (status: Published)

Owoyele, B., Trujillo, J., De Melo, G., & Pouw, W. (2022). Masked-Piper: Masking personal identities in visual recordings while preserving multimodal information. SoftwareX. doi: 10.1016/j.softx.2022.101236
