Multi-modal Human Actions

In this repo, we analyze the open-source human action recognition dataset (UTD-MHAD) from UT Dallas.

Setup

  1. Download and install Anaconda
  2. Clone this repo
  3. Setup conda environment
conda env create -f environment.yml
  4. Activate the environment
source activate mmha
  5. Download the data from here and save it in a folder named data at the project root
  6. Change into the notebooks directory and launch a Jupyter notebook
cd notebooks
jupyter notebook
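
Once the environment is active, a quick sanity check that the data folder is in place can look like the sketch below; it assumes a flat data folder whose files follow the naming convention described in the next section.

# Sketch: confirm the data folder exists and count files per modality.
from collections import Counter
from pathlib import Path

data_dir = Path("data")
assert data_dir.is_dir(), "expected a 'data' folder at the project root"

counts = Counter(path.stem.split("_")[-1] for path in data_dir.iterdir())
print(counts)  # e.g. Counter({'depth': ..., 'skeleton': ..., 'inertial': ...})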

About the Dataset

The naming convention of a file is "ai_sj_tk_modality", where ai stands for action number i, sj for subject number j, tk for trial number k, and modality is one of the four data modalities (color, depth, skeleton, inertial).
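
For example, a file name can be split into its parts with a few lines of Python. The parse_filename helper below is a hypothetical sketch, not code from this repo.

import re

def parse_filename(name):
    # Split 'ai_sj_tk_modality' into action, subject, trial, and modality.
    match = re.match(r"a(\d+)_s(\d+)_t(\d+)_(color|depth|skeleton|inertial)", name)
    if match is None:
        raise ValueError(f"unexpected file name: {name}")
    action, subject, trial = (int(g) for g in match.groups()[:3])
    return {"action": action, "subject": subject, "trial": trial,
            "modality": match.group(4)}

print(parse_filename("a1_s2_t3_depth"))
# -> {'action': 1, 'subject': 2, 'trial': 3, 'modality': 'depth'}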

Depth Data

Each depth clip is a 240 x 320 x num_frames (Height x Width x Frames) array.
Each frame is a single depth image.
An example of the depth data (tennis swing): [image: depth_tennis_swing]
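
As a sketch, a depth clip can be loaded with scipy and one frame displayed as an image. The file path and the .mat variable name d_depth are assumptions based on the UTD-MHAD release, so adjust them to match the downloaded files.

import matplotlib.pyplot as plt
import scipy.io as sio

# Assumed .mat layout: the depth array is stored under the key 'd_depth'.
mat = sio.loadmat("data/a1_s1_t1_depth.mat")  # illustrative path
depth = mat["d_depth"]                        # shape: (240, 320, num_frames)
print(depth.shape)

plt.imshow(depth[:, :, depth.shape[2] // 2], cmap="gray")  # middle frame
plt.title("Depth, middle frame")
plt.axis("off")
plt.show()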

Skeleton Data

Each skeleton data file is a 20 x 3 x num_frames array.
Each row of a skeleton frame holds the three spatial coordinates of one joint.
The skeleton joint order in the UTD-MHAD dataset is:

  1. head
  2. shoulder_center
  3. spine
  4. hip_center
  5. left_shoulder
  6. left_elbow
  7. left_wrist
  8. left_hand
  9. right_shoulder
  10. right_elbow
  11. right_wrist
  12. right_hand
  13. left_hip
  14. left_knee
  15. left_ankle
  16. left_foot
  17. right_hip
  18. right_knee
  19. right_ankle
  20. right_foot

An example of the skeleton data (tennis swing): [image: skeleton_tennis_swing]
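
A minimal sketch of drawing one skeleton frame in 3-D is shown below. The .mat variable name d_skel is an assumption based on the UTD-MHAD release, and the bone pairs are inferred from the joint order above (0-based indices), not taken from this repo.

import matplotlib.pyplot as plt
import scipy.io as sio

# Bone pairs inferred from the 20-joint order listed above (0-based).
BONES = [(0, 1), (1, 2), (2, 3),                 # head - shoulder_center - spine - hip_center
         (1, 4), (4, 5), (5, 6), (6, 7),         # left arm
         (1, 8), (8, 9), (9, 10), (10, 11),      # right arm
         (3, 12), (12, 13), (13, 14), (14, 15),  # left leg
         (3, 16), (16, 17), (17, 18), (18, 19)]  # right leg

skel = sio.loadmat("data/a1_s1_t1_skeleton.mat")["d_skel"]  # (20, 3, num_frames)
frame = skel[:, :, 0]                                       # first frame, shape (20, 3)

ax = plt.figure().add_subplot(projection="3d")
ax.scatter(frame[:, 0], frame[:, 1], frame[:, 2])
for i, j in BONES:
    ax.plot(*zip(frame[i], frame[j]))  # one line segment per bone
plt.show()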

Inertial Data

An example of the inertial data (tennis swing): [image: inertial_tennis_swing]
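
In UTD-MHAD, the inertial modality contains wearable-sensor signals: 3-axis accelerometer and 3-axis gyroscope readings, one row per time sample. A minimal sketch of loading and plotting them follows; the .mat variable name d_iner and the 6-column channel layout are assumptions based on the UTD-MHAD release.

import matplotlib.pyplot as plt
import scipy.io as sio

iner = sio.loadmat("data/a1_s1_t1_inertial.mat")["d_iner"]  # (num_samples, 6)

fig, (ax_acc, ax_gyr) = plt.subplots(2, 1, sharex=True)
ax_acc.plot(iner[:, :3])          # assumed columns 0-2: accelerometer x, y, z
ax_acc.set_ylabel("accelerometer")
ax_gyr.plot(iner[:, 3:])          # assumed columns 3-5: gyroscope x, y, z
ax_gyr.set_ylabel("gyroscope")
ax_gyr.set_xlabel("sample")
plt.show()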