
mocap

Helper library to handle mocap data. Currently, the CMU Mocap dataset and the mocap data from the Human3.6M dataset are supported. If this library is helpful to you, please cite the following work:

@inproceedings{tanke2021intention,
   author        = {Tanke, Julian and Zaveri, Chintan and Gall, Juergen},
   title         = {{Intention-based Long-Term Human Motion Anticipation}},
   year          = {2021},
   booktitle     = {International Conference on 3D Vision}
}

Install

This library requires some external tools:

  • matplotlib: for visualization. conda install matplotlib
  • numba: to speed up performance. conda install numba
  • transforms3d: for converting between rotational representations. pip install transforms3d
  • tqdm: for progress bars. pip install tqdm
  • spacepy: some datasets require reading NASA's CDF file format. Install as follows (taken from stackoverflow):
wget -r -l1 -np -nd -nc http://cdaweb.gsfc.nasa.gov/pub/software/cdf/dist/latest-release/linux/ -A cdf*-dist-all.tar.gz
tar xf cdf*-dist-all.tar.gz -C ./
cd cdf*dist
apt install build-essential gfortran libncurses5-dev
make OS=linux ENV=gnu CURSES=yes FORTRAN=no UCOPTIONS=-O2 SHARED=yes -j4 all
make install #no sudo

Add to .bashrc:

export CDF_BASE=$HOME/Libraries/cdf/cdf38_1-dist
export CDF_INC=$CDF_BASE/include
export CDF_LIB=$CDF_BASE/lib
export CDF_BIN=$CDF_BASE/bin
export LD_LIBRARY_PATH=$CDF_BASE/lib:$LD_LIBRARY_PATH

Then install spacepy:

pip install git+https://github.com/spacepy/spacepy.git

Finally, the library can be installed as follows:

pip install git+https://github.com/jutanke/mocap.git

or locally by

python setup.py install

Usage

For Human3.6M, first follow the download steps below and make sure that you have downloaded the dataset from the official website. For CMU, the data is downloaded automatically.

Basic usage:

# ~~~~~~~~~~~~~~~~~~~~~~~
# using Human3.6M
# ~~~~~~~~~~~~~~~~~~~~~~~
import mocap.datasets.h36m as H36M

all_actors = H36M.ACTORS  # ['S1', 'S5', ..., 'S11']  total number: 7
all_actions = H36M.ACTIONS  # ['walking', ..., 'sittingdown']  total number: 15

ds = H36M.H36M(actors=all_actors)  # 32-joint 3D joint positions, in [m]
seq = ds[0]  # get the first sequence, {n_frames x 96}
print('number of sequences:', len(ds))

for seq in ds:  # loop over entire dataset
    print(seq.shape)  # {n_frames x 96}

# -- with activities --
# For our research we hand-labeled 11 activities
ds = H36M.H36M_withActivities(actors=['S1'])  # We provide 11 framewise activity labels
seq, labels = ds[0]  # get the first sequence, {n_frames x 96}, {n_frames x 11}

for seq, labels in ds:  # loop over entire dataset
    print(seq.shape)  # {n_frames x 96}
    print(labels.shape)  # {n_frames x 11}

# -- fixed skeleton --
# Initially, each skeleton has different dimensions because the actors differ in
# height and size. However, we also provide processed data where the poses of all
# actors are retargeted to the skeleton of actor "S1".
ds = H36M.H36M_FixedSkeleton(actors=all_actors)
ds = H36M.H36M_FixedSkeleton_withActivities(actors=all_actors)

# Simplify skeleton:
ds = H36M.H36M_Simplified(ds)
# ds can be used like any other dataset above, it just simplifies the skeleton to 17 joints

# ~~~~~~~~~~~~~~~~~~~~~~~
# using CMU mocap data
# ~~~~~~~~~~~~~~~~~~~~~~~
import mocap.datasets.cmu as CMU

all_subjects = CMU.ALL_SUBJECTS
# different subjects have different actions:
actions_for_subject_01 = CMU.GET_ACTIONS('01')

ds = CMU.CMU(['01'])
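Each H36M sequence above is a flat array of shape {n_frames x 96}, i.e. 32 joints with 3 coordinates per joint. A minimal sketch (using random data in place of a real sequence) of recovering the per-joint layout with numpy:

```python
import numpy as np

# stand-in for a real H36M sequence: 100 frames, 32 joints x 3D = 96 values
seq = np.random.rand(100, 96)

# recover the per-joint layout: (n_frames, n_joints, 3)
joints = seq.reshape(len(seq), -1, 3)
print(joints.shape)  # (100, 32, 3)

# e.g. the 3D trajectory of a single joint over time:
trajectory = joints[:, 0]
print(trajectory.shape)  # (100, 3)
```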

Advanced iterations:

import mocap.datasets.h36m as H36M

# include the framerate for each sequence
ds = H36M.H36M(actors=['S1'], iterate_with_framerate=True)
for seq, framerate in ds:
    print(seq.shape)  # {n_frames x 96}
    print('framerate in Hz:', framerate)

# include the unique sequence key for each sequence
ds = H36M.H36M(actors=['S1'], iterate_with_keys=True)
for seq, key in ds:
    print(seq.shape)  # {n_frames x 96}
    print('key:', key)  # h36m: (actor, action, sid) || cmu: (subject, action)

# include both key and framerate per sequence:
ds = H36M.H36M(actors=['S1'],
               iterate_with_keys=True,
               iterate_with_framerate=True)
for seq, framerate, key in ds:
    print(seq.shape)  # {n_frames x 96}
    print('framerate in Hz:', framerate)
    print('key:', key)  # h36m: (actor, action, sid) || cmu: (subject, action)

# this also works with activity labels!

Normalization:

import mocap.datasets.h36m as H36M
import mocap.processing.normalize as norm

ds = H36M.H36M(actors=['S1'])

seq = ds[0]

# normalize the sequence at a given frame: at that frame, the root joint
# is centered at the origin and the person faces forward in positive x-direction.
# The facing direction is defined by the left and right hip joints.
# The preceding and following frames are rotated and translated relative to the
# normalized frame.
normalization_frame = 15
seq_norm = norm.normalize_sequence_at_frame(seq, normalization_frame,
                                            j_root=ds.j_root,
                                            j_left=ds.j_left,
                                            j_right=ds.j_right)
# if seq is a batch of sequences, the following function can be used:
#     {norm.batch_normalize_sequence_at_frame}


# global rotation and translation can be removed completely for a sequence:
seq_norm = norm.remove_rotation_and_translation(seq,
                                                j_root=ds.j_root,
                                                j_left=ds.j_left,
                                                j_right=ds.j_right)
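Conceptually, normalizing at a frame means translating the root joint of that frame to the origin and rotating the whole sequence so that the hip axis is perpendicular to the positive x-direction. A self-contained numpy sketch of the idea (an illustration only, not the library's actual implementation):

```python
import numpy as np

def normalize_at_frame(seq, frame, j_root, j_left, j_right):
    """Center the root joint at the origin at `frame` and rotate the
    whole sequence so the person faces the positive x-direction there.
    Illustration of the idea only, not the library's implementation."""
    seq = seq.reshape(len(seq), -1, 3).astype(float)
    # translate: the root joint of the chosen frame moves to the origin
    seq = seq - seq[frame, j_root]
    # the facing direction is perpendicular to the left-right hip axis;
    # rotate about z so the hip axis aligns with the y-axis (facing = +x)
    axis = seq[frame, j_left] - seq[frame, j_right]
    theta = np.pi / 2 - np.arctan2(axis[1], axis[0])
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return (seq @ R.T).reshape(len(seq), -1)

# toy example: 3 joints (root=0, left hip=1, right hip=2), two identical frames
seq = np.array([[1., 1, 0,  2, 1, 0,  0, 1, 0]] * 2)
out = normalize_at_frame(seq, frame=0, j_root=0, j_left=1, j_right=2)
# at frame 0: root at the origin, hips on the y-axis
print(out.reshape(2, 3, 3)[0])
```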

Visualization:

import mocap.datasets.h36m as H36M
from mocap.visualization.sequence import SequenceVisualizer

ds = H36M.H36M(actors=['S1'])

seq = ds[0]

vis_dir = '/dir/to/write/visualization/'
vis_name = 'any name'

vis = SequenceVisualizer(vis_dir, vis_name,  # mandatory parameters
                         plot_fn=None,  # TODO
                         vmin=-1, vmax=1,  # min and max values of the 3D plot scene
                         to_file=False,  # if True writes files to the given directory
                         subsampling=1,  # subsampling of sequences
                         with_pauses=False,  # if True pauses after each frame
                         fps=20,  # fps for visualization
                         mark_origin=False)  # if True draw cross at origin

# plot single sequence
vis.plot(seq,
         seq2=None,
         parallel=False,
         plot_fn1=None, plot_fn2=None,  # defines how seq/seq2 are drawn
         views=[(45, 45)],  # [(elevation, azimuth)]  # defines the view(s)
         lcolor='#099487', rcolor='#F51836',
         lcolor2='#E1C200', rcolor2='#5FBF43',
         noaxis=False,  # if True draw person against white background
         noclear=False, # if True do not clear the scene for next frame
         toggle_color=False,  # if True toggle color after each frame
         plot_cbc=None,  # alternative plot function: fn(ax{matplotlib}, seq{n_frames x dim}, frame{int})
         last_frame=None,  # {int} defines the last frame, < len(seq)
         definite_cbc=None,  # fn(ax{matplotlib}, iii{int}|enumeration, frame{int})
         name='', 
         plot_jid=False,
         create_video=False,
         video_fps=25,
         if_video_keep_pngs=False)
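A custom drawing callback must match the plot_cbc signature fn(ax, seq, frame). A hypothetical example (the callback name and dummy data are our own) that scatters the joints of a single frame:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs without a display
import matplotlib.pyplot as plt

def my_plot_cbc(ax, seq, frame):
    """Hypothetical callback matching the plot_cbc signature
    fn(ax{matplotlib}, seq{n_frames x dim}, frame{int}):
    scatter the joints of one frame."""
    joints = seq[frame].reshape(-1, 3)
    ax.scatter(joints[:, 0], joints[:, 1], joints[:, 2], color='black')

# try it directly on dummy data:
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
seq = np.random.rand(10, 96)
my_plot_cbc(ax, seq, frame=0)
```

In a real run the function would be passed to the visualizer, e.g. vis.plot(seq, plot_cbc=my_plot_cbc).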

Data

Human 3.6M

Default skeleton with 32 joints.

Removed duplicates

Default skeleton with 25 joints:

Simplified

Simplified skeleton with 17 joints.

Activity labels

We provide framewise activity labels for the entire Human3.6M dataset, using 11 human-labeled activities.

CMU Mocap

Default skeleton with 31 joints.

CMU Mocap (Evaluation for Anticipation)

Default skeleton with 38 joints, obtained from ConvSeq2Seq.

Combined:

Combined skeleton with 14 joints that works for both CMU and h36m data.

AMASS

Download data

Human3.6M

For Human3.6M, we cannot directly provide a download link due to their distribution policy. Instead, you first have to download the dataset from the official website. Then, call the following script to extract the data:

$ cd mocap/dataaquisition/scripts
$ python get_h36m_skeleton.py /path/to/h36m/folder/human3.6m

AMASS

For "Archive of Motion Capture as Surface Shapes", please download the preprocessed files.

CMU

The CMU mocap dataset is automatically downloaded once you make use of it. This may take some time!