a. [Optional] Create a conda virtual environment and activate it:
conda create -n open-mmlab python=3.7 -y
conda activate open-mmlab
b. Install PyTorch and torchvision following the official instructions, e.g.,
conda install pytorch torchvision -c pytorch
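Before moving on, you can sanity-check that the packages are importable. A minimal stdlib sketch (the package names passed in are just examples; this helper is not part of mmskeleton):

```python
import importlib.util

def check_install(packages=("torch", "torchvision")):
    """Report which of the given packages are importable in this environment."""
    status = {name: importlib.util.find_spec(name) is not None for name in packages}
    for name, ok in status.items():
        print(f"{name}: {'OK' if ok else 'MISSING'}")
    return status

check_install()
```

If either package is reported missing, revisit the PyTorch installation step before continuing.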
c. Clone mmskeleton from github:
git clone https://github.com/open-mmlab/mmskeleton.git
cd mmskeleton
d. Install mmskeleton:
python setup.py develop
e. [Optional] Install mmdetection for person detection:
python setup.py develop --mmdet
If the installation fails, please install mmdetection manually.
f. To verify that mmskeleton and mmdetection installed correctly, use:
python mmskl.py pose_demo [--gpus $GPUS]
# or "python mmskl.py pose_demo_HD [--gpus $GPUS]" for a higher accuracy
A generated video, like the one below, will be saved under the prompted path.
Every application in mmskeleton is described by a configuration file and can be launched with a uniform command:
python mmskl.py $CONFIG_FILE [--options $OPTIONS]
which is equivalent to
mmskl $CONFIG_FILE [--options $OPTIONS]
The optional arguments $OPTIONS are defined in the configuration file.
You can check them via:
mmskl $CONFIG_FILE -h
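The idea of a configuration file declaring its own command-line options can be sketched roughly as follows. This is a hypothetical `argparse`-based illustration, not mmskeleton's actual config schema; the field names (`gpus`, `checkpoint`) are made up for the example:

```python
import argparse

def build_parser(argparse_cfg):
    """Build an argument parser from a config-declared mapping of
    option names to argparse keyword arguments (hypothetical sketch)."""
    parser = argparse.ArgumentParser()
    for flag, spec in argparse_cfg.items():
        parser.add_argument(f"--{flag}", **spec)
    return parser

# A config might declare two overridable fields like this:
cfg = {
    "gpus": {"type": int, "default": 1},
    "checkpoint": {"type": str, "default": None},
}
args = build_parser(cfg).parse_args(["--gpus", "2"])
```

Because the options come from the config, `mmskl $CONFIG_FILE -h` can list exactly the arguments that particular application accepts.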
See START_RECOGNITION.md to learn how to train a model for skeleton-based action recognition.
See CUSTOM_DATASET for building your own skeleton-based dataset.
See CREATE_APPLICATION for creating your own mmskeleton application.