
PYSKL


PYSKL is a toolbox focusing on action recognition based on SKeLeton data with PYTorch. It supports various algorithms for skeleton-based action recognition. We build this project on top of the open-source project MMAction2.

This repo is the official implementation of PoseConv3D and STGCN++.


Skeleton-based Action Recognition Results on NTU-RGB+D-120

News

  • We release the skeleton annotations (generated by HRNet), config files, and pre-trained checkpoints for Kinetics-400. Note that Kinetics-400 is a large-scale dataset (even for skeletons), and you should have memcached and pymemcache installed for efficient training and testing on it (see the connectivity sketch after this list). <2022-05-01>
  • We provide an example of processing a custom video dataset (we use diving48), generating 2D skeleton annotations, and using PoseC3D for skeleton-based action recognition. The tutorial for the skeleton extraction part is available in diving48_example. <2022-04-15>
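
If you plan to train on Kinetics-400, you can verify that a memcached server is reachable before launching jobs. Below is a minimal sketch using pymemcache; the host and port (localhost:11211, memcached's default) are assumptions, not settings documented by PYSKL.

# Minimal connectivity check for memcached via pymemcache.
# Host/port are assumptions (memcached's default); adjust to your setup.
from pymemcache.client.base import Client

client = Client(("localhost", 11211))
try:
    client.set("pyskl_ping", b"ok", expire=10)  # write a short-lived test key
    assert client.get("pyskl_ping") == b"ok"    # read it back
    print("memcached is reachable")
finally:
    client.close()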

Supported Algorithms

Supported Skeleton Datasets

For data pre-processing, we estimate 2D skeletons with a two-stage pose estimator (Faster-RCNN + HRNet). For 3D skeletons, we follow the pre-processing procedure of CTR-GCN. Currently, we do not provide the pre-processing scripts. Instead, we directly provide the processed skeleton data as pickle files, which can be directly used in training and evaluation. You can use vis_skeleton to visualize the provided skeleton data.
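
As a quick way to get familiar with the provided files, the sketch below loads one pickle and prints its structure. The file name is hypothetical, and the probe assumes nothing beyond the file being a pickled dict or list:

# Sketch: inspect a provided skeleton annotation pickle.
# "ntu60_hrnet.pkl" is a hypothetical name; substitute the file you downloaded.
import pickle

with open("ntu60_hrnet.pkl", "rb") as f:
    data = pickle.load(f)

print(type(data))
if isinstance(data, dict):
    print(list(data.keys()))        # top-level fields of the annotation file
elif isinstance(data, list):
    print(list(data[0].keys()))     # fields of the first annotation record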

Installation

git clone https://github.com/kennymckormick/pyskl.git
cd pyskl
# Please first install PyTorch following the official instructions: https://pytorch.org/get-started/locally/. PYSKL requires a PyTorch version >= 1.5.0 and < 1.11.0
pip install -r requirements.txt
pip install -e .
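
Optionally, you can confirm that the installed PyTorch falls inside the supported range with a short check (this uses the packaging library, which pip environments usually provide; run pip install packaging if it is missing):

# Sketch: verify torch >= 1.5.0 and < 1.11.0 as stated above.
import torch
from packaging.version import Version

v = Version(torch.__version__.split("+")[0])  # drop local tags such as "+cu113"
assert Version("1.5.0") <= v < Version("1.11.0"), f"unsupported torch version: {v}"
print(f"torch {v} is within the supported range")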

Training & Testing

You can use the following commands for training and testing. We support distributed training on a single server with multiple GPUs.

# Training
bash tools/dist_train.sh {config_name} {num_gpus} {other_options}
# Testing
bash tools/dist_test.sh {config_name} {checkpoint} {num_gpus} --out {output_file} --eval top_k_accuracy mean_class_accuracy
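
The file passed to --out can also be inspected offline. The sketch below recomputes top-1 accuracy from it, assuming the file is a pickled list with one per-class score array per test sample, in dataset order; both that format and the way you obtain ground-truth labels are assumptions to verify against your setup.

# Sketch: recompute top-1 accuracy from a dumped result file.
# Assumes result.pkl (the --out file) is a pickled list of per-class score
# arrays aligned with the test set order; this format is an assumption.
import pickle
import numpy as np

with open("result.pkl", "rb") as f:
    scores = pickle.load(f)

labels = [0] * len(scores)  # placeholder: replace with real ground-truth labels

preds = [int(np.argmax(s)) for s in scores]
top1 = float(np.mean([p == gt for p, gt in zip(preds, labels)]))
print(f"top-1 accuracy: {top1:.4f}")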

For specific examples, please refer to the README of each algorithm we support.

Citation

If you use PYSKL in your research or wish to refer to the baseline results published in the Model Zoo, please use the following BibTeX entry and the BibTeX entry corresponding to the specific algorithm you used.

% Tech Report Coming Soon!
@misc{duan2022pyskl,
    title={PYSKL: a toolbox for skeleton-based video understanding},
    author={PYSKL Contributors},
    howpublished = {\url{https://github.com/kennymckormick/pyskl}},
    year={2022}
}

Contact

For any questions, feel free to contact: dh019@ie.cuhk.edu.hk
