HYSP

This repository provides the official PyTorch implementation of the paper "Hyperbolic Self-paced Learning for Self-supervised Skeleton-based Action Representations" (ICLR 2023).

Luca Franco†¹, Paolo Mandica†¹, Bharti Munjal¹,², Fabio Galasso¹
¹ Sapienza University of Rome, ² Technical University of Munich
† Equal contribution


Requirements

Python >= 3.8, PyTorch >= 1.10

Environment Setup

  1. Create the conda environment and activate it:

conda create -n hysp python=3.9
conda activate hysp

  2. Install the requirements with pip inside the conda environment:

pip install -r requirements.txt
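
Optionally, you can check that PyTorch is visible inside the environment (a quick sanity check, not part of the repository's setup steps):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"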

Data Preparation

  • Download the raw data of NTU RGB+D and PKU-MMD.
  • For NTU RGB+D dataset, preprocess data with code/tools/ntu_gendata.py. For PKU-MMD dataset, preprocess data with code/tools/pku_part1_gendata.py.
  • Then downsample the data to 50 frames with code/feeder/preprocess_ntu.py and code/feeder/preprocess_pku.py (an illustrative downsampling sketch follows this list).
  • If you don't want to process the original data, you can download the preprocessed action_dataset folder instead.
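
For reference, the downsampling step reduces each skeleton sequence to a fixed temporal length. A minimal sketch of uniform temporal downsampling, for illustration only (the repository's preprocess_ntu.py / preprocess_pku.py scripts may use a different strategy):

import numpy as np

def downsample_sequence(seq, target_len=50):
    # Uniformly sample `target_len` frame indices across the sequence.
    # seq: array of shape (T, ...), where T is the number of frames.
    # Illustrative only; not the repository's preprocessing code.
    t = seq.shape[0]
    idx = np.linspace(0, t - 1, num=target_len).round().astype(int)
    return seq[idx]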

Self-supervised Pre-Training

Below is an example of self-supervised pre-training on NTU-60 xview. You can change the hyperparameters by modifying the .yaml files in the config/DATASET/pretext folder.

python main_pretrain.py --config config/ntu60/pretext/pretext_xview.yaml

If you are using 2 or more GPUs, use the following launch command (substitute NUM_GPUS with the number of GPUs):

torchrun --standalone --nproc_per_node=NUM_GPUS main_pretrain.py --config config/ntu60/pretext/pretext_xview.yaml
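
For example, on a machine with 2 GPUs:

torchrun --standalone --nproc_per_node=2 main_pretrain.py --config config/ntu60/pretext/pretext_xview.yaml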

Evaluation

Below is an example of evaluating a model pre-trained on NTU-60 xview. You can change the hyperparameters through the .yaml files in the config/DATASET/eval folder. For example, you can set the protocol to linear, semi or supervised, depending on the type of evaluation you want to perform.

python main_eval.py --config config/ntu60/eval/eval_xview.yaml
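
The protocol is selected directly in the eval config. A minimal sketch, assuming the key is named protocol as described above:

protocol: linear   # or: semi, supervised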

3-stream Ensemble

Once a model has been pre-trained and evaluated on all 3 single streams (joint, motion, bone), you can compute the 3-stream ensemble performance by running the following script. Remember to substitute the correct paths inside the script.

python code/ensemble/ensemble_ntu.py
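
For reference, a 3-stream ensemble of this kind typically sums (or weight-averages) the per-stream classification scores before taking the argmax. A minimal sketch with hypothetical file names and formats (the repository's ensemble_ntu.py may differ in both):

import pickle
import numpy as np

# Hypothetical paths: substitute the score files produced by your own runs.
streams = ["joint_scores.pkl", "motion_scores.pkl", "bone_scores.pkl"]
weights = [1.0, 1.0, 1.0]  # equal weighting; tune per stream if desired

fused = None
for path, w in zip(streams, weights):
    with open(path, "rb") as f:
        scores = pickle.load(f)  # assumed shape: (num_samples, num_classes)
    fused = w * scores if fused is None else fused + w * scores

labels = np.load("labels.npy")  # hypothetical ground-truth label file
accuracy = (fused.argmax(axis=1) == labels).mean()
print(f"3-stream ensemble accuracy: {accuracy:.4f}")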

Training Precision

For linear evaluation you can set precision: 16 in the config file, while for pre-training, semi-supervised and supervised evaluation you should set precision: 32 for higher stability.
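
In the corresponding .yaml config this is a single key, for example:

precision: 16   # use 32 for pre-training, semi-supervised and supervised evaluation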

Acknowledgement

This project is based on the following open-source projects: AimCLR, ST-GCN. We sincerely thank the authors for making the source code publicly available.

License

This project is licensed under the terms of the MIT license.

Citation

If you find this repository useful, please consider giving a star ⭐ and citation:

@inproceedings{
  franco2023hyperbolic,
  title={Hyperbolic Self-paced Learning for Self-supervised Skeleton-based Action Representations},
  author={Luca Franco and Paolo Mandica and Bharti Munjal and Fabio Galasso},
  booktitle={The Eleventh International Conference on Learning Representations},
  year={2023},
  url={https://openreview.net/forum?id=3Bh6sRPKS3J}
}
