This repo is the official implementation of Neuron: Learning context-aware evolving representations for zero-shot skeleton action recognition. The paper has been accepted to CVPR 2025.
- NTU RGB+D 60 Skeleton
- NTU RGB+D 120 Skeleton
- PKU-MMD
- Request dataset here: https://rose1.ntu.edu.sg/dataset/actionRecognition
- Download the skeleton-only datasets:
  - `nturgbd_skeletons_s001_to_s017.zip` (NTU RGB+D 60)
  - `nturgbd_skeletons_s018_to_s032.zip` (NTU RGB+D 120)
- Extract the above files to `./data/nturgbd_raw`
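The download-and-extract step above can also be scripted; this is a minimal sketch using the standard-library `zipfile` (the helper name `extract_ntu` is ours, not part of the repo):

```python
import zipfile
from pathlib import Path

def extract_ntu(zip_path: str, out_dir: str = "./data/nturgbd_raw") -> None:
    """Extract one NTU skeleton archive into the raw-data directory."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out_dir)

# extract_ntu("nturgbd_skeletons_s001_to_s017.zip")  # NTU RGB+D 60
# extract_ntu("nturgbd_skeletons_s018_to_s032.zip")  # NTU RGB+D 120
```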
- Request and download the dataset here
- Unzip all skeleton files from `Skeleton.7z` to `./data/pkummd_raw/part1`
- Unzip all label files from `Label_PKUMMD.7z` to `./data/pkummd_raw/part1`
- Unzip all skeleton files from `Skeleton_v2.7z` to `./data/pkummd_raw/part2`
- Unzip all label files from `Label_PKUMMD_v2.7z` to `./data/pkummd_raw/part2`
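The four unzip steps map each archive to a fixed destination. This sketch only builds the `7z` command lines implied above (it does not invoke `7z`, which must be installed separately; the helper names are ours):

```python
# Archive -> destination mapping from the steps above.
PKU_ARCHIVES = {
    "Skeleton.7z": "./data/pkummd_raw/part1",
    "Label_PKUMMD.7z": "./data/pkummd_raw/part1",
    "Skeleton_v2.7z": "./data/pkummd_raw/part2",
    "Label_PKUMMD_v2.7z": "./data/pkummd_raw/part2",
}

def extraction_commands(archives=PKU_ARCHIVES):
    # `7z x <archive> -o<dir>` extracts with full paths; -o takes no space.
    return [f"7z x {a} -o{d}" for a, d in archives.items()]
```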
Put the downloaded data into the following directory structure:

```
- data/
  - NW-UCLA/
    - all_sqe
      ... # raw data of NW-UCLA
  - ntu/
  - ntu120/
  - nturgbd_raw/
    - nturgb+d_skeletons/     # from `nturgbd_skeletons_s001_to_s017.zip`
      ...
    - nturgb+d_skeletons120/  # from `nturgbd_skeletons_s018_to_s032.zip`
      ...
```
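Before running the generation scripts, it can help to check the layout. A small sanity-check helper (hypothetical, not part of the repo) for the NTU raw directories:

```python
from pathlib import Path

def missing_ntu_dirs(root: str = "./data") -> list:
    """Return the expected NTU raw-skeleton directories that are absent."""
    expected = [
        Path(root) / "nturgbd_raw" / "nturgb+d_skeletons",
        Path(root) / "nturgbd_raw" / "nturgb+d_skeletons120",
    ]
    return [str(p) for p in expected if not p.is_dir()]

if __name__ == "__main__":
    for p in missing_ntu_dirs():
        print("missing:", p)
```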
- Generate the NTU RGB+D 60 or NTU RGB+D 120 dataset:

```shell
cd ./data/ntu60 # or cd ./data/ntu120
# Get skeleton of each performer
python get_raw_skes_data.py
# Remove the bad skeletons
python get_raw_denoised_data.py
# Transform the skeleton to the center of the first frame
python seq_transformation.py
```
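Conceptually, the final step translates every sequence so that a reference joint of the first frame sits at the origin. A minimal NumPy sketch of that idea (the reference joint index and any extra normalization in `seq_transformation.py` are assumptions, not the script's actual code):

```python
import numpy as np

def center_to_first_frame(ske: np.ndarray, ref_joint: int = 1) -> np.ndarray:
    """Translate a (T, J, 3) skeleton sequence so that `ref_joint` of
    frame 0 lies at the origin; the same offset is applied to all frames."""
    origin = ske[0, ref_joint]  # (3,) reference coordinates in the first frame
    return ske - origin
```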
- Generate the PKU-MMD I or PKU-MMD II dataset:

```shell
cd ./data/pkummd/part1 # or cd ./data/pkummd/part2
mkdir skeleton_pku_v1 # or mkdir skeleton_pku_v2
# Get skeleton of each performer
python pku_part1_skeleton.py # or python pku_part2_skeleton.py
# Transform the skeleton to the center of the first frame
python pku_part1_gendata.py # or python pku_part2_gendata.py
# Downsample each sequence to 64 frames
python preprocess_pku.py
# Concatenate train data and val data into one file
python pku_concat.py
```
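The downsampling step fixes the temporal length at 64. A uniform-sampling sketch of the idea (the repo's `preprocess_pku.py` may pad short clips or sample differently; this is an illustration, not its actual code):

```python
import numpy as np

def downsample_frames(ske: np.ndarray, target_len: int = 64) -> np.ndarray:
    """Uniformly pick `target_len` frame indices from a (T, ...) sequence.
    Assumes T >= 1; indices repeat when T < target_len."""
    num_frames = ske.shape[0]
    idx = np.linspace(0, num_frames - 1, target_len).round().astype(int)
    return ske[idx]
```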
If you would like to train Shift-GCN yourself, you may follow the procedure below:

- For the NTU RGB+D 60 dataset (55/5 split):

```shell
cd Pretrain_Shift_GCN
python main.py --config config/ntu60_xsub_seen55_unseen5.yaml
```

- For the PKU-MMD I dataset (46/5 split):

```shell
cd Pretrain_Shift_GCN
python main.py --config config/pkuv1_xsub_seen46_unseen5.yaml
```
For your convenience, pretrained weights for the Shift-GCN encoder are available for download from BaiduDisk or Google Drive, in case you’d prefer not to train it from scratch.
- For the NTU RGB+D 60 dataset (55/5 split):

```shell
python main_match.py --config config/ntu60_xsub_55_5split/joint_shiftgcn_ViTL14@336px_match.yaml
```

- For the PKU-MMD I dataset (46/5 split):

```shell
python main_match.py --config config/pkuv1_xsub_46_5split/joint_shiftgcn_ViTL14@336px_match.yaml
```
This repo is based on CTR-GCN, GAP, and STAR. The data processing is borrowed from CTR-GCN, AimCLR, and STAR.
Thanks to the original authors for their work!
Please cite this work if you find it useful:
```bibtex
@inproceedings{chen2025neuron,
  title={Neuron: Learning context-aware evolving representations for zero-shot skeleton action recognition},
  author={Chen, Yang and Guo, Jingcai and Guo, Song and Tao, Dacheng},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={8721--8730},
  year={2025}
}
```