Efficient datasets and transforms for skeleton data
$ pip install torch_skeleton
https://torch-skeleton.readthedocs.io/en/latest/index.html#
Download and load a raw dataset with preprocessing
from torch_skeleton.datasets import NTU
import torch_skeleton.transforms as T

# download the NTU skeleton dataset and preprocess each sample
ntu = NTU(
    root="data",
    num_classes=60,
    eval_type="subject",
    split="train",
    transform=T.Compose([
        T.Denoise(),
        T.CenterJoint(),
        T.SplitFrames(),
    ]),
)

x, y = ntu[0]
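T.Compose presumably chains the listed transforms, feeding each one's output into the next. A minimal sketch of that pattern (the Compose class below is a hypothetical stand-in, not torch_skeleton's implementation):

```python
class Compose:
    """Chain transforms: each transform's output feeds the next one."""

    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, x):
        for transform in self.transforms:
            x = transform(x)
        return x


# toy "skeleton": a list of per-frame joint coordinates (illustrative only)
center = lambda frames: [[v - frames[0][0] for v in f] for f in frames]
scale = lambda frames: [[v * 2 for v in f] for f in frames]

pipeline = Compose([center, scale])
print(pipeline([[1.0, 2.0], [3.0, 4.0]]))  # [[0.0, 2.0], [4.0, 6.0]]
```

Order matters: here centering runs before scaling, just as Denoise runs before CenterJoint above.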
Cache preprocessed samples to disk
from torch_skeleton.datasets import DiskCache

# cache preprocessed samples to disk so the transforms run only once
cache = DiskCache(root="data/NTU", dataset=ntu)

x, y = cache[0]
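DiskCache wraps a dataset so that expensive preprocessing runs only on the first access; later accesses load the stored result from disk. A rough sketch of that caching pattern, under assumptions about the behavior (this is not torch_skeleton's actual implementation):

```python
import os
import pickle


class DiskCacheSketch:
    """Cache a dataset's samples to disk on first access (illustrative only)."""

    def __init__(self, root, dataset):
        self.root = root
        self.dataset = dataset
        os.makedirs(root, exist_ok=True)

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, index):
        path = os.path.join(self.root, f"{index}.pkl")
        if os.path.exists(path):  # cache hit: load the stored sample
            with open(path, "rb") as f:
                return pickle.load(f)
        sample = self.dataset[index]  # cache miss: compute, then store
        with open(path, "wb") as f:
            pickle.dump(sample, f)
        return sample
```

On the second access a sample is read from disk instead of re-running the preprocessing transforms, which is why caching belongs after deterministic preprocessing but before random augmentations.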
Apply augmentations to a dataset
from torch_skeleton.datasets import Apply

# apply augmentations on the fly each time a sample is accessed
augmented = Apply(
    dataset=cache,
    transform=T.Compose([
        T.SampleFrames(num_frames=20),
        T.RandomRotate(degrees=17),
        T.PadFrames(max_frames=20),
    ]),
)

x, y = augmented[0]
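SampleFrames and PadFrames together normalize every clip to a fixed length: clips longer than num_frames are subsampled, and shorter ones are padded up to max_frames. A sketch of that behavior under those assumptions (the uniform-stride sampling and repeat-last-frame padding are illustrative choices, not necessarily the library's exact strategy):

```python
def sample_frames(frames, num_frames):
    """Uniformly subsample at most num_frames frames from a clip."""
    if len(frames) <= num_frames:
        return list(frames)
    step = len(frames) / num_frames
    return [frames[int(i * step)] for i in range(num_frames)]


def pad_frames(frames, max_frames):
    """Repeat the last frame until the clip has max_frames frames (assumed strategy)."""
    padded = list(frames)
    while len(padded) < max_frames:
        padded.append(padded[-1])
    return padded


clip = [[float(i)] for i in range(50)]  # 50 frames, one joint coordinate each
fixed = pad_frames(sample_frames(clip, 20), 20)
print(len(fixed))  # 20
```

Fixing the length this way lets variable-length skeleton sequences be batched into a single tensor for training.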
Example training code using torch_skeleton is available under the examples directory.
Supported models:
- SGN
torch_skeleton was created by Chanhyuk Jung. It is licensed under the terms of the MIT license.