
Annotated skeletal datasets [initial release]

Pre-release
@matyasbohacek released this 09 Dec 17:03

SPOTER operates on sequences of signers' skeletal data extracted from video. To avoid the computational cost of re-running this annotation for every training run, we pre-collected the data once. For this reason, and for reproducibility, we are open-sourcing the data alongside the code.
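The exact on-disk layout is not described in these notes, so the following is only an illustrative sketch: it assumes (hypothetically) one CSV file per clip, one row per frame, and paired `<joint>_X` / `<joint>_Y` coordinate columns. Adjust the column naming and paths to match the files actually shipped with this release.

```python
# Hypothetical loading sketch for pre-collected skeletal sequences.
# The CSV layout and column naming below are assumptions, not the
# release's documented format.
import numpy as np
import pandas as pd


def load_skeletal_sequence(csv_path: str) -> np.ndarray:
    """Return a keypoint array of shape (num_frames, num_joints, 2)."""
    df = pd.read_csv(csv_path)
    x_cols = sorted(c for c in df.columns if c.endswith("_X"))
    y_cols = [c[:-2] + "_Y" for c in x_cols]  # matching Y column per joint
    xs = df[x_cols].to_numpy(dtype=np.float32)
    ys = df[y_cols].to_numpy(dtype=np.float32)
    return np.stack([xs, ys], axis=-1)


# Example (hypothetical path):
# sequence = load_skeletal_sequence("wlasl100_train/clip_0001.csv")
# sequence.shape -> (num_frames, num_joints, 2)
```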

This data is shared under the Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license, which permits non-commercial use only.

We used the WLASL100 and LSA64 datasets in our experiments. Their corresponding citations are below:

@inproceedings{li2020word,
    title={Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison},
    author={Li, Dongxu and Rodriguez, Cristian and Yu, Xin and Li, Hongdong},
    booktitle={The IEEE Winter Conference on Applications of Computer Vision},
    pages={1459--1469},
    year={2020}
}
@inproceedings{ronchetti2016lsa64,
    title={LSA64: an Argentinian sign language dataset},
    author={Ronchetti, Franco and Quiroga, Facundo and Estrebou, C{\'e}sar Armando and Lanzarini, Laura Cristina and Rosete, Alejandro},
    booktitle={XXII Congreso Argentino de Ciencias de la Computaci{\'o}n (CACIC 2016)},
    year={2016}
}