Multi-DGI: Multi-head Pooling Deep Graph Infomax for Human Activity Recognition
Abstract

Human Activity Recognition (HAR) is a crucial research domain with substantial real-world implications. Despite the extensive application of machine learning techniques across domains, most traditional models neglect the inherent spatio-temporal relationships within time-series data. To address this limitation, we propose an unsupervised Graph Representation Learning (GRL) model named Multi-head Pooling Deep Graph Infomax (Multi-DGI), which reveals spatio-temporal patterns in graph-structured HAR data. By employing an adaptive Multi-head Pooling mechanism, Multi-DGI captures comprehensive graph summaries and furnishes general-purpose embeddings for downstream classifiers, thereby reducing dependence on any particular graph construction. On the UCI WISDM dataset, using three basic graph construction methods, Multi-DGI delivers improvements of at least 2.9%, 1.0%, 7.5%, and 6.4% in Accuracy, Precision, Recall, and Macro-F1, respectively. Multi-DGI's robustness in extracting intricate patterns from rudimentary graphs reduces GRL's reliance on high-quality graphs, thereby broadening its applicability in time-series analysis. Our code and data are available at https://github.com/AnguoCYF/Multi-DGI
Keywords: Human Activity Recognition, Time-series Analysis, Spatio-temporal Relationships, Graph Representation Learning
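The exact Multi-head Pooling operator is defined in this repository's source; as a rough, framework-free illustration of the idea described in the abstract, each head can score every node, softmax those scores into attention weights, pool the node embeddings into one summary vector per head, and concatenate the heads into a single graph summary. All names, shapes, and the scoring scheme below are hypothetical, not the paper's implementation:

```python
import numpy as np

def multi_head_pooling(H, W_heads):
    """Sketch of a multi-head attention readout.

    H        : (num_nodes, dim) node embeddings.
    W_heads  : list of (dim,) per-head scoring vectors (hypothetical).
    Returns a (num_heads * dim,) concatenated graph summary.
    """
    summaries = []
    for w in W_heads:
        scores = H @ w                        # per-node attention logits
        alpha = np.exp(scores - scores.max()) # numerically stable softmax
        alpha /= alpha.sum()                  # weights sum to 1 over nodes
        summaries.append(alpha @ H)           # attention-weighted pooling -> (dim,)
    return np.concatenate(summaries)

rng = np.random.default_rng(0)
H = rng.standard_normal((5, 8))               # 5 nodes, 8-dim embeddings
W = [rng.standard_normal(8) for _ in range(4)]  # 4 pooling heads
summary = multi_head_pooling(H, W)
print(summary.shape)  # (32,) = 4 heads x 8 dims
```

In a DGI-style objective, such a summary vector would serve as the global context that node embeddings are contrasted against; see the repository code for the actual operator and training loop.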
If you find this code helpful, please cite our work. Thank you!
Chen, Y., Zhu, H., & Chen, Z. (2024). Multi-DGI: Multi-head Pooling Deep Graph Infomax for Human Activity Recognition. Mobile Networks and Applications. https://doi.org/10.1007/s11036-024-02306-y