Contrasting Global and Local Representations for Human Activity Recognition using Graph Neural Networks
| Authors | |
|---|---|
| Publication date | 2025 |
| Book title | The 40th Annual ACM Symposium on Applied Computing |
| Book subtitle | March 31 - April 4, 2025 |
| ISBN (electronic) | |
| Event | 40th Annual ACM Symposium on Applied Computing, SAC 2025 |
| Pages (from-to) | 630-637 |
| Publisher | New York, NY: Association for Computing Machinery |
| Organisations | |
| Abstract | Human Activity Recognition has achieved notable improvements with the emergence of Deep Learning models for automated feature extraction. These models can extract complex translation-invariant features and exploit the temporal dependencies in sensors' time-series data. This work posits additional dependencies between sensors beyond the time dimension, such as physical proximity, which are equally important for characterizing human activities. We leverage such spatial dependencies by modeling them as a graph. Using Graph Neural Networks (GNNs), we learn global and local representations of the intra- and inter-sensor dependencies. We empirically show that maximizing the mutual information between the local and global representations significantly improves the performance of the recognition models. Our results show a clear improvement over previous works based on CNNs, LSTMs, attention-based models, and other more complex GNN-based architectures. Our source code is available at: https://github.com/atello/GNNs4HAR. |
| Document type | Conference contribution |
| Language | English |
| Published at | https://doi.org/10.1145/3672608.3707743 |
| Other links | https://github.com/atello/GNNs4HAR, https://www.scopus.com/pages/publications/105006435927 |
| Downloads | Contrasting Global and Local Representations for Human Activity Recognition (final published version) |
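The abstract's core idea, maximizing mutual information between per-sensor (local) and graph-level (global) representations, can be illustrated with a minimal numpy sketch in the style of Deep Graph Infomax. Everything here is an assumption for illustration: the number of sensors, the chain-shaped sensor graph, the feature sizes, and the single GCN-style layer are invented and do not reproduce the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 body-worn sensors, 8 features each (sizes invented).
num_sensors, feat_dim, hid_dim = 5, 8, 16

# Adjacency encoding spatial dependencies (e.g. physical proximity):
# a simple chain of sensors plus self-loops, symmetrically normalized
# as in a basic GCN layer.
A = np.eye(num_sensors)
for i in range(num_sensors - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))

W = rng.normal(scale=0.1, size=(feat_dim, hid_dim))

def encode(X):
    """One GCN-style layer: propagate over the sensor graph, then ReLU."""
    return np.maximum(A_hat @ X @ W, 0.0)

X = rng.normal(size=(num_sensors, feat_dim))
H_local = encode(X)               # local (per-sensor) representations
h_global = H_local.mean(axis=0)   # global representation via mean readout

# Corrupted view: shuffle sensor features to build negative pairs.
X_neg = X[rng.permutation(num_sensors)]
H_neg = encode(X_neg)

def score(H, s):
    """Discriminator: dot product with the global summary, squashed to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(H @ s)))

# Binary cross-entropy acting as a mutual-information lower bound:
# real local/global pairs should score high, corrupted pairs low.
eps = 1e-9
pos = score(H_local, h_global)
neg = score(H_neg, h_global)
mi_loss = -(np.log(pos + eps).mean() + np.log(1.0 - neg + eps).mean())
print(f"DGI-style contrastive loss: {mi_loss:.4f}")
```

Minimizing `mi_loss` with a gradient-based optimizer would push local sensor embeddings to be predictive of the graph-level summary, which is the contrastive objective the abstract describes; in practice this would run inside a deep-learning framework rather than raw numpy.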
