Movement representation learning for pain level classification

journal contribution
posted on 2024-06-25, 11:53 authored by Temitayo Olugbade, Amanda C de C Williams, Nicolas Gold, Nadia Bianchi-Berthouze

Self-supervised learning has shown value for uncovering informative movement features for human activity recognition. However, there has been minimal exploration of this approach for affect recognition, where the availability of large labelled datasets is particularly limited. In this paper, we propose a P-STEMR (Parallel Space-Time Encoding Movement Representation) architecture with the aim of addressing this gap, specifically by leveraging the higher availability of human activity recognition datasets for pain-level classification. We evaluated and analyzed the architecture using three different datasets across four sets of experiments. We found a statistically significant increase in average F1 score, to 0.84, for two-class pain-level classification based on the architecture compared with the use of hand-crafted features. This suggests that the architecture is capable of learning movement representations and transferring them from activity recognition, based on data captured in lab settings, to classification of pain levels from messier real-world data. We further found that the efficacy of transfer between datasets can be undermined by dissimilarities between population groups, due to impairments that affect movement behaviour, and between motion primitives (e.g. rotation versus flexion). Future work should investigate how the effect of these differences could be minimized so that data from healthy people can be more valuable for transfer learning.
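To illustrate the kind of architecture the abstract describes, the sketch below pairs two parallel branches, one over space (joint features within a frame) and one over time (frames in a sequence), and reuses the resulting encoder with a fresh head for two-class pain-level classification. The paper does not publish its code; the module names, layer sizes, joint count, and transfer step here are illustrative assumptions only, not the authors' implementation.

    # Hypothetical sketch of a parallel space-time encoder for movement
    # sequences, loosely inspired by the P-STEMR idea in the abstract.
    # All layer sizes and names below are assumptions for illustration.
    import torch
    import torch.nn as nn

    class ParallelSpaceTimeEncoder(nn.Module):
        """Encodes a (batch, time, features) movement sequence with two
        parallel branches: a frame-wise spatial MLP and a temporal GRU."""

        def __init__(self, input_dim: int, hidden_dim: int = 64):
            super().__init__()
            # Spatial branch: per-frame MLP over joint coordinates.
            self.space = nn.Sequential(
                nn.Linear(input_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim),
            )
            # Temporal branch: GRU over the frame sequence.
            self.time = nn.GRU(input_dim, hidden_dim, batch_first=True)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, time, input_dim)
            space_feat = self.space(x).mean(dim=1)   # pool spatial features over time
            _, h = self.time(x)                      # final hidden state of the GRU
            time_feat = h.squeeze(0)
            return torch.cat([space_feat, time_feat], dim=-1)

    # Pretrain the encoder on an activity-recognition dataset (more labels),
    # then reuse it with a new head for two-class pain-level classification.
    encoder = ParallelSpaceTimeEncoder(input_dim=66)  # e.g. 22 joints x 3 coords (assumed)
    activity_head = nn.Linear(128, 10)                # 10 activity classes (assumed)
    pain_head = nn.Linear(128, 2)                     # low vs. high pain

    x = torch.randn(8, 100, 66)                       # dummy batch: 8 sequences, 100 frames
    features = encoder(x)
    pain_logits = pain_head(features.detach())        # transfer with frozen encoder features
    print(pain_logits.shape)                          # torch.Size([8, 2])

In this toy setup the encoder would first be trained with the activity head (or a self-supervised objective) on lab-captured activity data, after which only the pain head is fitted on the smaller pain dataset; whether the encoder is frozen or fine-tuned is a design choice the sketch leaves open.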

History

Publication status

  • Published

File Version

  • Accepted version

Journal

IEEE Transactions on Affective Computing

ISSN

2371-9850

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Page range

1-12

Department affiliated with

  • Informatics Publications

Institution

University of Sussex

Full text available

  • Yes

Peer reviewed?

  • Yes