Pain versus Affect? An Investigation in the Relationship between Observed Emotional States and Self-Reported Pain
Abstract
Pain is an internal sensation intricately intertwined with an individual's affect states, giving rise to varied multimodal expressive behaviors. Past research has indicated that emotion is an important factor in shaping one's painful experiences and behavioral expressions. In this work, we present a study of the relationship between individual emotional states and self-reported pain levels. Our analyses show a significant correlation between an individual's observed valence state and his or her self-reported pain level. Furthermore, we propose an emotion-enriched multitask network (EEMN) that improves self-reported pain-level recognition by leveraging observer-rated emotional states together with multimodal expressions computed from face and speech. Our framework achieves accuracies of 70.1% and 52.1% in binary and ternary classification, relative improvements of 6.6% and 13% over previous work on the same dataset. Further, our analyses not only show that an individual's valence state is negatively correlated with the reported pain level, but also reveal that asking observers to rate valence may relate more closely to self-reported pain than asking them to rate pain intensity directly.
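As a rough illustration of the multitask idea described above, the sketch below shows a shared encoder with a main head for self-reported NRS pain level and an auxiliary head for observer-rated affect (valence), trained with a weighted sum of the two losses. This is a minimal sketch, not the authors' implementation; feature dimensions, layer widths, class counts, and the loss weight alpha are illustrative assumptions.

```python
# Hypothetical sketch of an emotion-enriched multitask network (EEMN):
# a shared encoder over per-modality features, a main head predicting
# self-reported NRS pain level, and an auxiliary head predicting the
# affect (valence) state. All sizes and weights are assumptions.
import torch
import torch.nn as nn

class EEMN(nn.Module):
    def __init__(self, feat_dim=128, hidden_dim=64,
                 n_pain_classes=3, n_affect_classes=3):
        super().__init__()
        # Shared representation learned from acoustic or facial features.
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # Main task: self-reported NRS pain-level classification.
        self.pain_head = nn.Linear(hidden_dim, n_pain_classes)
        # Auxiliary task: observer-rated affect (valence) state.
        self.affect_head = nn.Linear(hidden_dim, n_affect_classes)

    def forward(self, x):
        z = self.encoder(x)  # shared embedding, later reused for fusion
        return self.pain_head(z), self.affect_head(z), z

def multitask_loss(pain_logits, affect_logits, pain_y, affect_y, alpha=0.3):
    """Weighted sum of main-task and auxiliary-task cross-entropy losses."""
    ce = nn.CrossEntropyLoss()
    return ce(pain_logits, pain_y) + alpha * ce(affect_logits, affect_y)
```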
Figures
The complete architecture of our emotion-enriched multitask network (EEMN) for automatic pain-level recognition: acoustic and facial feature extraction, training of the multitask network with NRS pain levels as the main task and affect state as the auxiliary task, and multimodal fusion of the EEMN embeddings using a support vector classifier.
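The fusion step named in the caption could look roughly like the following: embeddings from the trained acoustic and facial networks are concatenated and passed to a support vector classifier. This is a hypothetical sketch; the variable names and SVC settings are assumptions, not values taken from the paper.

```python
# Hypothetical multimodal fusion: concatenate embeddings from the
# acoustic and facial EEMN branches and classify pain level with an SVC.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fuse_and_classify(acoustic_emb, facial_emb, pain_labels):
    """acoustic_emb, facial_emb: (n_samples, d) embeddings from the trained
    multitask networks; pain_labels: NRS pain-level class labels."""
    fused = np.concatenate([acoustic_emb, facial_emb], axis=1)
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    clf.fit(fused, pain_labels)
    return clf
```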
Authors
Chi-Chun Lee
Publication Date
2019/11/18
Conference
2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)
DOI
10.1109/apsipaasc47483.2019.9023134
Publisher
IEEE