Cross Corpus Physiological-based Emotion Recognition Using a Learnable Visual Semantic Graph Convolutional Network
Abstract
Affective media videos have been used as stimuli to investigate an individual's affective-physiological responses. In this study, we aim to develop a network learning strategy for robust cross-corpus emotion recognition using physiological features jointly with affective video content. Specifically, we present a novel framework, the Visual Semantic Graph Learning Convolutional Network (VGLCN), for recognizing an individual's emotional state from physiology in transfer learning tasks. The video stimulus content is integrated into a learnable graph structure to weight the importance of physiological signals on the two emotion dimensions, valence and arousal. Furthermore, we evaluate our proposed framework on two public emotion databases with a rigorous cross-validation method, and our model achieves the best unweighted average recall (UAR): 67.9% and 56.9% for arousal, and 79.8% and 70.4% for valence, on the two cross-dataset recognition experiments, respectively. Further analyses reveal that 1) VGLCN is especially effective on the transfer valence binary task, 2) the physiological features (ECG, EDA) are highly informative for emotion recognition, and 3) the affective media videos are an important constraint to include in the framework to stabilize recognition performance.
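The core operation described above, a graph convolution over physiological features whose adjacency weights are learnable (in VGLCN, informed by video semantics), can be sketched as follows. This is a minimal illustration of a standard GCN layer with a symmetric learnable adjacency, not the paper's actual implementation; all names, shapes, and the toy data are assumptions.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalize A + I: D^{-1/2} (A + I) D^{-1/2} (Kipf-style GCN)."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def graph_conv(H, A, W):
    """One graph-convolution layer: ReLU(A_norm @ H @ W)."""
    return np.maximum(normalize_adjacency(A) @ H @ W, 0.0)

# Toy example: 4 physiological-feature nodes (e.g., ECG/EDA statistics),
# each with a 3-dim feature vector. The adjacency entries stand in for
# learnable weights that, in VGLCN, would be driven by video content.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3))          # node features
A = rng.uniform(0.0, 1.0, size=(4, 4))   # hypothetical learnable edge weights
A = (A + A.T) / 2                        # keep the adjacency symmetric
W = rng.standard_normal((3, 2))          # layer parameters
out = graph_conv(H, A, W)
print(out.shape)  # (4, 2)
```

In the full model, the adjacency would be learned jointly with the layer weights (e.g., by gradient descent through the normalization), so the graph itself encodes how strongly each physiological channel matters for valence and arousal predictions.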
Figures
Our proposed Visual Semantic Graph Learning Convolution Network (VGLCN) on transfer learning.
Keywords
affective multimedia | emotion recognition | transfer learning | physiology | graph convolution network
Authors
Woan-Shiuan Chien, Hao-Chun Yang, Chi-Chun Lee
Publication Date
2020/10/12
Conference
ACM Multimedia
MM '20: the 28th ACM International Conference on Multimedia
DOI
10.1145/3394171.3413552
Publisher
ACM