A Media-Guided Attentive Graphical Network for Personality Recognition Using Physiology
Abstract
Automatic personality recognition from physiology has largely been developed to model an individual's personality traits from a variety of signals. However, few studies have tackled the problem of integrating multiple observations into a single personality prediction. In this study, we focus on a novel learning architecture that models personality traits under a many-to-one scenario. We propose to integrate not only information about the user but also the effect of the affective multimedia stimulus. Specifically, we present a novel Acoustic-Visual Guided Attentive Graph Convolutional Network for enhanced personality recognition. The emotional multimedia content guides the organization of the physiological responses into a graph-like structure that captures the latent inter-correlations among all responses to the affective multimedia. These graphs are then processed by a Graph Convolutional Network (GCN) to jointly model the instance and inter-correlation levels of the subject's responses. We show that our model outperforms the current state of the art on two large public corpora for personality recognition. Further analysis reveals that a multimedia preference indeed exists for inferring personality from physiology, and that several frequency-domain descriptors of the ECG and the tonic component of the EDA are robust features for automatic personality recognition.
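To make the core mechanism concrete, the following is a minimal NumPy sketch, not the authors' implementation: a single GCN layer applied to per-response physiological feature vectors, where the adjacency matrix is built from the pairwise similarity of (hypothetical) acoustic-visual stimulus embeddings, standing in for the paper's media-guided graph construction. All dimensions, the similarity-based adjacency, and the random data are illustrative assumptions.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: ReLU(D^-1/2 (A+I) D^-1/2 @ H @ W)."""
    a = adj + np.eye(adj.shape[0])          # add self-loops
    d = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    a_norm = d @ a @ d                      # symmetric normalization
    return np.maximum(a_norm @ feats @ weight, 0.0)

rng = np.random.default_rng(0)

# Hypothetical toy setup: 4 physiological responses, 3-dim features each
feats = rng.standard_normal((4, 3))

# Media-guided adjacency: cosine similarity between (hypothetical)
# 5-dim acoustic-visual embeddings of the stimuli each response was recorded under
stim = rng.standard_normal((4, 5))
stim_unit = stim / np.linalg.norm(stim, axis=1, keepdims=True)
adj = np.clip(stim_unit @ stim_unit.T, 0.0, None)  # keep non-negative edges
np.fill_diagonal(adj, 0.0)                          # self-loops added in the layer

weight = rng.standard_normal((3, 2))  # project to a 2-dim hidden space
out = gcn_layer(adj, feats, weight)
print(out.shape)  # (4, 2): one hidden vector per response
```

In the paper's many-to-one setting, node representations like `out` would subsequently be pooled (e.g. via attention) into a single vector per subject for the trait prediction; that step is omitted here.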
Figures
Our proposed Acoustic-Visual Guided Attentive Graph Convolutional Network.
Keywords
affective multimedia | personality recognition | physiology | graph convolution network
Authors
Publication Date
2021/06/23
Journal
IEEE Transactions on Affective Computing
DOI
10.1109/taffc.2021.3090040
Publisher
IEEE