Learning with Rater-Expanded Label Space to Improve Speech Emotion Recognition
Abstract
Automatic sensing of emotional information in speech is important for numerous everyday applications. Conventional Speech Emotion Recognition (SER) models are trained on the average or consensus of human annotations, but emotions and raters' interpretations are inherently subjective, producing substantial variation in perceived emotion. To address this, our proposed approach integrates rater subjectivity by forming Perception-Coherent Clusters (PCC) of raters, which are used to derive an expanded label space for learning to improve SER. We evaluate our method on the IEMOCAP and MSP-Podcast corpora, covering scenarios with fixed and variable rater pools, respectively. The proposed architecture, Rater Perception Coherency (RPC)-based SER, outperforms single-task models trained on consensus labels, achieving unweighted average recall (UAR) improvements of 3.39% on IEMOCAP and 2.03% on MSP-Podcast. Further analysis provides comprehensive insights into the contributions of these perception-coherent clusters to SER learning.
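The core idea of forming perception-coherent rater clusters and deriving per-cluster labels can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's exact algorithm: raters are clustered with a plain k-means over one-hot profiles of their labelling behaviour, and each cluster's majority vote yields one label track of the expanded label space. All function names, the clustering choice, and the majority-vote consensus are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def rater_clusters(ratings, n_clusters=2, n_iter=20, seed=0):
    """Cluster raters by the similarity of their labelling behaviour.

    ratings: (n_raters, n_items) array of integer emotion labels.
    Returns a cluster index per rater. Uses a simple k-means over
    one-hot-encoded rating profiles (an illustrative stand-in for the
    paper's perception-coherence criterion).
    """
    n_raters, _ = ratings.shape
    n_classes = int(ratings.max()) + 1
    # One-hot profile per rater: (n_raters, n_items * n_classes)
    profiles = np.eye(n_classes)[ratings].reshape(n_raters, -1)
    rng = np.random.default_rng(seed)
    centers = profiles[rng.choice(n_raters, n_clusters, replace=False)]
    for _ in range(n_iter):
        # Assign each rater to the nearest cluster centre
        dists = ((profiles[:, None] - centers[None]) ** 2).sum(-1)
        assign = dists.argmin(1)
        # Recompute centres from the assigned rater profiles
        for k in range(n_clusters):
            if (assign == k).any():
                centers[k] = profiles[assign == k].mean(0)
    return assign

def cluster_labels(ratings, assign, n_clusters):
    """Majority vote within each rater cluster: one label track per
    cluster, together forming the expanded label space."""
    tracks = []
    for k in range(n_clusters):
        sub = ratings[assign == k]
        tracks.append([Counter(col).most_common(1)[0][0] for col in sub.T])
    return tracks
```

In a multi-task setup along these lines, each cluster's label track would supervise its own output head alongside the consensus-label head, letting the model retain the systematic perceptual differences that plain label averaging discards.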
Authors
Publisher
IEEE Transactions on Affective Computing