Abstract
Automatic sensing of emotional information in speech is important for numerous everyday applications. Conventional Speech Emotion Recognition (SER) models are trained on the average or consensus of human annotations, but emotions and raters' interpretations are inherently subjective, leading to wide variation in perception. To address this, our proposed approach integrates rater subjectivity by forming Perception-Coherent Clusters (PCC) of raters, which are used to derive an expanded label space for learning and thereby improve SER. We evaluate our method on the IEMOCAP and MSP-Podcast corpora, covering fixed-rater and variable-rater scenarios, respectively. The proposed architecture, Rater Perception Coherency (RPC)-based SER, surpasses single-task models trained with consensus labels, achieving UAR improvements of 3.39% on IEMOCAP and 2.03% on MSP-Podcast. Further analysis provides comprehensive insights into the contributions of these perception-coherent clusters to SER learning.