Exploiting Co-occurrence Frequency of Emotions in Perceptual Evaluations To Train A Speech Emotion Classifier
Abstract
Previous studies on speech emotion recognition (SER) with categorical emotions have often formulated the task as a single-label classification problem, where the emotions are considered orthogonal to each other. However, previous studies have indicated that emotions can co-occur, especially in more ambiguous emotional sentences (e.g., a mixture of happiness and surprise). Some studies have regarded SER as a multi-label task, predicting multiple emotional classes. However, this formulation does not leverage the relation between emotions during training, since the emotions are assumed to be independent. This study explores the idea that emotional classes are not necessarily independent and its implications for training SER models. In particular, we calculate the frequency of co-occurring emotions from perceptual evaluations in the training set to generate a matrix with class-dependent penalties, punishing mistakes between distant emotional classes more heavily. We integrate the penalization matrix into three existing label-learning approaches (hard-label, multi-label, and distribution-label learning) using the proposed modified loss. We train SER models using the penalty loss and commonly used cost functions for SER tasks. The evaluation of our proposed penalization matrix on the MSP-Podcast corpus shows important relative improvements in macro F1-score for hard-label learning (17.12%), multi-label learning (12.79%), and distribution-label learning (25.8%).
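The sketch below illustrates one plausible way to build such a co-occurrence-based penalization matrix and fold it into a classification loss. It assumes per-utterance annotator vote counts as input; the variable names (votes, penalty_t), the normalization, and the exact form of the penalized loss are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
import torch
import torch.nn.functional as F

# Hypothetical perceptual evaluations: one row per utterance, one column per
# emotion class; each entry is the number of evaluators who selected that emotion.
votes = np.array([
    [3, 1, 0, 0],   # e.g., mostly "happy" with some "surprise"
    [0, 2, 2, 0],
    [1, 0, 0, 3],
])
num_classes = votes.shape[1]

# Count how often two emotions are selected together for the same utterance.
cooccur = np.zeros((num_classes, num_classes))
for row in votes:
    present = np.flatnonzero(row)          # emotions chosen by at least one evaluator
    for i in present:
        for j in present:
            cooccur[i, j] += 1

# Row-normalize to co-occurrence frequencies, then turn them into penalties:
# emotions that rarely co-occur with the true class receive a larger penalty.
freq = cooccur / np.maximum(cooccur.sum(axis=1, keepdims=True), 1)
penalty_t = torch.tensor(1.0 - freq, dtype=torch.float32)

def penalized_ce(logits, target):
    """Cross-entropy plus a term that charges more for probability mass placed on
    classes that rarely co-occur with the true class (a sketch, not the paper's loss)."""
    log_probs = F.log_softmax(logits, dim=-1)
    ce = F.nll_loss(log_probs, target)
    # penalty_t[target] selects the penalty row of each sample's true class.
    extra = (penalty_t[target] * log_probs.exp()).sum(dim=-1).mean()
    return ce + extra

# Example usage with random logits for a batch of two utterances.
loss = penalized_ce(torch.randn(2, num_classes), torch.tensor([0, 3]))
```

In this toy formulation the diagonal of the penalty matrix carries the smallest values, so confident correct predictions are barely affected, while confusions between emotions that never co-occur in the annotations are penalized the most.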
Figures
The figure shows the procedure to create the penalization matrix
Keywords
Speech emotion recognition | emotion co-occurrence | multi-class classification | multi-label classification
Authors
Huang-Cheng Chou, Chi-Chun Lee
Publication Date
2022/09/18
Conference
Interspeech 2022
DOI
10.21437/Interspeech.2022-11041
Publisher
ISCA