An Investigation of Group versus Individual Fairness in Perceptually Fair Speech Emotion Recognition
Abstract
Speech emotion recognition (SER) has been extensively integrated into voice-centric applications. A unique fairness issue in SER stems from the naturally biased labels given by raters as ground truth. While existing efforts primarily aim to advance SER fairness from a group (i.e., gender) fairness standpoint, our analysis reveals that label biases arising from individual raters also persist and require equal attention. Our work presents a systematic analysis of the effect of enhanced group (gender) fairness on individual fairness. Specifically, by evaluating two datasets, we demonstrate that a trade-off exists between group and individual fairness when removing group information. Moreover, our results indicate that achieving group fairness diminishes individual fairness, particularly when the attribute distributions of the two groups are significantly distant. This work brings initial insights into issues of group and individual fairness in SER systems.
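To make the two fairness notions discussed above concrete, the sketch below shows one way the trade-off could be probed numerically: a group-fairness gap measured as the accuracy difference between gender groups, and an individual-fairness spread measured as how unevenly the model agrees with individual raters' labels. This is a minimal illustration under assumed metric choices; the function names, metrics, and synthetic data are not taken from the paper.

```python
# Illustrative sketch (not the paper's code): contrasting a group-level
# fairness gap with a rater-level (individual) fairness spread for an
# SER model's predictions. All metric definitions here are assumptions.
import numpy as np


def group_fairness_gap(y_true, y_pred, group):
    """Absolute accuracy difference between demographic groups (e.g., gender)."""
    accs = [np.mean(y_pred[group == g] == y_true[group == g])
            for g in np.unique(group)]
    return float(max(accs) - min(accs))


def individual_fairness_spread(rater_labels, y_pred):
    """Spread of per-rater agreement with the model's predictions.

    rater_labels: dict mapping rater id -> label array (one label per utterance).
    A large spread means the model tracks some raters far more than others.
    """
    agreements = np.array([np.mean(y_pred == labels)
                           for labels in rater_labels.values()])
    return float(agreements.max() - agreements.min())


if __name__ == "__main__":
    # Synthetic example with 4 emotion classes, 2 gender groups, 3 raters.
    rng = np.random.default_rng(0)
    n = 200
    y_true = rng.integers(0, 4, size=n)
    y_pred = np.where(rng.random(n) < 0.7, y_true, rng.integers(0, 4, size=n))
    gender = rng.integers(0, 2, size=n)
    raters = {f"rater_{i}": np.where(rng.random(n) < 0.8, y_true,
                                     rng.integers(0, 4, size=n))
              for i in range(3)}

    print("group fairness gap:", group_fairness_gap(y_true, y_pred, gender))
    print("individual fairness spread:",
          individual_fairness_spread(raters, y_pred))
```

In this framing, a debiasing step that shrinks the group gap could still leave (or enlarge) the per-rater spread, which is the kind of tension the abstract describes.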