Emotion recognition using a hierarchical binary decision tree approach
Abstract
Automated emotion state tracking is a crucial element in the computational study of human communication behaviors. It is important to design robust and reliable emotion recognition systems that are suitable for real-world applications, both to enhance analytical abilities that support human decision making and to design human–machine interfaces that facilitate efficient communication. We introduce a hierarchical computational structure to recognize emotions. The proposed structure maps an input speech utterance into one of multiple emotion classes through successive layers of binary classifications. The key idea is that the levels in the tree are designed to solve the easiest classification tasks first, allowing us to mitigate error propagation. We evaluated the classification framework using acoustic features on two different emotional databases, the AIBO database and the USC IEMOCAP database. On the AIBO database, we obtain a balanced recall on each of the individual emotion classes using this hierarchical structure, and the average unweighted recall on the evaluation data set improves by 3.37% absolute (8.82% relative) over a Support Vector Machine baseline model. On the USC IEMOCAP database, we obtain an absolute improvement of 7.44% (14.58% relative) over a baseline Support Vector Machine model. The results demonstrate that the presented hierarchical approach is effective for classifying emotional utterances in multiple database contexts.
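To make the cascade concrete, the sketch below illustrates the general idea in Python: a tree of binary classifiers that resolves the easiest distinction first and passes only the remaining utterances down to the next level, so that early errors cannot contaminate later decisions for utterances already classified. This is an illustrative reconstruction, not the paper's implementation: the emotion labels, the split ordering, and the use of scikit-learn's SVC as the per-node classifier are all assumptions for the sake of the example.

```python
# Minimal sketch of a hierarchical binary-decision-tree emotion classifier.
# NOT the authors' code: the label set (0=neutral, 1=angry, 2=happy, 3=sad),
# the split ordering, and the scikit-learn SVC backend are assumptions.
import numpy as np
from sklearn.svm import SVC

class HierarchicalEmotionClassifier:
    """Cascade of binary SVMs; the easiest split is made first so that
    errors propagate as little as possible down the tree."""

    def __init__(self):
        # One binary classifier per tree level; the splits are hypothetical.
        self.level1 = SVC(kernel="rbf")  # neutral vs. emotional (easiest, assumed)
        self.level2 = SVC(kernel="rbf")  # angry vs. {happy, sad}
        self.level3 = SVC(kernel="rbf")  # happy vs. sad (hardest, assumed)

    def fit(self, X, y):
        # Train each level only on the samples that reach it.
        self.level1.fit(X, (y != 0).astype(int))            # neutral vs. rest
        emo = y != 0
        self.level2.fit(X[emo], (y[emo] != 1).astype(int))  # angry vs. rest
        rest = emo & (y != 1)
        self.level3.fit(X[rest], (y[rest] == 3).astype(int))  # sad vs. happy
        return self

    def predict(self, X):
        out = np.zeros(len(X), dtype=int)                   # default: neutral
        is_emo = self.level1.predict(X).astype(bool)
        if is_emo.any():
            idx = np.flatnonzero(is_emo)
            not_angry = self.level2.predict(X[idx]).astype(bool)
            out[idx[~not_angry]] = 1                        # angry
            sub = idx[not_angry]
            if len(sub):                                    # guard: SVC rejects empty input
                is_sad = self.level3.predict(X[sub]).astype(bool)
                out[sub[is_sad]] = 3                        # sad
                out[sub[~is_sad]] = 2                       # happy
        return out
```

In the paper, the ordering of the splits matters: the most separable classes are peeled off at the top of the tree, so the hardest binary decision is made last, on the smallest remaining subset.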
Figures
Left: proposed hierarchical structure for the AIBO database. Right: hierarchical structure for the USC IEMOCAP database.
Keywords
Emotion recognition | Hierarchical structure | Support Vector Machine | Bayesian Logistic Regression
Authors
Chi-Chun Lee
Publication Date
2011/11/01
Journal
Speech Communication
DOI
10.1016/j.specom.2011.06.004
Publisher
Elsevier