Fusion of multiple emotion perspectives: Improving affect recognition through integrating cross-lingual emotion information
Abstract
Developing cross-corpus, cross-domain, and cross-language emotion recognition algorithms has become more prevalent recently, as it helps ensure the wide applicability of robust emotion recognizers. In this work, we propose a computational framework for fusing multiple emotion perspectives by integrating cross-lingual emotion information. By assuming that each data sample is ‘perceived’ not only from a main perspective but also from additional derived perspectives (obtained from a corpus of a different language), we can combine the perspective-dependent features via a kernel fusion technique. Specifically, we utilize two emotional corpora of different languages (Chinese and English). Our experiments demonstrate that our proposed framework achieves significant improvement over the single-perspective baseline on both databases.
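The abstract describes combining perspective-dependent feature sets through kernel fusion. The paper's exact fusion scheme is not given here, so the following is only a minimal sketch of one common kernel-fusion approach: compute a kernel per perspective, take a weighted sum, and train an SVM on the precomputed fused kernel. All data, feature dimensions, and the weight `w` below are toy assumptions, not values from the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

# Toy data: two "perspectives" (feature views) of the same utterances.
rng = np.random.default_rng(0)
n = 60
X_main = rng.normal(size=(n, 10))   # features from the main (native-language) perspective
X_cross = rng.normal(size=(n, 8))   # features derived from the cross-lingual perspective
y = rng.integers(0, 2, size=n)      # toy binary emotion labels

# One kernel per perspective, fused by weighted summation.
K_main = rbf_kernel(X_main)
K_cross = rbf_kernel(X_cross)
w = 0.5                             # fusion weight; a hyperparameter in practice
K_fused = w * K_main + (1 - w) * K_cross

# Train an SVM on the precomputed fused kernel.
clf = SVC(kernel="precomputed").fit(K_fused, y)
train_acc = clf.score(K_fused, y)
print(round(train_acc, 2))
```

A weighted sum of valid kernels is itself a valid kernel, which is why this simple fusion is safe to feed to a kernel machine; the weight is typically tuned on held-out data.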
Figures
Illustration on the system architecture of our proposed multiple emotion perspective fusion framework.
(Left) the CIT database; (Right) the NTUA database.
Keywords
speech emotion recognition | cross language | multi-task learning | affective computing
Authors
Chun-Min Chang Chi-Chun Lee
Publication Date
2017/03/05
Conference
2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI
10.1109/icassp.2017.7953272
Publisher
IEEE