A bootstrapped multi-view weighted kernel fusion framework for cross-corpus integration of multimodal emotion recognition
Abstract
Recently, the development of robust emotion recognition has been increasingly emphasized in order to handle situations involving different cultures and languages. This has become critical due to the potential applicability of emotion recognizers across a wide range of application scenarios. Instead of the conventional approach of deriving a single universal emotion recognition module across all languages, we have previously demonstrated a method that integrates useful information from other databases to improve emotion recognition on the current data by fusing multiple emotion perspectives. In this paper, we present an improved framework, a bootstrapped multi-view weighted kernel fusion, to further advance recognition accuracy. We have also extended the modeling from the speech-only modality to include video information. Specifically, we utilize two emotional corpora of different languages. Our proposed framework obtains improved recognition in regressing activation and valence attributes using audio and video modalities across both databases. We not only demonstrate that weighted kernel fusion provides additional modeling power but also present analyses of the complementary emotionally-relevant acoustic and visual behaviors computed from the multiple emotion perspectives.
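The weighted kernel fusion described above can be sketched as a convex combination of per-view kernel matrices followed by kernel ridge regression. The sketch below is a minimal illustration under assumed choices, not the paper's exact formulation: the RBF kernel, the function names, and the fixed per-view weights are all assumptions (in practice the weights would be learned or validated per corpus).

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel between rows of X and rows of Y (assumed kernel choice)."""
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-gamma * sq_dists)

def fused_kernel(views_a, views_b, weights, gamma=1.0):
    """Convex combination of per-view kernels: K = sum_v w_v * K_v."""
    return sum(w * rbf_kernel(Xa, Xb, gamma)
               for w, Xa, Xb in zip(weights, views_a, views_b))

def kernel_ridge_fit(K, y, lam=1e-2):
    """Solve (K + lam*I) alpha = y for the dual coefficients."""
    n = K.shape[0]
    return np.linalg.solve(K + lam * np.eye(n), y)

# Toy usage with two hypothetical "views" (e.g., audio- and
# video-derived feature matrices) and synthetic ratings.
rng = np.random.default_rng(0)
X_audio = rng.normal(size=(50, 10))
X_video = rng.normal(size=(50, 8))
y = rng.normal(size=50)          # stand-in for activation ratings

weights = [0.6, 0.4]             # illustrative per-view weights
K_train = fused_kernel([X_audio, X_video], [X_audio, X_video], weights)
alpha = kernel_ridge_fit(K_train, y)
y_hat = K_train @ alpha          # in-sample predictions
```

Because each per-view RBF kernel is positive semi-definite, any non-negative weighted sum remains a valid kernel, which is what makes this kind of fusion well-posed for kernel regression.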
Figures
Illustration of the system architecture of our proposed multiple emotion perspective fusion framework.
A snapshot of the two databases used: (left) the CIT database; (right) the NNIME database.
Keywords
emotion recognition | sensor fusion | speech recognition | video signal processing
Authors
Chun-Min Chang Bo-Hao Su Jeng-Lin Li Chi-Chun Lee
Publication Date
2017/10/23
Conference
2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII)
DOI
10.1109/acii.2017.8273627
Publisher
IEEE