
PROJECTS

Finished Projects
A Robust Multimodal Emotion Modeling System for Cross-Domain Applications
01 Aug 2014 – Jul 2017
Principal Investigator: Chi-Chun Lee
Emotion is a fundamental attribute governing the underlying mechanisms of human expressive behavior. A computational framework for understanding human emotion can benefit both the continuing development of natural human-computer interfaces (HCI) and, further, contribute significantly to behavioral signal processing (BSP), where technologies are developed to facilitate progress across domains of behavioral science. The proposal addresses a core obstacle hindering a widely applicable framework for automatically assessing a person's emotional state through behavior modeling: robust multimodal emotion modeling for cross-domain applications. It lays out a three-year research plan toward a statistical emotion modeling framework that mitigates the dual uncertainties, i.e., database idiosyncrasies coupled with coding-standard variations, that arise when recognizing emotions across domains.
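To make the cross-domain setting concrete, here is a minimal leave-one-corpus-out sketch: a classifier is trained on all but one corpus and tested on the held-out one, so the score gap exposes the database idiosyncrasies described above. The corpora are synthetic stand-ins with an artificial domain offset, scikit-learn and NumPy are assumed dependencies, and the logistic-regression classifier is only a placeholder for the proposal's statistical models.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def make_corpus(shift, n=300, dim=20, n_classes=4):
    """Synthetic stand-in for an emotion corpus: class-dependent
    features plus a corpus-specific offset mimicking database
    idiosyncrasies (recording setup, elicitation style, etc.)."""
    y = rng.integers(0, n_classes, size=n)
    X = rng.normal(size=(n, dim)) + 0.5 * y[:, None] + shift
    return X, y

# Three hypothetical corpora with increasing domain offsets.
data = {"corpus_A": make_corpus(0.0),
        "corpus_B": make_corpus(1.0),
        "corpus_C": make_corpus(2.0)}

for held_out in data:
    # Train on the other corpora, test on the held-out one; the drop
    # relative to within-corpus accuracy is the cross-domain gap the
    # proposal targets.
    X_train = np.vstack([X for c, (X, _) in data.items() if c != held_out])
    y_train = np.concatenate([y for c, (_, y) in data.items() if c != held_out])
    X_test, y_test = data[held_out]

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X_train, y_train)
    uar = recall_score(y_test, clf.predict(X_test), average="macro")
    print(f"held out {held_out}: UAR = {uar:.3f}")
```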

The proposal involves technical components of behavior feature analysis and extraction, e.g., prosody analysis from speech, text content analysis from transcriptions, and body gesture segmentation from video, together with multimodal fusion of behavior cues for emotion recognition, e.g., coupled hidden Markov models or their variants. With access to multiple existing BSP databases, we plan to build on our experience and continue contributing to the research community by collecting and releasing an interaction-based dataset for studying affective dynamics. The proposal also details an iterative computational procedure for uncovering the underlying primitive emotion states, i.e., an emotion description common across different coding standards, and a statistical model, e.g., a Bayesian network, that jointly models the observed behavior dynamics and the hidden primitive emotion states.
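As a simplified illustration of the fusion step, the sketch below concatenates per-frame prosody and gesture features (early fusion) and fits a single Gaussian HMM whose hidden states stand in for latent emotion dynamics; a coupled HMM has no off-the-shelf implementation here, so this is a substitute rather than the proposal's actual model. hmmlearn is an assumed dependency, and all stream names and dimensions are illustrative.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)
T = 400                              # frames in one interaction segment
prosody = rng.normal(size=(T, 3))    # e.g., pitch, energy, speaking rate
gesture = rng.normal(size=(T, 5))    # e.g., body-motion descriptors

X = np.hstack([prosody, gesture])    # frame-level (early) fusion

# Fit the HMM by EM over the fused observation stream, then decode a
# per-frame hidden-state sequence and report state occupancy.
hmm = GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
hmm.fit(X)
states = hmm.predict(X)
print(np.bincount(states, minlength=4))
```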

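The primitive-emotion-state idea can likewise be caricatured in a few lines: observations from corpora annotated under incompatible coding standards are pooled, a small latent-state model is fit to the behaviors alone, and each corpus's labels are then read off as distributions over the shared latent states. scikit-learn's GaussianMixture stands in for the proposed Bayesian network, and all data and label sets are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Two corpora annotated under different coding standards.
X_a = rng.normal(size=(200, 6))
y_a = rng.choice(["angry", "happy", "sad"], 200)
X_b = rng.normal(size=(200, 6))
y_b = rng.choice(["pos", "neg"], 200)

# Fit shared latent "primitive" states from observations alone.
gmm = GaussianMixture(n_components=3, random_state=0).fit(np.vstack([X_a, X_b]))
z = gmm.predict(np.vstack([X_a, X_b]))

# How each corpus-specific label distributes over the latent states.
for labels, z_c in [(y_a, z[:200]), (y_b, z[200:])]:
    for lab in np.unique(labels):
        counts = np.bincount(z_c[labels == lab], minlength=3)
        print(lab, counts / counts.sum())
```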
The expected research outcome of the proposal is twofold: a new set of algorithms that can be easily adapted for robust modeling of human affective dynamics across domains, and new insights into the coordination of multimodal affective behavior generation across interaction contexts. We anticipate that the impact of this research will be broadly felt on multiple fronts, in both industry and academia: advancing existing speech and language processing technologies, enabling new capabilities in human-machine interaction, and strengthening the bridge between behavioral science and engineering.