Multimedia Modeling
Other: Media Processing for Interpretation
Improving Multimodal Movie Scene Segmentation Using Mixture of Acoustic Experts
Abstract
Scenes are the basic semantic units of a movie and an important pre-processing step for various multimedia computing technologies. Previous scene segmentation studies have introduced constraints and alignment mechanisms to cluster low-level frames and shots based on visual features and temporal properties. Recent work has extended these approaches with multimodal semantic representations, but the acoustic representations are blindly extracted by a universal pretrained model, ignoring the semantic meaning of audio and the complex interaction between audio and visual representations for scene segmentation. In this work, we introduce a mixture-of-acoustic-experts (MOAE) framework that integrates acoustic experts and multimodal experts for scene segmentation. The acoustic experts are trained to model different acoustic semantics, including speakers, environmental sounds, and other audio events. The MOAE carefully optimizes the weights among the various multimodal experts and achieves a state-of-the-art F1-score of 61.89% for scene segmentation. We visualize the expert weights in our framework to illustrate the complementary properties of the diverse experts, which lead to improved segmentation results.
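The paper's implementation is not included here; as a rough illustration of the mixture-of-experts idea described in the abstract, the sketch below gates per-expert scene-boundary probabilities with learned weights. The module name, feature dimensions, and softmax gating are assumptions for illustration, not the authors' code.

```python
# Minimal sketch of a mixture-of-experts gate over per-expert scene-boundary
# predictions. Module names, feature dimensions, and the softmax gating are
# illustrative assumptions; they are not taken from the paper's implementation.
import torch
import torch.nn as nn


class MixtureOfExpertsGate(nn.Module):
    """Combine boundary probabilities from several experts with learned weights."""

    def __init__(self, num_experts: int, feat_dim: int):
        super().__init__()
        # The gate scores each expert from a shot-level context vector.
        self.gate = nn.Linear(feat_dim, num_experts)

    def forward(self, expert_probs: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # expert_probs: (batch, num_experts) boundary probability from each expert
        # context:      (batch, feat_dim)    shot-level representation used for gating
        weights = torch.softmax(self.gate(context), dim=-1)   # (batch, num_experts)
        return (weights * expert_probs).sum(dim=-1)           # (batch,) fused probability


if __name__ == "__main__":
    batch, num_experts, feat_dim = 4, 3, 256  # e.g. speaker, environment, event experts
    gate = MixtureOfExpertsGate(num_experts, feat_dim)
    probs = torch.rand(batch, num_experts)    # per-expert boundary probabilities
    ctx = torch.randn(batch, feat_dim)        # shared shot-level context features
    print(gate(probs, ctx).shape)             # torch.Size([4])
```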
Figures
The acoustic expert can perform boundary prediction using various acoustic features. The multimodal expert (MME) combines place and acoustic features.
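As a hypothetical sketch of the multimodal expert described in the caption, the following module fuses place (visual) and acoustic shot features before predicting a scene-boundary probability. The feature dimensions and the concatenation-plus-MLP fusion are assumptions, not the architecture reported in the paper.

```python
# Hypothetical multimodal expert: fuse place (visual) and acoustic shot features,
# then predict a scene-boundary probability. Dimensions and the concatenation-
# plus-MLP fusion are illustrative assumptions only.
import torch
import torch.nn as nn


class MultimodalExpert(nn.Module):
    def __init__(self, place_dim: int = 2048, audio_dim: int = 512, hidden: int = 256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(place_dim + audio_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # logit for "this shot starts a new scene"
        )

    def forward(self, place_feat: torch.Tensor, audio_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([place_feat, audio_feat], dim=-1)   # concatenate modalities
        return torch.sigmoid(self.fuse(fused)).squeeze(-1)    # boundary probability


if __name__ == "__main__":
    expert = MultimodalExpert()
    print(expert(torch.randn(4, 2048), torch.randn(4, 512)).shape)  # torch.Size([4])
```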
Keywords
Movie | Scene Segmentation | Mixture of Experts | Multimodal Attention | Audio
Authors
Publication Date
2022/08/29
Conference
European Signal Processing Conference (EUSIPCO) 2022
Publisher
European Association for Signal Processing (EURASIP)