Scenes are the most basic semantic units of a movie and are important as a pre-processing step for various multimedia computing technologies. Previous scene segmentation studies have introduced constraints and alignment mechanisms to cluster low-level frames and shots based on visual features and temporal properties. Recent work has extended these approaches with multimodal semantic representations, but the acoustic representations are blindly extracted by a universal pretrained model; such methods tend to ignore the semantic meaning of audio and the complex interaction between audio and visual representations for scene segmentation. In this work, we introduce a mixture-of-audio-experts (MOAE) framework that integrates acoustic experts and multimodal experts for scene segmentation. Each acoustic expert is trained to model a distinct acoustic semantic, including speakers, environmental sounds, and other events. The MOAE delicately optimizes the weights among the various multimodal experts and achieves a state-of-the-art 61.89% F1-score for scene segmentation. We visualize the expert weights in our framework to illustrate the complementary properties among diverse experts, which lead to improved segmentation results.
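As an illustrative sketch only (the paper's actual gating network and expert models are not specified here), a mixture-of-experts combination can be expressed as softmax gate weights applied to per-expert scene-boundary scores. The expert names and score values below are hypothetical:

```python
import numpy as np

def moe_combine(expert_scores, gate_logits):
    """Fuse per-expert boundary scores with softmax gate weights.

    expert_scores: array of boundary probabilities, one per expert
    gate_logits:   unnormalized gate activations, one per expert
    Returns the normalized weights and the weighted combined score.
    """
    # Numerically stable softmax over the gate logits
    weights = np.exp(gate_logits - np.max(gate_logits))
    weights /= weights.sum()
    return weights, float(np.dot(weights, expert_scores))

# Hypothetical scores from three experts (e.g. speaker, environment, visual)
expert_scores = np.array([0.9, 0.2, 0.6])
gate_logits = np.array([2.0, 0.5, 1.0])
weights, combined = moe_combine(expert_scores, gate_logits)
```

In a learned system the gate logits would be produced by a small network conditioned on the input, so the weighting adapts per shot rather than being fixed.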