An Attribute-Aligned Strategy for Learning Speech Representation
Abstract
Advancements in speech technology have brought convenience to our lives. However, concerns are on the rise because the speech signal contains multiple personal attributes, which can lead to sensitive information leakage or biased decisions. In this work, we propose an attribute-aligned learning strategy to derive a speech representation that can flexibly address these issues through an attribute-selection mechanism. Specifically, we propose a layered-representation variational autoencoder (LR-VAE), which factorizes the speech representation into attribute-sensitive nodes, to derive an identity-free representation for speech emotion recognition (SER) and an emotionless representation for speaker verification (SV). Our proposed method achieves competitive performance on identity-free SER and better performance on emotionless SV, compared to the current state-of-the-art method based on adversarial learning, on a large emotion corpus, the MSP-Podcast. Moreover, our proposed learning strategy reduces the number of models and training processes needed to achieve multiple privacy-preserving tasks.
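The attribute-selection mechanism described above can be sketched as a layered dropout over a factorized latent vector: the latent dimensions are partitioned into attribute-sensitive segments, and zeroing one segment yields the corresponding attribute-free representation. The following minimal NumPy sketch is illustrative only; the function name, segment layout, and masking scheme are assumptions for exposition, not the authors' implementation:

```python
import numpy as np

def layered_dropout(z, n_emo, mode):
    """Illustrative layered dropout over a factorized latent vector z.

    Assumes (hypothetically) that the first n_emo dimensions are
    emotion-sensitive nodes and the remaining dimensions are
    identity-sensitive nodes. Zeroing one segment produces the
    corresponding attribute-free representation.
    """
    z = np.asarray(z, dtype=float)
    out = z.copy()
    if mode == "id-free":
        # Drop identity nodes -> z_id-free, usable for SER without speaker cues.
        out[n_emo:] = 0.0
    elif mode == "emo-free":
        # Drop emotion nodes -> z_emo-free, usable for SV without emotion cues.
        out[:n_emo] = 0.0
    else:
        raise ValueError(f"unknown mode: {mode}")
    return out
```

For example, with a 4-dimensional latent vector whose first two dimensions are emotion-sensitive, `layered_dropout([1., 2., 3., 4.], 2, "id-free")` keeps only the emotion segment, and `"emo-free"` keeps only the identity segment.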
Figures
An illustration of our proposed method for attribute-aligned representation learning. It includes three blocks: the representation learning procedure, the LR-VAE, and layered dropout. Note that z_id-free stands for the identity-free representation, and z_emo-free stands for the emotionless representation.
Keywords
speech representation | layered dropout | privacy | fair | attribute alignment
Authors
Publication Date
2021/08/30
Conference
Interspeech 2021
DOI
10.21437/Interspeech.2021-1341
Publisher
ISCA