This thesis explores acoustic and semantic modeling for emotion understanding and synthesis in spoken language, focusing on emotion-aware representation learning via pre-training and emotion recognition in conversations. The author introduces pre-training strategies with acoustic and semantic supervision, along with a speech-driven supervised pre-training framework for emotion-aware text modeling. The work also presents a textless, non-parallel speech-to-speech framework for emotion style transfer, demonstrating improved emotion transfer and the utility of style-transferred speech for data augmentation in emotion recognition.
Controllable emotion style transfer in speech is now possible without paired data, opening new avenues for data augmentation and expressive speech synthesis.
Emotions play a central role in human communication, shaping trust, engagement, and social interaction. As artificial intelligence systems powered by large language models become increasingly integrated into everyday life, enabling them to reliably understand and generate human emotions remains an important challenge. While emotional expression is inherently multimodal, this thesis focuses on emotions conveyed through spoken language and investigates how acoustic and semantic information can be jointly modeled to advance both emotion understanding and emotion synthesis in speech.

The first part of the thesis studies emotion-aware representation learning through pre-training. We propose strategies that incorporate acoustic and semantic supervision to learn representations that better capture affective cues in speech. A speech-driven supervised pre-training framework is also introduced to enable large-scale emotion-aware text modeling without requiring manually annotated text corpora.

The second part addresses emotion recognition in conversational settings. Hierarchical architectures combining cross-modal attention and mixture-of-experts fusion are developed to integrate acoustic and semantic information across conversational turns.

Finally, the thesis introduces a textless and non-parallel speech-to-speech framework for emotion style transfer that enables controllable emotional transformations while preserving speaker identity and linguistic content. The results demonstrate improved emotion transfer and show that style-transferred speech can be used for data augmentation to improve emotion recognition.
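To make the fusion idea concrete, here is a minimal PyTorch sketch of how cross-modal attention and mixture-of-experts fusion can be combined for a single utterance: acoustic features attend over the semantic (text) features, and a softly gated set of expert networks produces the fused representation. The abstract does not specify architectural details, so every class name, dimension, pooling choice, and gating decision below is an illustrative assumption, not the thesis's actual design.

```python
# A minimal, hypothetical sketch of cross-modal attention followed by
# mixture-of-experts fusion for one utterance. All names and dimensions
# are illustrative assumptions, not the thesis's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalMoEFusion(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 4, num_experts: int = 4):
        super().__init__()
        # Acoustic features act as queries over the semantic (text) features.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Each expert is a small feed-forward network over the pooled features.
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
             for _ in range(num_experts)]
        )
        # The gate softly weights the experts per utterance.
        self.gate = nn.Linear(2 * dim, num_experts)

    def forward(self, acoustic: torch.Tensor, semantic: torch.Tensor) -> torch.Tensor:
        # acoustic: (batch, T_a, dim); semantic: (batch, T_s, dim)
        attended, _ = self.cross_attn(acoustic, semantic, semantic)
        # Pool each stream to one vector per utterance and concatenate.
        fused = torch.cat([attended.mean(dim=1), acoustic.mean(dim=1)], dim=-1)
        weights = F.softmax(self.gate(fused), dim=-1)               # (batch, experts)
        outputs = torch.stack([e(fused) for e in self.experts], 1)  # (batch, experts, dim)
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)         # (batch, dim)


# Toy usage: fuse 10 acoustic frames with 8 text tokens for a batch of 2.
fusion = CrossModalMoEFusion()
out = fusion(torch.randn(2, 10, 256), torch.randn(2, 8, 256))
print(out.shape)  # torch.Size([2, 256])
```

In the conversational setting described above, a hierarchical model would apply a step like this per turn and then aggregate the per-turn vectors across the dialogue.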
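Similarly, the data-augmentation use of style-transferred speech can be pictured with the short sketch below. It assumes a hypothetical `transfer_emotion(waveform, target_emotion)` function standing in for the thesis's speech-to-speech model, and simply grows an emotion-recognition training set with relabeled, style-transferred copies; none of these names come from the thesis.

```python
# Hypothetical sketch: augmenting an emotion-recognition training set with
# style-transferred speech. `transfer_emotion` stands in for the thesis's
# textless speech-to-speech model and is NOT a real API.
from typing import Callable, List, Tuple

import torch

EMOTIONS = ["neutral", "happy", "sad", "angry"]


def augment_with_style_transfer(
    dataset: List[Tuple[torch.Tensor, str]],
    transfer_emotion: Callable[[torch.Tensor, str], torch.Tensor],
) -> List[Tuple[torch.Tensor, str]]:
    """Return the original (waveform, label) pairs plus one style-transferred
    copy per non-original emotion, labeled with the target emotion."""
    augmented = list(dataset)
    for waveform, label in dataset:
        for target in EMOTIONS:
            if target != label:
                # The transferred clip keeps speaker identity and content
                # but carries the target emotion, hence the new label.
                augmented.append((transfer_emotion(waveform, target), target))
    return augmented


# Toy usage with an identity function standing in for the real model.
toy = [(torch.randn(16000), "neutral")]
print(len(augment_with_style_transfer(toy, lambda w, e: w)))  # 4
```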