This paper introduces a multi-task reinforcement learning (RL) approach to improve paralinguistic understanding and generation in speech LLMs. The method uses chain-of-thought prompting to encourage explicit affective reasoning, and addresses data scarcity by jointly optimizing sentiment classification from audio and paralinguistics-aware response generation. Experiments demonstrate that this approach, embodied in a model called PALLM, significantly improves paralinguistic understanding compared to supervised baselines and strong proprietary models such as Gemini-2.5-Pro and GPT-4o-audio.
Speech LLMs can now better understand your emotions: a new RL approach boosts paralinguistic understanding by 8-12% over state-of-the-art models.
Speech large language models (LLMs) can perceive paralinguistic cues such as prosody, emotion, and non-verbal sounds, which are crucial for understanding intent. However, leveraging these cues faces several challenges: limited training data, annotation difficulty, and models exploiting lexical shortcuts instead of paralinguistic signals. We propose multi-task reinforcement learning (RL) with chain-of-thought prompting that elicits explicit affective reasoning. To address data scarcity, we introduce a paralinguistics-aware speech LLM (PALLM) that jointly optimizes sentiment classification from audio and paralinguistics-aware response generation via a two-stage pipeline. Experiments demonstrate that our approach improves paralinguistic understanding over both supervised baselines and strong proprietary models (Gemini-2.5-Pro, GPT-4o-audio) by 8-12% on Expresso, IEMOCAP, and RAVDESS. These results show that modeling paralinguistic reasoning with multi-task RL is crucial for building emotionally intelligent speech LLMs.
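The joint optimization described above can be pictured as a weighted combination of per-task rewards. The sketch below is purely illustrative and not the paper's actual implementation: the function names, the exact-match reward proxies, and the weighting scheme (`alpha`) are all assumptions, standing in for whatever reward models PALLM actually uses for the two tasks.

```python
# Hypothetical sketch of a multi-task RL reward combining the two tasks
# named in the abstract: sentiment classification from audio, and
# paralinguistics-aware response generation. All names are illustrative.

def sentiment_reward(predicted_label: str, gold_label: str) -> float:
    """Toy proxy: 1.0 if the predicted sentiment matches the gold label."""
    return 1.0 if predicted_label == gold_label else 0.0

def generation_reward(response_style: str, target_style: str) -> float:
    """Toy proxy: 1.0 if the generated response matches the target
    paralinguistic style (in practice this would be a learned judge)."""
    return 1.0 if response_style == target_style else 0.0

def multitask_reward(predicted_label: str, gold_label: str,
                     response_style: str, target_style: str,
                     alpha: float = 0.5) -> float:
    """Weighted sum of the two task rewards; the RL policy is updated
    against this combined scalar so both tasks are optimized jointly."""
    return (alpha * sentiment_reward(predicted_label, gold_label)
            + (1.0 - alpha) * generation_reward(response_style, target_style))
```

With equal weighting, a rollout that gets the sentiment right but the response style wrong earns half the maximum reward, which is what lets the scarce-data task borrow signal from the other.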