This paper introduces a multi-reward Reinforcement Learning from AI Feedback (RLAIF) framework for speech-in/speech-out dialogue systems (SDS), addressing the limitations of prior work that relies on a single semantic reward. The framework combines semantic-coherence, audio-quality, and emotion-consistency rewards to better capture the multi-dimensional nature of conversational quality. By applying turn-level preference sampling and aggregating per-block log-probabilities within a DPO objective, the method aligns utterance-level preferences with the incremental, blockwise decoding of duplex models, and joint multi-reward training is shown to improve both semantic quality and audio naturalness.
Forget optimizing for just one thing: multi-reward RLAIF improves both semantic quality and audio naturalness in spoken dialogue systems, while single-reward methods lift only the metric they target.
Reinforcement learning from human or AI feedback (RLHF/RLAIF) for speech-in/speech-out dialogue systems (SDS) remains underexplored, with prior work largely limited to single semantic rewards applied at the utterance level. Such setups overlook the multi-dimensional and multi-modal nature of conversational quality, which encompasses semantic coherence, audio naturalness, speaker consistency, emotion alignment, and turn-taking behavior. Moreover, they are fundamentally mismatched with duplex spoken dialogue systems that generate responses incrementally, where agents must make decisions based on partial utterances. We address these limitations with the first multi-reward RLAIF framework for SDS, combining semantic, audio-quality, and emotion-consistency rewards. To align utterance-level preferences with incremental, blockwise decoding in duplex models, we apply turn-level preference sampling and aggregate per-block log-probabilities within a single DPO objective. We present the first systematic study of preference learning for improving SDS quality in both multi-turn Chain-of-Thought and blockwise duplex models, and release a multi-reward DPO dataset to support reproducible research. Experiments show that single-reward RLAIF selectively improves its targeted metric, while joint multi-reward training yields consistent gains across semantic quality and audio naturalness. These results highlight the importance of holistic, multi-reward alignment for practical conversational SDS.
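A minimal sketch of the training objective, with notation ours and inferred from the abstract rather than taken from the paper: let x be the dialogue context, y+ and y- a preferred/dispreferred pair of response turns selected by the AI rewards, and y_b the b-th decoded block of a turn. Aggregating per-block log-probabilities inside a standard DPO loss can then be written as

\mathcal{L}_{\mathrm{DPO}}(\theta) = -\,\mathbb{E}_{(x,\,y^{+},\,y^{-})}\Big[\log \sigma\big(\beta\,(\Delta_\theta(x, y^{+}) - \Delta_\theta(x, y^{-}))\big)\Big],
\qquad
\Delta_\theta(x, y) = \sum_{b=1}^{B}\Big[\log \pi_\theta(y_b \mid x, y_{<b}) - \log \pi_{\mathrm{ref}}(y_b \mid x, y_{<b})\Big].

How the semantic, audio-quality, and emotion-consistency rewards are combined to rank sampled candidates into y+ and y- is not specified in the abstract; a weighted sum such as r(y) = w_{\mathrm{sem}} r_{\mathrm{sem}}(y) + w_{\mathrm{aud}} r_{\mathrm{aud}}(y) + w_{\mathrm{emo}} r_{\mathrm{emo}}(y) is only one plausible reading.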