This paper explores the use of online reinforcement learning (RL) with Group Relative Policy Optimization (GRPO) to improve text-to-audio (TTA) generation, addressing the limitations of prior offline RL methods. The authors adapt GRPO to Flow Matching-based audio models and incorporate rewards from Large Audio Language Models (LALMs) to provide fine-grained feedback. The resulting model, Resonate, achieves state-of-the-art performance on TTA-Bench, demonstrating the effectiveness of online RL and LALM-based rewards for TTA generation.
Online reinforcement learning with large audio language model rewards catapults text-to-audio generation to a new state-of-the-art, even with a relatively small 470M parameter model.
Reinforcement Learning (RL) has become an effective paradigm for enhancing Large Language Models (LLMs) and visual generative models. However, its application in text-to-audio (TTA) generation remains largely under-explored. Prior work typically employs offline methods like Direct Preference Optimization (DPO) and leverages Contrastive Language-Audio Pretraining (CLAP) models as reward functions. In this study, we investigate the integration of online Group Relative Policy Optimization (GRPO) into TTA generation. We adapt the algorithm for Flow Matching-based audio models and demonstrate that online RL significantly outperforms its offline counterparts. Furthermore, we incorporate rewards derived from Large Audio Language Models (LALMs), which can provide fine-grained scoring signals that are better aligned with human perception. With only 470M parameters, our final model, Resonate, establishes a new SOTA on TTA-Bench in terms of both audio quality and semantic alignment.
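The group-relative idea at the heart of GRPO can be sketched briefly. This is an illustrative simplification, not the paper's implementation: for each text prompt, several audio samples are generated, each is scored by a reward model (here assumed to be an LALM-derived scalar score), and a sample's advantage is its reward normalized against the other samples in the same group, removing the need for a learned value function.

```python
# Hedged sketch of GRPO's group-relative advantage computation.
# The reward values and group size below are hypothetical examples.
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize one prompt's group of reward scores to zero mean, unit std.

    Each generation's advantage measures how much better (or worse) it
    scored than its siblings sampled from the same prompt.
    """
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: four generations for one prompt, scored by a reward model.
scores = [0.2, 0.5, 0.9, 0.4]
advantages = group_relative_advantages(scores)
```

In the online setting the abstract describes, these advantages would then weight the policy-gradient update of the Flow Matching model, so that generations the LALM scores highly are reinforced relative to their group.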