V2A-DPO is introduced as a Direct Preference Optimization framework for video-to-audio generation, using a new AudioScore metric to align generated audio with human preferences for semantic consistency, temporal alignment, and perceptual quality. The framework uses an automated AudioScore-driven pipeline to generate preference pairs for DPO and employs a curriculum learning strategy tailored to flow-based generative models. Experiments on VGGSound show that V2A-DPO-optimized models outperform DDPO-optimized counterparts and pre-trained baselines, achieving state-of-the-art results.
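To make the pipeline concrete, here is a minimal sketch of how AudioScore-driven preference pairs might be constructed; `generate_audio` (a stochastic V2A sampler), `audio_score` (the learned preference scorer), the candidate count, and the margin threshold are all hypothetical stand-ins, not the paper's actual interfaces:

```python
# Hypothetical preference-pair construction driven by an AudioScore-style scorer.
# All names and thresholds below are illustrative assumptions.
def build_preference_pairs(videos, generate_audio, audio_score,
                           n_candidates=4, min_margin=0.1):
    pairs = []
    for video in videos:
        # Draw several stochastic audio samples for the same video condition.
        candidates = [generate_audio(video) for _ in range(n_candidates)]
        scores = [audio_score(video, a) for a in candidates]
        ranked = sorted(zip(scores, candidates), key=lambda p: p[0])
        (s_lo, worst), (s_hi, best) = ranked[0], ranked[-1]
        # Keep only pairs with a clear score gap, so the DPO signal is not
        # dominated by scorer noise.
        if s_hi - s_lo >= min_margin:
            pairs.append((video, best, worst))  # (condition, winner, loser)
    return pairs
```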
V2A-DPO brings human-preference alignment to audio generation from video, surpassing previous methods by directly optimizing for semantic consistency, temporal alignment, and perceptual quality.
This paper introduces V2A-DPO, a novel Direct Preference Optimization (DPO) framework tailored to flow-based video-to-audio (V2A) generation models, with key adaptations that effectively align generated audio with human preferences. Our approach comprises three core innovations: (1) AudioScore, a comprehensive human-preference-aligned scoring system for assessing the semantic consistency, temporal alignment, and perceptual quality of synthesized audio; (2) an automated AudioScore-driven pipeline for generating large-scale preference-pair data for DPO optimization; and (3) a curriculum-learning-empowered DPO optimization strategy specifically tailored to flow-based generative models. Experiments on the VGGSound benchmark dataset demonstrate that Frieren and MMAudio aligned with V2A-DPO outperform their counterparts optimized with Denoising Diffusion Policy Optimization (DDPO) as well as the pre-trained baselines. Furthermore, our DPO-optimized MMAudio achieves state-of-the-art performance across multiple metrics, surpassing published V2A models.
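For intuition, a minimal sketch of a DPO objective adapted to flow matching (in the style of Diffusion-DPO, with per-sample denoising errors replacing log-likelihoods) is shown below; the paper's exact loss is not reproduced here, so `beta`, the velocity-prediction interface, and the rectified-flow path are assumptions:

```python
import torch
import torch.nn.functional as F

def flow_dpo_loss(policy, ref, video_emb, audio_w, audio_l, beta=0.1):
    """Hypothetical DPO loss for a flow-matching V2A model.

    `policy` and `ref` are velocity-prediction networks v(x_t, t, cond);
    `ref` is a frozen copy of the pre-trained model. The formulation follows
    the Diffusion-DPO pattern, not necessarily the paper's published loss.
    """
    b = audio_w.shape[0]
    t = torch.rand(b, device=audio_w.device)  # one shared timestep per pair
    noise = torch.randn_like(audio_w)

    def fm_error(model, x1):
        # Rectified-flow path x_t = (1 - t) * noise + t * x1,
        # with target velocity u = x1 - noise.
        tt = t.view(-1, *([1] * (x1.dim() - 1)))
        x_t = (1 - tt) * noise + tt * x1
        u = x1 - noise
        v = model(x_t, t, video_emb)
        return ((v - u) ** 2).flatten(1).mean(dim=1)  # per-sample MSE

    err_w, err_l = fm_error(policy, audio_w), fm_error(policy, audio_l)
    with torch.no_grad():
        err_w_ref, err_l_ref = fm_error(ref, audio_w), fm_error(ref, audio_l)

    # The policy should lower the winner's flow-matching error relative to
    # the frozen reference more than it lowers the loser's.
    margin = (err_w - err_w_ref) - (err_l - err_l_ref)
    return -F.logsigmoid(-beta * margin).mean()
```

A curriculum strategy could, for instance, restrict `t` to an easier sub-range early in training and widen it over time, but that scheduling detail is an assumption here rather than the paper's stated procedure.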