This paper introduces two new paired audio-video datasets derived from video games and concert performances, each segmented into 34-second clips. The author trains a multimodal diffusion model (MM-Diffusion) on these datasets, demonstrating its ability to generate semantically coherent audio-video pairs and quantitatively evaluating alignment. Finally, the paper proposes and validates a sequential text-to-audio-video generation pipeline, first generating video and then conditioning audio synthesis on both the video and text prompt.
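As a rough illustration of the dataset preparation step, the sketch below cuts a long paired recording into fixed-length 34-second clips. It is a minimal sketch, assuming ffmpeg is available on the PATH; the file paths, naming scheme, and the choice of ffmpeg's segment muxer are assumptions for illustration, not details taken from the paper.

```python
# Sketch: split a long audio-video recording into consecutive 34-second clips.
# Paths and naming are placeholders; ffmpeg must be installed and on PATH.
import subprocess
from pathlib import Path

CLIP_SECONDS = 34  # fixed clip length used by both datasets

def segment_recording(src: Path, out_dir: Path) -> None:
    """Split one source video into consecutive 34-second audio-video clips."""
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [
            "ffmpeg", "-i", str(src),
            "-f", "segment",                 # ffmpeg's segment muxer
            "-segment_time", str(CLIP_SECONDS),
            "-reset_timestamps", "1",        # each clip starts at t=0
            "-c", "copy",                    # no re-encoding (cuts land on keyframes,
                                             # so clip lengths are approximate)
            str(out_dir / f"{src.stem}_%04d.mp4"),
        ],
        check=True,
    )

if __name__ == "__main__":
    segment_recording(Path("raw/concert_01.mp4"), Path("clips/concert_01"))
```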
Forget painstakingly aligning audio and video – this diffusion model learns to generate them jointly, opening the door to more realistic and immersive multimodal experiences.
Multimodal generative models have shown remarkable progress in single-modality video and audio synthesis, yet truly joint audio-video generation remains an open challenge. In this paper, I present four key contributions toward closing this gap. First, I release two high-quality paired audio-video datasets, consisting of 13 hours of video-game clips and 64 hours of concert performances, each segmented into consistent 34-second samples to facilitate reproducible research. Second, I train the MM-Diffusion architecture from scratch on these datasets, demonstrating its ability to produce semantically coherent audio-video pairs and quantitatively evaluating alignment on rapid actions and musical cues. Third, I investigate joint latent diffusion by leveraging pretrained video and audio encoder-decoders, uncovering challenges and inconsistencies in the multimodal decoding stage. Finally, I propose a sequential two-step text-to-audio-video generation pipeline: first generating video, then conditioning on both the video output and the original prompt to synthesize temporally synchronized audio. My experiments show that this modular approach yields high-fidelity audio-video generations.
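To make the two-step pipeline concrete, here is a minimal sketch of its control flow: a text-to-video model first produces the video, and an audio model is then conditioned on both the generated video and the original prompt. The model classes and their `sample` interfaces are hypothetical stand-ins for the paper's components, not an actual API; the stubs return random tensors so the example runs end to end.

```python
# Sketch of the sequential text -> video -> audio pipeline.
# Both model classes are hypothetical placeholders for the paper's
# text-to-video and video+text-to-audio diffusion models.
import torch

class TextToVideoModel:
    def sample(self, prompt: str, num_frames: int = 16) -> torch.Tensor:
        # Placeholder: a real model would run its denoising loop here.
        return torch.randn(num_frames, 3, 256, 256)  # (T, C, H, W)

class VideoTextToAudioModel:
    def sample(self, video: torch.Tensor, prompt: str,
               num_samples: int = 34 * 16000) -> torch.Tensor:
        # Placeholder: audio denoising conditioned on the generated video
        # and the text prompt; here we just return noise of the right shape.
        return torch.randn(num_samples)  # 34 s mono waveform at 16 kHz

def text_to_audio_video(prompt: str):
    """Step 1: generate video from text.
    Step 2: generate audio conditioned on that video plus the same prompt."""
    video_model = TextToVideoModel()
    audio_model = VideoTextToAudioModel()
    video = video_model.sample(prompt)
    audio = audio_model.sample(video, prompt)
    return video, audio

if __name__ == "__main__":
    v, a = text_to_audio_video("a pianist playing in a concert hall")
    print(v.shape, a.shape)
```

The modularity highlighted in the abstract comes from this separation: the audio stage sees the already-generated frames, so synchronization is handled by conditioning rather than by jointly denoising both modalities.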