The authors introduce AVID, a large-scale benchmark designed to evaluate audio-visual inconsistency understanding in long-form videos, an area where current omni-modal LLMs struggle. AVID's construction pipeline uses temporal segmentation, an agent-driven strategy planner, and specialized injectors to create diverse audio-visual conflicts. Experiments on AVID reveal limitations in temporal grounding and reasoning for state-of-the-art models, while a fine-tuned baseline, AVID-Qwen, shows significant improvements, validating AVID's effectiveness.
Omni-modal LLMs can ace captioning and QA, but AVID reveals they're surprisingly bad at spotting audio-visual inconsistencies in videos, a crucial skill for trustworthy AI.
We present AVID, the first large-scale benchmark for audio-visual inconsistency understanding in videos. While omni-modal large language models excel at temporally aligned tasks such as captioning and question answering, they struggle to perceive cross-modal conflicts, a fundamental human capability that is critical for trustworthy AI. Existing benchmarks predominantly focus on aligned events or deepfake detection, leaving a significant gap in evaluating inconsistency perception in long-form video contexts. AVID addresses this gap with: (1) a scalable construction pipeline comprising temporal segmentation that classifies video content into Active Speaker, Voiceover, and Scenic categories; an agent-driven strategy planner that selects semantically appropriate inconsistency categories; and five specialized injectors that introduce diverse audio-visual conflicts; (2) 11.2K long videos (avg. 235.5s) with 39.4K annotated inconsistency events and 78.7K segment clips across 8 fine-grained inconsistency categories, supporting evaluation of detection, temporal grounding, classification, and reasoning. Comprehensive evaluations of state-of-the-art omni-modal models reveal significant limitations in temporal grounding and reasoning. Our fine-tuned baseline, AVID-Qwen, achieves substantial improvements over the base model (2.8$\times$ higher BLEU-4 in segment reasoning) and surpasses all compared models in temporal grounding (mIoU: 36.1\% vs. 26.2\%) and holistic understanding (SODA-m: 7.47 vs. 6.15), validating AVID as an effective testbed for advancing trustworthy omni-modal AI systems.
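To make the temporal-grounding metric concrete, the minimal sketch below shows one way mIoU over predicted versus annotated inconsistency intervals could be computed. The function names (`temporal_iou`, `mean_iou`), the one-to-one pairing of predictions with ground-truth events, and the example timestamps are illustrative assumptions, not the benchmark's released evaluation protocol.

```python
from typing import List, Tuple

Interval = Tuple[float, float]  # (start_sec, end_sec) of an inconsistency event

def temporal_iou(pred: Interval, gt: Interval) -> float:
    """IoU between a predicted and an annotated time interval."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def mean_iou(preds: List[Interval], gts: List[Interval]) -> float:
    """Mean IoU over paired predicted / ground-truth inconsistency events.

    Assumes predictions are already matched one-to-one with annotations;
    a real evaluation would also need to handle missed and spurious events.
    """
    if not gts:
        return 0.0
    return sum(temporal_iou(p, g) for p, g in zip(preds, gts)) / len(gts)

# Example: a predicted span of 12-20s against an annotated span of 10-18s.
print(mean_iou([(12.0, 20.0)], [(10.0, 18.0)]))  # -> 0.6
```

Under this reading, the reported mIoU gap (36.1\% vs. 26.2\%) reflects how tightly predicted inconsistency spans overlap the annotated event boundaries, averaged over events.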