This paper presents a mechanistic interpretability study of Audio-Visual Large Language Models (AVLLMs), analyzing how audio and visual features are processed and fused across layers. The key finding is that while AVLLMs encode rich audio semantics in intermediate layers, these capabilities are suppressed in final text generation when audio conflicts with visual information. Further analysis traces this modality bias to the AVLLM's reliance on its vision-language base model, suggesting insufficient alignment to audio supervision during training.
AVLLMs may "hear" at intermediate layers, but they largely ignore audio cues in favor of vision when generating text, revealing a fundamental modality bias.
Audio-Visual Large Language Models (AVLLMs) are emerging as unified interfaces to multimodal perception. We present the first mechanistic interpretability study of AVLLMs, analyzing how audio and visual features evolve and fuse across the layers of an AVLLM to produce the final text output. We find that although AVLLMs encode rich audio semantics at intermediate layers, these capabilities largely fail to surface in the final text generation when audio conflicts with vision. Probing analyses show that useful latent audio information is present, but deeper fusion layers disproportionately privilege visual representations, suppressing audio cues. We further trace this imbalance to training: the AVLLM's audio behavior closely matches that of its vision-language base model, indicating limited additional alignment to audio supervision. Our findings reveal a fundamental modality bias in AVLLMs and provide new mechanistic insights into how multimodal LLMs integrate audio and vision.
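To make the probing idea concrete, below is a minimal sketch of a layer-wise linear probing analysis, not the paper's exact protocol. It assumes you have already extracted pooled per-layer hidden states from an AVLLM on audio-visual conflict examples; the `collect_hidden_states` helper in the usage note is hypothetical. The sketch trains a logistic-regression probe per layer to test whether audio-derived labels are linearly decodable at intermediate depths even when they do not surface in the generated text.

```python
# Sketch: layer-wise linear probes for latent audio information (assumed setup,
# not the paper's implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


def probe_layers(hidden_states: np.ndarray, labels: np.ndarray) -> list[float]:
    """Train one linear probe per layer and return held-out accuracies.

    hidden_states: (num_examples, num_layers, hidden_dim) pooled activations,
                   e.g. taken at the answer position.
    labels:        (num_examples,) audio-derived classes (e.g. sound events).
    """
    accuracies = []
    for layer in range(hidden_states.shape[1]):
        X = hidden_states[:, layer, :]
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, labels, test_size=0.2, random_state=0, stratify=labels
        )
        probe = LogisticRegression(max_iter=1000)
        probe.fit(X_tr, y_tr)
        accuracies.append(accuracy_score(y_te, probe.predict(X_te)))
    return accuracies


# Hypothetical usage: collect_hidden_states would run the AVLLM on
# audio-visual conflict examples and return (hidden_states, labels).
# acc_per_layer = probe_layers(*collect_hidden_states(model, dataset))
```

High probe accuracy at intermediate layers combined with low accuracy on the model's final text output would be the signature described above: the audio information is present in the residual stream but is not propagated through the deeper fusion layers into generation.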