This paper introduces a logic-grounded evaluation framework for analyzing cross-modal reasoning in MLLMs, categorizing modality interactions into six patterns based on how facts are distributed across modalities and logically combined. The study reveals that additional modalities improve reasoning only when they provide independent reasoning paths, while redundancy or chained entailment can degrade performance through modality conflicts and integration failures. The authors identify a task-composition bottleneck and a fusion bottleneck as the key limitations, and demonstrate that a two-step prompting strategy and softened early fusion can mitigate them.
Multimodal LLMs often perform worse with more modalities because they struggle to jointly recognize and reason across modalities, a problem solvable with simple prompting strategies.
Multimodal large language models (MLLMs) promise enhanced reasoning by integrating diverse inputs such as text, vision, and audio. Yet cross-modal reasoning remains underexplored, with conflicting reports on whether added modalities help or harm performance. These inconsistencies stem from a lack of controlled evaluation frameworks and of analysis of models' internals to isolate when and why modality interactions support or undermine reasoning. We address this gap through a logic-grounded evaluation framework that categorizes multimodal reasoning into six interaction patterns, varying how facts are distributed across modalities and logically combined. Empirically, additional modalities enhance reasoning only when they provide independent and sufficient reasoning paths, while redundant or chained entailment support often hurts performance. Moreover, reasoning degrades in three systematic ways: weaker modalities drag down overall performance, conflicts bias preference toward certain modalities, and joint signals from different modalities fail to be integrated effectively. From these observations, we identify two core failures: a task-composition bottleneck, where recognition and reasoning cannot be jointly executed in one pass, and a fusion bottleneck, where early integration introduces bias. Probing further, we find that attention patterns fail to encode fact usefulness, but a simple two-step prompting strategy (recognize, then reason) restores performance, confirming the task-composition bottleneck. Moreover, modality identity remains recoverable in early layers, and softening attention in early fusion improves reasoning, highlighting biased fusion as another failure mode. Overall, our findings show that integration, not perception, is the main barrier to multimodal reasoning, suggesting composition-aware training and early fusion control as promising directions.
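The two-step "recognize, then reason" strategy can be sketched as two separate model calls, so that recognition and reasoning never have to execute in a single pass. This is a minimal illustration, not the paper's implementation: `query_model` is a hypothetical callable standing in for any chat-style MLLM API, and the prompt wording is assumed.

```python
# Hedged sketch of two-step prompting to work around a task-composition
# bottleneck. `query_model(inputs, prompt)` is a hypothetical stand-in for
# any multimodal chat API and returns the model's text response.

def two_step_prompt(query_model, inputs, question):
    # Step 1: recognition only. Ask the model to extract the facts each
    # modality contains, without attempting the question yet.
    facts = query_model(
        inputs,
        "List the facts present in each input (text, image, audio). "
        "Do not answer any question yet.",
    )
    # Step 2: reasoning only. Answer from the extracted facts, which are
    # now plain text, so cross-modal recognition is already done.
    return query_model(
        [facts],
        f"Using only these facts:\n{facts}\n\nAnswer the question: {question}",
    )
```

Any backend that accepts a list of inputs plus an instruction string can be dropped in for `query_model`; the point is only the decomposition into two calls.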
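"Softening" attention in early fusion can be read as flattening the attention distribution in the first few layers so that no single modality's tokens dominate integration. A minimal sketch, assuming temperature scaling of the attention logits; the layer cutoff `n_early` and temperature `tau` are illustrative values, not settings from the paper.

```python
import numpy as np

# Sketch of softened early-fusion attention: divide attention logits by a
# temperature tau > 1 in early layers before the softmax, which flattens
# the distribution and reduces bias toward any one modality's tokens.

def softened_attention(scores, layer_idx, n_early=4, tau=2.0):
    t = tau if layer_idx < n_early else 1.0   # soften only early layers
    z = scores / t
    z = z - z.max(axis=-1, keepdims=True)     # stabilize the exponent
    w = np.exp(z)
    return w / w.sum(axis=-1, keepdims=True)  # rows sum to 1
```

With `tau > 1`, the peak attention weight in early layers shrinks relative to an unscaled softmax, while later layers are left unchanged.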