This paper introduces a new benchmark for evaluating task interference in multimodal LLMs, focusing on how performance degrades when a model switches between tasks within a single conversation. The benchmark systematically varies history-target task pairs across three axes: modality, reasoning, and answer format. Experiments on both open- and closed-source models reveal that task interference is highly directional, with text-to-image transitions causing significantly more performance degradation than image-to-text transitions.
Multimodal LLMs suffer a major performance hit when asked to switch from text-based to image-based tasks mid-conversation, revealing a surprising asymmetry in their ability to handle task interference.
Task interference, the performance degradation caused by task switches within a single conversation, has been studied exclusively in text-only settings despite the growing prevalence of multimodal dialogue systems. We introduce a benchmark for evaluating this phenomenon in multimodal LLMs, covering six tasks across text and vision with systematic variation of history-target pairs along three axes: modality mismatch, reasoning mismatch, and answer format mismatch. Experiments on both open-weight and proprietary models reveal that task interference is highly directional: switching from text-only histories to image-based targets causes severe performance drops, while the reverse transition yields minimal degradation. Interference is further amplified when mismatches co-occur across multiple dimensions, and it is driven most strongly by modality differences, followed by answer format, while shifts in reasoning requirements cause minimal degradation.