The paper addresses discrepancies between textual reasoning and visual actions in Multimodal Large Language Models (MLLMs) that use visual tools. The authors introduce Multimodal Agentic Policy Optimization (MAPO), which requires the model to generate textual descriptions of visual content obtained through tool use, and couples the semantic alignment between these descriptions and the actual observations with the task reward in a novel advantage estimation. Experiments demonstrate that MAPO improves performance across multiple visual reasoning benchmarks by reducing gradient variance.
MLLMs can "think" with images, but their actions often don't match their reasoning, and this paper solves that with a new training method that forces them to explain what they see.
Recent advancements in Multimodal Large Language Models (MLLMs) have incentivized models to "think with images" by actively invoking visual tools during multi-turn reasoning. The common Reinforcement Learning (RL) practice of relying on outcome-based rewards ignores the fact that textual plausibility often masks executive failure, meaning that models may exhibit intuitive textual reasoning while executing imprecise or irrelevant visual actions within their agentic reasoning trajectories. This reasoning-action discrepancy introduces noise that accumulates throughout the multi-turn reasoning process, severely degrading the model's multimodal reasoning capabilities and potentially leading to training collapse. In this paper, we introduce Multimodal Agentic Policy Optimization (MAPO), bridging the gap between textual reasoning and visual actions generated by models within their Multimodal Chain-of-Thought (MCoT). Specifically, MAPO requires the model to generate explicit textual descriptions for the visual content obtained via tool usage. We then employ a novel advantage estimation that couples the semantic alignment between these descriptions and the actual observations with the task reward. Theoretical findings are provided to justify the rationale behind MAPO, which inherently reduces the variance of gradients, and extensive experiments demonstrate that our method achieves superior performance across multiple visual reasoning benchmarks.
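The abstract does not specify the exact form of the advantage estimation, but one way to picture the coupling it describes is a group-relative (GRPO-style) sketch in which each trajectory's task reward is modulated by the semantic-alignment score between the model's descriptions and the actual tool observations. The additive coupling, the weight `alpha`, and the function name below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def mapo_advantages(task_rewards, alignment_scores, alpha=0.5):
    """Hypothetical sketch of MAPO-style advantage estimation.

    task_rewards:     per-trajectory outcome rewards (e.g. 0/1 correctness).
    alignment_scores: per-trajectory semantic alignment between generated
                      descriptions and the actual visual observations.
    alpha:            assumed weight on the alignment term.
    """
    task_rewards = np.asarray(task_rewards, dtype=float)
    alignment_scores = np.asarray(alignment_scores, dtype=float)
    # Couple the task reward with reasoning-action alignment.
    coupled = task_rewards + alpha * alignment_scores
    # Group-relative advantage: normalize across the group of trajectories
    # sampled for the same prompt (mean-zero, unit-variance).
    return (coupled - coupled.mean()) / (coupled.std() + 1e-8)

# Four sampled trajectories: two correct, with varying alignment.
adv = mapo_advantages([1.0, 0.0, 1.0, 0.0], [0.9, 0.4, 0.2, 0.8])
```

Intuitively, a correct answer reached through poorly aligned visual actions earns a smaller advantage than one whose intermediate descriptions match the tool observations, which is one plausible mechanism for the variance reduction the abstract claims.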