The authors introduce BrowseComp-$V^3$, a new benchmark for evaluating multimodal browsing agents, designed to address limitations in existing benchmarks regarding task complexity, evidence accessibility, and evaluation granularity. This benchmark features 300 challenging questions requiring deep, multi-level, and cross-modal multi-hop reasoning across textual and visual modalities. Experiments using a proposed unified multimodal browsing agent framework, OmniSeeker, show that even state-of-the-art models achieve only 36% accuracy, highlighting significant gaps in multimodal information integration and fine-grained perception.
Current multimodal agents remain surprisingly weak at deep web browsing: even state-of-the-art models achieve only 36% accuracy on a new benchmark designed to test multi-hop, cross-modal reasoning across web pages.
Multimodal large language models (MLLMs), equipped with increasingly advanced planning and tool-use capabilities, are evolving into autonomous agents capable of performing multimodal web browsing and deep search in open-world environments. However, existing benchmarks for multimodal browsing remain limited in task complexity, evidence accessibility, and evaluation granularity, hindering comprehensive and reproducible assessments of deep search capabilities. To address these limitations, we introduce BrowseComp-$V^3$, a novel benchmark consisting of 300 carefully curated and challenging questions spanning diverse domains. The benchmark emphasizes deep, multi-level, and cross-modal multi-hop reasoning, where critical evidence is interleaved across textual and visual modalities within and across web pages. All supporting evidence is strictly required to be publicly searchable, ensuring fairness and reproducibility. Beyond final-answer accuracy, we incorporate an expert-validated, subgoal-driven process evaluation mechanism that enables fine-grained analysis of intermediate reasoning behaviors and systematic characterization of capability boundaries. In addition, we propose OmniSeeker, a unified multimodal browsing agent framework integrating diverse web search and visual perception tools. Comprehensive experiments demonstrate that even state-of-the-art models achieve only 36% accuracy on our benchmark, revealing critical bottlenecks in multimodal information integration and fine-grained perception. Our results highlight a fundamental gap between current model capabilities and robust multimodal deep search in real-world settings.
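The abstract describes the subgoal-driven process evaluation only at a high level. As a rough illustration of what such scoring could look like, the sketch below assumes a hypothetical data schema in which each question carries expert-written subgoals with associated evidence strings that should surface in the agent's intermediate steps; the `Subgoal`, `BenchmarkItem`, and `evaluate_trajectory` names, and the simple string-matching rule, are illustrative assumptions rather than the paper's actual rubric.

```python
"""Minimal sketch of a subgoal-driven process evaluation (hypothetical schema)."""
from dataclasses import dataclass, field


@dataclass
class Subgoal:
    description: str          # e.g. "identify the painting shown in the image"
    evidence_keys: list[str]  # strings expected in the trajectory if this subgoal was solved


@dataclass
class BenchmarkItem:
    question: str
    answer: str
    subgoals: list[Subgoal] = field(default_factory=list)


def evaluate_trajectory(item: BenchmarkItem,
                        trajectory: list[str],
                        final_answer: str) -> dict:
    """Score one agent run: final-answer accuracy plus per-subgoal coverage."""
    transcript = " ".join(trajectory).lower()

    # A subgoal counts as reached if any of its evidence strings appears in the
    # concatenated intermediate steps -- a crude stand-in for the expert- or
    # judge-based matching a real evaluator would use.
    reached = [
        any(key.lower() in transcript for key in sg.evidence_keys)
        for sg in item.subgoals
    ]

    return {
        "answer_correct": final_answer.strip().lower() == item.answer.strip().lower(),
        "subgoal_coverage": sum(reached) / max(len(reached), 1),
        "subgoals_reached": [sg.description for sg, ok in zip(item.subgoals, reached) if ok],
    }


if __name__ == "__main__":
    item = BenchmarkItem(
        question="Which museum holds the painting shown in the linked photo?",
        answer="the Louvre",
        subgoals=[
            Subgoal("identify the painting in the image", ["mona lisa"]),
            Subgoal("find the museum that holds it", ["louvre"]),
        ],
    )
    trajectory = [
        "image_search: the photo matches the Mona Lisa",
        "web_search: the Mona Lisa is held at the Louvre in Paris",
    ]
    print(evaluate_trajectory(item, trajectory, "The Louvre"))
```

This kind of per-subgoal readout is what lets process evaluation separate failures of retrieval (a subgoal never reached) from failures of integration (all subgoals reached but the final answer still wrong).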