PAPERMIND is introduced as a benchmark for evaluating multimodal LLMs on integrated scientific reasoning across seven domains, covering tasks such as multimodal grounding, experimental interpretation, cross-source reasoning, and critical assessment. The benchmark uses real scientific papers and evaluates models in an agent-oriented manner, moving beyond isolated task evaluations. Experiments with both open- and closed-source models reveal consistent performance gaps across tasks, indicating persistent challenges in integrated scientific reasoning and critique.
Current multimodal LLMs still struggle to integrate information and reason critically when assessed on real scientific papers, despite progress on isolated tasks.
Understanding scientific papers requires more than answering isolated questions or summarizing content. It involves an integrated reasoning process that grounds textual and visual information, interprets experimental evidence, synthesizes information across sources, and critically evaluates scientific claims. However, existing benchmarks typically assess these abilities in isolation, making it difficult to evaluate scientific paper understanding as a unified set of interacting cognitive abilities. In this work, we introduce PAPERMIND, a benchmark designed to evaluate integrated and agent-oriented scientific reasoning over research papers. PAPERMIND is constructed from real scientific papers spanning seven domains: agriculture, biology, chemistry, computer science, medicine, physics, and economics. It comprises four complementary task families that collectively operationalize distinct cognitive facets of scientific paper reasoning: multimodal grounding, experimental interpretation, cross-source evidence reasoning, and critical assessment. By analyzing model behavior across multiple tasks, PAPERMIND enables a diagnostic evaluation of integrated scientific reasoning behaviors that are difficult to assess through isolated task evaluations. Extensive experiments on both open-source and closed-source multimodal LLMs reveal consistent performance gaps across tasks, highlighting persistent challenges in integrated scientific reasoning and critique. Our benchmark and dataset are available at https://github.com/Yanjun-Zhao/PaperMind.
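To make the task structure concrete, below is a minimal sketch of how a harness might iterate over the four task families and score a model's answers. The JSONL layout, the "question"/"figures"/"answer" field names, and the query_model stub are assumptions for illustration only; the actual data format and interface are defined in the PaperMind repository.

```python
# Minimal sketch of an evaluation loop over PAPERMIND's four task families.
# The JSONL layout, the "question"/"figures"/"answer" field names, and the
# query_model stub are illustrative assumptions, not the repository's API.
import json
from pathlib import Path

TASK_FAMILIES = [
    "multimodal_grounding",
    "experimental_interpretation",
    "cross_source_reasoning",
    "critical_assessment",
]

def query_model(question: str, figures: list[str]) -> str:
    """Placeholder for a call to a multimodal LLM (API client or local model)."""
    raise NotImplementedError

def evaluate(data_dir: Path) -> dict[str, float]:
    """Compute per-family exact-match accuracy, assuming one JSONL file per family."""
    scores: dict[str, float] = {}
    for family in TASK_FAMILIES:
        lines = (data_dir / f"{family}.jsonl").read_text().splitlines()
        examples = [json.loads(line) for line in lines]
        # Count answers that exactly match the reference after whitespace stripping.
        correct = sum(
            query_model(ex["question"], ex.get("figures", [])).strip()
            == ex["answer"].strip()
            for ex in examples
        )
        scores[family] = correct / len(examples)
    return scores
```

Reporting per-family scores rather than a single aggregate matches the benchmark's diagnostic aim: gaps between families are what reveal where integrated reasoning breaks down.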