MaterialFigBench, a new benchmark dataset, is introduced to evaluate multimodal LLMs on university-level materials science problems requiring figure interpretation. The dataset comprises 137 free-response questions adapted from textbooks, covering topics like phase diagrams and stress-strain curves, with expert-defined answer ranges to account for numerical ambiguity. Evaluation of state-of-the-art models like ChatGPT reveals that while performance improves with updates, LLMs still struggle with visual understanding and quantitative interpretation, often relying on memorized knowledge instead.
Multimodal LLMs can ace materials science problems, but MaterialFigBench reveals they're often just memorizing answers instead of actually "seeing" the figures.
We present MaterialFigBench, a benchmark dataset designed to evaluate the ability of multimodal large language models (LLMs) to solve university-level materials science problems that require accurate interpretation of figures. Unlike existing benchmarks that primarily rely on textual representations, MaterialFigBench focuses on problems in which figures such as phase diagrams, stress-strain curves, Arrhenius plots, diffraction patterns, and microstructural schematics are indispensable for deriving correct answers. The dataset consists of 137 free-response problems adapted from standard materials science textbooks, covering a broad range of topics including crystal structures, mechanical properties, diffusion, phase diagrams, phase transformations, and electronic properties of materials. To address the unavoidable ambiguity in reading numerical values from images, expert-defined answer ranges are provided where appropriate. We evaluate several state-of-the-art multimodal LLMs, including ChatGPT and GPT models accessed via the OpenAI API, and analyze their performance across problem categories and model versions. The results reveal that, although overall accuracy improves with model updates, current LLMs still struggle with genuine visual understanding and quantitative interpretation of materials science figures. In many cases, correct answers are obtained by relying on memorized domain knowledge rather than by reading the provided images. MaterialFigBench highlights persistent weaknesses in visual reasoning, numerical precision, and significant-digit handling, while also identifying problem types where performance has improved. This benchmark provides a systematic and domain-specific foundation for advancing multimodal reasoning capabilities in materials science and for guiding the development of future LLMs with stronger figure-based understanding.
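As an illustration of the kind of evaluation the abstract describes, the sketch below sends one figure-based question to a multimodal model through the OpenAI Python client and scores the reply against an expert-defined answer range. This is not the authors' released harness: the model name, file path, helper functions, and the example tolerance band are assumptions made purely for demonstration.

```python
# Minimal sketch of a figure-based evaluation loop, assuming the OpenAI Python
# SDK (v1) and an OPENAI_API_KEY in the environment. Question text, image path,
# and the expert range below are hypothetical placeholders.
import base64
import re

from openai import OpenAI

client = OpenAI()


def ask_with_figure(question: str, image_path: str, model: str = "gpt-4o") -> str:
    """Send a materials science question plus its figure to a multimodal model."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content


def within_expert_range(answer_text: str, low: float, high: float) -> bool:
    """Check whether the last number in the model's free-response answer
    falls inside the expert-defined range [low, high]."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", answer_text)
    return bool(numbers) and low <= float(numbers[-1]) <= high


# Hypothetical usage: a phase-diagram reading question with a tolerance band.
reply = ask_with_figure(
    "Using the attached phase diagram, estimate the eutectic temperature in °C.",
    "figures/eutectic_phase_diagram.png",
)
print(reply)
print("Within expert range:", within_expert_range(reply, 178, 188))
```

Scoring against a range rather than an exact value mirrors the benchmark's handling of numerical ambiguity when values must be read off a plotted curve or diagram.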