The paper introduces ExStrucTiny, a new benchmark dataset designed to evaluate the structured information extraction capabilities of Vision Language Models (VLMs) across diverse document types and flexible schemas. The dataset unifies key entity extraction, relation extraction, and visual question answering tasks, addressing the narrow entity ontologies and homogeneous document types that limit existing benchmarks. Experiments with open and closed VLMs on ExStrucTiny reveal challenges in schema adaptation, query under-specification, and answer localization, highlighting directions for future research.
VLMs struggle with structured information extraction from documents when faced with diverse schemas and underspecified queries, as revealed by the new ExStrucTiny benchmark.
Enterprise documents, such as forms and reports, embed critical information for downstream applications like data archiving, automated workflows, and analytics. Although generalist Vision Language Models (VLMs) perform well on established document understanding benchmarks, their ability to conduct holistic, fine-grained structured extraction across diverse document types and flexible schemas remains understudied. Existing Key Entity Extraction (KEE), Relation Extraction (RE), and Visual Question Answering (VQA) datasets are limited by narrow entity ontologies, simple queries, or homogeneous document types, and often overlook the need for adaptable, structured extraction. To address these gaps, we introduce ExStrucTiny, a new benchmark for structured Information Extraction (IE) from document images that unifies aspects of KEE, RE, and VQA. Built through a novel pipeline combining manually annotated and human-validated synthetic samples, ExStrucTiny covers a broader range of document types and extraction scenarios than prior resources. We analyze open and closed VLMs on this benchmark, highlighting challenges such as schema adaptation, query under-specification, and answer localization. We hope our work provides a foundation for improving generalist models for structured IE in documents.
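To make the task concrete, here is a minimal sketch of the kind of schema-guided extraction query such a benchmark poses. The schema contents, the prompt wording, and the `query_vlm` helper are illustrative assumptions for this sketch, not ExStrucTiny's actual schema format or evaluation API.

```python
import json

# Illustrative field schema. ExStrucTiny's actual schema format is not
# shown here; this is an assumed example of a "flexible schema".
schema = {
    "invoice_number": "string",
    "total_amount": "string (number with currency)",
    "line_items": [{"description": "string", "quantity": "integer"}],
}

def build_prompt(schema: dict) -> str:
    """Render a field schema as an extraction instruction for a VLM."""
    return (
        "Extract the fields below from the document image and return valid "
        "JSON matching this schema. Use null for any field that is absent "
        "or cannot be localized in the image:\n"
        + json.dumps(schema, indent=2)
    )

# `query_vlm` is a hypothetical stand-in for any vision-language model call
# (e.g., an API client that accepts an image plus a text prompt).
# raw_output = query_vlm(image="invoice.png", prompt=build_prompt(schema))
# record = json.loads(raw_output)  # parse the model's structured answer
```

Evaluating models against queries like this exercises exactly the failure modes the abstract names: the model must adapt its output structure to an unseen schema, resolve under-specified field names, and localize the answers in the document image.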