The paper introduces PolyChartQA, a new dataset for question answering over multi-chart images sourced from computer science publications, comprising 534 multi-chart images and 2,694 QA pairs. An evaluation of nine state-of-the-art Multimodal Language Models (MLMs) on this dataset shows that they perform substantially worse on human-authored questions than on MLM-generated ones. A prompting method is proposed that improves L-Accuracy by 5.39%.
MLMs struggle to answer human-authored questions about multi-chart images, highlighting a critical gap in their ability to reason about real-world data visualizations.
Charts are widely used to present complex information, and deriving meaningful insights in real-world contexts often requires interpreting multiple related charts together. Yet understanding multi-chart images remains largely unexplored. We introduce PolyChartQA, a mid-scale dataset specifically designed for question answering over multi-chart images. PolyChartQA comprises 534 multi-chart images (2,297 sub-charts in total) sourced from peer-reviewed computer science research publications, paired with 2,694 QA pairs. We evaluate nine state-of-the-art Multimodal Language Models (MLMs) on PolyChartQA across question type, difficulty, question source, and key structural characteristics of the multi-chart images. Our results show a 27.4% drop in LLM-based accuracy (L-Accuracy) on human-authored questions compared to MLM-generated questions, and a 5.39% L-Accuracy gain with our proposed prompting method.
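The abstract's headline metric is LLM-based accuracy (L-Accuracy). The paper's exact judging prompt and judge model are not given here, so the following is only a minimal sketch of how such an LLM-as-judge accuracy is commonly computed: a judge model is asked whether each free-form prediction matches the reference answer, and L-Accuracy is the fraction judged correct. The prompt text, the `judge` callable, and all names below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an LLM-as-judge accuracy ("L-Accuracy") metric.
# A judge LLM decides whether a model's free-form answer matches the
# reference answer; L-Accuracy is the fraction of samples judged correct.
from typing import Callable, Iterable, Tuple

# Assumed judging prompt; the paper's actual wording may differ.
JUDGE_PROMPT = (
    "Question: {question}\n"
    "Reference answer: {gold}\n"
    "Model answer: {prediction}\n"
    "Does the model answer convey the same information as the reference? "
    "Reply with 'yes' or 'no'."
)

def l_accuracy(
    samples: Iterable[Tuple[str, str, str]],  # (question, gold, prediction)
    judge: Callable[[str], str],              # any LLM call that returns text
) -> float:
    """Fraction of predictions the judge marks as matching the reference."""
    verdicts = []
    for question, gold, prediction in samples:
        reply = judge(JUDGE_PROMPT.format(
            question=question, gold=gold, prediction=prediction))
        verdicts.append(reply.strip().lower().startswith("yes"))
    return sum(verdicts) / max(len(verdicts), 1)

if __name__ == "__main__":
    # Toy judge using exact string matching, standing in for a real LLM call.
    def toy_judge(prompt: str) -> str:
        return "yes" if "Reference answer: 42" in prompt and "Model answer: 42" in prompt else "no"

    print(l_accuracy([("What is the peak value?", "42", "42")], toy_judge))  # 1.0
```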