This paper investigates the use of LLMs as evaluators of natural language explanations for time series data, addressing the challenge of assessing factual correctness without ground-truth references. The authors construct a synthetic benchmark of 350 time series cases with varying explanation correctness levels to evaluate LLMs across explanation generation, ranking, scoring, and anomaly detection. The key finding is that LLMs are more reliable as evaluators than as generators of time series explanations, demonstrating stable performance in ranking and scoring even when their own generated explanations are flawed.
LLMs can reliably judge the correctness of time series explanations, even when their own explanations are wrong, opening the door to reference-free evaluation.
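The benchmark described above pairs each synthetic series with explanations at three correctness levels. A minimal sketch of how one such case (here, a structural-break series) might be generated is shown below; the function name, parameters, and explanation wording are illustrative assumptions, not the paper's actual generator.

```python
import random

def make_structural_break_case(n=100, break_at=60, shift=5.0, seed=0):
    """Generate one illustrative benchmark case: a noisy series with an
    upward level shift, paired with explanations at three correctness
    levels. A sketch only, not the paper's actual data generator."""
    rng = random.Random(seed)
    # Flat noisy segment, then the same noise around a higher level.
    series = [(shift if i >= break_at else 0.0) + rng.gauss(0.0, 0.5)
              for i in range(n)]
    explanations = {
        "correct": f"The level shifts upward by roughly {shift} "
                   f"around index {break_at}.",
        # Right pattern, wrong location: a partially correct explanation.
        "partially_correct": f"The level shifts upward, but near "
                             f"index {break_at - 20}.",
        "incorrect": "The series is stationary with no structural change.",
    }
    return series, explanations
```

Varying the pattern type (seasonal drop, volatility shift, and so on) and the flaw injected into the imperfect explanations would yield the seven query types and three correctness levels the benchmark covers.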
Evaluating the factual correctness of LLM-generated natural language explanations grounded in time series data remains an open challenge. Although modern models generate textual interpretations of numerical signals, existing evaluation methods are limited: reference-based similarity metrics and consistency-checking models require ground-truth explanations, while traditional time series methods operate purely on numerical values and cannot assess free-form textual reasoning. Thus, no general-purpose method exists to directly verify whether an explanation is faithful to the underlying time series data without predefined references or task-specific rules. We study large language models as both generators and evaluators of time series explanations in a reference-free setting: given a time series, a question, and a candidate explanation, the evaluator assigns a ternary correctness label based on pattern identification, numeric accuracy, and answer faithfulness, enabling principled scoring and comparison. To support this, we construct a synthetic benchmark of 350 time series cases across seven query types, each paired with correct, partially correct, and incorrect explanations. We evaluate models across four tasks: explanation generation, relative ranking, independent scoring, and multi-anomaly detection. Results show a clear asymmetry: generation is highly pattern-dependent and exhibits systematic failures on certain query types, with accuracies ranging from 0.00 to 0.12 for Seasonal Drop and Volatility Shift to 0.94 to 0.96 for Structural Break, while evaluation is more stable, with models correctly ranking and scoring explanations even when their own outputs are incorrect. These findings demonstrate the feasibility of data-grounded, LLM-based evaluation for time series explanations and highlight the potential of LLMs as reliable evaluators of data-grounded reasoning in the time series domain.
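The reference-free evaluation protocol described in the abstract can be sketched as a prompt builder plus a ternary label parser. The prompt wording, label names, and parsing heuristic below are illustrative assumptions (the paper's exact prompts are not reproduced here), and the actual LLM call is left out.

```python
# Check longer/more specific labels first so "correct" does not
# falsely match inside "partially correct" or "incorrect".
LABELS = ("partially_correct", "incorrect", "correct")

def build_eval_prompt(series, question, explanation):
    """Assemble a prompt that grounds the judge in the raw values.
    Wording is a hypothetical stand-in for the paper's prompts."""
    values = ", ".join(f"{v:.2f}" for v in series)
    return (
        "You are judging a natural-language explanation of a time series.\n"
        f"Series: [{values}]\n"
        f"Question: {question}\n"
        f"Candidate explanation: {explanation}\n"
        "Check pattern identification, numeric accuracy, and answer "
        "faithfulness, then reply with exactly one label: "
        "correct, partially_correct, or incorrect."
    )

def parse_label(response):
    """Map a free-form judge response to a ternary correctness label
    via a simple substring heuristic (a sketch, not robust parsing)."""
    text = response.strip().lower()
    for label in LABELS:
        if label in text or label.replace("_", " ") in text:
            return label
    return "incorrect"  # conservative fallback for unparseable output
```

In this setup, independent scoring maps each explanation to a label on its own, while relative ranking compares the labels (or finer-grained scores) assigned to competing explanations of the same series.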