The paper introduces STAR, a framework for predicting large language model performance from limited data by combining statistical methods with agentic reasoning. STAR uses specialized retrievers for external knowledge and embeds semantic features into Constrained Probabilistic Matrix Factorization (CPMF) to generate statistical expectations with uncertainty. A reasoning module based on Expectation Violation Theory (EVT) then refines these predictions, achieving a 14.46% improvement over statistical baselines under extreme data sparsity.
LLMs' performance can now be predicted with a 14.46% gain in total score over the strongest statistical baseline, even from only one or two observed scores per model, by blending statistical priors with agentic reasoning.
As comprehensive large model evaluation becomes prohibitively expensive, predicting model performance from limited observations has become essential. However, existing statistical methods struggle with pattern shifts, data sparsity, and lack of explanation, while pure LLM methods remain unreliable. We propose STAR, a framework that bridges data-driven STatistical expectations with knowledge-driven Agentic Reasoning. STAR leverages specialized retrievers to gather external knowledge and embeds semantic features into Constrained Probabilistic Matrix Factorization (CPMF) to generate statistical expectations with uncertainty. A reasoning module guided by Expectation Violation Theory (EVT) then refines predictions through intra-family analysis, cross-model comparison, and credibility-aware aggregation, producing adjustments with traceable explanations. Extensive experiments show that STAR consistently outperforms all baselines on both score-based and rank-based metrics, delivering a 14.46% gain in total score over the strongest statistical method under extreme sparsity, with only 1–2 observed scores per test model.
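To make the statistical half of the pipeline concrete, here is a minimal sketch of plain probabilistic matrix factorization over a sparse model-by-benchmark score matrix. It is not the paper's CPMF (no semantic-feature constraints, retrievers, or calibrated uncertainty) and all data and hyperparameters are illustrative; it only shows the core idea of filling in unobserved scores from low-rank structure in the observed ones.

```python
import numpy as np

# Toy score matrix: rows = test models, cols = benchmarks; NaN = unobserved.
# Values and shapes are made up for illustration.
R = np.array([
    [0.8, 0.6, np.nan],
    [0.7, np.nan, 0.5],
    [np.nan, 0.4, 0.3],
])
mask = ~np.isnan(R)

rng = np.random.default_rng(0)
k, lam, lr = 2, 0.1, 0.05                        # latent dim, L2 strength, step size
U = 0.1 * rng.standard_normal((R.shape[0], k))   # latent factors per model
V = 0.1 * rng.standard_normal((R.shape[1], k))   # latent factors per benchmark

# MAP estimation by gradient ascent on the observed entries only,
# with Gaussian priors on the factors (the L2 terms).
for _ in range(2000):
    E = np.where(mask, R - U @ V.T, 0.0)   # residual, zeroed where unobserved
    U += lr * (E @ V - lam * U)
    V += lr * (E.T @ U - lam * V)

pred = U @ V.T   # statistical expectations for every (model, benchmark) pair
```

The paper's CPMF additionally constrains these latent factors with retrieved semantic features and reports predictive uncertainty, which the EVT-guided reasoning module then uses to decide how far to adjust each expectation.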