This paper analyzes and compares two approaches for correcting bias in LLM-as-a-judge evaluations: direct measurement error correction and surrogate-outcome approaches like prediction-powered inference (PPI). The authors derive efficient influence function (EIF)-based estimators to unify these approaches within a semiparametric efficiency framework, and characterize conditions where PPI-style estimators achieve lower asymptotic variance than measurement-error corrections. Empirical results on simulations and real-world data validate the theoretical findings, demonstrating the practical benefits of PPI under certain conditions.
Prediction-powered inference can beat direct error correction when using LLMs as judges, offering a more statistically efficient way to debias evaluation scores.
Large language models (LLMs) are increasingly used as automatic evaluators of generative AI outputs, a paradigm often referred to as "LLM-as-a-judge." In practice, LLM judges are imperfect predictors of the underlying truth and can exhibit systematic, non-random errors. Two main approaches have recently been proposed to address this issue: (i) direct measurement-error correction based on misclassification models such as Rogan-Gladen-style estimators, and (ii) surrogate-outcome approaches such as prediction-powered inference (PPI), which correct bias by calibrating prediction residuals on a small set of gold-standard human labels. In this paper, we systematically study the performance of these two approaches for estimating mean parameters (e.g., average benchmark scores or pairwise win rates). Leveraging tools from semiparametric efficiency theory, we unify the two classes of estimators by deriving explicit forms of efficient influence function (EIF)-based efficient estimators and characterize conditions under which PPI-style estimators attain strictly smaller asymptotic variance than measurement-error corrections. We verify our theoretical results in simulations and demonstrate the methods on real-data examples. We provide an implementation of the benchmarked methods and comparison utilities at https://github.com/yiqunchen/debias-llm-as-a-judge.
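To make the two debiasing strategies concrete, the sketch below simulates a biased binary LLM judge and compares a naive plug-in mean, a Rogan-Gladen measurement-error correction, and a PPI-style residual-calibrated estimator. This is an illustrative toy, not the paper's implementation (which lives in the linked repository); the sensitivity/specificity values, sample sizes, and the assumption that the judge's error rates are known to the Rogan-Gladen estimator are all hypothetical choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: binary gold labels Y and a systematically biased judge.
N, n = 20000, 500           # large unlabeled set, small gold-labeled set
true_mean = 0.6             # target mean parameter (e.g., a win rate)
sens, spec = 0.9, 0.8       # judge sensitivity/specificity (assumed known here)

def judge(y, rng):
    # Flip positives with prob. 1 - sens and negatives with prob. 1 - spec,
    # producing non-random, label-dependent errors.
    flip = np.where(y == 1, rng.random(y.size) > sens, rng.random(y.size) > spec)
    return np.where(flip, 1 - y, y)

Y_unl = rng.binomial(1, true_mean, N)   # gold labels, unobserved in practice
Yhat_unl = judge(Y_unl, rng)            # judge scores on the unlabeled set
Y_lab = rng.binomial(1, true_mean, n)   # small human-labeled calibration set
Yhat_lab = judge(Y_lab, rng)            # judge scores on the same items

# Naive plug-in: biased toward E[Yhat] rather than E[Y].
naive = Yhat_unl.mean()

# (i) Rogan-Gladen correction via the misclassification model:
#     E[Yhat] = sens * p + (1 - spec) * (1 - p)  =>  solve for p.
rg = (Yhat_unl.mean() + spec - 1) / (sens + spec - 1)

# (ii) PPI-style estimator: judge mean on unlabeled data plus the mean
#      prediction residual estimated on the gold-labeled set.
ppi = Yhat_unl.mean() + (Y_lab - Yhat_lab).mean()
```

Here E[Yhat] = 0.9(0.6) + 0.2(0.4) = 0.62, so the naive estimate is biased upward, while both corrections recover a value near 0.6; the paper's contribution is characterizing when the PPI-style estimator has the smaller asymptotic variance of the two.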