This paper introduces a computationally efficient method for quantifying epistemic and aleatoric uncertainty in LLMs using a first-order Taylor expansion and an isotropy assumption on the parameter covariance. The approach estimates epistemic uncertainty as the squared gradient norm and aleatoric uncertainty as the Bernoulli variance, requiring only a single forward-backward pass. Validated against MCMC estimates, the method demonstrates strong correspondence, and its application to question answering reveals that parameter-level uncertainty is most useful for questions involving conflicting plausible answers (TruthfulQA) but not for factual recall (TriviaQA).
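The two approximations can be written compactly. As a sketch in our own notation (not necessarily the paper's symbols), let \(f_\theta(x)\) be the model's scalar point prediction, \(\hat\theta\) the pretrained parameters, and \(\Sigma\) the parameter covariance:

```latex
% First-order Taylor expansion of the prediction around the point estimate:
f_\theta(x) \approx f_{\hat\theta}(x) + \nabla_\theta f_{\hat\theta}(x)^\top (\theta - \hat\theta)
% which gives the epistemic (parameter-induced) variance
\operatorname{Var}_\theta\!\left[f_\theta(x)\right] \approx \nabla_\theta f_{\hat\theta}(x)^\top \, \Sigma \, \nabla_\theta f_{\hat\theta}(x).
% Under the isotropy assumption \Sigma = \sigma^2 I this collapses to the squared gradient norm:
\operatorname{Var}_\theta\!\left[f_\theta(x)\right] \approx \sigma^2 \,\bigl\lVert \nabla_\theta f_{\hat\theta}(x) \bigr\rVert^2,
% while the aleatoric term is the Bernoulli variance of the point prediction:
p\,(1-p), \qquad p = f_{\hat\theta}(x).
```

Both quantities depend only on the point prediction and one gradient, which is why a single forward-backward pass suffices.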
Forget ensembles and retraining: estimate LLM uncertainty with just a single forward-backward pass by assuming parameter covariance isotropy.
Existing methods for quantifying predictive uncertainty in neural networks are either computationally intractable for large language models or require access to training data that is typically unavailable. We derive a lightweight alternative through two approximations: a first-order Taylor expansion that expresses predictive uncertainty in terms of the gradient of the prediction and the parameter covariance, and an isotropy assumption on that covariance. Together, these yield epistemic uncertainty as the squared gradient norm and aleatoric uncertainty as the Bernoulli variance of the point prediction, both obtained from a single forward-backward pass through an unmodified pretrained model. We justify the isotropy assumption by showing that covariance estimates built from non-training data introduce structured distortions that an isotropic covariance avoids, and that theoretical results on the spectral properties of large networks support the approximation at scale. Validation against reference Markov chain Monte Carlo estimates on synthetic problems shows strong correspondence that improves with model size. We then use the estimates to investigate when each uncertainty type carries useful signal for predicting answer correctness in question answering with large language models, revealing a benchmark-dependent divergence: the combined estimate achieves the highest mean AUROC on TruthfulQA, where questions involve genuine conflict between plausible answers, but falls to near chance on TriviaQA's factual recall. This suggests that parameter-level uncertainty captures a fundamentally different signal than self-assessment methods.
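The single forward-backward recipe can be sketched on a toy model. This is a minimal illustration, not the paper's implementation: a logistic unit stands in for the pretrained network, its analytic gradient plays the role of the backward pass, and the function name `uncertainties` and the unit scale factor are our own choices:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def uncertainties(w, x):
    """Toy stand-in for the paper's estimator on a logistic unit p = sigmoid(w . x).

    For this model the gradient of p w.r.t. the parameters w is analytic,
    dp/dw = p(1 - p) x, which substitutes for a backward pass.
    """
    p = float(sigmoid(w @ x))          # "forward pass": point prediction
    grad = p * (1.0 - p) * x           # "backward pass": gradient of p w.r.t. w
    epistemic = float(grad @ grad)     # squared gradient norm (up to a sigma^2 scale)
    aleatoric = p * (1.0 - p)          # Bernoulli variance of the point prediction
    return p, epistemic, aleatoric

w = np.array([0.5, -1.2, 0.3])   # illustrative "parameters"
x = np.array([1.0, 0.4, -2.0])   # illustrative input
p, epi, ale = uncertainties(w, x)
print(f"p={p:.3f}  epistemic={epi:.4f}  aleatoric={ale:.4f}")
```

Note that for a real LLM the gradient would come from autograd over billions of parameters, but the cost is still just one backward pass, which is the point of the method.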