The paper reinterprets the softmax classifier of LLMs as an Energy-Based Model (EBM) to track "energy spills" during decoding, which are shown to correlate with model errors. They introduce two training-free metrics, "spilled energy" and "marginalized energy," derived directly from output logits to quantify these energy discrepancies. Experiments across nine benchmarks and various LLMs (LLaMA, Mistral, Gemma, Qwen3) demonstrate competitive hallucination detection and cross-task generalization without requiring any training or probe classifiers.
Forget training probes – a simple energy discrepancy metric derived directly from LLM logits can pinpoint hallucinations with competitive accuracy.
We reinterpret the final Large Language Model (LLM) softmax classifier as an Energy-Based Model (EBM), decomposing the sequence-to-sequence probability chain into multiple interacting EBMs at inference. This principled approach allows us to track "energy spills" during decoding, which we empirically show correlate with factual errors, biases, and failures. Similar to Orgad et al. (2025), our method localizes the exact answer token and subsequently tests for hallucinations. Crucially, however, we achieve this without requiring trained probe classifiers or activation ablations. Instead, we introduce two completely training-free metrics derived directly from output logits: spilled energy, which captures the discrepancy between energy values across consecutive generation steps that should theoretically match, and marginalized energy, which is measurable at a single step. Evaluated on nine benchmarks across state-of-the-art LLMs (including LLaMA, Mistral, and Gemma) and on synthetic algebraic operations (Qwen3), our approach demonstrates robust, competitive hallucination detection and cross-task generalization. Notably, these results hold for both pretrained and instruction-tuned variants without introducing any training overhead. Code available at: github.com/OmnAI-Lab/spilled-energy
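To make the EBM reading of the softmax head concrete, the following is a minimal illustrative sketch, not the paper's exact formulas (those are in the repository above). It assumes the standard identification of a token's energy with its negative logit and of the step's free energy with the negative logsumexp of the logits; the functions `spilled_energy` and `marginalized_energy` below are one plausible reconstruction from the abstract, where the spill is read as the gap between the joint energy of (context, sampled token) at step t and the free energy the model assigns to the extended context at step t+1, which would coincide in an exactly consistent chain of EBMs.

```python
# Illustrative sketch of training-free energy quantities derived from output logits.
# Assumptions (not confirmed by the paper): E(c, y) = -logit_y(c), F(c) = -logsumexp_y logit_y(c),
# "marginalized energy" = step-t free energy, "spilled energy" = |E(c, y_t) - F(c + y_t)|.
import torch


def free_energy(logits: torch.Tensor) -> torch.Tensor:
    """F(c) = -log sum_y exp(logit_y(c)): free energy of the softmax head at one step."""
    return -torch.logsumexp(logits, dim=-1)


def token_energy(logits: torch.Tensor, token_id: int) -> torch.Tensor:
    """E(c, y) = -logit_y(c): energy of one candidate next token."""
    return -logits[..., token_id]


def marginalized_energy(step_logits: torch.Tensor) -> torch.Tensor:
    """Single-step quantity: energy obtained by marginalizing over the vocabulary."""
    return free_energy(step_logits)


def spilled_energy(step_logits: torch.Tensor,
                   next_step_logits: torch.Tensor,
                   sampled_token_id: int) -> torch.Tensor:
    """Discrepancy between two energies that should match across consecutive steps:
    the energy assigned to the sampled token at step t versus the free energy the
    model assigns to the extended context at step t+1."""
    joint_t = token_energy(step_logits, sampled_token_id)   # E(c, y_t), computed at step t
    free_t_plus_1 = free_energy(next_step_logits)           # F(c + y_t), computed at step t+1
    return (joint_t - free_t_plus_1).abs()


if __name__ == "__main__":
    vocab_size = 32_000
    logits_t = torch.randn(vocab_size)    # stand-in for model logits at step t
    logits_t1 = torch.randn(vocab_size)   # stand-in for logits after appending the sampled token
    y_t = int(torch.argmax(logits_t))
    print("marginalized energy:", marginalized_energy(logits_t).item())
    print("spilled energy:", spilled_energy(logits_t, logits_t1, y_t).item())
```

In this reading, both quantities come out of the logits the model already produces during decoding, so scoring a generated answer for hallucination requires no extra training, probes, or forward passes beyond generation itself.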