Whitening the embedding space of GPT-2-small exposes cluster commitment as the key geometric property separating different types of language model hallucinations.
LLM hallucinations are not random errors; they manifest as distinct, geometrically measurable patterns in token embedding space, opening the door to targeted detection strategies.
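To make "whitening the embedding space" concrete, here is a minimal sketch of ZCA whitening applied to an embedding matrix. The matrix below is a random placeholder standing in for GPT-2-small's token embeddings (real weights would be loaded from the model), and the shapes are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

# Hypothetical stand-in for a (vocab_size, d_model) token embedding matrix.
rng = np.random.default_rng(0)
E = rng.normal(size=(1000, 64))

# Center the embeddings, then build the ZCA whitening transform from the
# eigendecomposition of the covariance matrix.
E_centered = E - E.mean(axis=0)
cov = np.cov(E_centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + 1e-8)) @ eigvecs.T
E_white = E_centered @ W

# After whitening, the covariance is (approximately) the identity, so no
# direction in the space dominates when measuring cluster geometry.
print(np.allclose(np.cov(E_white, rowvar=False), np.eye(64), atol=1e-6))
```

Whitening removes anisotropy in the embedding space, so distance-based cluster statistics (such as how strongly a token commits to a cluster) are not distorted by a few high-variance directions.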