Turns out, you can spot LLM hallucinations just by looking at how tightly their sampled answers cluster in embedding space: a surprisingly effective way to flag bad responses with minimal labeling.
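
The post doesn't spell out the method, but a common instantiation is: sample the same prompt several times at nonzero temperature, embed the answers, and score agreement as mean pairwise cosine similarity. Below is a minimal sketch under that assumption; the function name `cluster_tightness`, the `all-MiniLM-L6-v2` model choice, and the `0.8` threshold are illustrative, not from the post.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumes sentence-transformers is installed

def cluster_tightness(answers: list[str], model: SentenceTransformer) -> float:
    """Mean pairwise cosine similarity of the sampled answers' embeddings.

    High tightness -> samples agree -> likely grounded.
    Low tightness -> samples scatter -> possible hallucination.
    """
    emb = model.encode(answers, normalize_embeddings=True)  # (n, d) unit vectors
    sim = emb @ emb.T                                       # pairwise cosine similarities
    n = len(answers)
    off_diag = sim[~np.eye(n, dtype=bool)]                  # drop self-similarity on the diagonal
    return float(off_diag.mean())

# Hypothetical usage: resample the same prompt, then flag when samples disagree.
model = SentenceTransformer("all-MiniLM-L6-v2")
samples = [
    "Paris is the capital of France.",
    "The capital of France is Paris.",
    "France's capital city is Paris.",
    "The capital of France is Lyon.",
]
threshold = 0.8  # illustrative; calibrating it is where the "minimal labeling" comes in
if cluster_tightness(samples, model) < threshold:
    print("low agreement across samples -- flag as possible hallucination")
```

The appeal of this shape is that the only supervision needed is a small validation set to pick the threshold; everything else is unsupervised resampling and embedding.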