Smaller language models can now achieve state-of-the-art claim verification performance by jointly optimizing for decomposition quality and verification accuracy using a novel reinforcement learning approach.
LLM hallucinations aren't just about the model: query complexity, ambiguity, and grounding are strong predictors of when models go off the rails.
Stop relying on static query rewrites for hallucination mitigation: QueryBandits shows that adaptively selecting rewrites based on semantic features slashes hallucination rates in closed-source LLMs, beating fixed strategies by up to 60%.