LLM hallucinations aren't just about the model – query complexity, ambiguity, and grounding are strong predictors of when models go off the rails.
Stop relying on static query rewrites for hallucination mitigation: QueryBandits shows that adaptively selecting rewrites based on semantic features slashes hallucination rates in closed-source LLMs, beating fixed strategies by up to 60%.
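To make the "adaptive selection" idea concrete, here is a minimal contextual-bandit sketch in the LinUCB style: each arm is a rewrite strategy, the context is a vector of per-query semantic features, and the reward signals whether the answer avoided a hallucination. The strategy names, feature vector, and reward definition below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class LinUCBRewriteSelector:
    """LinUCB contextual bandit that picks a query-rewrite strategy
    from per-query semantic features. Arm names and features are
    illustrative, not QueryBandits' exact setup."""

    def __init__(self, strategies, n_features, alpha=1.0):
        self.strategies = strategies          # arm labels (rewrite styles)
        self.alpha = alpha                    # exploration width
        # Per-arm ridge-regression state: A = X^T X + I, b = X^T r
        self.A = [np.eye(n_features) for _ in strategies]
        self.b = [np.zeros(n_features) for _ in strategies]

    def select(self, x):
        """Return the index of the strategy with the highest UCB score
        for feature vector x (e.g. complexity, ambiguity, grounding)."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                 # per-arm reward estimate
            ucb = theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)
            scores.append(ucb)
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        """Reward is assumed to be 1 if the rewritten query produced a
        non-hallucinated answer (per some grader), else 0."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Usage sketch; strategies and features are hypothetical examples.
bandit = LinUCBRewriteSelector(
    ["paraphrase", "decompose", "add_context", "no_rewrite"], n_features=3)
x = np.array([0.8, 0.3, 0.1])   # [complexity, ambiguity, grounding]
arm = bandit.select(x)
# ... send the rewritten query to the LLM, grade the answer ...
bandit.update(arm, x, reward=1.0)
```

The key design point is that a fixed rewrite strategy ignores the context vector entirely, while the bandit learns a separate reward model per strategy and explores where its estimates are uncertain, which is what lets an adaptive policy beat any single static rewrite.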