University of Amsterdam, Institute for Logic, Language and Computation
LLMs often hallucinate, but a simple probe can reveal whether the error stems from misusing the prompt or from faulty internal knowledge, paving the way for targeted mitigation strategies.