Long-form LLM factuality gets a boost: claim-response entailment and uncertainty-aware decoding are surprisingly effective at detecting hallucinations, outperforming more complex methods.
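The uncertainty-aware side of this can be sketched very simply: score a generated claim by the mean negative log-probability of its tokens and flag high-uncertainty claims as likely hallucinations. The function names, probabilities, and threshold below are all illustrative assumptions, not the paper's actual method.

```python
import math

def uncertainty_score(token_probs):
    """Mean negative log-probability of the generated tokens.

    Higher values mean the model was less certain while decoding,
    a common proxy for hallucination risk.
    """
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

def flag_hallucination(token_probs, threshold=1.0):
    # The threshold is hypothetical; in practice it is tuned on
    # a labeled development set.
    return uncertainty_score(token_probs) > threshold

# Hypothetical per-token probabilities for two generated claims.
confident = [0.9, 0.95, 0.85, 0.9]
uncertain = [0.3, 0.2, 0.5, 0.25]
print(flag_hallucination(confident))  # → False
print(flag_hallucination(uncertain))  # → True
```

The entailment half of the approach would pair each extracted claim with the model's own response and check whether the response entails the claim, typically with an off-the-shelf NLI model; the appeal of both signals is that they need no external knowledge base.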