LLMs can be made 20% more reliable by attributing claims to their origins and verifying them, a strategy that beats verification alone.
Agentic RAG systems can be made significantly more efficient and accurate simply by adding a contextualization module and de-duplicating retrieved documents at test time.
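The de-duplication step mentioned above is straightforward to sketch. This toy version (function and variable names are mine, not from the paper) drops repeated retrieved passages by normalized text before they are passed to the generator:

```python
def deduplicate(docs: list[str]) -> list[str]:
    """Drop duplicate passages while preserving retrieval order.

    Passages are compared after collapsing whitespace and lowercasing,
    so trivially re-formatted copies of the same text are removed too.
    """
    seen: set[str] = set()
    unique: list[str] = []
    for doc in docs:
        key = " ".join(doc.split()).lower()  # normalized comparison key
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

retrieved = [
    "Paris is the capital of France.",
    "Paris  is  the capital of France.",  # near-duplicate, extra whitespace
    "The Seine flows through Paris.",
]
print(deduplicate(retrieved))
```

A production system would likely also merge near-duplicates by embedding similarity rather than exact text match; the exact-match version is just the minimal illustration of the idea.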
AI can now generate hour-long videos with consistent characters and backgrounds, thanks to a new framework that handles seamless transitions between shots.
LLM judges exhibit a surprising "blindness" to human-written summaries: the less a machine-generated summary resembles the human reference, the more the judges prefer it, calling their reliability for summarization evaluation into question.