LLMs can be made 20% more reliable by attributing claims to their sources and then verifying them, a strategy that outperforms verification alone.
Multimodal agents can now plan more coherently and solve complex tasks thanks to a new anticipatory reasoning framework that forecasts short-horizon trajectories before acting.
Agentic RAG systems can be made significantly more efficient and accurate simply by adding a contextualization module and de-duplicating retrieved documents at test time.
A unified benchmark reveals the fragmented landscape of RAG security, highlighting vulnerabilities to knowledge-extraction attacks and paving the way for robust defense strategies.
LLM judges exhibit a surprising "blindness" to human-written summaries: the less a machine-generated summary resembles the human reference, the more strongly judges prefer it, calling their reliability as summarization evaluators into question.