Search papers, labs, and topics across Lattice.
Open-source LLM agents can get a 27% performance boost in tool use by strategically injecting context tailored to address common failure modes.
LLMs and memory-augmented agents struggle to keep memories fresh across weeks-long conversations, often relying on outdated information even after it has been updated.
Despite decent question answering, today's multimodal LLMs are surprisingly bad at pinpointing the exact table cells that support their answers, especially in textual formats like JSON.
LLMs struggle to consistently use tools in dynamic environments, but a simple input reformulation strategy can boost performance by over 16% compared to standard methods like ReAct.