Open-source LLM agents can get a 27% performance boost in tool use by strategically injecting context tailored to address common failure modes.
Scaling team size alone is not enough: smarter memory design lets smaller LLM agent teams outperform larger ones over long horizons.
LLMs can be trained to reason more faithfully and accurately by rewarding reasoning steps that other models can easily follow.
LLMs struggle to consistently use tools in dynamic environments, but a simple input reformulation strategy can boost performance by over 16% compared to standard methods like ReAct.