LLMs can slash token usage by 80% and "thinking rate" by 95% without sacrificing accuracy, simply by learning when *not* to reason.
Ditch deterministic databases: this LLM-driven simulation framework evaluates tool-calling agents against surprisingly reliable proxy states, offering a scalable alternative to costly hand-built benchmarks.