StructMem lets LLMs reason across long conversations without breaking the bank: by learning when to remember and forget intermediate thoughts, it cuts token usage by 70% and lifts reasoning accuracy by 14.8% on long-horizon tasks, while also reducing API calls and improving temporal reasoning.
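StructMem's internals aren't described here, but the core idea of "learning when to remember and forget" can be illustrated with a generic sketch: keep a bounded store of intermediate thoughts, score each one for salience (the scoring function, class names, and budget below are all hypothetical, not StructMem's actual design), and evict the least useful entry when the budget is exceeded, so the prompt replayed to the model stays small.

```python
from dataclasses import dataclass


@dataclass
class MemoryItem:
    text: str       # the intermediate thought to retain
    salience: float # hypothetical importance score in [0, 1]
    step: int       # conversation turn at which it was produced


class SelectiveMemory:
    """Bounded store of intermediate thoughts (illustrative only).

    Keeps at most `budget` items; when full, forgets the one with
    the lowest salience score.
    """

    def __init__(self, budget: int) -> None:
        self.budget = budget
        self.items: list[MemoryItem] = []

    def remember(self, text: str, salience: float, step: int) -> None:
        self.items.append(MemoryItem(text, salience, step))
        if len(self.items) > self.budget:
            # Forget the least salient thought to stay within budget.
            self.items.remove(min(self.items, key=lambda m: m.salience))

    def context(self) -> str:
        # Replay retained thoughts in temporal order, so the model
        # sees a compact, chronologically coherent memory.
        return "\n".join(m.text for m in sorted(self.items, key=lambda m: m.step))


mem = SelectiveMemory(budget=2)
mem.remember("user prefers metric units", salience=0.9, step=0)
mem.remember("small talk about weather", salience=0.1, step=1)
mem.remember("deadline is next Friday", salience=0.5, step=2)
print(mem.context())  # low-salience small talk has been forgotten
```

Feeding only `mem.context()` back into each request, instead of the full transcript, is what bounds token usage as the conversation grows.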