StructMem, a hierarchical memory framework, is introduced to address the limitations of flat and graph-based memory systems in long-term conversational agents. It preserves event-level bindings and induces cross-event connections by temporally anchoring dual perspectives and performing periodic semantic consolidation. Experiments on the LoCoMo benchmark demonstrate that StructMem improves temporal reasoning and multi-hop performance, while also substantially reducing token usage, API calls, and runtime compared to existing memory systems.
LLMs can now reason across long conversations without breaking the bank: StructMem slashes token usage and API calls while boosting temporal reasoning.
Long-term conversational agents need memory systems that capture relationships between events, not merely isolated facts, to support temporal reasoning and multi-hop question answering. Current approaches face a fundamental trade-off: flat memory is efficient but fails to model relational structure, while graph-based memory enables structured reasoning at the cost of expensive and fragile construction. To address this trade-off, we propose StructMem, a structure-enriched hierarchical memory framework that preserves event-level bindings and induces cross-event connections. By temporally anchoring dual perspectives and performing periodic semantic consolidation, StructMem improves temporal reasoning and multi-hop performance on LoCoMo while substantially reducing token usage, API calls, and runtime compared to prior memory systems. Code is available at https://github.com/zjunlp/LightMem.
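To make the described architecture concrete, the following is a minimal Python sketch of a hierarchical event memory under the assumptions that each event stores two temporally anchored perspectives and that consolidation runs over a fixed-size window of recent events. All names here (MemoryEvent, HierarchicalMemory, user_view, agent_view, consolidate_every) are illustrative assumptions, not the paper's actual API, and the linking and consolidation steps are placeholders for the LLM-based components StructMem would use.

```python
from dataclasses import dataclass, field


@dataclass
class MemoryEvent:
    """One conversational event, anchored to a timestamp, with dual
    perspectives (field names are assumptions, not the paper's schema)."""
    timestamp: float
    user_view: str                                   # the user-side perspective
    agent_view: str                                  # the agent-side perspective
    links: list[int] = field(default_factory=list)   # indices of related events


class HierarchicalMemory:
    """Sketch of a two-layer memory: an event store (lower layer) plus
    periodically consolidated semantic summaries (upper layer)."""

    def __init__(self, consolidate_every: int = 10):
        self.events: list[MemoryEvent] = []
        self.summaries: list[str] = []       # higher-level semantic layer
        self.consolidate_every = consolidate_every

    def add(self, event: MemoryEvent) -> None:
        # Placeholder for cross-event connection induction: here we simply
        # link each event to its temporal predecessor; StructMem would use
        # semantic similarity rather than adjacency.
        if self.events:
            event.links.append(len(self.events) - 1)
        self.events.append(event)
        # Periodic semantic consolidation over a fixed window.
        if len(self.events) % self.consolidate_every == 0:
            self._consolidate()

    def _consolidate(self) -> None:
        # Stand-in for LLM-based consolidation: collapse the most recent
        # window of events into one summary entry in the upper layer.
        window = self.events[-self.consolidate_every:]
        self.summaries.append(" | ".join(e.user_view for e in window))
```

Keeping events as first-class entries while summaries accumulate in a separate layer is one plausible way to preserve event-level bindings during consolidation, and batching consolidation by window is consistent with the paper's reported savings in token usage and API calls, since summarization is invoked once per window rather than once per turn.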