This paper introduces a Multi-Layer Memory Framework for dialogue agents that decomposes dialogue history into working, episodic, and semantic memory layers. Adaptive retrieval gating and retention regularization are used to control semantic drift and manage context growth. Experiments on LOCOMO, LOCCO, and LoCoMo show the framework improves long-term retention, achieving a 56.90% six-period retention rate and reducing the false-memory rate to 5.1%.
Dialogue agents can now recall what you told them six periods earlier with roughly 57% retention, thanks to a new memory architecture that selectively forgets less important details.
Long-horizon dialogue systems suffer from semantic drift and unstable memory retention across extended sessions. This paper presents a Multi-Layer Memory Framework that decomposes dialogue history into working, episodic, and semantic layers with adaptive retrieval gating and retention regularization. The architecture controls cross-session drift while maintaining bounded context growth and computational efficiency. Experiments on LOCOMO, LOCCO, and LoCoMo show improved performance, achieving a 46.85 Success Rate, 0.618 overall F1 with 0.594 multi-hop F1, and 56.90% six-period retention, while reducing the false-memory rate to 5.1% and context usage to 58.40%. Results confirm enhanced long-term retention and reasoning stability under constrained context budgets.
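To make the layered design concrete, below is a minimal Python sketch of a three-layer memory with an adaptive retrieval gate, written from the abstract's description only. All names (`LayeredMemory`, `gate_threshold`, `promote_fact`), the bag-of-words relevance score, and the consolidation rule are illustrative assumptions, not the paper's actual implementation or its retention-regularization objective.

```python
from collections import deque, Counter
from dataclasses import dataclass, field
import math


def bow(text: str) -> Counter:
    """Bag-of-words vector; a stand-in for whatever embedding the paper uses."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


@dataclass
class LayeredMemory:
    """Hypothetical three-layer store: working (recent turns), episodic
    (older turns / session records), semantic (distilled long-term facts)."""
    working_capacity: int = 8        # bounds context growth from recent turns
    gate_threshold: float = 0.2      # retrieval gate: skip layers with low relevance
    working: deque = field(default_factory=deque)
    episodic: list = field(default_factory=list)
    semantic: list = field(default_factory=list)

    def observe(self, turn: str) -> None:
        """Add a turn; overflow from working memory is consolidated into episodic memory."""
        self.working.append(turn)
        if len(self.working) > self.working_capacity:
            self.episodic.append(self.working.popleft())  # a real system would summarize here

    def promote_fact(self, fact: str) -> None:
        """Distill a stable fact into semantic memory (long-term retention step)."""
        self.semantic.append(fact)

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        """Adaptive retrieval gating: working memory is always in context; episodic and
        semantic layers contribute only when their best match clears the gate threshold."""
        q = bow(query)
        context = list(self.working)
        for layer in (self.episodic, self.semantic):
            scored = sorted(((cosine(q, bow(m)), m) for m in layer), reverse=True)
            if scored and scored[0][0] >= self.gate_threshold:
                context.extend(m for s, m in scored[:k] if s >= self.gate_threshold)
        return context


if __name__ == "__main__":
    mem = LayeredMemory(working_capacity=3)
    for turn in ["I adopted a cat named Miso.", "Work was busy today.",
                 "Miso likes tuna.", "I might travel in June."]:
        mem.observe(turn)
    mem.promote_fact("User has a cat named Miso.")
    print(mem.retrieve("What is my cat's name?"))
```

The gate is what keeps context usage bounded: a query about the user's cat pulls from episodic and semantic layers, while an unrelated query leaves them untouched and the prompt stays near the working-memory budget.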