The paper introduces Memora, a new benchmark for evaluating long-term memory in personalized agents across weeks-to-months-long conversations, focusing on remembering, reasoning, and recommending tasks. It also introduces the Forgetting-Aware Memory Accuracy (FAMA) metric to penalize the use of outdated information. Experiments on LLMs and memory agents using Memora reveal that agents frequently reuse invalid memories and struggle to reconcile evolving information, highlighting limitations in current long-term memory approaches.
LLMs and memory agents struggle to keep memories fresh over weeks-long conversations, often relying on outdated information despite updates.
Personalized agents that interact with users over long periods must maintain persistent memory across sessions and update it as circumstances change. However, existing benchmarks predominantly frame long-term memory evaluation as fact retrieval from past conversations, providing limited insight into agents' ability to consolidate memory over time or handle frequent knowledge updates. We introduce Memora, a long-term memory benchmark spanning weeks- to months-long user conversations. The benchmark evaluates three memory-grounded tasks: remembering, reasoning, and recommending. To ensure data quality, we employ automated memory-grounding checks and human evaluation. We further introduce Forgetting-Aware Memory Accuracy (FAMA), a metric that penalizes reliance on obsolete or invalidated memory when evaluating long-term memory. Evaluations of four LLMs and six memory agents reveal frequent reuse of invalid memories and failures to reconcile evolving memories. Memory agents offer only marginal improvements, exposing shortcomings in long-term memory for personalized agents.