DeltaMem is a single-agent reinforcement learning framework for persona-centric memory management in conversational AI, addressing limitations of multi-agent systems such as information loss and fragility. The authors synthesize a user-assistant dialogue dataset with operation-level memory update labels and introduce a Memory-based Levenshtein Distance to formalize the memory-updating reward. Experiments demonstrate that both the training-free and RL-trained variants of DeltaMem outperform product-level baselines on long-term memory benchmarks.
Forget multi-agent complexity: a single RL agent can outperform product-level baselines in persona-centric memory management for conversational AI.
Recent advances in persona-centric memory have demonstrated the strong capability of multi-agent systems in managing persona memory, especially in conversational scenarios. However, these complex frameworks often suffer from information loss and are fragile across varying scenarios, resulting in suboptimal performance. In this paper, we propose DeltaMem, an agentic memory management system that formulates persona-centric memory management as an end-to-end task within a single-agent setting. To further improve the performance of our agentic memory manager, we draw inspiration from the evolution of human memory and synthesize a user-assistant dialogue dataset along with corresponding operation-level memory-updating labels. Building on this, we introduce a novel Memory-based Levenshtein Distance to formalize the memory-updating reward, and propose a tailored reinforcement learning framework to further enhance the management capabilities of DeltaMem. Extensive experiments show that both training-free and RL-trained DeltaMem outperform all product-level baselines across diverse long-term memory benchmarks, including LoCoMo, HaluMem, and PersonaMem.
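The abstract does not spell out how the Memory-based Levenshtein Distance is computed, but one plausible reading is sketched below: treat a memory as a sequence of entries, measure the classic edit distance between the agent's updated memory and a reference memory, and negate or normalize it into a reward. The function names (`levenshtein`, `memory_reward`) and the normalization scheme are illustrative assumptions, not the paper's definition.

```python
# Illustrative sketch only: the paper's exact Memory-based Levenshtein
# Distance is not specified here. We model a memory as a list of entries
# and score an update by edit distance to a reference memory.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance over two sequences."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j  # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete
                           dp[i][j - 1] + 1,        # insert
                           dp[i - 1][j - 1] + cost) # substitute
    return dp[m][n]

def memory_reward(predicted, reference):
    """Normalized reward in [0, 1]: 1.0 when memories match exactly.
    The normalization by the longer memory is an assumption."""
    d = levenshtein(predicted, reference)
    return 1.0 - d / max(len(predicted), len(reference), 1)

pred = ["likes coffee", "lives in Paris", "has a dog"]
gold = ["likes coffee", "lives in Lyon", "has a dog"]
print(memory_reward(pred, gold))  # one substitution among three entries
```

Under this reading, an RL trainer would use the reward to score each sequence of memory operations (add, update, delete) the agent emits, pushing the updated memory toward the reference state.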