The paper introduces MemRerank, a preference memory framework that distills user purchase history into query-independent signals to improve personalized product reranking in LLM-based shopping agents. The authors construct an end-to-end benchmark built around a 1-in-5 selection task to evaluate both memory quality and reranking utility, and train the memory extractor with reinforcement learning, using downstream reranking performance as the reward. Experiments show that MemRerank outperforms baselines, achieving up to a 10.61-point absolute improvement in 1-in-5 accuracy across two LLM-based rerankers.
Forget clunky prompt engineering: distilling user history into a learned preference memory boosts LLM-based product reranking accuracy by over 10 points.
LLM-based shopping agents increasingly rely on long purchase histories and multi-turn interactions for personalization, yet naively appending raw history to prompts is often ineffective due to noise, length, and relevance mismatch. We propose MemRerank, a preference memory framework that distills user purchase history into concise, query-independent signals for personalized product reranking. To study this problem, we build an end-to-end benchmark and evaluation framework centered on an LLM-based 1-in-5 selection task, which measures both memory quality and downstream reranking utility. We further train the memory extractor with reinforcement learning (RL), using downstream reranking performance as supervision. Experiments with two LLM-based rerankers show that MemRerank consistently outperforms no-memory, raw-history, and off-the-shelf memory baselines, yielding up to +10.61 absolute points in 1-in-5 accuracy. These results suggest that explicit preference memory is a practical and effective building block for personalization in agentic e-commerce systems.
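To make the evaluation protocol concrete, here is a minimal sketch of how a 1-in-5 selection-accuracy metric could be computed: each example pairs a distilled preference memory with five candidate products (one ground truth plus four distractors), and the reranker must pick one. The function names, the example data, and the toy token-overlap reranker standing in for an LLM-based reranker are all illustrative assumptions, not the paper's actual implementation.

```python
def one_in_five_accuracy(examples, rerank):
    """Fraction of examples where the reranker selects the ground-truth
    candidate out of 5 (1 positive + 4 distractors).

    Each example is (memory, candidates, gold_idx); `rerank` returns the
    index of its chosen candidate. All names here are illustrative.
    """
    correct = 0
    for memory, candidates, gold_idx in examples:
        pick = rerank(memory, candidates)
        correct += int(pick == gold_idx)
    return correct / len(examples)

def toy_rerank(memory, candidates):
    """Toy stand-in for an LLM reranker: score each candidate by word
    overlap with the preference memory and return the argmax index."""
    mem_tokens = set(memory.lower().split())
    scores = [len(mem_tokens & set(c.lower().split())) for c in candidates]
    return max(range(len(candidates)), key=scores.__getitem__)

# Hypothetical evaluation set in the (memory, candidates, gold_idx) format.
examples = [
    ("likes trail running shoes",
     ["trail running shoes", "office chair", "blender", "novel", "lamp"], 0),
    ("buys ceramic coffee cups",
     ["garden hose", "ceramic coffee cup", "tent", "headphones", "socks"], 1),
]
print(one_in_five_accuracy(examples, toy_rerank))  # prints 1.0 on this toy set
```

In this framing, the memory extractor is what RL training would optimize: a better memory string makes the downstream reranker's pick land on the gold candidate more often, and that accuracy serves directly as the reward signal.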