The paper introduces a layered framework for generating high-quality synthetic data to address the noise, bias, and incompleteness of real-world user interaction data used for continual pre-training (CPT) of LLMs for recommendation. The authors demonstrate that sequential models trained on this synthetic data outperform those trained on real data by a large margin (+130% on recall@100 for SASRec), indicating improved learning of user preference patterns. Critically, they empirically demonstrate, for the first time, robust power-law scaling for an LLM continually pre-trained on this high-quality, recommendation-specific synthetic data.
Forget noisy user data: synthetic data unlocks predictable scaling laws for LLMs in recommendation, boosting recall@100 by 130%.
Large Language Models (LLMs) represent a promising frontier for recommender systems, yet their development has been impeded by the absence of predictable scaling laws, which are crucial for guiding research and optimizing resource allocation. We hypothesize that this may be attributed to the inherent noise, bias, and incompleteness of the raw user interaction data used in prior continual pre-training (CPT) efforts. This paper introduces a novel, layered framework for generating high-quality synthetic data that circumvents such issues by creating a curated, pedagogical curriculum for the LLM. We provide powerful, direct evidence for the utility of our curriculum by showing that standard sequential models trained on our principled synthetic data significantly outperform ($+130\%$ on recall@100 for SASRec) models trained on real data in downstream ranking tasks, demonstrating its superiority for learning generalizable user preference patterns. Building on this, we empirically demonstrate, for the first time, robust power-law scaling for an LLM that is continually pre-trained on our high-quality, recommendation-specific data. Our experiments reveal consistent and predictable perplexity reduction across multiple synthetic data modalities. These findings establish a foundational methodology for reliably scaling LLM capabilities in the recommendation domain, thereby shifting the research focus from mitigating data deficiencies to leveraging high-quality, structured information.
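The power-law scaling claim can be made concrete with a small sketch: a power law $L(N) = a \cdot N^{-b}$ is linear in log-log space, so fitting a line to $(\log N, \log L)$ recovers the exponent. The numbers below are synthetic placeholders for illustration only, not data from the paper, and the exponent value is an arbitrary assumption.

```python
import numpy as np

# Hypothetical illustration of power-law fitting: perplexity L(N) = a * N^(-b)
# as a function of training-token count N. These values are invented for the
# demo (an idealized curve with a = 12.0, b = 0.05), NOT results from the paper.
tokens = np.array([1e8, 3e8, 1e9, 3e9, 1e10])   # training tokens N
perplexity = 12.0 * tokens ** (-0.05)           # idealized noiseless power law

# In log-log space:  log L = log a - b * log N,
# so an ordinary least-squares line fit recovers both parameters.
slope, intercept = np.polyfit(np.log(tokens), np.log(perplexity), 1)
a, b = np.exp(intercept), -slope

print(f"fitted: L(N) = {a:.2f} * N^(-{b:.3f})")  # recovers a ≈ 12.00, b ≈ 0.050
```

A fit like this (with real perplexity measurements and confidence intervals) is what lets scaling behave as a planning tool: once $(a, b)$ are estimated from small runs, the perplexity of a larger run can be extrapolated before committing compute.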