This paper introduces Behavior-Aware Dual-Channel Preference Learning (BDPL), a framework for heterogeneous sequential recommendation that addresses data sparsity and noise by constructing behavior-aware subgraphs and using a cascade-structured GNN. BDPL employs a preference-level contrastive learning paradigm to model both long-term and short-term user preferences from diverse behaviors. Experiments on three real-world datasets demonstrate that BDPL outperforms state-of-the-art models, indicating improved recommendation accuracy in sparse, heterogeneous environments.
Overcome data sparsity in sequential recommendation with a new framework that learns fine-grained user preferences from diverse behaviors, leading to state-of-the-art performance.
Heterogeneous sequential recommendation (HSR) aims to learn dynamic behavior dependencies from the diverse behaviors of user-item interactions to facilitate precise sequential recommendation. Although many efforts have yielded promising results, modeling heterogeneous behavior data remains challenging. One significant issue is the inherent sparsity of real-world data, which weakens recommendation performance. Although auxiliary behaviors (e.g., clicks) partially alleviate this problem, they inevitably introduce noise, and the sparsity of the target behavior (e.g., purchases) remains unresolved. Additionally, contrastive learning-based augmentation in existing methods often focuses on a single behavior type, overlooking fine-grained user preferences and discarding valuable information. To address these challenges, we design a behavior-aware dual-channel preference learning framework (BDPL). The framework first constructs customized behavior-aware subgraphs to capture personalized behavior transition relationships, then applies a novel cascade-structured graph neural network to aggregate node context information. We then model and enhance user representations through a preference-level contrastive learning paradigm that considers both long-term and short-term preferences. Finally, we fuse the overall preference information with an adaptive gating mechanism to predict the next item the user will interact with under the target behavior. Extensive experiments on three real-world datasets demonstrate the superiority of BDPL over state-of-the-art models.
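The abstract names two components that admit a compact illustration: a contrastive objective that aligns long-term and short-term preference views of the same user, and an adaptive gate that fuses them. The sketch below is an assumption, not BDPL's published formulation: it uses a standard InfoNCE-style loss with in-batch negatives and a sigmoid gate over the concatenated views; the function names, shapes, and temperature are hypothetical.

```python
import numpy as np

def info_nce(long_pref, short_pref, tau=0.2):
    """InfoNCE-style contrastive loss between two preference views.

    Rows of `long_pref` and `short_pref` are paired (same user);
    other rows in the batch serve as negatives. Shapes: (B, d).
    """
    a = long_pref / np.linalg.norm(long_pref, axis=1, keepdims=True)
    b = short_pref / np.linalg.norm(short_pref, axis=1, keepdims=True)
    logits = a @ b.T / tau                             # (B, B) similarities
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                # positives on the diagonal

def gated_fusion(long_pref, short_pref, W, b):
    """Adaptive gating: g = sigmoid([long; short] W + b), elementwise fuse."""
    gate_in = np.concatenate([long_pref, short_pref], axis=1)   # (B, 2d)
    g = 1.0 / (1.0 + np.exp(-(gate_in @ W + b)))                # (B, d) in (0, 1)
    return g * long_pref + (1.0 - g) * short_pref
```

Because the gate is a per-dimension convex combination, each fused coordinate stays between the corresponding long-term and short-term values, so neither preference channel can be fully discarded.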