Abstract: Graph contrastive learning-based recommender systems face the following key challenges. First, noise in interaction data misleads message propagation in graph neural networks. Second, traditional structural augmentations (e.g., randomly dropping edges or nodes) easily damage graph semantics, while feature augmentations ignore differences among nodes. Finally, the Bayesian Personalized Ranking (BPR) loss depends on the quality of negative sampling and is therefore susceptible to popularity bias. To address these problems, this paper proposes a Graph Denoising with Dual-Graph Contrastive Learning recommendation algorithm (GDDCL). First, unlike existing denoising methods based on sample re-weighting, GDDCL computes edge cleanliness scores through a denoised graph generation module and refines the denoised graph with contrastive learning, systematically producing a high-quality denoised graph structure that provides cleaner inputs for downstream tasks. Second, compared with traditional augmentation-based contrastive learning methods, GDDCL adopts a momentum-updated dual-encoder mechanism that generates stable and diverse contrastive views without explicit augmentation, avoiding semantic damage. In addition, GDDCL adopts alignment and uniformity as optimization objectives, avoiding negative-sampling bias and accelerating convergence. Experiments on multiple real-world datasets show that GDDCL outperforms baseline methods in both recommendation effectiveness and accuracy.
Keywords: Collaborative Filtering; Contrastive Learning; Structure Denoising; Graph Neural Network
|
CLC Number:
Document Code:
|
Funding: Shandong Provincial Natural Science Foundation project "Research on Multi-View Deep Learning Recommendation Algorithms for E-commerce Platforms" (Grant No. ZR2022MF334, January 2023 to December 2025), Shandong Provincial Department of Science and Technology; principal investigator.
|
| Graph Denoising with Dual-Graph Contrastive Learning for Recommendation |
|
ZHANG Ruikai, YUAN Weihua, WANG Shaohua, MENG Guangting, WANG Guikai, DU Ruoqi
|
Shandong Jianzhu University
|
Abstract: Graph contrastive learning-based recommendation systems face several key challenges. First, noise in interaction data can mislead the message propagation process of graph neural networks. Second, traditional structural augmentations (e.g., random edge/node dropout) easily disrupt the semantic structure of the user-item graph, while feature augmentations ignore node heterogeneity. Third, the Bayesian Personalized Ranking (BPR) loss heavily relies on the quality of negative sampling, making it susceptible to popularity bias. To address these issues, this paper proposes Graph Denoising with Dual-Graph Contrastive Learning for recommendation (GDDCL). First, unlike existing denoising approaches that rely on sample re-weighting based on model prediction discrepancies or loss values, GDDCL introduces a Denoised Graph Generation Module, which computes edge cleanliness scores and refines the denoised graph using contrastive learning. This process systematically produces a high-quality denoised graph structure, thereby providing cleaner graph inputs for downstream tasks. Second, compared to traditional augmentation-based contrastive learning methods, GDDCL employs a momentum-updated dual-encoder mechanism. This design eliminates the need for explicit augmentation strategies and generates stable yet diverse contrastive views without corrupting semantic structures. Moreover, GDDCL adopts alignment and uniformity as optimization objectives, which avoids negative-sampling bias and accelerates model convergence. Extensive experiments on multiple real-world datasets demonstrate that GDDCL outperforms baseline methods in both recommendation effectiveness and accuracy.
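The momentum-updated dual-encoder mentioned in the abstract can be sketched as an exponential moving average (EMA) of encoder parameters, in the style commonly used by momentum-based contrastive methods. This is a minimal illustration, not the paper's implementation; the function name `momentum_update` and the toy parameters are hypothetical.

```python
import numpy as np

def momentum_update(online_params, target_params, m=0.99):
    """EMA update of the target (momentum) encoder from the online encoder:
    target <- m * target + (1 - m) * online, applied parameter-wise.
    NOTE: illustrative sketch; GDDCL's actual update rule may differ."""
    return [m * t + (1.0 - m) * o for o, t in zip(online_params, target_params)]

# Toy example with one weight matrix per encoder.
online = [np.full((2, 2), 2.0)]
target = [np.zeros((2, 2))]
target = momentum_update(online, target, m=0.5)
# With m=0.5 the target weights move halfway toward the online weights.
```

Because the target encoder changes slowly, the two encoders produce distinct but consistent embeddings of the same graph, which can serve as the two contrastive views without any edge or node dropout.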
Keywords: Collaborative Filtering; Contrastive Learning; Structure Denoising; Graph Neural Network
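The alignment and uniformity objectives referenced in the abstract are typically computed over L2-normalized embeddings: alignment pulls positive (user, item) pairs together, while uniformity spreads all embeddings over the unit hypersphere. The sketch below follows that standard formulation; the function names and the temperature `t` are illustrative assumptions, not details from the paper.

```python
import numpy as np

def alignment_loss(z_u, z_i):
    """Mean squared distance between positive (user, item) embedding pairs.
    z_u, z_i: arrays of shape (n, d), assumed L2-normalized row-wise."""
    return np.mean(np.sum((z_u - z_i) ** 2, axis=1))

def uniformity_loss(z, t=2.0):
    """log of the mean Gaussian potential over all distinct embedding pairs;
    lower values mean embeddings are spread more uniformly on the sphere."""
    sq = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)  # pairwise d^2
    iu = np.triu_indices(z.shape[0], k=1)                       # distinct pairs
    return np.log(np.mean(np.exp(-t * sq[iu])))

# Perfectly aligned pair -> alignment loss 0.
za = np.array([[1.0, 0.0]])
# Two antipodal unit vectors -> squared distance 4, uniformity log(exp(-8)) = -8.
zu = np.array([[1.0, 0.0], [-1.0, 0.0]])
```

Optimizing these two terms directly removes the need for BPR-style negative sampling, which is how such objectives sidestep popularity-biased negatives.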