

Knowledge graph double perception network for recommendation algorithm

HAN Chen, YANG Xingyao, YU Jiong, GUO Liang, HU Haoyu

Citation: HAN Chen, YANG Xingyao, YU Jiong, GUO Liang, HU Haoyu. Knowledge graph double perception network for recommendation algorithm[J]. Microelectronics & Computer, 2022, 39(8): 11-20. doi: 10.19304/J.ISSN1000-7180.2022.0096


doi: 10.19304/J.ISSN1000-7180.2022.0096
Funding:
    National Natural Science Foundation of China 61862060
    National Natural Science Foundation of China 61966035
    National Natural Science Foundation of China 61562086
    Education Department Project of Xinjiang Uygur Autonomous Region XJEDU2016S035
    Doctoral Research Start-up Fund of Xinjiang University BS150257

Article information
    About the authors:

    HAN Chen, male, born in 1997, M.S. candidate. His research interest is recommendation algorithms.

    YU Jiong, male, born in 1964, Ph.D., professor. His research interests include grid computing and parallel computing.

    GUO Liang, male, born in 1997, M.S. candidate. His research interest is recommendation algorithms.

    HU Haoyu, male, born in 1995, M.S. candidate. His research interest is recommendation algorithms.

    Corresponding author:

    YANG Xingyao (corresponding author), male, born in 1984, Ph.D., associate professor. His research interests include recommender systems, big data, and trusted computing. E-mail: yangxy@xju.edu.cn

  • CLC number: TP391


  • Abstract:

    In recent years, recommendation methods that aggregate the auxiliary item information contained in knowledge graphs have achieved excellent results. However, the sources of user information remain relatively limited, and repeated aggregation can leave an item's own features under-expressed or even introduce noise. To address these two issues, a knowledge-graph-based double perception network recommendation algorithm, KGDP, is proposed. First, a subset of items is randomly sampled from the user's interaction records as the user-related items, and the neighbor entities of an item are sampled as its related entities. Then, the sampled user-related items are fused into a user representation by a deep neural network, which enriches the user features, while the item's related entities are aggregated separately. Next, two deep neural networks let the user perceive the item features and the neighbor features respectively, i.e., through nonlinear interaction. Finally, a single-layer perceptron adjusts the output weights of the interaction features to produce the rating prediction. In experiments on two real-world datasets commonly used in recommendation, KGDP improves over the baseline models by 9.2% and 2.4% in AUC, 6.6% and 1.9% in ACC, 7.0% and 1.1% in F1, 28.8% and 6.5% in Precision@N, 4.0% and 23.7% in Recall@N, and 43.3% and 8.4% in F1@N, respectively.
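
As a rough, hypothetical illustration of the pipeline summarized in the abstract, the sketch below strings the described steps together in PyTorch. The module names (user_fusion, perceive_item, perceive_neigh, gate), the mean-based aggregation, and the tensor shapes are assumptions made for readability; this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class KGDPSketch(nn.Module):
    """Illustrative sketch of the KGDP scoring step summarized in the abstract (not the authors' code)."""

    def __init__(self, d: int):
        super().__init__()
        # Fuses the k_u sampled user-related items into a single user feature.
        self.user_fusion = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        # Two "perception" networks: user vs. candidate item, user vs. aggregated neighbor entities.
        self.perceive_item = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU())
        self.perceive_neigh = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU())
        # Single-layer perceptron that weights the two interaction features for the final score.
        self.gate = nn.Linear(2 * d, 1)

    def forward(self, user_items, item_emb, neighbor_entities):
        # user_items:        (batch, k_u, d) embeddings of items sampled from the user's interactions
        # item_emb:          (batch, d)      embedding of the candidate item
        # neighbor_entities: (batch, k_i, d) embeddings of the item's sampled neighbor entities
        user_feat = self.user_fusion(user_items.mean(dim=1))    # enrich the user representation
        neigh_feat = neighbor_entities.mean(dim=1)              # aggregate item neighbors separately
        f_item = self.perceive_item(torch.cat([user_feat, item_emb], dim=-1))      # nonlinear interaction
        f_neigh = self.perceive_neigh(torch.cat([user_feat, neigh_feat], dim=-1))  # nonlinear interaction
        score = self.gate(torch.cat([f_item, f_neigh], dim=-1))
        return torch.sigmoid(score).squeeze(-1)                 # predicted interaction probability
```

The depth of the fusion and perception networks is a tunable choice; the layer counts of the MU and DP networks are what Tables 5 and 6 below vary.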

     

  • Figure 1.  Structure of the deep neural network. Black arrows indicate the forward propagation used to compute the network output.

    Figure 2.  Structure of the KGDP model. Black arrows indicate the forward propagation used to compute the predictions. ⊗ denotes the element-wise inner product, ⊕ denotes vector concatenation, and a third symbol denotes the aggregation operation.

    Figure 3.  Schematic diagram of feature matching. One-way arrows denote feature fusion; two-way arrows denote identical or similar features.

    Figure 4.  Flow chart of the KGDP algorithm.

    Figure 5.  Precision@N, Recall@N, and F1@N of Top-N recommendation on the Book-Crossing dataset.

    Figure 6.  Precision@N, Recall@N, and F1@N of Top-N recommendation on the Last.FM dataset.

    Figure 7.  AUC results for different sampling sizes k-u and k-i.
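
For reference, the Top-N metrics reported in Figures 5 and 6 follow their standard per-user definitions; a minimal sketch is given below (the function name and argument layout are illustrative, not taken from the paper).

```python
def topn_metrics(ranked_items, relevant_items, n):
    """Standard per-user Precision@N, Recall@N and F1@N (illustrative, not the paper's code).

    ranked_items:   items ordered by predicted score, best first
    relevant_items: set of items the user actually interacted with in the test set
    """
    hits = len(set(ranked_items[:n]) & set(relevant_items))
    precision = hits / n
    recall = hits / len(relevant_items) if relevant_items else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```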

    Table 1.  Dataset statistics

                     Book-Crossing   Last.FM
    #User            17,860          1,872
    #Item            14,967          2,445
    #Interactions    139,746         753,772
    #Entities        77,903          182,011
    #Relations       25              60
    #KG Triples      151,500         15,518
    #Density         5.2×10⁻⁴        3.9×10⁻²

    Table 2.  Hyper-parameter settings

    Parameter     Book-Crossing   Last.FM
    k-u           24              16
    k-i           4               8
    d             64              16
    λ             2×10⁻⁵          10⁻⁴
    η             2×10⁻⁴          5×10⁻⁴
    batch size    256             128
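
A minimal way to keep the Table 2 settings together in code is a configuration dictionary like the sketch below; the layout and key names are only illustrative, and λ and η are kept under their symbol names because their exact roles (conventionally the regularization weight and the learning rate) are defined in the paper body.

```python
# Settings copied from Table 2; the dictionary layout and key names are only an illustration.
HYPER_PARAMS = {
    "Book-Crossing": {"k_u": 24, "k_i": 4, "d": 64, "lambda": 2e-5, "eta": 2e-4, "batch_size": 256},
    "Last.FM":       {"k_u": 16, "k_i": 8, "d": 16, "lambda": 1e-4, "eta": 5e-4, "batch_size": 128},
}
```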

    Table 3.  AUC, ACC, and F1 results of CTR prediction

    Model         Book-Crossing              Last.FM
                  AUC      ACC      F1       AUC      ACC      F1
    RippleNet     0.7019   0.6566   0.6441   0.7990   0.7381   0.7309
    KGNN-LS       0.6906   0.6343   0.6421   0.7962   0.7187   0.7101
    KGCN-sum      0.6915   0.6334   0.6424   0.7997   0.7199   0.7135
    KGDP          0.7665   0.6999   0.6889   0.8192   0.7521   0.7391
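
The CTR metrics in Table 3 are standard binary-classification measures; assuming 0/1 click labels and predicted probabilities, they can be computed with scikit-learn as in this illustrative sketch.

```python
from sklearn.metrics import roc_auc_score, accuracy_score, f1_score

def ctr_metrics(y_true, y_prob, threshold=0.5):
    """AUC, ACC and F1 for CTR prediction, given 0/1 labels and predicted probabilities."""
    y_pred = [1 if p >= threshold else 0 for p in y_prob]
    return roc_auc_score(y_true, y_prob), accuracy_score(y_true, y_pred), f1_score(y_true, y_pred)
```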

    Table 4.  AUC results for different embedding dimensions d

    d               16       32       64       128      256
    Book-Crossing   0.7645   0.7657   0.7665   0.7674   0.7662
    Last.FM         0.8134   0.8183   0.8192   0.8187   0.8173

    Table 5.  AUC results for different numbers of MU network layers

    Layers          1        2        3        4
    Book-Crossing   0.7665   0.7641   0.7638   0.7656
    Last.FM         0.8180   0.8192   0.8166   0.8162

    Table 6.  AUC results for different numbers of DP network layers

    Layers          1        2        3        4
    Book-Crossing   0.7674   0.7665   0.7621   0.5000
    Last.FM         0.8192   0.6189   0.5000   0.5000

    Table 7.  AUC results under partial and full regularization

                    Part     All
    Book-Crossing   0.7665   0.7612
    Last.FM         0.8192   0.8152

    Table 8.  AUC results of the different model units

                    Book-Crossing   Last.FM
    KGCN-sum        0.6915          0.7997
    KGDP-MU         0.7599          0.8006
    KGDP-DP         0.7400          0.8002
    KGDP            0.7665          0.8192
  • [1] ZHAO H S, JIA J Y, KOLTUN V. Exploring self-attention for image recognition[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, WA, USA: IEEE, 2020: 10073-10082. DOI: 10.1109/CVPR42600.2020.01009.
    [2] NASSIF A B, SHAHIN I, ATTILI I, et al. Speech recognition using deep neural networks: A systematic review[J]. IEEE Access, 2019, 7: 19143-19165. DOI: 10.1109/ACCESS.2019.2896880.
    [3] KOWSARI K, JAFARI MEIMANDI K, HEIDARYSAFA M, et al. Text classification algorithms: A survey[J]. Information, 2019, 10(4): 150. DOI: 10.3390/info10040150.
    [4] XUE H J, DAI X Y, ZHANG J B, et al. Deep matrix factorization models for recommender systems[C]//Proceedings of the 26th International Joint Conference on Artificial Intelligence. Melbourne, Australia: AAAI Press, 2017: 3203-3209.
    [5] HE X N, LIAO L Z, ZHANG H W, et al. Neural collaborative filtering[C]//Proceedings of the 26th International Conference on World Wide Web. Perth, Australia: International World Wide Web Conferences Steering Committee, 2017: 173-182. DOI: 10.1145/3038912.3052569.
    [6] WANG H W, ZHANG F Z, XIE X, et al. DKN: Deep knowledge-aware network for news recommendation[C]//Proceedings of the 2018 World Wide Web Conference. Lyon, France: International World Wide Web Conferences Steering Committee, 2018: 1835-1844. DOI: 10.1145/3178876.3186175.
    [7] HUANG J, ZHAO W X, DOU H J, et al. Improving sequential recommendation with knowledge-enhanced memory networks[C]//The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval. Ann Arbor, MI, USA: ACM, 2018: 505-514. DOI: 10.1145/3209978.3210017.
    [8] YANG D Q, GUO Z K, WANG Z Y, et al. A knowledge-enhanced deep recommendation framework incorporating GAN-based models[C]//2018 IEEE International Conference on Data Mining (ICDM). Singapore: IEEE, 2018: 1368-1373. DOI: 10.1109/ICDM.2018.00187.
    [9] ZHANG F Z, YUAN N J, LIAN D F, et al. Collaborative knowledge base embedding for recommender systems[C]//Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. San Francisco, California, USA: ACM, 2016: 353-362. DOI: 10.1145/2939672.2939673.
    [10] LI Q Y, TANG X L, WANG T Y, et al. Unifying task-oriented knowledge graph learning and recommendation[J]. IEEE Access, 2019, 7: 115816-115828. DOI: 10.1109/ACCESS.2019.2932466.
    [11] YE Y T, WANG X W, YAO J C, et al. Bayes EMbedding (BEM): Refining representation by integrating knowledge graphs and behavior-specific networks[C]//Proceedings of the 28th ACM International Conference on Information and Knowledge Management. Beijing, China: ACM, 2019: 679-688. DOI: 10.1145/3357384.3358014.
    [12] ZHAO J, ZHOU Z, GUAN Z Y, et al. IntentGC: a scalable graph convolution framework fusing heterogeneous information for recommendation[C]//Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. Anchorage, AK, USA: ACM, 2019: 2347-2357. DOI: 10.1145/3292500.3330686.
    [13] HU B B, SHI C, ZHAO W X, et al. Leveraging meta-path based context for top-N recommendation with a neural co-attention model[C]//Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. London, United Kingdom: ACM, 2018: 1531-1540. DOI: 10.1145/3219819.3219965.
    [14] XIAN Y K, FU Z H, MUTHUKRISHNAN S, et al. Reinforcement knowledge graph reasoning for explainable recommendation[C]//Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. Paris, France: ACM, 2019: 285-294. DOI: 10.1145/3331184.3331203.
    [15] MA W Z, ZHANG M, CAO Y, et al. Jointly learning explainable rules for recommendation with knowledge graph[C]//The World Wide Web Conference. San Francisco, CA, USA: ACM, 2019: 1210-1221. DOI: 10.1145/3308558.3313607.
    [16] WANG H W, ZHANG F Z, ZHAO M, et al. Multi-task feature learning for knowledge graph enhanced recommendation[C]//The World Wide Web Conference. San Francisco, CA, USA: ACM, 2019: 2000-2010. DOI: 10.1145/3308558.3313411.
    [17] CAO Y X, WANG X, HE X N, et al. Unifying knowledge graph learning and recommendation: Towards a better understanding of user preferences[C]//The World Wide Web Conference. San Francisco, CA, USA: ACM, 2019: 151-161. DOI: 10.1145/3308558.3313705.
    [18] XIN X, HE X N, ZHANG Y F, et al. Relational collaborative filtering: Modeling multiple item relations for recommendation[C]//Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. Paris, France: ACM, 2019: 125-134. DOI: 10.1145/3331184.3331188.
    [19] TANG X L, WANG T Y, YANG H Z, et al. AKUPM: Attention-enhanced knowledge-aware user preference model for recommendation[C]//Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. Anchorage, AK, USA: ACM, 2019: 1891-1899. DOI: 10.1145/3292500.3330705.
    [20] WANG H W, ZHANG F Z, WANG J L, et al. RippleNet: Propagating user preferences on the knowledge graph for recommender systems[C]//Proceedings of the 27th ACM International Conference on Information and Knowledge Management. Torino, Italy: ACM, 2018: 417-426. DOI: 10.1145/3269206.3271739.
    [21] KIPF T N, WELLING M. Semi-supervised classification with graph convolutional networks[Z]. arXiv: 1609.02907, 2017.
    [22] WANG H W, ZHAO M, XIE X, et al. Knowledge graph convolutional networks for recommender systems[C]//The World Wide Web Conference. San Francisco, CA, USA: ACM, 2019: 3307-3313. DOI: 10.1145/3308558.3313417.
    [23] WANG H W, ZHANG F Z, ZHANG M D, et al. Knowledge-aware graph neural networks with label smoothness regularization for recommender systems[C]//Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. Anchorage, AK, USA: ACM, 2019: 968-977. DOI: 10.1145/3292500.3330836.
Publication history
  • Received: 2022-02-11
  • Revised: 2022-03-02
  • Published online: 2022-08-15
