Journal of Beijing University of Posts and Telecommunications

  • EI-indexed core journal

Journal of Beijing University of Posts and Telecommunications ›› 2023, Vol. 46 ›› Issue (2): 22-28.

• Computing Power Networks and Distributed Cloud •

Content Caching Scheme Based on Federated Learning in Fog Computing Networks

HUANG Xiaoge, WANG Fan, CHEN Zhi, CHEN Qianbin

  1. Chongqing University of Posts and Telecommunications
  • Received: 2022-04-20  Revised: 2022-07-03  Online: 2023-04-28  Published: 2023-05-14
  • Corresponding author: HUANG Xiaoge  E-mail: 19202915@qq.com
  • Supported by:
    Basic Research and Frontier Exploration Project of the Chongqing Science and Technology Commission

Abstract: With the rapid development of Internet of Things technology, explosive growth in end-user service demand poses great challenges to 5G networks. To reduce content acquisition delay, protect user privacy, and improve user experience, a content caching scheme based on federated learning in fog computing networks is proposed. First, a device-to-device (D2D) collaborative fog computing network model is presented, in which users can obtain content from other users, fog nodes (FNs), and the cloud via D2D and wireless links. Second, each user builds a deep neural network model locally and trains it on its own historical request data; the FN then aggregates the local models to predict global content popularity. Meanwhile, personalized content recommendation lists are provided to users to improve the cache hit rate. Finally, simulation results on a real-world dataset show that the proposed scheme effectively reduces content acquisition delay and improves the cache hit rate.

Key words: edge caching, federated learning, content recommendation, fog computing networks
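The federated loop the abstract describes (users train locally on private request histories, the fog node aggregates the models and predicts global popularity, and the top predictions drive caching/recommendation) can be sketched as follows. This is a minimal illustration, not the paper's method: the local deep neural network is replaced by a linear predictor for brevity, aggregation is assumed to be FedAvg-style weighted parameter averaging, and all names (`local_train`, `fedavg`, the data shapes) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_USERS, NUM_CONTENTS, FEAT_DIM = 5, 20, 8

# Shared content features; the "true" popularity is a linear function of them.
content_feats = rng.normal(size=(NUM_CONTENTS, FEAT_DIM))
true_w = rng.normal(size=FEAT_DIM)
popularity = content_feats @ true_w  # ground-truth global popularity scores


def local_train(w, feats, labels, lr=0.05, epochs=20):
    """One user's local update: gradient descent on its own (private) data.
    Only the resulting weights leave the device, never the request history."""
    w = w.copy()
    for _ in range(epochs):
        grad = feats.T @ (feats @ w - labels) / len(labels)
        w -= lr * grad
    return w


def fedavg(models, sizes):
    """FN-side aggregation: average local weights, weighted by data size."""
    return np.average(models, axis=0, weights=np.asarray(sizes, dtype=float))


# Each user privately observes noisy popularity for a random content subset.
user_data = []
for _ in range(NUM_USERS):
    idx = rng.choice(NUM_CONTENTS, size=10, replace=False)
    user_data.append((content_feats[idx],
                      popularity[idx] + 0.1 * rng.normal(size=10)))

# Federated rounds: broadcast global model, train locally, aggregate at the FN.
w_global = np.zeros(FEAT_DIM)
for _ in range(30):
    local_models = [local_train(w_global, X, y) for X, y in user_data]
    w_global = fedavg(local_models, [len(y) for _, y in user_data])

# Predicted global popularity drives what the FN caches / recommends.
pred = content_feats @ w_global
top_k = np.argsort(pred)[::-1][:5]  # cache or recommend the predicted top-5
```

The privacy property the abstract relies on is visible in the flow: raw request data stays inside `local_train`, and only model parameters are exchanged and averaged at the fog node.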

CLC number: