Journal of Beijing University of Posts and Telecommunications

  • EI-indexed core journal

Journal of Beijing University of Posts and Telecommunications, 2024, Vol. 47, Issue (2): 81-89.


Deep reinforcement learning-based power allocation in vehicular edge computing networks

Yunxiao Wang, Hailin Xiao

  • Received: 2023-03-01  Revised: 2023-03-27  Online: 2024-04-28  Published: 2024-01-24
  • Contact: Hailin Xiao  E-mail: xhl_xiaohailin@163.com

Abstract: A deep reinforcement learning-based computation offloading and power allocation algorithm is proposed to address the time-varying channels and stochastic task arrivals caused by vehicle mobility in vehicular edge computing environments. First, a three-layer end-edge-cloud orchestrated computing system model based on non-orthogonal multiple access is built for a two-way lane scenario. By jointly considering communication, computing, and caching resources together with vehicle mobility, a joint optimization problem is formulated to minimize the long-term cumulative total system cost, which consists of power consumption and caching latency. Furthermore, in view of the dynamic, time-varying, and stochastic characteristics of vehicular edge computing networks, a decentralized intelligent algorithm based on the deep deterministic policy gradient (DDPG) is proposed to obtain the optimal power allocation. Simulation results show that, compared with conventional baseline algorithms, the proposed algorithm achieves superior performance in reducing the total system cost.
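To illustrate how a DDPG agent can handle the continuous power allocation described above, the following is a minimal sketch of an actor-critic pair and one decision step. It assumes a simplified state of channel gains and task-queue backlogs, a per-link transmit-power action, and a reward equal to the negative weighted system cost; the network sizes, state/action dimensions, power limit P_MAX, and cost weights are illustrative assumptions, not the settings used in the paper.

```python
# Hedged sketch of a DDPG-style actor-critic for power allocation in
# vehicular edge computing. All dimensions and constants below are
# assumptions for illustration, not the paper's configuration.
import torch
import torch.nn as nn

P_MAX = 1.0          # assumed maximum transmit power (W)
STATE_DIM = 6        # e.g., channel gains + task queue lengths (assumption)
ACTION_DIM = 2       # transmit power per offloading link (assumption)

class Actor(nn.Module):
    """Deterministic policy mu(s): maps a state to a power allocation in [0, P_MAX]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Sigmoid(),  # squash to (0, 1)
        )

    def forward(self, state):
        return P_MAX * self.net(state)  # scale to the feasible power range

class Critic(nn.Module):
    """Q(s, a): estimates the long-term (negative) cumulative system cost."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def reward(power, latency, w_p=0.5, w_l=0.5):
    """Negative weighted sum of power consumption and caching latency (assumed weights)."""
    return -(w_p * power + w_l * latency)

# One illustrative decision step: observe the state, query the actor,
# and add Gaussian exploration noise as in standard DDPG.
actor = Actor()
state = torch.randn(1, STATE_DIM)                        # stand-in observation
action = actor(state) + 0.1 * torch.randn(1, ACTION_DIM)
action = action.clamp(0.0, P_MAX)                        # respect the power constraint
print("allocated power:", action.tolist())
```

In a full decentralized training loop, each vehicle would keep its own replay buffer and target networks and update the critic with the temporal-difference error before taking a policy-gradient step on the actor; only the single forward pass is shown here.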

Key words: vehicular edge computing, computation offloading, power allocation, service caching, DDPG
