D2D-Assisted Multi-User Cooperative Partial Offloading in MEC Based on Deep Reinforcement Learning.

Sensors (Basel)

School of Cyberspace Science and Technology, Beijing Institute of Technology, Beijing 100081, China.

Published: September 2022

Mobile edge computing (MEC) and device-to-device (D2D) communication can alleviate the resource constraints of mobile devices and reduce communication latency. In this paper, we construct a D2D-MEC framework and study multi-user cooperative partial offloading and computing resource allocation. We maximize the number of served devices subject to each application's maximum delay constraint and the limited computing resources. In the considered system, each user can offload its tasks to an edge server and to a nearby D2D device. We first formulate the optimization problem, show that it is NP-hard, and then decouple it into two subproblems. A convex optimization method solves the first subproblem, and the second subproblem is modeled as a Markov decision process (MDP). A deep reinforcement learning algorithm based on a deep Q network (DQN) is developed to maximize the number of tasks that the system can compute. Extensive simulation results demonstrate the effectiveness and superiority of the proposed scheme.
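To make the DQN component of the abstract concrete, the following is a minimal sketch of how a deep Q network can drive a discretized partial-offloading decision. It is not the authors' implementation: the state dimension, network sizes, offloading-ratio action space, reward, and the random placeholder environment loop are all assumptions made for illustration.

```python
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 8    # assumed: task size, deadline, channel gains, residual CPU of the MEC server and D2D helper
N_ACTIONS = 11   # assumed: offloading ratio discretized into {0.0, 0.1, ..., 1.0}

class QNet(nn.Module):
    """Q-network mapping the system state to Q-values of the discretized offloading actions."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_ACTIONS),
        )

    def forward(self, x):
        return self.layers(x)

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)
gamma, eps = 0.99, 0.1

def select_action(state):
    """Epsilon-greedy choice of an offloading-ratio index."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return q_net(state.unsqueeze(0)).argmax(dim=1).item()

def train_step(batch_size=64):
    """One temporal-difference update of the Q-network from replayed transitions."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s = torch.stack([b[0] for b in batch])
    a = torch.tensor([b[1] for b in batch])
    r = torch.tensor([b[2] for b in batch])
    s2 = torch.stack([b[3] for b in batch])
    done = torch.tensor([b[4] for b in batch])

    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(s2).max(dim=1).values
    target = r + gamma * q_next * (1.0 - done)

    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Placeholder interaction loop: random transitions stand in for the D2D-MEC
# environment, whose reward would count tasks finished within their deadlines.
for step in range(200):
    s = torch.rand(STATE_DIM)
    a = select_action(s)
    r = random.random()          # stand-in reward, not the paper's reward function
    s2 = torch.rand(STATE_DIM)
    done = float(step % 50 == 49)
    replay.append((s, a, r, s2, done))
    train_step()
    if step % 50 == 0:
        target_net.load_state_dict(q_net.state_dict())
```

The periodic copy of the online network into the target network stabilizes the temporal-difference targets, which is the standard DQN ingredient this sketch assumes the paper also relies on.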


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9502189
DOI: http://dx.doi.org/10.3390/s22187004

Publication Analysis

Top Keywords

multi-user cooperative (8); cooperative partial (8); partial offloading (8); based deep (8); deep reinforcement (8); reinforcement learning (8); d2d-assisted multi-user (4); offloading mec (4); mec based (4); learning mobile (4)
