Joint Optimization of Bandwidth and Power Allocation in Uplink Systems with Deep Reinforcement Learning

Sensors (Basel)

Shaanxi Key Laboratory of Information Communication Network and Security, School of Communications and Information Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China.

Published: July 2023

AI Article Synopsis

  • Future communication relies on optimizing wireless resource utilization to address issues like inter-cell interference in multi-user systems.
  • The proposed joint-priority-based reinforcement learning (JPRL) approach aims to enhance both bandwidth and transmit power allocation to improve system throughput while ensuring quality of service (QoS).
  • Results indicate that JPRL significantly outperforms other methods, achieving 10.4-15.5% higher average throughput than homogeneous-learning benchmarks and 17.3% better than the genetic algorithm.

Article Abstract

Efficient wireless resource utilization is a central concern for future communication systems, where growing numbers of users cause severe interference, particularly inter-cell interference in multi-cell multi-user systems. To suppress this interference and improve the resource utilization rate, we propose a joint-priority-based reinforcement learning (JPRL) approach that jointly optimizes the bandwidth and transmit power allocation. The method aims to maximize the average throughput of the system while suppressing co-channel interference and guaranteeing the quality-of-service (QoS) constraint. Specifically, we decouple the joint problem into two sub-problems: bandwidth assignment and power allocation. A multi-agent double deep Q network (MADDQN) was developed to solve the bandwidth allocation sub-problem for each user, while a prioritized multi-agent deep deterministic policy gradient (P-MADDPG) algorithm, equipped with a prioritized replay buffer, handles the transmit power allocation sub-problem. Numerical results show that the proposed JPRL method accelerates model training and outperforms the alternative methods in terms of throughput: the average throughput was approximately 10.4-15.5% higher than the homogeneous-learning-based benchmarks, and about 17.3% higher than the genetic algorithm.
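The abstract's key algorithmic ingredient for P-MADDPG is the prioritized replay buffer: transitions with larger temporal-difference (TD) error are replayed more often, which tends to speed up critic training. The paper's exact implementation is not given here, so the following is only a minimal sketch of proportional prioritized replay in the style of Schaul et al.; the class name, hyperparameters (`alpha`, `beta`), and transition format are illustrative assumptions, not the authors' code.

```python
import random


class PrioritizedReplayBuffer:
    """Sketch of proportional prioritized experience replay.

    Transitions are stored in a fixed-size ring buffer; sampling
    probability is proportional to |TD error|**alpha.
    """

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha        # how strongly priorities bias sampling
        self.buffer = []          # stored transitions
        self.priorities = []      # one priority per transition
        self.pos = 0              # next write index (ring buffer)

    def add(self, transition, td_error=1.0):
        # Small epsilon keeps every transition sampleable.
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        indices = random.choices(
            range(len(self.buffer)), weights=probs, k=batch_size
        )
        # Importance-sampling weights correct the bias introduced
        # by non-uniform sampling; normalized so max weight is 1.
        n = len(self.buffer)
        weights = [(n * probs[i]) ** (-beta) for i in indices]
        max_w = max(weights)
        weights = [w / max_w for w in weights]
        batch = [self.buffer[i] for i in indices]
        return batch, indices, weights

    def update_priorities(self, indices, td_errors):
        # Called after a learning step with the fresh TD errors.
        for i, err in zip(indices, td_errors):
            self.priorities[i] = (abs(err) + 1e-6) ** self.alpha
```

In a P-MADDPG-style setup, each power-allocating agent would push its `(state, action, reward, next_state)` tuples into such a buffer, sample a weighted mini-batch for the critic update, and then refresh the sampled priorities with the new TD errors.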


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10422299
DOI: http://dx.doi.org/10.3390/s23156822

Publication Analysis

Top Keywords

power allocation (16); reinforcement learning (8); transmit power (8); average throughput (8); allocation sub-problem (8); allocation (5); joint optimization (4); bandwidth (4); optimization bandwidth (4); power (4)
