Multi-connectivity has become a useful tool for managing traffic in heterogeneous cellular network deployments, since it allows a device to be simultaneously connected to multiple cells. Properly exploiting this technique requires adequately configuring the traffic sent through each cell according to the experienced conditions. This motivates the present work, which tackles the problem of how to optimally split the traffic among the cells when the multi-connectivity feature is used. To this end, the paper proposes a deep reinforcement learning solution based on a Deep Q-Network (DQN) to determine the amount of a device's traffic that needs to be delivered through each cell, making the decision as a function of the current traffic and radio conditions. The obtained results show near-optimal performance of the DQN-based solution, with an average difference of only 3.9% in terms of reward with respect to the optimum strategy. Moreover, the solution clearly outperforms a reference scheme based on the Signal to Interference plus Noise Ratio (SINR), with differences of up to 50% in terms of reward and up to 166% in terms of throughput in certain situations. Overall, the presented results show the promising performance of the DQN-based approach, which establishes a basis for further research on multi-connectivity and for the application of this type of technique to other problems of the radio access network.
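
The abstract is summarized without code, but a minimal sketch of the kind of DQN agent it describes is given below, assuming a state built from per-cell SINR and buffered traffic and a discretized action space of traffic-split ratios; all names, dimensions and hyperparameters are illustrative assumptions rather than details taken from the paper, and PyTorch is used only as a convenient example framework.

# Illustrative sketch (not the authors' code): a DQN agent whose action is a
# discretized traffic-split ratio between two cells, and whose state combines
# per-cell SINR and the amount of buffered traffic. All names, dimensions and
# hyperparameters below are assumptions made for the example.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 4   # e.g. [SINR cell 1, SINR cell 2, buffered traffic, offered load] (assumed encoding)
N_ACTIONS = 11  # split ratios 0.0, 0.1, ..., 1.0 of the traffic sent through cell 1
GAMMA = 0.95    # illustrative discount factor

class QNetwork(nn.Module):
    """Maps a radio/traffic state to one Q-value per candidate split ratio."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, state):
        return self.net(state)

q_net, target_net = QNetwork(), QNetwork()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # the control loop appends (state, action_index, reward, next_state)

def select_split(state, epsilon=0.1):
    """Epsilon-greedy choice of the fraction of traffic routed through cell 1."""
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            action = int(q_net(torch.tensor(state, dtype=torch.float32)).argmax())
    return action, action / (N_ACTIONS - 1)  # index for the replay buffer, ratio for the scheduler

def train_step(batch_size=32):
    """One DQN update from replayed (state, action, reward, next_state) samples."""
    if len(replay) < batch_size:
        return
    s, a, r, s2 = zip(*random.sample(replay, batch_size))
    s, r, s2 = (torch.tensor(x, dtype=torch.float32) for x in (s, r, s2))
    a = torch.tensor(a, dtype=torch.int64).unsqueeze(1)
    q_sa = q_net(s).gather(1, a).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * target_net(s2).max(dim=1).values
    loss = nn.functional.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In this sketch the chosen action index is mapped to the fraction of traffic routed through the first cell, and the reward would be supplied by the environment from the resulting throughput, mirroring the reward- and throughput-based comparison reported in the abstract.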

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9414990
DOI: http://dx.doi.org/10.3390/s22166179

Similar Publications

This paper presents a novel algorithm to address resource allocation and network-slicing challenges in multiaccess edge computing (MEC) networks. Network slicing divides a physical network into virtual slices, each tailored to efficiently allocate resources and meet diverse service requirements. To maximize the completion rate of user-computing tasks within these slices, the problem is decomposed into two subproblems: efficient core-to-edge slicing (ECS) and autonomous resource slicing (ARS).

An adaptive testing item selection strategy via a deep reinforcement learning approach.

Behav Res Methods

December 2024

Department of Psychology, Tufts University, 419 Boston Avenue, Medford, Massachusetts, 02155, USA.

Computerized adaptive testing (CAT) aims to present items that statistically optimize the assessment process by considering the examinee's responses and estimated trait levels. Recent developments in reinforcement learning and deep neural networks give CAT the potential to select items by exploiting information across all the items remaining in the test, rather than focusing only on the next few items to be selected. In this study, we reformulate CAT under the reinforcement learning framework and propose a new item selection strategy based on the deep Q-network (DQN) method.
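
The preview above only names the reformulation, so a rough, hypothetical sketch of what it can look like follows: the state summarizes the examinee's current trait estimate and its uncertainty, the action set is the pool of items not yet administered, and a Q-network scores the candidate items. The state encoding, network and pool size are assumptions made for illustration, not the authors' design.

# Hypothetical illustration of DQN-style item selection for CAT; the two-value
# state (trait estimate and its standard error), the network and the pool size
# are assumptions made for this sketch.
import torch
import torch.nn as nn

N_ITEMS = 200
q_net = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, N_ITEMS))

def next_item(theta_hat, theta_se, administered):
    """Return the index of the not-yet-administered item with the highest Q-value."""
    state = torch.tensor([theta_hat, theta_se], dtype=torch.float32)
    with torch.no_grad():
        q = q_net(state)
        if administered:                      # never re-administer an item
            q[list(administered)] = float("-inf")
    return int(q.argmax())

Training such a network would reward item choices by how much they improve the final trait estimate over the remainder of the test, which is where a DQN-based strategy differs from always picking the single next most informative item.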

Flying foxes optimization with reinforcement learning for vehicle detection in UAV imagery.

Sci Rep

September 2024

Department of Industrial Engineering, College of Engineering, King Khalid University, 61421, Abha, Saudi Arabia.

Intelligent transportation systems (ITS) are being deployed in smart cities worldwide, and the next generation of ITS depends on the potential integration of autonomous and connected vehicles. Both technologies are being tested widely in various cities across the world. Although these two developing technologies are vital to enabling a fully automated transportation system, other transportation and road components also need to be automated.

Deep reinforcement learning (RL) has been widely applied to personalized recommender systems (PRSs), since it can capture user preferences progressively. Among RL-based techniques, the deep Q-network (DQN) stands out as the most popular choice due to its simple update strategy and superior performance. Many recommendation scenarios, however, involve a diminishing action space, in which the set of available actions gradually shrinks so that duplicate items are never recommended.
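
As a hedged illustration of the diminishing-action-space setting mentioned above, the sketch below masks items that have already been recommended, both when acting greedily and when bootstrapping the temporal-difference target; the tensor names and shapes are assumptions for the example, not the paper's code.

# Illustrative only: already-recommended items are masked out before taking the
# argmax/max over Q-values, so they can never be recommended (or bootstrapped
# from) again. Inputs are assumed to be PyTorch tensors.
import torch

def masked_greedy_action(q_values, recommended_mask):
    """q_values: (n_items,) floats; recommended_mask: (n_items,) bool, True = already shown."""
    q = q_values.masked_fill(recommended_mask, float("-inf"))
    return int(q.argmax())

def masked_td_target(reward, next_q_values, next_mask, gamma=0.99):
    """Bootstrap only over the items still available at the next step."""
    next_best = next_q_values.masked_fill(next_mask, float("-inf")).max()
    return reward + gamma * next_best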

Accurate cephalometric landmark detection leads to accurate analysis, diagnosis, and surgical planning. Many studies on automated landmark detection have been conducted; however, reinforcement learning-based networks have not yet been applied. To the best of our knowledge, this is the first study to apply the deep Q-network (DQN) and double deep Q-network (DDQN) to automated cephalometric landmark detection.
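
Since the preview contrasts DQN and DDQN, the short sketch below shows the one place where the two methods differ, namely the computation of the bootstrapped target; the batched tensor shapes and network handles are assumptions made for the example (terminal-state handling is omitted for brevity).

# Illustrative comparison of the two target computations; next_state is assumed
# to be a (batch, state_dim) PyTorch tensor and the networks return (batch, n_actions).
import torch

def dqn_target(reward, next_state, target_net, gamma=0.99):
    """Standard DQN: the target network both selects and evaluates the next action."""
    return reward + gamma * target_net(next_state).max(dim=1).values

def ddqn_target(reward, next_state, online_net, target_net, gamma=0.99):
    """Double DQN: the online network selects the action, the target network evaluates it."""
    best_action = online_net(next_state).argmax(dim=1, keepdim=True)
    return reward + gamma * target_net(next_state).gather(1, best_action).squeeze(1)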
