Reinforcement learning is a remarkable branch of artificial intelligence with many applications; it enables an agent to learn new tasks from action and reward signals. Motion planning addresses the navigation problem for robots, yet current motion planning approaches lack support for automated, timely responses to the environment, and the problem worsens in complex environments cluttered with obstacles. Because reinforcement learning couples actions with reward feedback from the environment, it can extend the capabilities of robotic systems in such settings. Existing path planning algorithms are slow, computationally expensive, and less responsive to the environment, which delays convergence to a solution; they are also less efficient for task learning because of post-processing requirements. Reinforcement learning can address these issues through its action feedback and reward policies. This research presents a novel Q-learning-based reinforcement learning algorithm with deep learning integration. The proposed approach is evaluated in narrow-passage and cluttered environments, and improvements in the convergence of reinforcement learning-based motion planning and collision avoidance are addressed. The proposed agent converged by the 210th episode in the cluttered environment and by the 400th episode in the narrow passage environment. A state-of-the-art comparison shows that the proposed approach outperforms existing approaches in the number of turns and the convergence of the planned path.
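The abstract describes motion planning driven by action feedback and rewards. As a rough, generic illustration of the tabular Q-learning mechanics such a planner builds on (not the authors' deep-learning-integrated method), the sketch below trains an agent to reach a goal cell in a small grid with obstacle cells; the grid layout, reward values, and hyperparameters are illustrative assumptions, not values from the paper.

```python
# Minimal tabular Q-learning sketch for grid navigation with obstacles.
# Generic illustration of the action/reward principle only; grid, rewards,
# and hyperparameters are assumptions, not the paper's planner.
import random

GRID_W, GRID_H = 6, 6
OBSTACLES = {(1, 2), (2, 2), (3, 2), (4, 4)}   # assumed cluttered cells
START, GOAL = (0, 0), (5, 5)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # right, left, down, up

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
EPISODES, MAX_STEPS = 500, 200

# Q-table: grid cell -> value of each action in that cell.
Q = {(x, y): [0.0] * len(ACTIONS)
     for x in range(GRID_W) for y in range(GRID_H)}

def step(state, action_idx):
    """Apply an action; return (next_state, reward, done)."""
    dx, dy = ACTIONS[action_idx]
    nx, ny = state[0] + dx, state[1] + dy
    if not (0 <= nx < GRID_W and 0 <= ny < GRID_H) or (nx, ny) in OBSTACLES:
        return state, -10.0, False              # collision penalty, stay put
    if (nx, ny) == GOAL:
        return (nx, ny), 100.0, True            # goal reward ends the episode
    return (nx, ny), -1.0, False                # step cost favors short paths

for _ in range(EPISODES):
    state = START
    for _ in range(MAX_STEPS):
        if random.random() < EPSILON:           # epsilon-greedy exploration
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
        nxt, reward, done = step(state, a)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
        state = nxt
        if done:
            break

print("Learned action values at the start cell:", Q[START])
```

In the paper's setting, the table of action values would be combined with or replaced by a learned deep approximator and a richer environment model; the sketch only shows how repeated reward feedback drives the policy toward shorter, collision-free paths.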
Source: PLoS One, January 2025 (PLOS)
Full text: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0312559
Similar publications:

Department of Electrical Engineering, Shiraz Branch, Islamic Azad University, Shiraz, Iran.
CNN is considered an efficient tool in brain image segmentation. However, neonatal brain images require specific methods because their nature and structure differ from adult brain images. Hence, it is necessary to determine the optimal structure and parameters for these models to achieve the desired results.

J Cell Mol Med, January 2025. Cancer Biology Research Center, Cancer Institute, Tehran University of Medical Sciences, Tehran, Iran.
This study identifies microRNAs (miRNAs) with significant discriminatory power in distinguishing melanoma from nevus, notably hsa-miR-26a and hsa-miR-211, which have exhibited diagnostic potential with accuracies of 81% and 78%, respectively. To enhance diagnostic accuracy, we integrated miRNAs into various machine-learning (ML) models.

Introduction: Simulation has become an integral part of health care education curricula and is used to teach a variety of topics, from emergency situations to physical diagnoses. Without further reinforcement, the skills learned through simulation deteriorate over time. Rapid Cycle Deliberate Practice (RCDP) is a teaching method developed to resist this deterioration and achieve mastery of skills.

Introduction: Simulation has become an integral part of healthcare education. Studies demonstrate rapid knowledge and skill acquisition with the use of simulation and rapid knowledge degradation if it is not further reinforced. The effect of simulation on metacognitive processes, or the ability to understand one's own knowledge, has not yet been well investigated.