Existing deep learning methods have achieved significant success in medical image segmentation. However, this success largely relies on stacking advanced modules and architectures, which has created a path dependency. This path dependency is unsustainable, as it leads to ever-larger model parameter counts and higher deployment costs. To break this path dependency, we introduce deep reinforcement learning to enhance segmentation performance. However, current deep reinforcement learning methods face challenges such as high training cost, independent iterative processes, and high uncertainty of segmentation masks. Consequently, we propose a Pixel-level Deep Reinforcement Learning model with pixel-by-pixel Mask Generation (PixelDRL-MG) for more accurate and robust medical image segmentation. PixelDRL-MG adopts a dynamic iterative update policy, directly segmenting the regions of interest without requiring user interaction or coarse segmentation masks. We propose a Pixel-level Asynchronous Advantage Actor-Critic (PA3C) strategy that treats each pixel as an agent whose state (foreground or background) is iteratively updated through direct actions. Our experiments on two commonly used medical image segmentation datasets demonstrate that PixelDRL-MG achieves superior segmentation performance compared to state-of-the-art segmentation baselines (especially at boundaries) while using significantly fewer model parameters. We also conduct detailed ablation studies to enhance understanding and facilitate practical application. Additionally, PixelDRL-MG performs well in low-resource settings (i.e., 50-shot or 100-shot), making it well suited to real-world scenarios.
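The per-pixel agent formulation can be pictured with a short sketch. The snippet below is a minimal, hypothetical illustration of a pixel-level actor-critic refinement step, assuming a fully convolutional actor-critic, a binary keep/flip action per pixel, and a per-pixel reward based on agreement with the ground-truth mask; the names (`PixelActorCritic`, `pa3c_step`) and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a pixel-level actor-critic refinement step (not the paper's code).
import torch
import torch.nn as nn

class PixelActorCritic(nn.Module):
    """Fully convolutional network producing per-pixel action logits and values."""
    def __init__(self, in_ch=2, n_actions=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.policy_head = nn.Conv2d(32, n_actions, 1)  # per-pixel action logits
        self.value_head = nn.Conv2d(32, 1, 1)           # per-pixel state value

    def forward(self, image, mask):
        x = self.backbone(torch.cat([image, mask], dim=1))
        return self.policy_head(x), self.value_head(x)

def pa3c_step(net, optimizer, image, mask, target_mask, gamma=0.95):
    """One refinement step: every pixel acts (0 = keep its label, 1 = flip it)."""
    logits, value = net(image, mask)
    dist = torch.distributions.Categorical(logits=logits.permute(0, 2, 3, 1))
    action = dist.sample()
    new_mask = torch.where(action.unsqueeze(1) == 1, 1.0 - mask, mask)

    # Per-pixel reward: +1 if the pixel moved onto the ground truth, -1 if it moved off.
    reward = (new_mask.eq(target_mask).float() - mask.eq(target_mask).float()).squeeze(1)

    with torch.no_grad():
        _, next_value = net(image, new_mask)
    advantage = reward + gamma * next_value.squeeze(1) - value.squeeze(1)

    policy_loss = -(dist.log_prob(action) * advantage.detach()).mean()
    value_loss = advantage.pow(2).mean()
    loss = policy_loss + 0.5 * value_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return new_mask.detach(), loss.item()
```

In practice several such refinement steps would be chained per image, with asynchronous workers sharing parameters as in standard A3C; the reward shaping above is only one plausible choice.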
DOI: 10.1038/s41598-025-92117-2
Front Artif Intell
February 2025
Department of Computer Science & Engineering, Indian Institute of Technology Ropar, Rupnagar, India.
Machine learning techniques have emerged as a promising tool for efficient cache management, helping to optimize cache performance and fortify caches against security threats. The range of approaches is broad, from reinforcement learning-based cache replacement policies to Long Short-Term Memory (LSTM) models that predict content characteristics for caching decisions. Techniques such as imitation learning, reinforcement learning, and neural networks are widely applied to cache-based attack detection, dynamic cache management, and content caching in edge networks.
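As a deliberately simplified illustration of a learning-based replacement policy, the sketch below uses an epsilon-greedy evictor over coarse recency/frequency buckets and penalizes evictions that are followed by an early re-miss. The state discretization, reward, and update rule are assumptions made for illustration, not a policy from the surveyed literature.

```python
# Toy learning-based cache eviction policy; all design choices are illustrative assumptions.
import random
from collections import defaultdict

class RLCache:
    """Epsilon-greedy evictor that learns which (recency, frequency) buckets are safe
    to evict, penalizing evictions that are re-requested soon afterwards."""
    def __init__(self, capacity, epsilon=0.1, alpha=0.2, regret_window=32):
        self.capacity, self.epsilon, self.alpha = capacity, epsilon, alpha
        self.regret_window = regret_window
        self.store = {}                # key -> (last_access_time, frequency)
        self.q = defaultdict(float)    # bucket -> learned "evictability" score
        self.evicted = {}              # key -> (eviction_time, bucket it was evicted from)
        self.clock = 0

    def _bucket(self, key):
        last, freq = self.store[key]
        return (min(self.clock - last, 16) // 4, min(freq, 8))

    def access(self, key):
        self.clock += 1
        if key in self.store:
            _, freq = self.store[key]
            self.store[key] = (self.clock, freq + 1)
            return True                                  # cache hit
        if key in self.evicted:                          # premature eviction -> penalty
            t, bucket = self.evicted.pop(key)
            if self.clock - t <= self.regret_window:
                self.q[bucket] += self.alpha * (-1.0 - self.q[bucket])
        if len(self.store) >= self.capacity:
            self._evict()
        self.store[key] = (self.clock, 1)
        return False                                     # cache miss

    def _evict(self):
        keys = list(self.store)
        if random.random() < self.epsilon:
            victim = random.choice(keys)                               # explore
        else:
            victim = max(keys, key=lambda k: self.q[self._bucket(k)])  # exploit
        bucket = self._bucket(victim)
        self.evicted[victim] = (self.clock, bucket)
        del self.store[victim]
        self.q[bucket] += self.alpha * (0.1 - self.q[bucket])  # mild default reward
```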
Ind Eng Chem Res
March 2025
Department of Chemical Engineering, Imperial College London, London, South Kensington SW7 2AZ, U.K.
This work proposes a control-informed reinforcement learning (CIRL) framework that integrates proportional-integral-derivative (PID) control components into the architecture of deep reinforcement learning (RL) policies, incorporating prior knowledge from control theory into the learning process. CIRL improves performance and robustness by combining the best of both worlds: the disturbance-rejection and set point-tracking capabilities of PID control and the nonlinear modeling capacity of deep RL. Simulation studies conducted on a continuously stirred tank reactor system demonstrate the improved performance of CIRL compared to both conventional model-free deep RL and static PID controllers.
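One way to picture such an architecture is a policy whose action is the sum of a PID term on the tracking error and a learned nonlinear correction, with the gains themselves trainable. The sketch below is an illustrative assumption about how the pieces could be wired together, not the authors' published architecture; all names and hyperparameters are hypothetical.

```python
# Illustrative control-informed policy: PID term + learned residual (hypothetical).
import torch
import torch.nn as nn

class CIRLPolicy(nn.Module):
    """Action = PID term on the setpoint-tracking error + learned nonlinear correction."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        # Trainable PID gains, initialized to modest values.
        self.kp = nn.Parameter(torch.full((act_dim,), 1.0))
        self.ki = nn.Parameter(torch.full((act_dim,), 0.1))
        self.kd = nn.Parameter(torch.full((act_dim,), 0.01))
        self.residual = nn.Sequential(          # nonlinear correction learned by RL
            nn.Linear(obs_dim, 64), nn.Tanh(),
            nn.Linear(64, act_dim),
        )

    def forward(self, obs, error, error_integral, error_derivative):
        # error terms are per-actuator setpoint-tracking errors (e.g., reactor temperature).
        pid = self.kp * error + self.ki * error_integral + self.kd * error_derivative
        return pid + self.residual(obs)
```

The PID path gives the policy sensible behaviour before any training, while the residual network is free to learn the nonlinear corrections a fixed-gain controller cannot express.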
J Reconstr Microsurg
March 2025
Division of Plastic and Reconstructive Surgery, University of Colorado Anschutz Medical Campus, Aurora, United States.
Background: Abdominal wall bulges and hernias are not uncommon complications following deep inferior epigastric perforator (DIEP) flap harvest. Abdominal wall reinforcement using synthetic meshes has been found to decrease bulges by up to 70%; however, such meshes can be associated with other issues such as seromas and infections. Reinforced tissue matrix (RTM) mesh can be used for abdominal wall reinforcement due to its ability to recruit fibroblasts and provide a scaffold for cellular proliferation.
J Chromatogr A
March 2025
Vrije Universiteit Brussel, Department of Chemical Engineering, Pleinlaan 2, 1050 Brussel, Belgium. Electronic address:
Chromatographic problem solving, commonly referred to as method development (MD), is hugely complex, given the many operational parameters that must be optimized and their large effect on the elution times of individual sample compounds. Recently, the use of reinforcement learning has been proposed to automate and expedite this process for liquid chromatography (LC). This study further explores deep reinforcement learning (RL) for LC method development.
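A minimal environment sketch makes the setup concrete: the agent's state is the current set of gradient parameters, its action is a small adjustment to them, and the reward trades off critical-pair resolution against run time. Everything below, including the injected `simulator` callable, the two-parameter gradient encoding, and the reward weights, is a hypothetical illustration rather than the study's actual environment.

```python
# Hypothetical RL environment for LC method development (illustrative only).
import numpy as np

class LCMethodEnv:
    """Toy environment: state = gradient parameters, action = small adjustment,
    reward = resolution achieved versus run time spent."""
    def __init__(self, simulator, target_rs=1.5, time_weight=0.01, max_steps=20):
        self.simulator = simulator        # callable: params -> (resolution, run_time)
        self.target_rs = target_rs
        self.time_weight = time_weight
        self.max_steps = max_steps

    def reset(self):
        self.params = np.array([0.05, 0.01])   # [initial %B, gradient slope], normalized
        self.steps = 0
        return self.params.copy()

    def step(self, action):
        self.params = np.clip(self.params + np.asarray(action), 0.0, 1.0)
        resolution, run_time = self.simulator(self.params)
        reward = min(resolution, self.target_rs) - self.time_weight * run_time
        self.steps += 1
        done = self.steps >= self.max_steps or resolution >= self.target_rs
        return self.params.copy(), reward, done, {}
```

A standard deep RL agent could then be trained against `reset()`/`step()`, with the simulator being a retention model during pre-training and, in principle, the instrument itself afterwards.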
Sci Rep
March 2025
Department of Clinical Laboratory Sciences, College of Applied Medical Science, King Khalid University, Abha, Saudi Arabia.
This paper explores the use of Machine Learning (ML) and Deep Learning (DL) methodologies to strengthen the security of script development. Given the growing threat landscape in contemporary software development, cybersecurity has become a critical concern, and traditional security measures frequently prove inadequate against sophisticated breaches.
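As a small, hypothetical example of the kind of classical baseline such work builds on, the snippet below assembles a character n-gram classifier to flag suspicious scripts; it is not a method from the paper, and the feature choices and probability threshold are assumptions.

```python
# Hypothetical baseline: flag potentially malicious scripts with character n-grams.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def build_script_classifier():
    """Character n-grams capture obfuscation patterns (e.g., encoded payloads)."""
    return make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5), max_features=50000),
        LogisticRegression(max_iter=1000),
    )

# Assumed usage: `scripts` is a list of source strings, `labels` 0/1 (benign/malicious).
# clf = build_script_classifier()
# clf.fit(scripts, labels)
# suspicious = clf.predict_proba(new_scripts)[:, 1] > 0.9
```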