Control-Informed Reinforcement Learning for Chemical Processes.

Ind Eng Chem Res

Department of Chemical Engineering, Imperial College London, South Kensington, London SW7 2AZ, U.K.

Published: March 2025

This work proposes a control-informed reinforcement learning (CIRL) framework that integrates proportional-integral-derivative (PID) control components into the architecture of deep reinforcement learning (RL) policies, incorporating prior knowledge from control theory into the learning process. CIRL improves performance and robustness by combining the best of both worlds: the disturbance-rejection and set-point-tracking capabilities of PID control and the nonlinear modeling capacity of deep RL. Simulation studies conducted on a continuously stirred tank reactor system demonstrate the improved performance of CIRL compared to both conventional model-free deep RL and static PID controllers. CIRL exhibits better set-point-tracking ability, particularly when generalizing to trajectories containing set points outside the training distribution, suggesting enhanced generalization capabilities. Furthermore, the embedded prior control knowledge within the CIRL policy improves its robustness to unobserved system disturbances. The CIRL framework combines the strengths of classical control and reinforcement learning to develop sample-efficient and robust deep reinforcement learning algorithms with potential applications in complex industrial systems.
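For orientation, the sketch below shows one way such a policy could be wired together: a small network maps the observation to PID gains, and a conventional PID law then acts on the tracking error. The class name, weight shapes, and observation features are illustrative assumptions for this sketch, not the implementation reported in the paper.

```python
# Minimal sketch of a control-informed RL policy: a neural network outputs
# gain-scheduled PID parameters, and a PID law computes the control action.
# All names, shapes, and features here are assumptions for illustration.
import numpy as np

class CIRLPolicy:
    def __init__(self, weights):
        # 'weights' holds trained parameters, assumed to come from an RL algorithm:
        # W1 (hidden x 3), b1 (hidden,), W2 (3 x hidden), b2 (3,).
        self.w = weights
        self.integral = 0.0
        self.prev_error = 0.0

    def _gains(self, obs):
        # Small feedforward network maps the observation to positive PID gains.
        h = np.tanh(self.w["W1"] @ obs + self.w["b1"])
        return np.exp(self.w["W2"] @ h + self.w["b2"])  # [Kp, Ki, Kd]

    def act(self, measurement, setpoint, dt):
        # Embedded PID layer: the network-scheduled gains act on the tracking error.
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        kp, ki, kd = self._gains(np.array([measurement, setpoint, error]))
        return kp * error + ki * self.integral + kd * derivative

# Example use with random (untrained) weights: one control move for a CSTR-like loop.
rng = np.random.default_rng(0)
weights = {"W1": rng.normal(size=(16, 3)), "b1": np.zeros(16),
           "W2": 0.1 * rng.normal(size=(3, 16)), "b2": np.zeros(3)}
u = CIRLPolicy(weights).act(measurement=0.85, setpoint=0.90, dt=0.1)
```

Because the PID structure is embedded in the policy rather than added afterward, the RL algorithm only has to learn how to schedule interpretable gains, which is one intuition for the sample efficiency and robustness described above.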

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11891910
DOI: http://dx.doi.org/10.1021/acs.iecr.4c03233

Publication Analysis

Top Keywords

reinforcement learning (20)
control-informed reinforcement (8)
cirl framework (8)
pid control (8)
deep reinforcement (8)
set point-tracking (8)
learning (6)
cirl (6)
control (5)
learning chemical (4)

Similar Publications

Two mechanisms that have been used to study the evolution of cooperative behavior are altruistic punishment, in which cooperative individuals pay additional costs to punish defection, and multilevel selection, in which competition between groups can help to counteract individual-level incentives to cheat. Boyd, Gintis, Bowles, and Richerson have used simulation models of cultural evolution to suggest that altruistic punishment and pairwise group-level competition can work in concert to promote cooperation, even when neither mechanism can do so on its own. In this paper, we formulate a PDE model for multilevel selection motivated by the approach of Boyd and coauthors, modeling individual-level birth-death competition with a replicator equation based on individual payoffs and describing group-level competition with pairwise conflicts based on differences in the average payoffs of the competing groups.
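For readers unfamiliar with this class of models, the schematic equations below show the generic structure of within-group replicator dynamics combined with pairwise group-level competition; here x is the fraction of cooperators in a group, f(t, x) is the density of groups with cooperator fraction x, π_C and π_D are cooperator and defector payoffs, π̄(x) is the group-average payoff, and λ is the rate of group-level conflict. This is the standard form of such models, not necessarily the exact PDE formulated in that paper.

```latex
% Schematic only: generic multilevel-selection structure, not the paper's exact PDE.
\begin{align}
  \dot{x} &= x(1-x)\bigl[\pi_C(x) - \pi_D(x)\bigr]
    && \text{(individual-level birth--death competition)} \\
  \partial_t f(t,x) &= -\,\partial_x\!\Bigl[x(1-x)\bigl(\pi_C(x)-\pi_D(x)\bigr)\,f(t,x)\Bigr]
    + \lambda\, f(t,x)\Bigl[\bar{\pi}(x) - \int_0^1 \bar{\pi}(y)\,f(t,y)\,dy\Bigr]
    && \text{(pairwise group-level competition)}
\end{align}
```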

Of rats and robots: A mutual learning paradigm.

J Exp Anal Behav

March 2025

Behavioral Neuroscience Laboratory, Department of Psychology, Boğaziçi University, Istanbul, Turkey.

Robots are increasingly used alongside Skinner boxes to train animals in operant conditioning tasks. Similarly, animals are being employed in artificial intelligence research to train various algorithms. However, both types of experiments rely on unidirectional learning, where one partner (the animal or the robot) acts as the teacher and the other as the student.

Maintaining Baby-Friendly Hospital Initiative (BFHI) standards within a complex healthcare system presents unique challenges. This case study from a regional perinatal center in the northeast United States details the design and implementation of a program to address BFHI Step 2, which requires ongoing competency assessment and team member training to ensure breastfeeding support. The shift of BFHI competencies to continuous professional development introduced logistical challenges, compounded by staff turnover and budget constraints.

Machine learning techniques have emerged as a promising tool for efficient cache management, helping to optimize cache performance and defend against security threats. The range of applicable techniques is broad, from reinforcement learning-based cache replacement policies to Long Short-Term Memory (LSTM) models that predict content characteristics for caching decisions. Techniques such as imitation learning, reinforcement learning, and neural networks are widely used in cache-based attack detection, dynamic cache management, and content caching in edge networks.
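As a toy illustration of one technique in this family, the sketch below implements a score-based cache replacement policy that evicts the entry with the lowest predicted reuse value; the feature set and the fixed linear scoring weights are assumptions standing in for a model that would normally be learned (for example with RL, or with an LSTM over access histories), not a specific published algorithm.

```python
# Toy score-based cache replacement policy. The hand-set linear weights stand in
# for a learned value model; a real system would train this scoring function.
import time

class LearnedCache:
    def __init__(self, capacity, weights=(1.0, -0.01)):
        self.capacity = capacity
        self.store = {}                     # key -> (value, hit_count, last_access_time)
        self.w_hits, self.w_age = weights   # reward frequent reuse, penalize staleness

    def _score(self, hits, last_access):
        # Higher score = more worth keeping in the cache.
        return self.w_hits * hits + self.w_age * (time.time() - last_access)

    def get(self, key):
        if key in self.store:
            value, hits, _ = self.store[key]
            self.store[key] = (value, hits + 1, time.time())
            return value
        return None

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            # Evict the entry with the lowest predicted reuse score.
            victim = min(self.store, key=lambda k: self._score(*self.store[k][1:]))
            del self.store[victim]
        self.store[key] = (value, 0, time.time())
```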
