This work proposes a control-informed reinforcement learning (CIRL) framework that integrates proportional-integral-derivative (PID) control components into the architecture of deep reinforcement learning (RL) policies, incorporating prior knowledge from control theory into the learning process. CIRL improves performance and robustness by combining the best of both worlds: the disturbance-rejection and set-point-tracking capabilities of PID control and the nonlinear modeling capacity of deep RL. Simulation studies conducted on a continuously stirred tank reactor system demonstrate the improved performance of CIRL compared to both conventional model-free deep RL and static PID controllers.
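The abstract describes the CIRL architecture only at a high level. The minimal PyTorch-style sketch below shows one way a PID-embedded policy could be parameterized, with a neural network producing state-dependent PID gains that are applied to the tracking error and its integral and derivative; the class name, the gain parameterization, and the single-input assumption are illustrative and not taken from the paper.

# Hypothetical sketch of a PID-embedded RL policy (not the authors' code).
import torch
import torch.nn as nn

class CIRLPolicy(nn.Module):
    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        # Deep RL component: maps the observed state to positive (Kp, Ki, Kd) gains.
        self.gain_net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 3), nn.Softplus(),  # Softplus keeps the gains positive
        )

    def forward(self, state, error, error_integral, error_derivative):
        # PID component: classic control law evaluated with state-dependent gains.
        kp, ki, kd = self.gain_net(state).unbind(dim=-1)
        return kp * error + ki * error_integral + kd * error_derivative

# Example single step for a CSTR-like environment with a 4-dimensional state.
policy = CIRLPolicy(state_dim=4)
u = policy(torch.randn(1, 4),
           error=torch.tensor([0.5]),
           error_integral=torch.tensor([0.1]),
           error_derivative=torch.tensor([-0.02]))

In a scheme like this, the RL algorithm would train gain_net end to end on the closed-loop reward, while the embedded PID law supplies the disturbance-rejection and set-point-tracking structure the abstract refers to.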
The novel coronavirus SARS-CoV-2 and resulting COVID-19 disease have had an unprecedented spread and continue to cause an increasing number of fatalities worldwide. While vaccines are still under development, social distancing, extensive testing, and quarantining of confirmed infected subjects remain the most effective measures to contain the pandemic. These measures carry a significant socioeconomic cost.
We review the impact of control systems and strategies on the energy efficiency of chemical processes. We show that, in many ways, good control performance is a necessary but not sufficient condition for energy efficiency. The direct effect of process control on energy efficiency is manifold: reducing output variability allows chemical plants to be operated closer to their limits, where the energy/economic optima typically lie.
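The back-off argument can be made concrete with a small illustrative calculation (the numbers below are invented for illustration, not taken from the review): if the controlled variable must respect a hard limit with roughly 3-sigma confidence, the achievable setpoint is the limit minus three standard deviations of the output, so tighter control translates directly into operation nearer the constrained optimum.

# Illustrative back-off calculation (hypothetical numbers, not from the review).
LIMIT = 100.0  # hard constraint on the controlled variable, e.g. a maximum temperature

def achievable_setpoint(output_std: float, sigmas: float = 3.0) -> float:
    """Setpoint after backing off from the limit enough to absorb output variability."""
    return LIMIT - sigmas * output_std

# Smaller output variance -> smaller back-off -> operation closer to the
# constraint, which is where the energy/economic optimum typically lies.
for std in (2.0, 1.0, 0.25):
    print(f"output std = {std:4.2f} -> achievable setpoint = {achievable_setpoint(std):6.2f}")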