This article investigates the model-free reinforcement learning (RL)-based H∞ control problem for discrete-time 2-D Markov jump Roesser systems (2-D MJRSs) with an optimal disturbance attenuation level. In contrast to existing studies on H∞ control of 2-D MJRSs with optimal disturbance attenuation levels, which are off-line and require full knowledge of the system dynamics, we design a comprehensive model-free RL algorithm that solves for the optimal H∞ control policy, optimizes the disturbance attenuation level, and searches for an initial stabilizing control policy, using online horizontal and vertical data collected along the trajectories of the 2-D MJRSs. The optimal disturbance attenuation level is obtained by solving a set of linear matrix inequalities (LMIs) based on online measurement data. The initial stabilizing control policy is obtained via a data-driven parallel value iteration (VI) algorithm. Moreover, we certify the performance of the proposed scheme, including the convergence of the RL algorithm and the asymptotic mean-square stability of the closed-loop system. Finally, simulation results and comparisons demonstrate the effectiveness of the proposed algorithms.
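The abstract gives no equations or code, so as a rough, hedged illustration of the LMI step (computing a disturbance attenuation level by semidefinite programming), the sketch below minimizes γ² for a hypothetical 1-D discrete-time linear system using the standard discrete-time bounded real lemma and CVXPY. This is a simplification under stated assumptions: the paper's LMIs are data-driven and posed for 2-D Markov jump Roesser systems, whereas the matrices A, B, C, D here are made-up placeholders and the system is 1-D and model-based.

```python
# Illustrative sketch only: computes an H-infinity attenuation level gamma for a
# *hypothetical 1-D* discrete-time LTI system via the standard bounded real lemma.
# The article's method instead builds data-driven LMIs for 2-D Markov jump Roesser
# systems; A, B, C, D below are placeholders, not taken from the paper.
import numpy as np
import cvxpy as cp

# Hypothetical stable system: x(k+1) = A x(k) + B w(k), z(k) = C x(k) + D w(k)
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

n, m = A.shape[0], B.shape[1]
P = cp.Variable((n, n), symmetric=True)   # Lyapunov matrix
gamma_sq = cp.Variable(nonneg=True)       # gamma^2, to be minimized

# Discrete-time bounded real lemma: this LMI (with P > 0) certifies that the
# l2-gain from w to z is below sqrt(gamma_sq).
lmi = cp.bmat([
    [A.T @ P @ A - P + C.T @ C, A.T @ P @ B + C.T @ D],
    [B.T @ P @ A + D.T @ C,     B.T @ P @ B + D.T @ D - gamma_sq * np.eye(m)],
])

eps = 1e-6
constraints = [P >> eps * np.eye(n), lmi << -eps * np.eye(n + m)]
prob = cp.Problem(cp.Minimize(gamma_sq), constraints)
prob.solve(solver=cp.SCS)

print("near-optimal attenuation level gamma ≈", np.sqrt(gamma_sq.value))
```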
DOI: http://dx.doi.org/10.1109/TNNLS.2024.3487760