This article investigates the model-free reinforcement learning (RL)-based H∞ control problem for discrete-time two-dimensional (2-D) Markov jump Roesser systems (2-D MJRSs) with an optimal disturbance attenuation level. In contrast to existing studies on H∞ control of 2-D MJRSs with optimal disturbance attenuation levels, which are offline and rely on full knowledge of the system dynamics, we design a comprehensive model-free RL algorithm that solves for the optimal H∞ control policy, optimizes the disturbance attenuation level, and searches for an initial stabilizing control policy, using online horizontal and vertical data collected along 2-D MJRS trajectories. The optimal disturbance attenuation level is obtained by solving a set of linear matrix inequalities (LMIs) built from online measurement data. The initial stabilizing control policy is obtained via a data-driven parallel value iteration (VI) algorithm. We further certify the performance of the proposed scheme, including the convergence of the RL algorithm and the asymptotic mean-square stability of the closed-loop systems. Finally, simulation results and comparisons demonstrate the effectiveness of the proposed algorithms.
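The abstract does not reproduce the 2-D Roesser formulations or the data-based updates themselves, but the parallel value-iteration idea can be illustrated on a simplified analogue. The sketch below is an assumption-laden illustration, not the paper's algorithm: it runs a model-based, mode-coupled Riccati value iteration for a 1-D discrete-time Markov jump linear-quadratic problem, whereas the article's method is model-free and operates on 2-D Roesser dynamics. The matrices A, B, Q, R and the transition matrix Pr are hypothetical placeholders.

```python
# Minimal sketch: parallel (mode-coupled) value iteration for a simplified
# 1-D Markov jump linear system. This is only an illustrative analogue of the
# "parallel VI" step that yields an initial stabilizing policy; the paper's
# algorithm is data-driven and 2-D. All matrices below are placeholders.
import numpy as np

def parallel_value_iteration(A, B, Q, R, Pr, iters=200, tol=1e-9):
    """Iterate mode-coupled Riccati updates; return value matrices P_i and gains K_i."""
    N = len(A)                          # number of Markov modes
    n = A[0].shape[0]                   # state dimension
    P = [np.zeros((n, n)) for _ in range(N)]
    for _ in range(iters):
        P_new = []
        for i in range(N):
            # Mode-coupled expectation E_i(P) = sum_j Pr[i, j] * P_j
            EP = sum(Pr[i, j] * P[j] for j in range(N))
            G = R + B[i].T @ EP @ B[i]
            H = B[i].T @ EP @ A[i]
            P_new.append(Q + A[i].T @ EP @ A[i] - H.T @ np.linalg.solve(G, H))
        if max(np.linalg.norm(Pn - Po) for Pn, Po in zip(P_new, P)) < tol:
            P = P_new
            break
        P = P_new
    # Recover per-mode feedback gains u = -K_i x from the converged matrices
    K = []
    for i in range(N):
        EP = sum(Pr[i, j] * P[j] for j in range(N))
        K.append(np.linalg.solve(R + B[i].T @ EP @ B[i], B[i].T @ EP @ A[i]))
    return P, K

# Illustrative two-mode example (values chosen arbitrarily):
A = [np.array([[1.0, 0.2], [0.0, 0.9]]), np.array([[0.8, 0.1], [0.1, 1.1]])]
B = [np.array([[0.0], [1.0]]), np.array([[0.5], [1.0]])]
Q, R = np.eye(2), np.eye(1)
Pr = np.array([[0.7, 0.3], [0.4, 0.6]])
P, K = parallel_value_iteration(A, B, Q, R, Pr)
```

In the setting described by the abstract, the analogous updates would presumably be evaluated from measured horizontal and vertical trajectory data rather than from explicit A and B matrices, and the attenuation level would be optimized separately through the data-based LMIs mentioned above.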


Source: http://dx.doi.org/10.1109/TNNLS.2024.3487760

Publication Analysis

Top Keywords

disturbance attenuation (20)
optimal disturbance (16)
attenuation level (12)
control policy (12)
control 2-d (8)
2-d markov (8)
markov jump (8)
jump roesser (8)
roesser systems (8)
mjrss optimal (8)
