This paper provides necessary and sufficient conditions for the existence of the static output-feedback (OPFB) solution to the H∞ control problem for linear discrete-time systems. It is shown that the static OPFB H∞ control solution is a Nash equilibrium point. Furthermore, a Q-learning algorithm is developed to find the H∞ OPFB solution online using data measured along the system trajectories and without knowing the system matrices. This is achieved by solving a game algebraic Riccati equation online using the measured data. A simulation example shows the effectiveness of the proposed method.
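For context, the sketch below is not the paper's algorithm (which is model-free, uses only output measurements, and solves the game Riccati equation from trajectory data); it only illustrates the kind of discrete-time zero-sum game algebraic Riccati recursion the abstract refers to, written for the simpler full-state-feedback case with known system matrices. The system matrices A, B, E, the weights Q and R, and the attenuation level gamma are illustrative placeholders, not values from the paper.

```python
import numpy as np

def gare_value_iteration(A, B, E, Q, R, gamma, iters=500, tol=1e-10):
    """Value iteration on the zero-sum game Riccati recursion:
    P <- Q + A'PA - [A'PB  A'PE] M^{-1} [B'PA; E'PA],
    where M collects the control/disturbance blocks of the game Q-function."""
    n, m = A.shape[0], B.shape[1]
    P = np.zeros((n, n))
    for _ in range(iters):
        M = np.block([
            [R + B.T @ P @ B, B.T @ P @ E],
            [E.T @ P @ B, E.T @ P @ E - gamma**2 * np.eye(E.shape[1])],
        ])
        N = np.vstack([B.T @ P @ A, E.T @ P @ A])
        P_next = Q + A.T @ P @ A - np.hstack([A.T @ P @ B, A.T @ P @ E]) @ np.linalg.solve(M, N)
        if np.max(np.abs(P_next - P)) < tol:
            P = P_next
            break
        P = P_next
    # Saddle-point gains from the converged P: controller u = -K x,
    # worst-case disturbance w = -L x
    M = np.block([
        [R + B.T @ P @ B, B.T @ P @ E],
        [E.T @ P @ B, E.T @ P @ E - gamma**2 * np.eye(E.shape[1])],
    ])
    N = np.vstack([B.T @ P @ A, E.T @ P @ A])
    KL = np.linalg.solve(M, N)
    return P, KL[:m, :], KL[m:, :]

# Placeholder system: x_{k+1} = A x_k + B u_k + E w_k
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
E = np.array([[0.1], [0.0]])
Q, R, gamma = np.eye(2), np.eye(1), 2.0
P, K, L = gare_value_iteration(A, B, E, Q, R, gamma)
print("P =\n", P, "\nK =", K, "\nL =", L)
```

The Q-learning approach described in the abstract replaces this model-based recursion with one estimated from measured input, disturbance, and output data, so the matrices A, B, E are never needed explicitly.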


Source: http://dx.doi.org/10.1109/TNNLS.2019.2901889

