Model-Free Optimal Tracking Control via Critic-Only Q-Learning. Academic Article

abstract

  • Model-free control is an important and promising topic in the control field that has attracted extensive attention in recent years. In this paper, we aim to solve the model-free optimal tracking control problem for nonaffine nonlinear discrete-time systems. A critic-only Q-learning (CoQL) method is developed, which learns the optimal tracking control from real system data and thus avoids solving the tracking Hamilton-Jacobi-Bellman equation. First, a Q-learning algorithm is proposed based on the augmented system, and its convergence is established. Using only one neural network to approximate the Q-function, the CoQL method is developed to implement the Q-learning algorithm. Furthermore, the convergence of the CoQL method is proved while accounting for the neural network approximation error. With the convergent Q-function obtained from the CoQL method, the adaptive optimal tracking control is designed via a gradient descent scheme. Finally, the effectiveness of the developed CoQL method is demonstrated through simulation studies. Because the CoQL method learns from off-policy data and is implemented with a critic-only structure, it is easy to realize and overcomes the inadequate-exploration problem.
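
  To make the learning loop concrete, the following minimal Python/NumPy sketch illustrates the critic-only value-iteration idea summarized above. The plant, reference dynamics, quadratic utility, learning rates, and polynomial critic features are all illustrative assumptions (the paper uses a neural-network critic and its own benchmark systems); only the overall structure follows the abstract: an off-policy dataset, a single critic fitted to the Q-learning recursion Q_{j+1}(z, u) = U(z, u) + min_{u'} Q_j(z', u') on the augmented state z = [x, r], and a gradient-descent update for the control.

      import numpy as np

      # Hypothetical nonaffine discrete-time plant (not the paper's example):
      # x_{k+1} = f(x_k, u_k). The learner only queries it for transition
      # data, never for its equations, so the scheme stays model-free.
      def plant(x, u):
          return 0.8 * np.sin(x) + 0.5 * np.tanh(u)

      # Reference generator r_{k+1} = rho(r_k); augmented state z = (x, r).
      def reference(r):
          return 0.9 * r

      def utility(z, u):
          x, r = z
          return (x - r) ** 2 + 0.1 * u ** 2   # quadratic tracking cost (assumed)

      # Single critic: Q(z, u) ~ w^T phi(z, u). A quadratic feature basis
      # stands in for the paper's neural network to keep the sketch short.
      def phi(z, u):
          x, r = z
          v = np.array([x, r, u])
          return np.outer(v, v)[np.triu_indices(3)]   # x^2, xr, xu, r^2, ru, u^2

      w = np.zeros(6)

      def Q(z, u, w):
          return phi(z, u) @ w

      def greedy_u(z, w, eta=0.1, steps=50):
          # Gradient-descent control update: u <- u - eta * dQ/du, with the
          # derivative taken by finite differences; clipping keeps the
          # control inside the explored range.
          u = 0.0
          for _ in range(steps):
              g = (Q(z, u + 1e-4, w) - Q(z, u - 1e-4, w)) / 2e-4
              u = np.clip(u - eta * g, -2.0, 2.0)
          return u

      # Off-policy dataset: transitions gathered under an exploratory
      # behavior policy, reused for every critic update.
      rng = np.random.default_rng(0)
      data = []
      for _ in range(400):
          x, r = rng.uniform(-1, 1, 2)
          u = rng.uniform(-2, 2)
          z, z_next = (x, r), (plant(x, u), reference(r))
          data.append((z, u, z_next))

      # Value-iteration sweeps: fit the critic weights by least squares to
      # the targets U(z, u) + min_{u'} Q_j(z', u').
      for _ in range(20):
          A = np.array([phi(z, u) for z, u, _ in data])
          b = np.array([utility(z, u) + Q(zn, greedy_u(zn, w), w)
                        for z, u, zn in data])
          w, *_ = np.linalg.lstsq(A, b, rcond=None)

      z = (1.0, 0.5)
      print("tracking control at z =", z, "->", greedy_u(z, w))

  The critic-only structure is visible in the sketch: there is no separate actor network, since the control is recovered on demand by descending the learned Q-function in u.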

published proceedings

  • IEEE Transactions on Neural Networks and Learning Systems

author list (cited authors)

  • Luo, B., Liu, D., Huang, T., & Wang, D.

citation count

  • 213

complete list of authors

  • Luo, Biao; Liu, Derong; Huang, Tingwen; Wang, Ding

publication date

  • October 2016