Data-based approximate policy iteration for affine nonlinear continuous-time optimal control design Academic Article

abstract

  • This paper addresses the model-free nonlinear optimal control problem by introducing the reinforcement learning (RL) technique and working directly from data. The nonlinear optimal control problem relies on the solution of the Hamilton-Jacobi-Bellman (HJB) equation, a nonlinear partial differential equation that is generally impossible to solve analytically. Moreover, most practical systems are too complicated for an accurate mathematical model to be established. To overcome these difficulties, we propose a data-based approximate policy iteration (API) method that uses real system data rather than a system model. First, a model-free policy iteration algorithm is derived and its convergence is proved. The algorithm is implemented with an actor-critic structure, in which actor and critic neural networks (NNs) approximate the control policy and the cost function, respectively. To update the weights of the actor and critic NNs, a least-squares approach is developed based on the method of weighted residuals. The data-based API is an off-policy RL method, in which exploration is improved by arbitrarily sampling data over the state and input domains. Finally, we test the data-based API control design method on a simple nonlinear system and then apply it to a rotational/translational actuator system. The simulation results demonstrate the effectiveness of the proposed method.
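
  For context, the HJB equation that the abstract refers to can be stated as follows for the affine continuous-time setting; the notation (f, g, Q, R, V*) is a standard choice assumed here for illustration and is not fixed by this record:

      % Affine dynamics and cost (notation assumed for illustration):
      %   \dot{x} = f(x) + g(x)\,u, \qquad J(x_0) = \int_0^\infty \big( Q(x) + u^\top R\, u \big)\, dt
      % HJB equation for the optimal cost V^* and optimal control u^*:
      0 = Q(x) + \nabla V^*(x)^\top \big( f(x) + g(x)\, u^*(x) \big) + u^*(x)^\top R\, u^*(x),
      \qquad u^*(x) = -\tfrac{1}{2} R^{-1} g(x)^\top \nabla V^*(x)

  Policy iteration alternates between evaluating the cost V_i of the current policy u_i and improving the policy via u_{i+1} = -(1/2) R^{-1} g^\top \nabla V_i; the point of the paper's data-based API is to carry out these steps from measured data, without requiring the model functions f and g.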
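
  The abstract also states that the actor and critic NN weights are updated by a least-squares approach based on the method of weighted residuals. The following is a minimal, self-contained sketch of such a least-squares critic update on arbitrarily sampled states; the basis functions, sampled data, and evaluation targets are illustrative assumptions, not the paper's exact construction:

      # Minimal sketch (not the paper's exact algorithm): least-squares update of
      # critic weights for a linear-in-parameters approximator V(x) ~ W^T phi(x).
      import numpy as np

      def phi(x):
          """Polynomial basis for a 2-D state; chosen here purely for illustration."""
          x1, x2 = x
          return np.array([x1**2, x1 * x2, x2**2])

      def critic_least_squares(states, targets):
          """Solve min_W sum_k (W^T phi(x_k) - y_k)^2 via least squares."""
          Phi = np.stack([phi(x) for x in states])           # (N, n_basis) regressor matrix
          W, *_ = np.linalg.lstsq(Phi, targets, rcond=None)  # least-squares weight vector
          return W

      # Usage with synthetic data: pretend policy evaluation produced the
      # targets y_k = x1^2 + 0.5*x2^2 at arbitrarily sampled states ("exploration").
      rng = np.random.default_rng(0)
      states = rng.uniform(-1.0, 1.0, size=(200, 2))
      targets = states[:, 0]**2 + 0.5 * states[:, 1]**2
      print(critic_least_squares(states, targets))           # ~ [1.0, 0.0, 0.5]

  Because the states are sampled arbitrarily over the domain rather than drawn from a single trajectory under the current policy, this kind of fit reflects the off-policy character the abstract describes.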

published proceedings

  • Automatica

author list (cited authors)

  • Luo, B., Wu, H., Huang, T., & Liu, D.

citation count

  • 197

complete list of authors

  • Luo, Biao; Wu, Huai-Ning; Huang, Tingwen; Liu, Derong

publication date

  • December 2014