Discrete-Time Nonzero-Sum Games for Multiplayer Using Policy-Iteration-Based Adaptive Dynamic Programming Algorithms. Academic Article

abstract

  • In this paper, we investigate nonzero-sum games for a class of discrete-time (DT) nonlinear systems using a novel policy iteration (PI) adaptive dynamic programming (ADP) method. The main idea of the proposed PI scheme is to use an iterative ADP algorithm to obtain the iterative control policies, which not only ensure that the system is stable but also minimize the performance index function of each player. This paper integrates game theory, optimal control theory, and reinforcement learning techniques to formulate and solve multiplayer DT nonzero-sum games. First, we design three actor-critic algorithms for the PI scheme, one offline and two online. Neural networks are then employed to implement these algorithms, and the corresponding stability analysis is provided via Lyapunov theory. Finally, a numerical simulation example is presented to demonstrate the effectiveness of the proposed approach.
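  As context for the abstract, the sketch below shows a generic policy-iteration recursion for an N-player DT nonzero-sum game. The notation (state x_k, control u_i of player i, utility U_i, value V_i, iteration index j) is assumed here for illustration and may differ from the paper's exact formulation.

```latex
% Sketch: generic policy-iteration recursion for an N-player DT nonzero-sum game.
% All symbols below are illustrative assumptions, not the paper's exact notation.

% System dynamics and infinite-horizon cost of player i:
\[
x_{k+1} = f\bigl(x_k, u_1(x_k), \ldots, u_N(x_k)\bigr), \qquad
J_i(x_0) = \sum_{k=0}^{\infty} U_i\bigl(x_k, u_1(x_k), \ldots, u_N(x_k)\bigr).
\]

% Policy evaluation: solve for player i's iterative value function under the
% current policy set u^{(j)} = (u_1^{(j)}, ..., u_N^{(j)}):
\[
V_i^{(j+1)}(x_k) = U_i\bigl(x_k, u_1^{(j)}(x_k), \ldots, u_N^{(j)}(x_k)\bigr)
                 + V_i^{(j+1)}(x_{k+1}).
\]

% Policy improvement: each player updates its policy against the others' current policies u_{-i}^{(j)}:
\[
u_i^{(j+1)}(x_k) = \arg\min_{u_i}
\Bigl\{ U_i\bigl(x_k, u_i, u_{-i}^{(j)}(x_k)\bigr)
      + V_i^{(j+1)}\bigl(f(x_k, u_i, u_{-i}^{(j)}(x_k))\bigr) \Bigr\}.
\]
```

  Per the abstract, critic and actor neural networks approximate the value functions and control policies appearing in these two steps, with one offline and two online actor-critic variants of the scheme.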

published proceedings

  • IEEE Trans Cybern

altmetric score

  • 3

author list (cited authors)

  • Zhang, H., Jiang, H., Luo, C., & Xiao, G.

citation count

  • 68

complete list of authors

  • Zhang, Huaguang; Jiang, He; Luo, Chaomin; Xiao, Geyang

publication date

  • October 2017