Multiresolution State-Space Discretization Method for Q-Learning with Function Approximation and Policy Iteration (Conference Paper)

abstract

  • A multiresolution state-space discretization method is developed for the episodic reinforcement learning method of Q-Learning. In addition, a genetic algorithm is used periodically during learning to approximate the action-value function. Policy iteration is added as a stopping criterion for the algorithm. For large-scale problems, Q-Learning often suffers from the Curse of Dimensionality due to the large number of possible state-action pairs. This paper develops a method whereby the state-space is adaptively discretized by progressively finer grids around areas of interest within the state, or learning, space. Policy iteration prevents unnecessary episodes at each level of discretization once learning has converged. Utility of the method is demonstrated by application to the problem of a morphing airfoil with two morphing parameters (two state variables). By setting the multiresolution method to define the area of interest by the goal the agent seeks, it is shown that the method can learn a specific goal to within 0.002 while reducing the total number of episodes needed to converge by 85% relative to the allotted total. It is also shown that a good approximation of the action-value function is produced, with 80% agreement between the tabulated and approximated policies, though empirically the approximated policy appears to be superior. © 2009 IEEE.
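
  The sketch below (not the authors' implementation) illustrates the two ideas the abstract combines: tabular Q-Learning on a coarse grid, with a policy-stability check standing in for the policy-iteration stopping criterion, and a multiresolution loop that re-centers a finer grid on the learned area of interest. The 1-D state space, dynamics, rewards, and all parameter values are illustrative assumptions; the paper's morphing-airfoil problem has two state variables and also uses a genetic algorithm to approximate the action-value function, which is omitted here.

    import numpy as np

    GOAL = 0.7                      # hypothetical goal in a 1-D state space [0, 1]
    rng = np.random.default_rng(0)

    def run_level(grid, episodes=300, alpha=0.5, gamma=0.9, eps=0.2, patience=5):
        # Tabular Q-Learning on one grid level; actions step one cell left or
        # right. Stops early once the greedy policy is unchanged for `patience`
        # consecutive episodes (standing in for the policy-iteration check).
        n = len(grid)
        goal_cell = int(np.argmin(np.abs(grid - GOAL)))
        Q = np.zeros((n, 2))
        stable = 0
        for ep in range(episodes):
            old_policy = Q.argmax(axis=1).copy()
            s = int(rng.integers(n))
            for _ in range(4 * n):                  # bounded episode length
                a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
                s2 = max(0, min(n - 1, s + (1 if a == 1 else -1)))
                r = 1.0 if s2 == goal_cell else -0.01
                Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
                s = s2
                if s == goal_cell:
                    break
            stable = stable + 1 if np.array_equal(Q.argmax(axis=1), old_policy) else 0
            if stable >= patience:
                return Q, ep + 1                    # converged early at this level
        return Q, episodes

    # Multiresolution loop: each level shrinks the window around the
    # highest-valued cell (the "area of interest") and lays a finer grid over it.
    lo, hi = 0.0, 1.0
    for level in range(5):
        grid = np.linspace(lo, hi, 9)
        Q, used = run_level(grid)
        best = grid[int(Q.max(axis=1).argmax())]
        print(f"level {level}: spacing {grid[1] - grid[0]:.4f}, "
              f"best state {best:.4f}, episodes used {used}")
        half = (hi - lo) / 4
        lo, hi = max(0.0, best - half), min(1.0, best + half)

  Each level halves the grid spacing, so resolution improves geometrically while the policy-stability check ends each level as soon as learning there has converged, which is the mechanism behind the episode savings the abstract reports.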

name of conference

  • 2009 IEEE International Conference on Systems, Man and Cybernetics

published proceedings

  • 2009 IEEE International Conference on Systems, Man and Cybernetics (SMC 2009), Vols. 1-9

author list (cited authors)

  • Lampton, A., & Valasek, J.

citation count

  • 0

complete list of authors

  • Lampton, Amanda; Valasek, John

publication date

  • October 2009