Improved Adaptive-Reinforcement Learning Control for Morphing Unmanned Air Vehicles

abstract

  • This paper presents an improved Adaptive-Reinforcement Learning Control methodology for morphing control of unmanned air vehicles. A reinforcement learning morphing control function that learns the optimal shape-change policy is integrated with an adaptive dynamic inversion trajectory tracking function. An episodic unsupervised learning simulation using the Q-learning method is developed to replace an earlier, less accurate Actor-Critic algorithm. Sequential Function Approximation, a Galerkin-based scattered data approximation scheme, replaces a K-Nearest Neighbors (KNN) method and generalizes the learning from previously experienced quantized states and actions to the continuous state-action space, much of which may not have been experienced before. The improved method showed smaller errors and better learning of the optimal shape than the KNN method.
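
  The following is a minimal, illustrative sketch of the tabular Q-learning step the abstract refers to, applied to a toy quantized shape-change problem. The state and action discretization, reward, and transition model here are assumptions for illustration only; they are not the vehicle dynamics, cost function, or Sequential Function Approximation generalization used in the paper, and the coupling with the adaptive dynamic inversion tracking controller is not shown.

  ```python
  import numpy as np

  # Toy Q-learning for a quantized shape-change (morphing) policy.
  # All quantities below are hypothetical placeholders, not values from the paper.
  N_STATES = 20          # quantized shape states (e.g., a discretized planform parameter)
  N_ACTIONS = 3          # shape-change commands: shrink, hold, grow
  GOAL_STATE = 14        # assumed optimal shape for the current flight condition

  alpha, gamma, epsilon = 0.1, 0.95, 0.1   # learning rate, discount factor, exploration rate
  rng = np.random.default_rng(0)
  Q = np.zeros((N_STATES, N_ACTIONS))

  def step(state, action):
      """Toy transition: the action moves the quantized shape by -1, 0, or +1."""
      next_state = int(np.clip(state + (action - 1), 0, N_STATES - 1))
      reward = 0.0 if next_state == GOAL_STATE else -1.0   # penalize being off the optimal shape
      return next_state, reward

  for episode in range(500):                # episodic learning from simulated experience
      state = int(rng.integers(N_STATES))
      for _ in range(50):
          if rng.random() < epsilon:        # epsilon-greedy exploration
              action = int(rng.integers(N_ACTIONS))
          else:
              action = int(np.argmax(Q[state]))
          next_state, reward = step(state, action)
          # One-step Q-learning update toward the bootstrapped target
          Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
          state = next_state

  greedy_policy = Q.argmax(axis=1)          # learned shape-change command per quantized state
  ```

  In the paper, learning like this over quantized states and actions is then generalized to the continuous state-action space with Sequential Function Approximation (in place of KNN), rather than being used directly as a lookup table.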

published proceedings

  • IEEE Trans Syst Man Cybern B Cybern

author list (cited authors)

  • Valasek, J., Doebbler, J., Tandale, M. D., & Meade, A. J.

citation count

  • 53

complete list of authors

  • Valasek, John; Doebbler, James; Tandale, Monish D.; Meade, Andrew J.

publication date

  • August 2008