Improved Adaptive-Reinforcement Learning Control for Morphing Unmanned Air Vehicles

Conference Paper

abstract

  • This paper presents an improved Adaptive-Reinforcement Learning Control methodology applied to the problem of unmanned air vehicle morphing. Sequential Function Approximation, a Galerkin-based scattered data approximation scheme, is used to generalize the learning from previously experienced quantized states and actions to the continuous state-action space, which may include states and actions not experienced before. The reinforcement learning morphing control function is integrated with an adaptive dynamic inversion trajectory tracking control function. An episodic unsupervised learning simulation using the Q-Learning method is developed to learn the optimal shape change policy, with optimality addressed by cost functions representing the optimal shapes corresponding to flight conditions. The methodology is demonstrated with a numerical example of a hypothetical 3-D smart unmanned air vehicle that can morph in all three spatial dimensions, tracking a specified trajectory and autonomously morphing over a set of shapes corresponding to the flight conditions along the trajectory. Results presented in the paper show that this methodology is capable of learning the required shape and morphing into it, and of accurately tracking the reference trajectory in the presence of parametric uncertainties, unmodeled dynamics, and disturbances.
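
As a rough illustration of the learning component described in the abstract, the sketch below pairs episodic Q-Learning over quantized shape states, indexed by flight condition, with a simple radial-basis interpolation that generalizes the learned values to a continuous shape coordinate. The state and action discretization, the distance-based cost, and the interpolator are hypothetical stand-ins chosen for brevity; they are not the paper's Sequential Function Approximation scheme or its actual cost functions.

```python
# Hypothetical sketch: episodic Q-Learning over quantized morphing shapes for a
# handful of flight conditions, plus a simple radial-basis interpolation of the
# learned Q-values onto a continuous shape coordinate. The discretization, cost
# function, and interpolator are illustrative assumptions only.
import numpy as np

N_COND = 4           # assumed number of distinct flight conditions
N_STATES = 10        # assumed quantization of a single shape coordinate
N_ACTIONS = 3        # morph commands: 0 = contract, 1 = hold, 2 = expand
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

rng = np.random.default_rng(0)
Q = np.zeros((N_COND, N_STATES, N_ACTIONS))

def optimal_shape(cond):
    """Assumed mapping from a flight condition to its optimal quantized shape."""
    return int(round(cond * (N_STATES - 1) / (N_COND - 1)))

def step(state, action, target):
    """Take one quantized morph step; cost is the distance from the optimal shape."""
    nxt = int(np.clip(state + (action - 1), 0, N_STATES - 1))
    reward = -abs(nxt - target)     # stand-in for the paper's shape cost function
    return nxt, reward

# Episodic, unsupervised learning of the shape-change policy
for _ in range(3000):
    cond = int(rng.integers(0, N_COND))
    target = optimal_shape(cond)
    state = int(rng.integers(0, N_STATES))
    for _ in range(30):
        if rng.random() < EPS:
            action = int(rng.integers(0, N_ACTIONS))        # explore
        else:
            action = int(np.argmax(Q[cond, state]))         # exploit
        nxt, reward = step(state, action, target)
        Q[cond, state, action] += ALPHA * (
            reward + GAMMA * Q[cond, nxt].max() - Q[cond, state, action])
        state = nxt

def q_continuous(cond, x, action, width=1.0):
    """Generalize the tabular Q-values to a continuous shape coordinate x."""
    centers = np.arange(N_STATES)
    w = np.exp(-((x - centers) ** 2) / (2 * width ** 2))
    return float(w @ Q[cond, :, action] / w.sum())

# Greedy morph command at an unvisited continuous shape for flight condition 2
best = int(np.argmax([q_continuous(2, 3.7, a) for a in range(N_ACTIONS)]))
print("greedy action at shape 3.7, condition 2:", best)
```

The tabular learner and the smooth generalizer are kept separate here only to make the two roles visible; in the paper the generalization from quantized experience to the continuous state-action space is handled by the Sequential Function Approximation scheme itself.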

name of conference

  • Infotech@Aerospace

published proceedings

  • Infotech@Aerospace

author list (cited authors)

  • Doebbler, J., Tandale, M., Valasek, J., & Meade, A.

citation count

  • 3

complete list of authors

  • Doebbler, James||Tandale, Monish||Valasek, John||Meade, Andrew

publication date

  • September 2005