Deep Reinforcement Learning on Intelligent Motion Video Guidance for Unmanned Air System Ground Target Tracking (Conference Paper)

abstract

  • Tracking the motion of ground targets from aerial imagery can benefit commercial, civilian, and military applications. On small fixed-wing unmanned air systems that carry strapdown rather than gimbaled cameras, it is a challenging problem because the aircraft must maneuver to keep the ground target in the camera image frame. Previous approaches for strapdown cameras achieved satisfactory tracking performance using standard reinforcement learning algorithms, but those algorithms assumed constant airspeed and constant altitude because the number of states and actions was restricted. This paper addresses the ground target tracking problem with a Policy Gradient Deep Reinforcement Learning controller. Learning is performed on the continuous, full aircraft state and uses multiple states and actions. Compared to previous approaches, the major advantage of this controller is its ability to handle the full-state ground target tracking case. Policies are trained for three target cases: static, constant linear motion, and random motion. Results in a simulated environment show that the trained Policy Gradient Deep Reinforcement Learning controller consistently keeps a randomly maneuvering target in the camera image frame. The sensitivity of the learning algorithm to hyperparameter selection is also investigated, since it can drastically impact tracking performance.

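  The paper does not publish source code; purely as an illustration of the general technique named in the abstract, the sketch below shows a generic REINFORCE-style policy gradient update for a continuous-action controller whose reward penalizes the target's offset from the image center. The state and action dimensions, toy dynamics, reward, and all hyperparameter values are hypothetical placeholders, not the authors' simulation environment.

    # Minimal REINFORCE-style policy gradient sketch for a tracking controller.
    # All environment details below are assumed placeholders, not the paper's setup.
    import numpy as np

    rng = np.random.default_rng(0)

    STATE_DIM = 12    # assumed full aircraft state (position, velocity, attitude, rates)
    ACTION_DIM = 3    # assumed commands, e.g. roll rate, pitch rate, throttle
    SIGMA = 0.1       # fixed exploration noise of the Gaussian policy
    ALPHA = 1e-3      # learning rate
    GAMMA = 0.99      # discount factor

    # Linear Gaussian policy: mean action = W @ state; only W is learned here.
    W = np.zeros((ACTION_DIM, STATE_DIM))

    def policy(state):
        """Sample an action from a Gaussian centered on the linear policy mean."""
        mean = W @ state
        action = mean + SIGMA * rng.standard_normal(ACTION_DIM)
        return action, mean

    def toy_env_step(state, action):
        """Placeholder dynamics and reward: penalize the (assumed) pixel offset
        of the target from the image center, stored in the first two states."""
        next_state = state + 0.01 * rng.standard_normal(STATE_DIM)
        next_state[:2] -= 0.05 * action[:2]          # actions nudge the offset
        reward = -np.linalg.norm(next_state[:2])     # keep target near center
        return next_state, reward

    for episode in range(200):
        state = rng.standard_normal(STATE_DIM)
        states, actions, means, rewards = [], [], [], []
        for t in range(100):
            action, mean = policy(state)
            next_state, reward = toy_env_step(state, action)
            states.append(state)
            actions.append(action)
            means.append(mean)
            rewards.append(reward)
            state = next_state

        # Discounted return from each time step.
        G, returns = 0.0, []
        for r in reversed(rewards):
            G = r + GAMMA * G
            returns.append(G)
        returns.reverse()
        returns = np.array(returns)
        returns -= returns.mean()                    # simple baseline to reduce variance

        # REINFORCE update: grad of log N(a; W s, SIGMA^2 I) w.r.t. W is
        # ((a - mean) / SIGMA^2) s^T, weighted by the return.
        for s, a, m, ret in zip(states, actions, means, returns):
            W += ALPHA * ret * np.outer((a - m) / SIGMA**2, s)

  A deep policy, as in the paper, would replace the linear mean W @ state with a neural network and update its weights with the same likelihood-ratio gradient, typically via an automatic differentiation framework.
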
name of conference

  • AIAA Scitech 2019 Forum

published proceedings

  • AIAA Scitech 2019 Forum

author list (cited authors)

  • Goecks, V. G., & Valasek, J.

citation count

  • 2

complete list of authors

  • Goecks, Vinicius G.; Valasek, John

publication date

  • January 2019