Intelligent motion video guidance for Unmanned Air System ground target surveillance
Advances in unmanned flight have led to the development of Unmanned Air Systems (UAS) capable of carrying state-of-the-art video capture systems for surveillance and tracking. These vehicles can fly through a target area with a mounted camera while human operators control both the UAS and the camera to survey objects deemed targets. Such systems have worked well under human control, but operating them autonomously, to reduce operator workload and manpower, is considerably more challenging when the camera is fixed to the airframe rather than mounted on a gimbal, since the aircraft itself must be steered in order to steer the camera. The presence of winds must also be accounted for. This paper develops a reinforcement learning (RL) based algorithm for surveillance of ground targets by UAS with fixed pan and tilt cameras, in the presence of winds. The specific RL algorithm used is Q-learning, and the objective is to bring any target located in an image captured by the camera to the center of the image, and hold it there, using the learned control policy. The learning agent initially determines offline how to control the UAS and camera to drive a target from any point in the image to the center; it then continues to learn during actual operation, refining and updating the previously offline-learned control policy. Results presented in the paper demonstrate that the approach has merit for autonomous surveillance of ground targets. © 2012 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.
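To make the Q-learning idea concrete, the sketch below shows tabular Q-learning on a toy version of the problem: the image is modeled as a small grid of cells, the state is the cell containing the target, and each action nudges the target one cell (a stand-in for steering the aircraft/camera). This is purely illustrative; the grid size, reward, actions, and all function names are assumptions of this sketch, not details from the paper.

```python
import random

# Toy model (assumed, not from the paper): the camera image is an N x N grid;
# the state is the grid cell holding the target; each action shifts the target
# one cell, standing in for a steering command to the UAS/camera.
N = 7                               # image grid resolution (assumed)
CENTER = (N // 2, N // 2)           # goal: target held in the center cell
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # nudge up/down/left/right

def step(state, action):
    """Apply an action; reward is the negative distance of the target from center."""
    r = min(max(state[0] + action[0], 0), N - 1)
    c = min(max(state[1] + action[1], 0), N - 1)
    reward = -(abs(r - CENTER[0]) + abs(c - CENTER[1]))
    return (r, c), reward

def greedy(Q, s):
    """Pick the action with the highest learned Q-value in state s."""
    return max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))

def train(episodes=5000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Standard tabular Q-learning: Q <- Q + alpha*(r + gamma*max_a' Q(s',a') - Q)."""
    rng = random.Random(seed)
    Q = {}  # maps (state, action_index) -> value
    for _ in range(episodes):
        s = (rng.randrange(N), rng.randrange(N))   # target appears anywhere
        for _ in range(30):
            a = rng.randrange(len(ACTIONS)) if rng.random() < eps else greedy(Q, s)
            s2, r = step(s, ACTIONS[a])
            best_next = max(Q.get((s2, i), 0.0) for i in range(len(ACTIONS)))
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))
            s = s2
    return Q

def track(Q, start, max_steps=20):
    """Follow the learned greedy policy until the target reaches the image center."""
    s = start
    for _ in range(max_steps):
        if s == CENTER:
            break
        s, _ = step(s, ACTIONS[greedy(Q, s)])
    return s
```

After `train()`, following the greedy policy via `track(Q, (0, 0))` drives the target from a corner of the (toy) image to its center, which mirrors the paper's objective of centering and holding the target; the paper's offline/online split corresponds to running `train` before deployment and then continuing the same update rule on live data.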