Terrain Mapping and Landing Operations Using Vision Based Navigation Systems
This paper documents a recent end-to-end demonstration of a simultaneous localization and mapping (SLAM) experiment. In the experiment, an omnidirectional robotic platform equipped with a stereo camera acts as a scout agent. The acquired image streams are processed autonomously by a state-of-the-art computational vision pipeline, which generates three-dimensional models of the unstructured terrain to designed accuracies. A rigorously linear algorithm is proposed for fast, efficient computation of relative navigation hypotheses. These hypotheses feed an outer-loop statistical decision process that selects the best relative motion model while simultaneously deriving error metrics for that model. The resulting model data are used to determine a "safe" landing area for an unmanned air vehicle (quadrotor). Once relayed, this information enables the safe landing of the quadrotor, concluding the experiment. Three-dimensional models of the scene are rendered along with the relative navigation solutions of the platform motion. The experimental results give strong grounds for optimism that passive, vision-based navigation systems can enter routine practice for autonomous landing and navigation. © 2011 by Manoranjan Majji and John Junkins.
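The paper's specific linear algorithm and outer-loop decision process are not reproduced in this abstract. As a generic illustration only, the hypothesize-and-test pattern described above can be sketched as a linear (SVD-based) rigid-motion fit from 3-D point correspondences inside a RANSAC-style outer loop that selects the best motion model and reports a residual error metric. All function names and parameters below are hypothetical, not the authors' implementation.

```python
# Illustrative sketch (not the paper's algorithm): linear rigid-motion
# estimation from 3-D point correspondences, with a RANSAC-style outer
# loop that picks the best motion hypothesis and derives an error metric.
import numpy as np

def rigid_motion(P, Q):
    """Least-squares R, t such that Q ~ R @ P + t (Kabsch/Procrustes).
    P, Q are 3 x n arrays of corresponding 3-D points."""
    p0 = P.mean(axis=1, keepdims=True)
    q0 = Q.mean(axis=1, keepdims=True)
    # Cross-covariance of centered point sets; its SVD yields the rotation.
    U, _, Vt = np.linalg.svd((Q - q0) @ (P - p0).T)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # force det(R)=+1
    R = U @ D @ Vt
    t = q0 - R @ p0
    return R, t

def best_motion(P, Q, iters=200, tol=0.05, seed=0):
    """RANSAC-like outer loop: hypothesize motions from minimal samples,
    score by inlier count, refit on inliers, return an RMS error metric."""
    rng = np.random.default_rng(seed)
    n = P.shape[1]
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(n, 3, replace=False)          # minimal sample
        R, t = rigid_motion(P[:, idx], Q[:, idx])
        res = np.linalg.norm(Q - (R @ P + t), axis=0)  # per-point residuals
        inliers = res < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the consensus set and compute the model's RMS residual.
    R, t = rigid_motion(P[:, best_inliers], Q[:, best_inliers])
    res = np.linalg.norm(Q[:, best_inliers] - (R @ P[:, best_inliers] + t), axis=0)
    return R, t, float(np.sqrt(np.mean(res ** 2)))
```

In this sketch the inner fit is strictly linear (a single SVD), so each hypothesis is cheap, and robustness to bad correspondences comes entirely from the statistical outer loop, mirroring the division of labor the abstract describes.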
Authors: Majji, M., Davis, J., Doebbler, J., Junkins, J., Macomber, B., Vavrina, M., & Vian, J.