IMAGE PROCESSING OF EARTH AND MOON IMAGES FOR OPTICAL NAVIGATION SYSTEMS
This paper presents a summary of methods for processing real and synthetic images of the Moon and Earth for optical navigation of spacecraft. The methods were developed to meet the autonomous-navigation requirements of NASA's Orion missions, but they apply to a broad range of optical navigation problems. Given images of a celestial body taken by a pinhole camera, the image processing estimates the observer's position using knowledge of time, attitude, and camera parameters, together with a rough initial position estimate used to identify the observed body and the Sun illumination direction. Image processing follows a multi-step procedure that produces an estimate of the relative position between observer and observed body. Preliminary steps remove image distortion and select high-contrast pixels from the gradient of the image. Edge-detection schemes then attempt to retain only pixels belonging to the edge of the target body and use those pixels to obtain a first estimate of the body's centroid and distance. This estimate is subsequently refined by fitting a two-dimensional Gaussian model to the gradient behavior of a set of pixels selected around the illuminated hard edge. These methods have been applied to synthetic images generated using NASA's EDGE software as well as to real images of the Moon taken by a Nikon camera on board the ISS. Results from each image set are presented, and the strengths of the algorithm are evaluated against the Orion mission requirements. Areas of future work are suggested as well.
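The first-estimate stage described above (gradient-based selection of high-contrast edge pixels, followed by a centroid and apparent-radius fit) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a fully illuminated circular disk on a synthetic image and uses a Kasa least-squares circle fit as a stand-in for the paper's edge-based estimator; the subsequent 2-D Gaussian refinement of the gradient profile is omitted.

```python
import numpy as np

def gradient_magnitude(img):
    """Per-pixel gradient magnitude (simple finite differences)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def fit_circle(xs, ys):
    """Kasa algebraic circle fit: least-squares center (cx, cy) and radius r.

    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) linearly.
    """
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = xs**2 + ys**2
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = c[0] / 2.0, c[1] / 2.0
    r = np.sqrt(c[2] + cx**2 + cy**2)
    return cx, cy, r

# Synthetic fully lit disk as a stand-in for a rendered Moon/Earth image.
N = 200
yy, xx = np.mgrid[0:N, 0:N]
true_cx, true_cy, true_r = 95.0, 110.0, 60.0
img = ((xx - true_cx)**2 + (yy - true_cy)**2 <= true_r**2).astype(float)

# Select high-contrast pixels from the image gradient, then fit.
g = gradient_magnitude(img)
ys, xs = np.nonzero(g > 0.5 * g.max())
cx, cy, r = fit_circle(xs.astype(float), ys.astype(float))
```

With a known camera focal length, the fitted apparent radius gives the range to the body and the fitted center gives its line-of-sight direction, which is the role this first estimate plays before the Gaussian edge refinement.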
author list (cited authors)
Borissov, S., & Mortari, D.