PARALLEL GENERATION OF EXTREMAL FIELD MAPS FOR OPTIMAL MULTI-REVOLUTION CONTINUOUS THRUST ORBIT TRANSFERS
Conference Paper
Abstract
We simulate hybrid thrust transfers to rendezvous with space debris in orbit about the Earth. The hybrid thrust transfer consists of a two-impulse maneuver at the terminal boundaries, augmented with continuous low thrust sustained for the duration of the flight. This optimal control problem is formulated using the path-approximation numerical integration method Modified Chebyshev-Picard Iteration (MCPI). This integration method can be formulated for solving both initial value and boundary value problems. The boundary value problem formulation does not require a shooting method and converges over about 1/3 of an orbit; this interval can be extended to about 95% of an orbit with regularization. To extend the domain further, to multiple-revolution transfers, we implement a shooting method known as the Method of Particular Solutions (MPS) and use the MCPI initial value problem formulation to integrate the state and costate equations. A p-iteration Keplerian Lambert solver provides the initial guess for the optimal control problem. When the continuous thrust is "turned off", the solution of the optimal control formulation reduces to the two-impulse two-point boundary value problem with a zero-thrust coast. For some transfers the hybrid thrust yields a lower terminal ΔV cost than the two-impulse solution, while for others the cost increases; the outcome depends on the relative orbits and the initial phasing of the satellites. Determining the globally optimal sequence of maneuvers for retrieving orbital debris can require simulating thousands of feasible transfer trajectories. We utilize a parallel architecture on our cluster at the LASR Lab (Texas A&M) to compute the ΔV cost for each transfer trajectory and display the results on an extremal field map. Both MCPI and MPS afford several layers of parallelization, and exploiting these reduces the computation time by at least an order of magnitude relative to a serial implementation.
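
As a rough illustration of the path-approximation idea behind MCPI, the sketch below implements plain Picard iteration on Chebyshev nodes using NumPy's chebyshev module and propagates a circular two-body orbit over one third of a revolution in canonical units. It is an assumption-laden toy (the node count, tolerance, and cold-start constant guess are arbitrary choices), not the vector-matrix MCPI formulation or the MPS shooting layer described in the paper.

import numpy as np
from numpy.polynomial import chebyshev as C

def chebyshev_picard_ivp(f, t_span, x0, n_nodes=40, tol=1e-12, max_iter=200):
    """Solve dx/dt = f(t, x), x(t0) = x0, by Picard iteration on Chebyshev nodes."""
    t0, tf = t_span
    x0 = np.asarray(x0, dtype=float)
    tau = C.chebpts2(n_nodes)                          # nodes on [-1, 1]
    t = t0 + 0.5 * (tf - t0) * (tau + 1.0)             # mapped physical times
    X = np.tile(x0, (n_nodes, 1))                      # cold start: constant guess
    for _ in range(max_iter):
        G = np.array([f(ti, xi) for ti, xi in zip(t, X)])        # RHS at the nodes
        c = C.chebfit(tau, G, deg=n_nodes - 1)                   # fit each state component
        ci = C.chebint(c, axis=0)                                # antiderivative coefficients
        integral = C.chebval(tau, ci).T - C.chebval(-1.0, ci)    # integral from -1 to tau
        X_new = x0 + 0.5 * (tf - t0) * integral                  # Picard update
        if np.max(np.abs(X_new - X)) < tol:
            return t, X_new
        X = X_new
    return t, X

def two_body(t, x, mu=1.0):
    # Unperturbed two-body dynamics in canonical units (mu = 1).
    r = x[:3]
    return np.concatenate([x[3:], -mu * r / np.linalg.norm(r) ** 3])

# Circular orbit, propagated over 1/3 of a revolution.
t, X = chebyshev_picard_ivp(two_body, (0.0, 2.0 * np.pi / 3.0), [1, 0, 0, 0, 1, 0])
print(X[-1])   # state at the end of the arc

In the paper's setting, many such propagations (one per candidate transfer trajectory) are independent of one another, which is what makes the ΔV cost evaluation for the extremal field map well suited to the parallel cluster implementation described above.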