Pursuit-evasion games lie at the intersection of game theory and optimal control theory. They are often referred to as differential games because the relative system dynamics are modeled by the pursuer's and evader's differential equations of motion. Pursuit-evasion games diverge from traditional optimal control problems through the participation of multiple intelligent agents with conflicting goals. The individual goals of the agents are defined through multiple cost functions, which determine how each player behaves throughout the game. The optimal performance of each player depends on how much knowledge the player has about itself, its opponent, and the system. Complete-information games represent the ideal case, in which each player can play truly optimally because all pertinent information about the game is readily available to every player. Player performance in a pursuit-evasion game diminishes greatly as information availability moves away from this ideal case and toward more realistic scenarios. Methods that maintain satisfactory performance in games with incomplete, imperfect, and uncertain information are highly desirable because optimal pursuit-evasion solutions apply to high-risk missions, including spacecraft rendezvous and missile interception. Behavior learning techniques can be used to estimate the strategy of an opponent and reduce the pursuit-evasion game to a one-sided optimal control problem. The application of behavior learning is identified in final-time-fixed, finite-horizon, and final-time-free situations. A two-step dynamic inversion process is presented to fit systems with nonlinear kinematics and dynamics into the behavior learning framework for continuous, linear-quadratic games.
These techniques are applied to minimum-time, spacecraft reorientation, and missile interception examples to illustrate their advantage in real-world applications where essential information is unavailable.