Genetic algorithms (GAs), like NNs, can be used to fit highly nonlinear functional forms, such as empirical interatomic potentials, from a large ensemble of data. Briefly, a genetic algorithm is a stochastic global search method that mimics the process of natural biological evolution. GAs operate on a population of candidate solutions, applying the principle of survival of the fittest to generate progressively better approximations to a solution. In each iteration (also known as a generation), a new set of approximations is created by selecting individuals from the solution space according to their fitness levels and breeding them together using operators borrowed from natural genetics. This process evolves populations of individuals that have a higher probability of being “fitter,” i.e., better approximations of the specified potential values, than the individuals from which they were created, just as in natural adaptation. The most time-consuming part of implementing a GA is often the evaluation of the objective, or fitness, function. Here the objective function O[P] is expressed as a sum of squared errors computed over a given large ensemble of data, so the time required to evaluate it becomes an important factor. Since a GA is well suited to implementation on parallel computers, this time can be reduced significantly by parallel processing. A better approach, however, is to map out the objective function beforehand, evaluating it concurrently for several candidate solutions, and to use this mapping when executing the GA; doing so obviates the need for cumbersome direct evaluation of the objective function during the search. Neural networks are well suited to map the functional relationship between the objective function and the various parameters of the chosen functional form.
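To make the cost structure concrete, the sketch below shows a minimal GA of the kind described above, with tournament selection, crossover, and mutation; the model, data, and all parameter names are hypothetical illustrations, not the study's actual potential form. The per-generation fitness loop over the whole data ensemble is the expensive step that motivates the surrogate idea.

```python
# Minimal GA sketch (illustrative; the model form and all names are assumptions,
# not the potential fitted in the study).
import math
import random

def objective(params, data):
    # Sum-of-squared-errors objective O[P] over the data ensemble, using a
    # hypothetical pair-potential-like model y = a*exp(-b*x) + c.
    a, b, c = params
    return sum((a * math.exp(-b * x) + c - y) ** 2 for x, y in data)

def tournament_select(pop, fits, k=3):
    # Survival of the fittest: the best (lowest-error) of k random individuals wins.
    contenders = random.sample(range(len(pop)), k)
    return pop[min(contenders, key=lambda i: fits[i])]

def crossover(p1, p2):
    # Uniform crossover: each gene is taken from either parent with equal chance.
    return [g1 if random.random() < 0.5 else g2 for g1, g2 in zip(p1, p2)]

def mutate(ind, rate=0.1, scale=0.1):
    # Gaussian mutation applied gene-wise with a small probability.
    return [g + random.gauss(0, scale) if random.random() < rate else g for g in ind]

def run_ga(data, pop_size=40, n_genes=3, generations=50, seed=0):
    random.seed(seed)
    pop = [[random.uniform(-1, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        # This evaluation loop over the full data ensemble dominates the runtime;
        # it is the part a precomputed surrogate would replace.
        fits = [objective(ind, data) for ind in pop]
        pop = [mutate(crossover(tournament_select(pop, fits),
                                tournament_select(pop, fits)))
               for _ in range(pop_size)]
    fits = [objective(ind, data) for ind in pop]
    return min(zip(fits, pop))

# Synthetic data generated from known parameters (0.8, 0.5, 0.2) as a self-check.
data = [(x / 10, 0.8 * math.exp(-0.5 * (x / 10)) + 0.2) for x in range(20)]
best_fit, best_params = run_ga(data)
```

Because every individual in every generation triggers a full pass over the data, the total number of objective evaluations grows as population size times generation count, which is why the study seeks to avoid direct evaluation.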
This study presents an approach that exploits the universal function approximation capability of multilayer neural networks to accelerate a GA for fitting atomic-system potentials. The approach involves evaluating the objective function, which for the present application is the mean squared error (MSE) between the computed and model-estimated potential values, and training a multilayer neural network with the decision variables as input and the objective function value as output.
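The surrogate step described above can be sketched as follows: sample candidate parameter vectors, evaluate the expensive objective once per sample, and train a small multilayer network to map decision variables to the objective value, so the GA can later query the cheap network instead. This is a minimal illustration under assumed settings (the toy quadratic objective, network size, and learning rate are all hypothetical), not the study's actual architecture.

```python
# Sketch of training an MLP surrogate for the objective function.
# All names, the toy objective, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def objective(P):
    # Hypothetical expensive objective: an MSE-like quadratic bowl in parameter space.
    return np.sum((P - 0.3) ** 2, axis=-1)

# 1. Map out the objective over sampled solutions (done once, possibly in parallel).
X = rng.uniform(-1, 1, size=(500, 3))   # decision variables (network input)
y = objective(X)                        # objective values (network target)

# 2. Train a one-hidden-layer tanh MLP by full-batch gradient descent on MSE loss.
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)            # hidden activations
    pred = (H @ W2 + b2).ravel()        # surrogate estimate of the objective
    err = pred - y                      # residual driving the weight updates
    # Backpropagate the loss through both layers (gradients averaged over samples).
    gW2 = H.T @ err[:, None] / len(X); gb2 = err.mean(keepdims=True)
    dH = (err[:, None] @ W2.T) * (1 - H ** 2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

def surrogate(P):
    # Cheap fitness evaluation the GA would call in place of objective(P).
    return (np.tanh(P @ W1 + b1) @ W2 + b2).ravel()
```

Once trained, each surrogate call costs two small matrix products instead of a pass over the full data ensemble, which is where the acceleration of the GA comes from.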