A new method for optimizing the best fitting of numerical data is presented, applied to polynomial functions of degree m and to exponential functions. The optimization, obtained through a linear change of variable, is defined as the one that minimizes the condition number of the matrix to be inverted. The optimal limits of the new abscissa range are obtained by the steepest descent method, whose starting point is the optimal solution for data sampled with a constant step. This starting point, which depends on the polynomial degree m and on the number of data points n, is given in closed form for m = 1 and m = 2, while for m > 2 it is provided by a best-fitting function that approximates the numerically computed optimal solutions. A numerical example shows the substantial improvement of this method over the classical best-fitting method. The resulting Optimal Best Fitting is not restricted to polynomial or exponential functions; it can also be applied to other fitting functions, such as Legendre orthogonal polynomials and trigonometric functions.
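To make the idea concrete, the following is a minimal sketch (not the paper's code) of the underlying quantity being optimized: the abscissas are mapped linearly onto a candidate range [a, b], the design matrix of a degree-m polynomial fit is built, and the condition number of the matrix to be inverted in the normal equations is evaluated. The particular mapping formula, the use of numpy.vander, and the generic Nelder-Mead search over (a, b) are illustrative assumptions; the paper instead applies steepest descent started from the closed-form or fitted optimum that depends on m and n.

    # Minimal sketch, assuming a linear map of the abscissas onto [a, b] and a
    # generic optimizer in place of the paper's steepest descent procedure.
    import numpy as np
    from scipy.optimize import minimize

    def normal_matrix_cond(x, m, a, b):
        """Condition number of V^T V after mapping x linearly onto [a, b]."""
        t = a + (b - a) * (x - x.min()) / (x.max() - x.min())  # linear variable change
        V = np.vander(t, m + 1)                                # degree-m design matrix
        return np.linalg.cond(V.T @ V)                         # matrix to be inverted

    # Example: n = 21 abscissas sampled with a constant step, degree m = 3
    x = np.linspace(2.0, 12.0, 21)
    m = 3
    print("original range :", normal_matrix_cond(x, m, x.min(), x.max()))

    # Illustrative search over the new range limits (the paper uses steepest
    # descent from a starting point depending on m and n, not reproduced here).
    res = minimize(lambda p: normal_matrix_cond(x, m, p[0], p[1]),
                   x0=[-1.0, 1.0], method="Nelder-Mead")
    a_opt, b_opt = res.x
    print("optimized range:", normal_matrix_cond(x, m, a_opt, b_opt))

The sketch only illustrates the objective (the condition number as a function of the new range limits); the actual improvement reported in the paper comes from the dedicated starting point and descent procedure described above.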