Learning by canonical smooth estimation. II. Learning and choice of model complexity Academic Article

abstract

  • This paper analyzes the properties of a procedure for learning from examples. This "canonical learner" is based on a canonical error estimator developed in a companion paper. In learning problems, one observes data consisting of labeled sample points, and the goal is to find a model or "hypothesis" from a set of candidates that will accurately predict the labels of new sample points. The expected mismatch between a hypothesis's prediction and the actual label of a new sample point is called the hypothesis's "generalization error." We compare the canonical learner with the traditional technique of finding hypotheses that minimize the relative frequency-based empirical error estimate. It is shown that, for a broad class of learning problems, the set of cases for which such empirical error minimization works is a proper subset of the cases for which the canonical learner works. We derive bounds showing that the number of samples required by these two methods is comparable. We also address the issue of how to determine the appropriate complexity for the class of candidate hypotheses. Many existing techniques solve this problem at the expense of requiring the evaluation of an absolute, a priori measure of each hypothesis's complexity. The method we present does not. It is based on an extension of the canonical learner and uses a natural, relative measure of each model's complexity. We show that this method will learn for a variety of common parametric hypothesis classes. Also, for a broad class of learning problems, we show that this method works whenever a certain conventional method for choosing model complexity does. © 1996 IEEE.
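  • For reference, the two error notions discussed in the abstract can be written as follows. This is a standard formulation rather than notation taken from the paper itself; the symbols h, H, and (x_i, y_i) are illustrative.

```latex
% Generalization error of a hypothesis h: expected mismatch between
% its prediction and the true label of a new sample point (X, Y)
e(h) = \Pr\{\, h(X) \neq Y \,\}

% Relative frequency-based empirical error on n labeled samples (x_1, y_1), \dots, (x_n, y_n)
\hat{e}_n(h) = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\{\, h(x_i) \neq y_i \,\}

% Empirical error minimization: choose a candidate hypothesis from the class
% \mathcal{H} with the smallest empirical error
\hat{h}_n \in \operatorname*{arg\,min}_{h \in \mathcal{H}} \hat{e}_n(h)
```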

published proceedings

  • IEEE Transactions on Automatic Control

author list (cited authors)

  • Buescher, K. L., & Kumar, P. R.

citation count

  • 22

complete list of authors

  • Buescher, K. L.; Kumar, P. R.

publication date

  • April 1996