Universal algorithms for learning theory part I: Piecewise constant functions (Academic Article)

abstract

  • This paper is concerned with the construction and analysis of a universal estimator for the regression problem in supervised learning. Universal means that the estimator does not depend on any a priori assumptions about the regression function to be estimated. The universal estimator studied in this paper consists of a least-squares fitting procedure using piecewise constant functions on a partition that depends adaptively on the data. The partition is generated by a splitting procedure that differs from those used in CART algorithms. It is proven that this estimator performs at the optimal convergence rate for a wide class of priors on the regression function. Namely, as made precise in the text, if the regression function belongs to any one of a certain class of approximation spaces (or smoothness spaces of order not exceeding one, a limitation imposed by the estimator's use of piecewise constants), measured relative to the marginal measure, then the estimator converges to the regression function (in the least-squares sense) at an optimal rate in terms of the number of samples. The estimator is also numerically feasible and can be implemented on-line.

  Keywords: distribution-free learning theory, nonparametric regression, universal algorithms, adaptive approximation, on-line algorithms.
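  To make the construction concrete, here is a minimal Python sketch of least-squares fitting with piecewise constants on a data-adaptive partition in one dimension. The dyadic midpoint splitting rule, the residual threshold tol, and the names fit_adaptive_pc and predict_pc are illustrative assumptions, not the paper's algorithm; in particular, the paper's splitting procedure and its stopping criterion differ from this simplified CART-like rule.

```python
import numpy as np

def fit_adaptive_pc(x, y, lo=0.0, hi=1.0, tol=0.5, min_samples=5, max_depth=12):
    """Least-squares piecewise-constant fit on an adaptively refined partition.

    Hypothetical splitting rule (dyadic midpoint + residual threshold `tol`);
    the paper's actual splitting procedure differs from CART-style rules.
    Returns a flat list of half-open cells (lo, hi, constant).
    """
    mask = (x >= lo) & (x < hi)
    if not mask.any():
        return [(lo, hi, 0.0)]                 # empty cell: keep a default constant
    c = float(y[mask].mean())                  # least-squares constant = local mean
    err = float(np.sum((y[mask] - c) ** 2))    # local squared residual
    if err <= tol or mask.sum() < min_samples or max_depth == 0:
        return [(lo, hi, c)]                   # stop: cell accurate enough or too small
    mid = 0.5 * (lo + hi)                      # split the cell at its midpoint
    return (fit_adaptive_pc(x, y, lo, mid, tol, min_samples, max_depth - 1)
            + fit_adaptive_pc(x, y, mid, hi, tol, min_samples, max_depth - 1))

def predict_pc(cells, xq):
    """Evaluate the piecewise-constant estimator at query points xq."""
    out = np.zeros_like(xq, dtype=float)
    for lo, hi, c in cells:
        out[(xq >= lo) & (xq < hi)] = c
    return out

# Usage on synthetic data: a step function observed with noise.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 500)
y = np.where(x < 0.4, 0.2, 0.8) + 0.05 * rng.standard_normal(500)
cells = fit_adaptive_pc(x, y)
print(len(cells), predict_pc(cells, np.array([0.1, 0.9])))
```

  Note that the fit on each cell is closed-form (the local sample mean), which is what makes this kind of estimator numerically feasible; the adaptivity comes entirely from which cells get split.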

published proceedings

  • Journal of Machine Learning Research

author list (cited authors)

  • Binev, P., Cohen, A., Dahmen, W., DeVore, R., & Temlyakov, V.

complete list of authors

  • Binev, P.; Cohen, A.; Dahmen, W.; DeVore, R.; Temlyakov, V.

publication date

  • September 2005