Attribute bagging: improving accuracy of classifier ensembles by using random feature subsets (Academic Article)

abstract

  • We present attribute bagging (AB), a technique for improving the accuracy and stability of classifier ensembles induced using random subsets of features. AB is a wrapper method that can be used with any learning algorithm. It establishes an appropriate attribute subset size and then randomly selects subsets of features, creating projections of the training set on which the ensemble classifiers are built. The induced classifiers are then used for voting. This article compares the performance of our AB method with bagging and other algorithms on a hand-pose recognition dataset. We show that AB gives consistently better results than bagging, in both accuracy and stability. We also test and discuss the performance of ensemble voting in bagging and the AB method as a function of the attribute subset size and the number of voters, for both weighted and unweighted voting. Finally, we demonstrate that ranking the attribute subsets by their classification accuracy and voting with only the best subsets further improves the performance of the ensemble. © 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.
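The core procedure described in the abstract — project the training set onto random feature subsets of a fixed size, train one classifier per projection, and combine predictions by majority vote — can be sketched as below. This is a minimal illustration, not the paper's implementation: the toy dataset, the subset size, the number of voters, and the 1-nearest-neighbour base learner are all assumptions chosen for brevity.

```python
# Minimal sketch of attribute bagging (AB): train an ensemble on random
# feature subsets and combine predictions by unweighted majority vote.
# The 1-NN base classifier and toy data below are illustrative assumptions.
import random
from collections import Counter

def nn_predict(train_X, train_y, x):
    # 1-nearest-neighbour base classifier on the projected features
    best = min(range(len(train_X)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    return train_y[best]

def attribute_bagging(train_X, train_y, test_X, subset_size, n_voters, seed=0):
    rng = random.Random(seed)
    n_features = len(train_X[0])
    # One random feature subset (projection) per ensemble member
    subsets = [rng.sample(range(n_features), subset_size)
               for _ in range(n_voters)]
    predictions = []
    for x in test_X:
        votes = []
        for feats in subsets:
            # Project both training data and the query onto this subset
            proj_train = [[row[f] for f in feats] for row in train_X]
            votes.append(nn_predict(proj_train, train_y,
                                    [x[f] for f in feats]))
        # Unweighted majority vote across the ensemble
        predictions.append(Counter(votes).most_common(1)[0][0])
    return predictions

# Toy data: class 0 clusters near the origin, class 1 near (5, 5, 5, 5)
train_X = [[0, 0, 0, 0], [1, 0, 1, 0], [5, 5, 5, 5], [4, 5, 4, 5]]
train_y = [0, 0, 1, 1]
print(attribute_bagging(train_X, train_y, [[0, 1, 0, 1], [5, 4, 5, 4]],
                        subset_size=2, n_voters=5))  # → [0, 1]
```

The abstract's further refinement — ranking subsets by their classification accuracy and voting with only the best — would correspond to evaluating each subset on a validation split before the voting loop and keeping only the top-scoring projections.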

published proceedings

  • PATTERN RECOGNITION

altmetric score

  • 6

author list (cited authors)

  • Bryll, R., Gutierrez-Osuna, R., & Quek, F.

citation count

  • 340

complete list of authors

  • Bryll, R.; Gutierrez-Osuna, R.; Quek, F.

publication date

  • June 2003