Reporting bias when using real data sets to analyze classification performance.
MOTIVATION: It is commonplace for authors to propose a new classification rule, for either classifier construction or feature selection, and demonstrate its performance on real data sets, which often come from high-dimensional studies with small samples, such as gene-expression microarrays. Owing to the variability in feature selection and error estimation, individual reported performances are highly imprecise. Hence, if only the best test results are reported, these will be biased relative to the overall performance of the proposed procedure.

RESULTS: This article characterizes reporting bias with several statistics and computes these statistics in a large simulation study using both modeled and real data. The results appear as curves giving the different reporting biases as functions of the number of data sets tested when reporting only the best or second-best performance. It does this for two classification rules, linear discriminant analysis (LDA) and 3-nearest-neighbor (3NN), and for filter and wrapper feature selection, the t-test and sequential forward search. These were chosen on account of their well-studied properties and because they were amenable to the extremely large amount of processing required for the simulations. The results are consistent across all the experiments: when reporting the best or second-best performing data set, the bias is generally large, exceeding what would be considered a significant performance differential. We conclude that there needs to be a database of data sets and that, for studies depending on real data, results should be reported for all data sets in the database.

AVAILABILITY: Companion web site at http://gsp.tamu.edu/Publications/supplementary/yousefi09a/
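The core phenomenon the abstract describes can be illustrated with a minimal Monte Carlo sketch (not the authors' simulation design): if each of m data sets yields a noisy estimate of the same true classification error, then reporting only the minimum estimate is systematically optimistic, and the bias grows with m. The Gaussian noise model and all parameter values below are illustrative assumptions, not taken from the paper.

```python
import random
import statistics

def reporting_bias(n_datasets, true_error=0.25, sd=0.05,
                   n_trials=20000, seed=0):
    """Average bias of the 'best of m' reported error estimate.

    Each data set is modeled as producing an independent noisy
    estimate of the same true error (Gaussian noise, an assumption
    for illustration). Reporting the minimum over data sets
    understates the true error; the return value is that
    understatement, averaged over n_trials repetitions.
    """
    rng = random.Random(seed)
    best_estimates = []
    for _ in range(n_trials):
        estimates = [rng.gauss(true_error, sd) for _ in range(n_datasets)]
        best_estimates.append(min(estimates))  # report only the best result
    return statistics.mean(best_estimates) - true_error

# Bias grows more negative as more data sets are screened
# before choosing which result to report:
for m in (1, 2, 5, 10):
    print(m, round(reporting_bias(m), 4))
```

With one data set the estimate is unbiased; with even two, the expected bias is already a substantial fraction of the estimate's standard deviation, which is the qualitative effect the article quantifies for real classification rules and feature-selection methods.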
author list (cited authors)
Yousefi, M. R., Hua, J., Sima, C., & Dougherty, E. R.
complete list of authors
Yousefi, Mohammadmahdi R.; Hua, Jianping; Sima, Chao; Dougherty, Edward R.