Estimating classification error rate: Repeated cross-validation, repeated hold-out and bootstrap. Kim, J. Computational Statistics & Data Analysis, 53(11):3735–3745, September 2009.
Abstract: We consider the accuracy estimation of a classifier constructed on a given training sample. The naive resubstitution estimate is known to have a downward bias problem. The traditional approach to tackling this bias problem is cross-validation. The bootstrap is another way to bring down the high variability of cross-validation. But a direct comparison of the two estimators, cross-validation and bootstrap, is not fair because the latter estimator requires much heavier computation. We performed an empirical study to compare the .632+ bootstrap estimator with the repeated 10-fold cross-validation and the repeated one-third holdout estimator. All the estimators were set to require about the same amount of computation. In the simulation study, the repeated 10-fold cross-validation estimator was found to have better performance than the .632+ bootstrap estimator when the classifier is highly adaptive to the training sample. We have also found that the .632+ bootstrap estimator suffers from a bias problem for large samples as well as for small samples.
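The repeated k-fold cross-validation estimator the abstract refers to can be sketched as follows. This is a stdlib-only illustration using a hypothetical 1-nearest-neighbour classifier on synthetic 1-D two-class data — not the paper's simulation setup: the sample is reshuffled on each repeat, split into k folds, and the held-out error is averaged over all k × repeats folds.

```python
import random

def make_data(n, seed):
    # Synthetic two-class 1-D data: class 0 centred at 0, class 1 at 1.
    rng = random.Random(seed)
    return [(rng.gauss(cls, 1.0), cls) for cls in (0, 1) for _ in range(n // 2)]

def nn_predict(train, x):
    # 1-nearest-neighbour prediction on (feature, label) pairs.
    return min(train, key=lambda p: abs(p[0] - x))[1]

def repeated_kfold_error(data, k=10, repeats=5, seed=0):
    """Repeated k-fold CV estimate of the classification error rate."""
    rng = random.Random(seed)
    fold_errors = []
    for _ in range(repeats):
        shuffled = data[:]
        rng.shuffle(shuffled)             # fresh random partition per repeat
        folds = [shuffled[i::k] for i in range(k)]
        for i in range(k):
            test = folds[i]
            train = [p for j, f in enumerate(folds) if j != i for p in f]
            wrong = sum(nn_predict(train, x) != y for x, y in test)
            fold_errors.append(wrong / len(test))
    # Average over all k * repeats held-out folds.
    return sum(fold_errors) / len(fold_errors)

data = make_data(100, seed=1)
print(round(repeated_kfold_error(data), 3))
```

Repeating the partition is what reduces the variance of plain k-fold CV; the paper's point is that, at matched computational cost, this repeated estimator can outperform the .632+ bootstrap for highly adaptive classifiers.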
@article{kim_estimating_2009,
title = {Estimating classification error rate: {Repeated} cross-validation, repeated hold-out and bootstrap},
volume = {53},
issn = {0167-9473},
url = {http://www.sciencedirect.com/science/article/pii/S0167947309001601},
doi = {10.1016/j.csda.2009.04.009},
abstract = {We consider the accuracy estimation of a classifier constructed on a given training sample. The naive resubstitution estimate is known to have a downward bias problem. The traditional approach to tackling this bias problem is cross-validation. The bootstrap is another way to bring down the high variability of cross-validation. But a direct comparison of the two estimators, cross-validation and bootstrap, is not fair because the latter estimator requires much heavier computation. We performed an empirical study to compare the .632+ bootstrap estimator with the repeated 10-fold cross-validation and the repeated one-third holdout estimator. All the estimators were set to require about the same amount of computation. In the simulation study, the repeated 10-fold cross-validation estimator was found to have better performance than the .632+ bootstrap estimator when the classifier is highly adaptive to the training sample. We have also found that the .632+ bootstrap estimator suffers from a bias problem for large samples as well as for small samples.},
number = {11},
journal = {Computational Statistics \& Data Analysis},
author = {Kim, Ji-Hyun},
month = sep,
year = {2009},
pages = {3735--3745},
}
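For contrast with the repeated-CV sketch above, the .632+ bootstrap estimator (Efron and Tibshirani's construction, which the paper evaluates) blends the optimistic resubstitution error with the pessimistic leave-one-out bootstrap error, with a weight driven by the relative overfitting rate. A stdlib-only sketch, again with a hypothetical 1-NN classifier on synthetic 1-D data (note that 1-NN makes the resubstitution error exactly zero, which illustrates the downward bias the abstract mentions):

```python
import random

def nn_predict(train, x):
    # 1-nearest-neighbour prediction on (feature, label) pairs.
    return min(train, key=lambda p: abs(p[0] - x))[1]

def bootstrap_632_plus(data, B=50, seed=0):
    """.632+ bootstrap estimate of the classification error rate."""
    rng = random.Random(seed)
    n = len(data)
    # Resubstitution error: train and test on the full sample (biased low).
    err_resub = sum(nn_predict(data, x) != y for x, y in data) / n
    # Leave-one-out bootstrap error: each point is scored only by
    # bootstrap samples that do not contain it (out-of-bag).
    wrong, count = [0] * n, [0] * n
    for _ in range(B):
        idx = [rng.randrange(n) for _ in range(n)]
        boot = [data[i] for i in idx]
        for i in set(range(n)) - set(idx):
            x, y = data[i]
            count[i] += 1
            wrong[i] += nn_predict(boot, x) != y
    scored = [i for i in range(n) if count[i] > 0]
    err_1 = sum(wrong[i] / count[i] for i in scored) / len(scored)
    # No-information error rate: every label paired with every prediction.
    preds = [nn_predict(data, x) for x, _ in data]
    gamma = sum(p != y for p in preds for _, y in data) / (n * n)
    # Relative overfitting rate R in [0, 1] and the .632+ weight w.
    err_1p = min(err_1, gamma)
    if err_1p > err_resub and gamma > err_resub:
        R = (err_1p - err_resub) / (gamma - err_resub)
    else:
        R = 0.0
    w = 0.632 / (1 - 0.368 * R)
    return (1 - w) * err_resub + w * err_1p

rng = random.Random(1)
data = [(rng.gauss(cls, 1.0), cls) for cls in (0, 1) for _ in range(50)]
print(round(bootstrap_632_plus(data), 3))
```

With B bootstrap replications the cost is roughly B model fits, which is why the paper fixes all estimators to about the same computational budget before comparing them.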