An Empirical Comparison of Model Validation Techniques for Defect Prediction Models. Tantithamthavorn, C., McIntosh, S., Hassan, A. E., & Matsumoto, K. (2017). IEEE Transactions on Software Engineering, 43(1):1–18.
Defect prediction models help software quality assurance teams to allocate their limited resources to the most defect-prone modules. Model validation techniques, such as k-fold cross-validation, use historical data to estimate how well a model will perform in the future. However, little is known about how accurate the estimates of model validation techniques tend to be. In this paper, we investigate the bias and variance of model validation techniques in the domain of defect prediction. Analysis of 101 public defect datasets suggests that 77 percent of them are highly susceptible to producing unstable results; selecting an appropriate model validation technique is therefore a critical experimental design choice. Based on an analysis of 256 studies in the defect prediction literature, we select the 12 most commonly adopted model validation techniques for evaluation. Through a case study of 18 systems, we find that single-repetition holdout validation tends to produce estimates with 46-229 percent more bias and 53-863 percent more variance than the top-ranked model validation techniques. On the other hand, out-of-sample bootstrap validation yields the best balance between the bias and variance of estimates in the context of our study. Therefore, we recommend that future defect prediction studies avoid single-repetition holdout validation, and instead, use out-of-sample bootstrap validation.
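For readers unfamiliar with the recommended technique, out-of-sample bootstrap validation trains each repetition on a bootstrap sample drawn with replacement and evaluates on the rows that were never drawn, averaging performance over many repetitions. The Python sketch below illustrates the idea under stated assumptions: the logistic-regression learner, the AUC measure, and 100 repetitions are illustrative choices, not the paper's exact experimental setup.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def out_of_sample_bootstrap(X, y, n_repetitions=100, seed=0):
    """Sketch of out-of-sample bootstrap validation (illustrative, not the paper's setup).

    X, y are NumPy arrays of module metrics and binary defect labels.
    Each repetition trains on a bootstrap sample (drawn with replacement)
    and evaluates on the rows that were never drawn; the estimate is the
    mean AUC across repetitions.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    aucs = []
    for _ in range(n_repetitions):
        train_idx = rng.integers(0, n, size=n)             # bootstrap sample, with replacement
        test_idx = np.setdiff1d(np.arange(n), train_idx)   # out-of-sample rows
        if len(np.unique(y[train_idx])) < 2 or len(np.unique(y[test_idx])) < 2:
            continue  # skip degenerate draws containing only one class
        model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        probs = model.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], probs))
    return float(np.mean(aucs))

A single-repetition holdout estimate, by contrast, splits the data once and reports one number, which is why it exhibits the higher bias and variance the study reports.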
@article{tantithamthavornEmpiricalComparisonModel2017,
  title = {An Empirical Comparison of Model Validation Techniques for Defect Prediction Models},
  author = {Tantithamthavorn, Chakkrit and McIntosh, Shane and Hassan, Ahmed E. and Matsumoto, Kenichi},
  date = {2017-01},
  journaltitle = {IEEE Transactions on Software Engineering},
  volume = {43},
  pages = {1--18},
  issn = {0098-5589},
  doi = {10.1109/tse.2016.2584050},
  url = {https://doi.org/10.1109/tse.2016.2584050},
  abstract = {Defect prediction models help software quality assurance teams to allocate their limited resources to the most defect-prone modules. Model validation techniques, such as k-fold cross-validation, use historical data to estimate how well a model will perform in the future. However, little is known about how accurate the estimates of model validation techniques tend to be. In this paper, we investigate the bias and variance of model validation techniques in the domain of defect prediction. Analysis of 101 public defect datasets suggests that 77 percent of them are highly susceptible to producing unstable results; selecting an appropriate model validation technique is therefore a critical experimental design choice. Based on an analysis of 256 studies in the defect prediction literature, we select the 12 most commonly adopted model validation techniques for evaluation. Through a case study of 18 systems, we find that single-repetition holdout validation tends to produce estimates with 46-229 percent more bias and 53-863 percent more variance than the top-ranked model validation techniques. On the other hand, out-of-sample bootstrap validation yields the best balance between the bias and variance of estimates in the context of our study. Therefore, we recommend that future defect prediction studies avoid single-repetition holdout validation, and instead, use out-of-sample bootstrap validation.},
  keywords = {bootstrapping,cross-validation,modelling-uncertainty,out-of-bag,resampling,software-quality,uncertainty,validation},
  number = {1}
}
