@inproceedings{pan_bug_2006,
  title = {Bug {Classification} {Using} {Program} {Slicing} {Metrics}},
  doi = {10.1109/SCAM.2006.6},
  abstract = {In this paper, we introduce 13 program slicing metrics for C language programs. These metrics use program slice information to measure the size, complexity, coupling, and cohesion properties of programs. Compared with traditional code metrics based on code statements or code structure, program slicing metrics involve measures for program behaviors. To evaluate the program slicing metrics, we compare them with the Understand for C++ suite of metrics, a set of widely-used traditional code metrics, in a series of bug classification experiments. We used the program slicing and the Understand for C++ metrics computed for 887 revisions of the Apache HTTP project and 76 revisions of the Latex2rtf project to classify source code files or functions as either buggy or bug-free. We then compared their classification prediction accuracy. Program slicing metrics have slightly better performance than the Understand for C++ metrics in classifying buggy/bug-free source code. Program slicing metrics have an overall 82.6% (Apache) and 92% (Latex2rtf) accuracy at the file level, better than the Understand for C++ metrics with an overall 80.4% (Apache) and 88% (Latex2rtf) accuracy. The experiments illustrate that the program slicing metrics have at least the same bug classification performance as the Understand for C++ metrics.},
  booktitle = {Sixth {IEEE} {International} {Workshop} on {Source} {Code} {Analysis} and {Manipulation}, 2006. {SCAM} '06},
  author = {Pan, Kai and Kim, Sunghun and Whitehead, E.J.},
  month = sep,
  year = {2006},
  keywords = {Accuracy, Computer bugs, Computer science, Lab-on-a-chip, Maintenance engineering, Size measurement, Software quality, Software systems, Performance evaluation, Software maintenance},
  pages = {31--42}
}