Learning to Combine Multiple Ranking Metrics for Fault Localization. Xuan, J. & Monperrus, M. In 2014 IEEE International Conference on Software Maintenance and Evolution, pages 191–200, September, 2014. ISSN: 1063-6773
Fault localization is an inevitable step in software debugging. Spectrum-based fault localization consists of computing a ranking metric over execution traces to identify faulty source code. Existing empirical studies on fault localization show that there is no single ranking metric that is optimal for all faults in practice. In this paper, we propose Multric, a learning-based approach to combining multiple ranking metrics for effective fault localization. In Multric, the suspiciousness score of a program entity is a combination of existing ranking metrics. Multric consists of two major phases: learning and ranking. Based on training faults, Multric builds a ranking model by learning from pairs of faulty and non-faulty source code elements. When a new fault appears, Multric computes the final ranking with the learned model. Experiments are conducted on 5386 seeded faults in ten open-source Java programs. We empirically compare Multric against four widely studied metrics and three recently proposed ones. Our experimental results show that Multric localizes faults more effectively than state-of-the-art metrics such as Tarantula, Ochiai, and Ample.
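To make the idea concrete, here is a minimal sketch of spectrum-based suspiciousness scoring and a linear combination of two of the metrics the abstract names (Tarantula and Ochiai). The Tarantula and Ochiai formulas are standard; the fixed-weight linear combination and the function names are illustrative assumptions only — Multric itself learns how to combine the metrics from pairs of faulty and non-faulty elements, which this sketch does not implement.

```python
import math

def tarantula(ef, ep, tf, tp):
    """Tarantula suspiciousness for one program entity.
    ef/ep: failing/passing tests that cover the entity;
    tf/tp: total failing/passing tests in the suite."""
    if tf == 0 or ef == 0:
        return 0.0
    fail_ratio = ef / tf
    pass_ratio = ep / tp if tp else 0.0
    return fail_ratio / (fail_ratio + pass_ratio)

def ochiai(ef, ep, tf, tp):
    """Ochiai suspiciousness: ef / sqrt(tf * (ef + ep))."""
    denom = math.sqrt(tf * (ef + ep))
    return ef / denom if denom else 0.0

def combined_score(ef, ep, tf, tp, weights=(0.5, 0.5)):
    # Hypothetical fixed-weight combination; in Multric the
    # combination is learned from training faults, not hand-set.
    metrics = (tarantula(ef, ep, tf, tp), ochiai(ef, ep, tf, tp))
    return sum(w * m for w, m in zip(weights, metrics))

# Example: an entity covered by 3 of 4 failing tests and 1 of 6 passing tests.
print(combined_score(3, 1, 4, 6))
```

Entities are then ranked by descending score, so developers inspect the most suspicious source code elements first.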
@inproceedings{xuan_learning_2014,
	title = {Learning to {Combine} {Multiple} {Ranking} {Metrics} for {Fault} {Localization}},
	doi = {10.1109/ICSME.2014.41},
	abstract = {Fault localization is an inevitable step in software debugging. Spectrum-based fault localization consists of computing a ranking metric over execution traces to identify faulty source code. Existing empirical studies on fault localization show that there is no single ranking metric that is optimal for all faults in practice. In this paper, we propose Multric, a learning-based approach to combining multiple ranking metrics for effective fault localization. In Multric, the suspiciousness score of a program entity is a combination of existing ranking metrics. Multric consists of two major phases: learning and ranking. Based on training faults, Multric builds a ranking model by learning from pairs of faulty and non-faulty source code elements. When a new fault appears, Multric computes the final ranking with the learned model. Experiments are conducted on 5386 seeded faults in ten open-source Java programs. We empirically compare Multric against four widely studied metrics and three recently proposed ones. Our experimental results show that Multric localizes faults more effectively than state-of-the-art metrics such as Tarantula, Ochiai, and Ample.},
	booktitle = {2014 {IEEE} {International} {Conference} on {Software} {Maintenance} and {Evolution}},
	author = {Xuan, Jifeng and Monperrus, Martin},
	month = sep,
	year = {2014},
	note = {ISSN: 1063-6773},
	keywords = {Computational modeling, Debugging, Fault localization, Java, Measurement, Object oriented modeling, Training, Training data, learning to rank, multiple ranking metrics},
	pages = {191--200},
}