An Empirical Study of the Effectiveness of an Ensemble of Stand-alone Sentiment Detection Tools for Software Engineering Datasets. Uddin, G., Guéhéneuc, Y., Khomh, F., & Roy, C. Transactions on Software Engineering and Methodology (TOSEM), 31(3):1–38, ACM Press, April, 2022. 37 pages.
Sentiment analysis in software engineering (SE) has shown promise to analyze and support diverse development activities. Recently, several tools have been proposed to detect sentiments in software artifacts. While the tools improve accuracy over off-the-shelf tools, recent research shows that their performance could still be unsatisfactory. A more accurate sentiment detector for SE can help reduce noise in the analysis of software scenarios where sentiment analysis is required. Recently, combinations, i.e., hybrids, of stand-alone classifiers have been found to offer better performance than the stand-alone classifiers for fault detection. However, we are aware of no such approach for sentiment detection for software artifacts. We report the results of an empirical study that we conducted to determine the feasibility of developing an ensemble engine by combining the polarity labels of stand-alone SE-specific sentiment detectors. Our study has two phases. In the first phase, we pick five SE-specific sentiment detection tools from two recently published papers by Lin et al., who first reported negative results with stand-alone sentiment detectors and then proposed an improved SE-specific sentiment detector, POME. We report the study results on 17,581 units (sentences/documents) coming from six currently available sentiment benchmarks for software engineering. We find that the existing tools can be complementary to each other in 85–95% of the cases, i.e., one is wrong but another is right. However, a majority voting-based ensemble of those tools fails to improve the accuracy of sentiment detection. We develop Sentisead, a supervised tool, by combining the polarity labels and bag of words as features. Sentisead improves the performance (F1-score) of the individual tools by 4% (over Senti4SD) to 100% (over POME). The initial development of Sentisead occurred before we observed the use of deep learning models for SE-specific sentiment detection.
In particular, recent papers show the superiority of advanced language-based pre-trained transformer models (PTMs) over rule-based and shallow learning models. Consequently, in a second phase, we compare and improve the Sentisead infrastructure using the PTMs. We find that a Sentisead infrastructure with RoBERTa as the ensemble of the five stand-alone rule-based and shallow learning SE-specific tools from Lin et al. offers the best F1-score of 0.805 across the six datasets, while a stand-alone RoBERTa shows an F1-score of 0.801.
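The majority-voting ensemble that the abstract reports as ineffective is straightforward to sketch. The tool predictions below are illustrative placeholders, not data from the paper; the sketch only shows the voting mechanism itself, assuming three polarity classes and a neutral fallback on ties.

```python
from collections import Counter

def majority_vote(labels):
    """Return the most common polarity label; ties fall back to 'neutral'.

    `labels` holds one polarity prediction per stand-alone tool,
    e.g. ["positive", "neutral", "positive"].
    """
    counts = Counter(labels).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return "neutral"  # no clear majority among the tools
    return counts[0][0]

# One row per unit (sentence/document), one column per hypothetical tool.
predictions = [
    ["positive", "positive", "neutral", "positive", "negative"],
    ["negative", "neutral", "negative", "negative", "neutral"],
    ["neutral", "positive", "negative", "neutral", "positive"],  # 2-2-1 tie
]
ensemble = [majority_vote(row) for row in predictions]
```

As the study found, such unweighted voting cannot outperform the best individual tool when the tools' errors are correlated, which motivated the supervised combination in Sentisead.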
@ARTICLE{Uddin22-TOSEM-CombinedSentiments,
AUTHOR = {Gias Uddin and Yann-Gaël Guéhéneuc and Foutse Khomh and
Chanchal Roy},
JOURNAL = {Transactions on Software Engineering and Methodology (TOSEM)},
TITLE = {An Empirical Study of the Effectiveness of an Ensemble
of Stand-alone Sentiment Detection Tools for Software Engineering
Datasets},
YEAR = {2022},
MONTH = {April},
NOTE = {37 pages.},
NUMBER = {3},
PAGES = {1--38},
VOLUME = {31},
EDITOR = {Mauro Pezzè},
KEYWORDS = {Topic: <b>Program comprehension</b>,
Venue: <b>TOSEM</b>},
PUBLISHER = {ACM Press},
URL = {http://www.ptidej.net/publications/documents/TOSEM21.doc.pdf},
ABSTRACT = {Sentiment analysis in software engineering (SE) has
shown promise to analyze and support diverse development activities.
Recently, several tools have been proposed to detect sentiments in software
artifacts. While the tools improve accuracy over off-the-shelf tools,
recent research shows that their performance could still be
unsatisfactory. A more accurate sentiment detector for SE can help
reduce noise in analysis of software scenarios where sentiment
analysis is required. Recently, combinations, i.e., hybrids, of
stand-alone classifiers have been found to offer better performance than
the stand-alone classifiers for fault detection. However, we are
aware of no such approach for sentiment detection for software
artifacts. We report the results of an empirical study that we
conducted to determine the feasibility of developing an ensemble
engine by combining the polarity labels of stand-alone SE-specific
sentiment detectors. Our study has two phases. In the first phase, we
pick five SE-specific sentiment detection tools from two recently
published papers by Lin et al., who first reported negative results
with stand-alone sentiment detectors and then proposed an improved
SE-specific sentiment detector, POME. We report the study results on
17,581 units (sentences/documents) coming from six currently
available sentiment benchmarks for software engineering. We find that
the existing tools can be complementary to each other in 85--95\% of
the cases, i.e., one is wrong but another is right. However, a
majority voting-based ensemble of those tools fails to improve the
accuracy of sentiment detection. We develop Sentisead, a supervised
tool by combining the polarity labels and bag of words as features.
Sentisead improves the performance (F1-score) of the individual tools
by 4\% (over Senti4SD) -- 100\% (over POME). The initial development
of Sentisead occurred before we observed the use of deep learning
models for SE-specific sentiment detection. In particular, recent
papers show the superiority of advanced language-based pre-trained
transformer models (PTM) over rule-based and shallow learning models.
Consequently, in a second phase, we compare and improve Sentisead
infrastructure using the PTMs. We find that a Sentisead
infrastructure with RoBERTa as the ensemble of the five stand-alone
rule-based and shallow learning SE-specific tools from Lin et al.\
offers the best F1-score of 0.805 across the six datasets, while a
stand-alone RoBERTa shows an F1-score of 0.801.}
}
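The supervised combination the abstract describes for Sentisead rests on a simple feature construction: each unit is represented by the stand-alone tools' polarity labels (one-hot encoded) concatenated with a bag-of-words vector. The stdlib-only sketch below illustrates that construction under assumed names; the vocabulary, tool outputs, and `featurize` helper are hypothetical, and the actual tool trains a supervised classifier on features of this shape rather than stopping here.

```python
# Polarity classes assumed by the sketch (the standard three-way scheme).
POLARITIES = ["positive", "neutral", "negative"]

def one_hot(label):
    """Encode one tool's polarity label as a 3-element indicator vector."""
    return [1 if label == p else 0 for p in POLARITIES]

def featurize(text, tool_labels, vocabulary):
    """Concatenate one-hot tool labels with bag-of-words term counts."""
    bow = [text.lower().split().count(word) for word in vocabulary]
    label_feats = [x for label in tool_labels for x in one_hot(label)]
    return label_feats + bow

# Hypothetical example: two tool predictions plus a tiny vocabulary.
vocab = ["great", "broken", "api"]
feats = featurize(
    "Great API but the build is broken",
    ["positive", "negative"],  # outputs of two hypothetical tools
    vocab,
)
```

Feeding such vectors to any off-the-shelf supervised learner lets the ensemble learn when to trust which tool, which is what lets Sentisead improve on majority voting.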