Can Machine Learning Support the Selection of Studies for Systematic Literature Review Updates? Costalonga, M., Napoleão, B. M., Baldassarre, M. T., Felizardo, K. R., Steinmacher, I., & Kalinowski, M. In Proceedings of the 2nd IEEE/ACM International Workshop on Methodological Issues with Empirical Studies in Software Engineering (WSESE@ICSE '25), 2025. IEEE. Accepted for publication.
@inproceedings{CostalongaNBFSK25,
author = {Costalonga, Marcelo and Napole\~{a}o, Bianca Minetto and Baldassarre, Maria Teresa and Felizardo, Katia Romero and Steinmacher, Igor and Kalinowski, Marcos},
title = {Can Machine Learning Support the Selection of Studies for Systematic Literature Review Updates?},
year = {2025},
isbn = {},
publisher = {IEEE},
urlAuthor_version = {http://www.inf.puc-rio.br/~kalinowski/publications/CostalongaNBFSK25.pdf},
doi = {},
abstract = {[Background] Systematic literature reviews (SLRs) are essential for synthesizing evidence in Software Engineering (SE), but keeping them up-to-date requires substantial effort. Study selection, one of the most labor-intensive steps, involves reviewing numerous studies and requires multiple reviewers to minimize bias and avoid loss of evidence. [Objective] This study aims to evaluate if Machine Learning (ML) text classification models can support reviewers in the study selection for SLR updates. [Method] We reproduce the study selection of an SLR update performed by three SE researchers. We trained two supervised ML models (Random Forest and Support Vector Machines) with different configurations using data from the original SLR. We calculated the study selection effectiveness of the ML models for the SLR update in terms of precision, recall, and F-measure. We also compared the performance of human-ML pairs with human-only pairs when selecting studies. [Results] The ML models achieved a modest F-score of 0.33, which is insufficient for reliable automation. However, we found that such models can reduce the study selection effort by 33.9% without loss of evidence (keeping a 100% recall). Our analysis also showed that the initial screening by pairs of human reviewers produces results that are much better aligned with the final SLR update result. [Conclusion] Based on our results, we conclude that although ML models can help reduce the effort involved in SLR updates, achieving rigorous and reliable outcomes still requires the expertise of experienced human reviewers for the initial screening phase.},
booktitle = {Proceedings of the 2nd IEEE/ACM International Workshop on Methodological Issues with Empirical Studies in Software Engineering},
pages = {},
numpages = {8},
keywords = {Systematic Review Automation, Selection of Studies, Machine Learning, Systematic Literature Review Update},
location = {Ottawa, Canada},
note = {Accepted for publication.},
series = {WSESE@ICSE '25}
}