Transfer learning for medical image classification: a literature review. Kim, H. E., Cosa-Linan, A., Santhanam, N., Jannesari, M., Maros, M. E., & Ganslandt, T. BMC Medical Imaging, 22(1):69, December 2022. doi: 10.1186/s12880-022-00793-7

Background: Transfer learning (TL) with convolutional neural networks aims to improve performance on a new task by leveraging the knowledge of similar tasks learned in advance. It has made a major contribution to medical image analysis, as it overcomes the data scarcity problem and saves time and hardware resources. However, transfer learning has been arbitrarily configured in the majority of studies. This review paper attempts to provide guidance for selecting a model and TL approaches for the medical image classification task.

Methods: 425 peer-reviewed articles published in English up until December 31, 2020 were retrieved from two databases, PubMed and Web of Science. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for paper selection, and 121 studies were deemed eligible for the scope of this review. We investigated articles focused on selecting backbone models and TL approaches, including feature extractor, feature extractor hybrid, fine-tuning, and fine-tuning from scratch.

Results: The majority of studies (n = 57) empirically evaluated multiple models, followed by deep models (n = 33) and shallow models (n = 24). Inception, one of the deep models, was the most frequently employed in the literature (n = 26). With respect to TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The rest of the studies applied only a single approach, of which feature extractor (n = 38) and fine-tuning from scratch (n = 27) were the two most favored. Only a few studies applied feature extractor hybrid (n = 7) and fine-tuning (n = 3) with pretrained models.

Conclusion: The investigated studies demonstrated the efficacy of transfer learning despite data scarcity. We encourage data scientists and practitioners to use deep models (e.g. ResNet or Inception) as feature extractors, which can save computational costs and time without degrading predictive power.
@article{kim_transfer_2022,
title = {Transfer learning for medical image classification: a literature review},
volume = {22},
issn = {1471-2342},
shorttitle = {Transfer learning for medical image classification},
url = {https://bmcmedimaging.biomedcentral.com/articles/10.1186/s12880-022-00793-7},
doi = {10.1186/s12880-022-00793-7},
abstract = {Background: Transfer learning (TL) with convolutional neural networks aims to improve performances on a new task by leveraging the knowledge of similar tasks learned in advance. It has made a major contribution to medical image analysis as it overcomes the data scarcity problem as well as it saves time and hardware resources. However, transfer learning has been arbitrarily configured in the majority of studies. This review paper attempts to provide guidance for selecting a model and TL approaches for the medical image classification task.
Methods: 425 peer‑reviewed articles were retrieved from two databases, PubMed and Web of Science, published in English, up until December 31, 2020. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for the paper selection and 121 studies were regarded as eligible for the scope of this review. We investigated articles focused on selecting backbone models and TL approaches including feature extractor, feature extractor hybrid, fine‑tuning and fine‑tuning from scratch.
Results: The majority of studies (n = 57) empirically evaluated multiple models followed by deep models (n = 33) and shallow (n = 24) models. Inception, one of the deep models, was the most employed in literature (n = 26). With respect to the TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The rest of the studies applied only a single approach for which feature extractor (n = 38) and fine‑tuning from scratch (n = 27) were the two most favored approaches. Only a few studies applied feature extractor hybrid (n = 7) and fine‑tuning (n = 3) with pretrained models.
Conclusion: The investigated studies demonstrated the efficacy of transfer learning despite the data scarcity. We encourage data scientists and practitioners to use deep models (e.g. ResNet or Inception) as feature extractors, which can save computational costs and time without degrading the predictive power.},
	language = {en},
number = {1},
urldate = {2023-09-15},
journal = {BMC Medical Imaging},
author = {Kim, Hee E. and Cosa-Linan, Alejandro and Santhanam, Nandhini and Jannesari, Mahboubeh and Maros, Mate E. and Ganslandt, Thomas},
month = dec,
year = {2022},
pages = {69},
}
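The review's conclusion recommends using deep pretrained models such as ResNet or Inception as fixed feature extractors. As a minimal illustration (not taken from the paper), the following PyTorch/torchvision sketch applies that "feature extractor" TL approach: it freezes a pretrained ResNet-50 backbone and trains only a new classification head. The number of classes, the dummy batch, and the optimizer settings are placeholder assumptions; it requires torchvision >= 0.13 for the weights API.

import torch
import torch.nn as nn
from torchvision import models

# Feature-extractor transfer learning: freeze the pretrained backbone,
# train only a new task-specific classification head.
num_classes = 2  # placeholder, e.g. diseased vs. healthy

# Load a ResNet-50 pretrained on ImageNet.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Freeze all backbone parameters so only the new head is updated.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a head for the new task;
# its parameters are trainable by default.
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Optimize only the new head's parameters.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Sanity check with a dummy batch of 3-channel 224x224 images.
dummy_images = torch.randn(4, 3, 224, 224)
dummy_labels = torch.randint(0, num_classes, (4,))
logits = model(dummy_images)          # shape: (4, num_classes)
loss = criterion(logits, dummy_labels)
loss.backward()
optimizer.step()

For the fine-tuning approach that the review contrasts with this one, the freezing loop would be skipped (or applied only to early layers) and all parameters would be passed to the optimizer, typically with a smaller learning rate.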
{"_id":"hEqLiizcAjkfB477A","bibbaseid":"kim-cosalinan-santhanam-jannesari-maros-ganslandt-transferlearningformedicalimageclassificationaliteraturereview-2022","author_short":["Kim, H. E.","Cosa-Linan, A.","Santhanam, N.","Jannesari, M.","Maros, M. E.","Ganslandt, T."],"bibdata":{"bibtype":"article","type":"article","title":"Transfer learning for medical image classification: a literature review","volume":"22","issn":"1471-2342","shorttitle":"Transfer learning for medical image classification","url":"https://bmcmedimaging.biomedcentral.com/articles/10.1186/s12880-022-00793-7","doi":"10.1186/s12880-022-00793-7","abstract":"Background: Transfer learning (TL) with convolutional neural networks aims to improve performances on a new task by leveraging the knowledge of similar tasks learned in advance. It has made a major contribution to medical image analysis as it overcomes the data scarcity problem as well as it saves time and hardware resources. However, transfer learning has been arbitrarily configured in the majority of studies. This review paper attempts to provide guidance for selecting a model and TL approaches for the medical image classification task. Methods: 425 peer‑reviewed articles were retrieved from two databases, PubMed and Web of Science, published in English, up until December 31, 2020. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for the paper selection and 121 studies were regarded as eligible for the scope of this review. We investigated articles focused on selecting backbone models and TL approaches including feature extractor, feature extractor hybrid, fine‑tuning and fine‑tuning from scratch. Results: The majority of studies (n = 57) empirically evaluated multiple models followed by deep models (n = 33) and shallow (n = 24) models. Inception, one of the deep models, was the most employed in literature (n = 26). With respect to the TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The rest of the studies applied only a single approach for which feature extractor (n = 38) and fine‑tuning from scratch (n = 27) were the two most favored approaches. Only a few studies applied feature extractor hybrid (n = 7) and fine‑tuning (n = 3) with pretrained models. Conclusion: The investigated studies demonstrated the efficacy of transfer learning despite the data scarcity. We encourage data scientists and practitioners to use deep models (e.g. 
ResNet or Inception) as feature extractors, which can save computational costs and time without degrading the predictive power.","language":"zh-CN","number":"1","urldate":"2023-09-15","journal":"BMC Medical Imaging","author":[{"propositions":[],"lastnames":["Kim"],"firstnames":["Hee","E."],"suffixes":[]},{"propositions":[],"lastnames":["Cosa-Linan"],"firstnames":["Alejandro"],"suffixes":[]},{"propositions":[],"lastnames":["Santhanam"],"firstnames":["Nandhini"],"suffixes":[]},{"propositions":[],"lastnames":["Jannesari"],"firstnames":["Mahboubeh"],"suffixes":[]},{"propositions":[],"lastnames":["Maros"],"firstnames":["Mate","E."],"suffixes":[]},{"propositions":[],"lastnames":["Ganslandt"],"firstnames":["Thomas"],"suffixes":[]}],"month":"December","year":"2022","keywords":"/unread","pages":"69","bibtex":"@article{kim_transfer_2022,\n\ttitle = {Transfer learning for medical image classification: a literature review},\n\tvolume = {22},\n\tissn = {1471-2342},\n\tshorttitle = {Transfer learning for medical image classification},\n\turl = {https://bmcmedimaging.biomedcentral.com/articles/10.1186/s12880-022-00793-7},\n\tdoi = {10.1186/s12880-022-00793-7},\n\tabstract = {Background: Transfer learning (TL) with convolutional neural networks aims to improve performances on a new task by leveraging the knowledge of similar tasks learned in advance. It has made a major contribution to medical image analysis as it overcomes the data scarcity problem as well as it saves time and hardware resources. However, transfer learning has been arbitrarily configured in the majority of studies. This review paper attempts to provide guidance for selecting a model and TL approaches for the medical image classification task.\nMethods: 425 peer‑reviewed articles were retrieved from two databases, PubMed and Web of Science, published in English, up until December 31, 2020. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for the paper selection and 121 studies were regarded as eligible for the scope of this review. We investigated articles focused on selecting backbone models and TL approaches including feature extractor, feature extractor hybrid, fine‑tuning and fine‑tuning from scratch.\nResults: The majority of studies (n = 57) empirically evaluated multiple models followed by deep models (n = 33) and shallow (n = 24) models. Inception, one of the deep models, was the most employed in literature (n = 26). With respect to the TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The rest of the studies applied only a single approach for which feature extractor (n = 38) and fine‑tuning from scratch (n = 27) were the two most favored approaches. Only a few studies applied feature extractor hybrid (n = 7) and fine‑tuning (n = 3) with pretrained models.\nConclusion: The investigated studies demonstrated the efficacy of transfer learning despite the data scarcity. We encourage data scientists and practitioners to use deep models (e.g. ResNet or Inception) as feature extractors, which can save computational costs and time without degrading the predictive power.},\n\tlanguage = {zh-CN},\n\tnumber = {1},\n\turldate = {2023-09-15},\n\tjournal = {BMC Medical Imaging},\n\tauthor = {Kim, Hee E. and Cosa-Linan, Alejandro and Santhanam, Nandhini and Jannesari, Mahboubeh and Maros, Mate E. 
and Ganslandt, Thomas},\n\tmonth = dec,\n\tyear = {2022},\n\tkeywords = {/unread},\n\tpages = {69},\n}\n\n","author_short":["Kim, H. E.","Cosa-Linan, A.","Santhanam, N.","Jannesari, M.","Maros, M. E.","Ganslandt, T."],"key":"kim_transfer_2022","id":"kim_transfer_2022","bibbaseid":"kim-cosalinan-santhanam-jannesari-maros-ganslandt-transferlearningformedicalimageclassificationaliteraturereview-2022","role":"author","urls":{"Paper":"https://bmcmedimaging.biomedcentral.com/articles/10.1186/s12880-022-00793-7"},"keyword":["/unread"],"metadata":{"authorlinks":{}},"html":""},"bibtype":"article","biburl":"https://bibbase.org/zotero/victorjhu","dataSources":["CmHEoydhafhbkXXt5"],"keywords":["/unread"],"search_terms":["transfer","learning","medical","image","classification","literature","review","kim","cosa-linan","santhanam","jannesari","maros","ganslandt"],"title":"Transfer learning for medical image classification: a literature review","year":2022}