Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Haenssle, H. A., Fink, C., Schneiderbauer, R., Toberer, F., Buhl, T., Blum, A., Kalloo, A., Hassen, A. B. H., Thomas, L., Enk, A., Uhlmann, L., and the Reader Study Level-I and Level-II Groups (Alt, C., Arenbergerova, M., Bakos, R., Baltzer, A., Bertlich, I., Blum, A., Bokor-Billmann, T., Bowling, J., Braghiroli, N., Braun, R., Buder-Bakhaya, K., Buhl, T., Cabo, H., Cabrijan, L., Cevic, N., Classen, A., Deltgen, D., Fink, C., Georgieva, I., Hakim-Meibodi, L. E., Hanner, S., Hartmann, F., Hartmann, J., Haus, G., Hoxha, E., Karls, R., Koga, H., Kreusch, J., Lallas, A., Majenka, P., Marghoob, A., Massone, C., Mekokishvili, L., Mestel, D., Meyer, V., Neuberger, A., Nielsen, K., Oliviero, M., Pampena, R., Paoli, J., Pawlik, E., Rao, B., Rendon, A., Russo, T., Sadek, A., Samhaber, K., Schneiderbauer, R., Schweizer, A., Toberer, F., Trennheuser, L., Vlahova, L., Wald, A., Winkler, J., Wolbing, P., & Zalaudek, I.). Ann Oncol, 29(8):1836–1842, August 2018.
@article{haenssle_man_2018,
	title = {Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists},
	volume = {29},
	copyright = {All rights reserved},
	issn = {1569-8041 (Electronic), 0923-7534 (Linking)},
	url = {https://www.ncbi.nlm.nih.gov/pubmed/29846502},
	doi = {10.1093/annonc/mdy166},
	abstract = {Background: Deep learning convolutional neural networks (CNN) may facilitate melanoma detection, but data comparing a CNN's diagnostic performance to larger groups of dermatologists are lacking. Methods: Google's Inception v4 CNN architecture was trained and validated using dermoscopic images and corresponding diagnoses. In a comparative cross-sectional reader study a 100-image test-set was used (level-I: dermoscopy only; level-II: dermoscopy plus clinical information and images). Main outcome measures were sensitivity, specificity and area under the curve (AUC) of receiver operating characteristics (ROC) for diagnostic classification (dichotomous) of lesions by the CNN versus an international group of 58 dermatologists during level-I or -II of the reader study. Secondary end points included the dermatologists' diagnostic performance in their management decisions and differences in the diagnostic performance of dermatologists during level-I and -II of the reader study. Additionally, the CNN's performance was compared with the top-five algorithms of the 2016 International Symposium on Biomedical Imaging (ISBI) challenge. Results: In level-I dermatologists achieved a mean (+/-standard deviation) sensitivity and specificity for lesion classification of 86.6\% (+/-9.3\%) and 71.3\% (+/-11.2\%), respectively. More clinical information (level-II) improved the sensitivity to 88.9\% (+/-9.6\%, P = 0.19) and specificity to 75.7\% (+/-11.7\%, P {\textless} 0.05). The CNN ROC curve revealed a higher specificity of 82.5\% when compared with dermatologists in level-I (71.3\%, P {\textless} 0.01) and level-II (75.7\%, P {\textless} 0.01) at their sensitivities of 86.6\% and 88.9\%, respectively. The CNN ROC AUC was greater than the mean ROC area of dermatologists (0.86 versus 0.79, P {\textless} 0.01). The CNN scored results close to the top three algorithms of the ISBI 2016 challenge. 
Conclusions: For the first time we compared a CNN's diagnostic performance with a large international group of 58 dermatologists, including 30 experts. Most dermatologists were outperformed by the CNN. Irrespective of any physicians' experience, they may benefit from assistance by a CNN's image classification. Clinical trial number: This study was registered at the German Clinical Trial Register (DRKS-Study-ID: DRKS00013570; https://www.drks.de/drks\_web/).},
	number = {8},
	journal = {Ann Oncol},
	author = {Haenssle, H. A. and Fink, C. and Schneiderbauer, R. and Toberer, F. and Buhl, T. and Blum, A. and Kalloo, A. and Hassen, A. B. H. and Thomas, L. and Enk, A. and Uhlmann, L. and {Reader Study Level-I and Level-II Groups} and Alt, C. and Arenbergerova, M. and Bakos, R. and Baltzer, A. and Bertlich, I. and Blum, A. and Bokor-Billmann, T. and Bowling, J. and Braghiroli, N. and Braun, R. and Buder-Bakhaya, K. and Buhl, T. and Cabo, H. and Cabrijan, L. and Cevic, N. and Classen, A. and Deltgen, D. and Fink, C. and Georgieva, I. and Hakim-Meibodi, L. E. and Hanner, S. and Hartmann, F. and Hartmann, J. and Haus, G. and Hoxha, E. and Karls, R. and Koga, H. and Kreusch, J. and Lallas, A. and Majenka, P. and Marghoob, A. and Massone, C. and Mekokishvili, L. and Mestel, D. and Meyer, V. and Neuberger, A. and Nielsen, K. and Oliviero, M. and Pampena, R. and Paoli, J. and Pawlik, E. and Rao, B. and Rendon, A. and Russo, T. and Sadek, A. and Samhaber, K. and Schneiderbauer, R. and Schweizer, A. and Toberer, F. and Trennheuser, L. and Vlahova, L. and Wald, A. and Winkler, J. and Wolbing, P. and Zalaudek, I.},
	month = aug,
	year = {2018},
	pages = {1836--1842},
}