Wisdom of the expert crowd prediction of response for 3 neurology randomized trials. Atanasov, P., Diamantaras, A., MacPherson, A., Vinarov, E., Benjamin, D. M., Shrier, I., Paul, F., Dirnagl, U., & Kimmelman, J. Neurology, 95(5):e488–e498, 2020.
Objective To explore the accuracy of combined neurology expert forecasts in predicting primary endpoints for trials.
Methods We identified one major randomized trial each in stroke, multiple sclerosis (MS), and amyotrophic lateral sclerosis (ALS) that was closing within 6 months. After recruiting a sample of neurology experts for each disease, we elicited forecasts for the primary endpoint outcomes in the trial placebo and treatment arms. Our main outcome was the accuracy of averaged predictions, measured using ordered Brier scores. Scores were compared against an algorithm that offered noncommittal predictions.
Results Seventy-one neurology experts participated. Combined forecasts of experts were less accurate than a noncommittal prediction algorithm for the stroke trial (pooled Brier score = 0.340, 95% subjective probability interval [sPI] 0.340 to 0.340 vs 0.185 for the uninformed prediction), and approximately as accurate for the MS study (pooled Brier score = 0.107, 95% confidence interval [CI] 0.081 to 0.133 vs 0.098 for the noncommittal prediction) and the ALS study (pooled Brier score = 0.090, 95% CI 0.081 to 0.185 vs 0.090). The 95% sPIs of individual predictions contained actual trial outcomes among 44% of experts. Only 18% showed prediction skill exceeding the noncommittal prediction. Independent experts and coinvestigators achieved similar levels of accuracy.
Conclusion In this first-of-kind exploratory study, averaged expert judgments rarely outperformed noncommittal forecasts. However, experts at least anticipated the possibility of effects observed in trials. Our findings, if replicated in different trial samples, caution against the reliance on simple approaches for combining expert opinion in making research and policy decisions.
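The accuracy measure named in the abstract, the ordered Brier score, penalizes squared errors in probabilistic forecasts over ordered outcome bins. As a rough illustration only (not the paper's own code or data), the Python sketch below computes a ranked-probability-style ordered Brier score for a single forecast and compares a hypothetical pooled expert forecast against a uniform noncommittal forecast; the bin count, probabilities, and normalization are assumptions, and the paper's exact scoring rule may differ.

import numpy as np

def ordered_brier_score(forecast_probs, outcome_index):
    """Ranked-probability-style ordered Brier score for one forecast.

    forecast_probs: probabilities over K ordered outcome bins (should sum to 1).
    outcome_index: index of the bin that actually occurred.
    Lower is better; 0 is a perfect forecast.
    """
    probs = np.asarray(forecast_probs, dtype=float)
    outcome = np.zeros(len(probs))
    outcome[outcome_index] = 1.0
    # Compare cumulative forecast vs. cumulative outcome at each of the K-1 thresholds.
    cum_forecast = np.cumsum(probs)[:-1]
    cum_outcome = np.cumsum(outcome)[:-1]
    return float(np.mean((cum_forecast - cum_outcome) ** 2))

# Hypothetical example: 5 ordered effect-size bins, trial result lands in bin 2.
pooled_expert = [0.05, 0.15, 0.40, 0.30, 0.10]
noncommittal = [0.2] * 5  # uniform, "noncommittal" forecast
print(ordered_brier_score(pooled_expert, outcome_index=2))   # ~0.053
print(ordered_brier_score(noncommittal, outcome_index=2))    # 0.10

In this toy case the pooled forecast scores better than the noncommittal one because it concentrates probability near the realized bin; the study's finding is that, on the actual trials, the averaged expert forecasts rarely achieved such an advantage.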
@article{atanasov2020neurology,
	title = {Wisdom of the expert crowd prediction of response for 3 neurology randomized trials},
	volume = {95},
	issn = {0028-3878, 1526-632X},
	url = {http://www.neurology.org/lookup/doi/10.1212/WNL.0000000000009819},
	doi = {10.1212/WNL.0000000000009819},
	abstract = {Objective To explore the accuracy of combined neurology expert forecasts in predicting primary endpoints for trials.
Methods We identified one major randomized trial each in stroke, multiple sclerosis ({MS}), and amyotrophic lateral sclerosis ({ALS}) that was closing within 6 months. After recruiting a sample of neurology experts for each disease, we elicited forecasts for the primary endpoint outcomes in the trial placebo and treatment arms. Our main outcome was the accuracy of averaged predictions, measured using ordered Brier scores. Scores were compared against an algorithm that offered noncommittal predictions.
Results Seventy-one neurology experts participated. Combined forecasts of experts were less accurate than a noncommittal prediction algorithm for the stroke trial (pooled Brier score = 0.340, 95\% subjective probability interval [{sPI}] 0.340 to 0.340 vs 0.185 for the uninformed prediction), and approximately as accurate for the {MS} study (pooled Brier score = 0.107, 95\% confidence interval [{CI}] 0.081 to 0.133 vs 0.098 for the noncommittal prediction) and the {ALS} study (pooled Brier score = 0.090, 95\% {CI} 0.081 to 0.185 vs 0.090). The 95\% {sPIs} of individual predictions contained actual trial outcomes among 44\% of experts. Only 18\% showed prediction skill exceeding the noncommittal prediction. Independent experts and coinvestigators achieved similar levels of accuracy.
Conclusion In this first-of-kind exploratory study, averaged expert judgments rarely outperformed noncommittal forecasts. However, experts at least anticipated the possibility of effects observed in trials. Our findings, if replicated in different trial samples, caution against the reliance on simple approaches for combining expert opinion in making research and policy decisions.},
	pages = {e488--e498},
	number = {5},
	journaltitle = {Neurology},
	shortjournal = {Neurology},
	author = {Atanasov, Pavel and Diamantaras, Andreas and {MacPherson}, Amanda and Vinarov, Esther and Benjamin, Daniel M. and Shrier, Ian and Paul, Friedemann and Dirnagl, Ulrich and Kimmelman, Jonathan},
	urldate = {2020-08-24},
	date = {2020-08-04},
	year = {2020},
	langid = {english},
}
