Can Oncologists Predict the Efficacy of Treatments in Randomized Trials?. Benjamin, D., Mandel, D. R., Barnes, T., Krzyzanowska, M., Leighl, N., Tannock, I., & Kimmelman, J. The Oncologist, 26:56–62, 2021.
Background: Decisions about trial funding, ethical approval, or clinical practice guideline recommendations require expert judgments about the potential efficacy of new treatments. We tested whether individual and aggregated expert opinion of oncologists could reliably predict the efficacy of cancer treatments tested in randomized controlled trials (RCTs).

Methods: An international sample of 137 oncologists specializing in genito-urinary, lung, and colorectal cancer provided forecasts on primary outcome attainment for five active randomized cancer trials within their sub-specialty; skill was assessed using Brier scores (BS), which measure the average squared deviation between forecasts and outcomes.

Results: 40% of trials in our sample reported positive primary outcomes. Experts generally anticipated this overall frequency (mean forecast = 34%). Individual experts on average outperformed random predictions (mean BS = 0.29, 95% C.I. [0.28, 0.33] vs. 0.33) but under-performed prediction algorithms that always guessed 50% (BS = 0.25) or that were trained on base rates (BS = 0.19). Aggregating forecasts improved accuracy (BS = 0.247, 95% C.I. [0.16, 0.36]). Neither individual experts nor aggregated predictions showed appreciable discrimination between positive and non-positive trials (AUC of a receiver operating characteristic curve = 0.52 and 0.43, respectively).

Conclusion: These findings are based on a limited sample of trials. However, they reinforce the importance of basing research and policy decisions on the results of randomized trials rather than expert opinion or low-level evidence.

Implications for Practice: Predictions of oncologists, either individually or in the aggregate, did not reliably anticipate outcomes for randomized trials in cancer. Our findings suggest that pooled expert opinion about treatment efficacy is no substitute for randomized trials. They also underscore the challenges of using expert opinion to prioritize interventions for clinical trials or to make recommendations in clinical practice guidelines.
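The Brier score used as the skill metric above is simple to compute. The sketch below is an illustrative reconstruction of that metric, not code or data from the study; the outcome vector is made up for demonstration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared deviation between forecast probabilities (0-1)
    and binary trial outcomes (0 = non-positive, 1 = positive)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Illustrative outcomes (not study data): 2 of 5 trials positive.
outcomes = [1, 0, 0, 1, 0]

# A strategy that always guesses 50% scores 0.25 regardless of outcomes --
# the benchmark the abstract reports the experts under-performing.
print(brier_score([0.5] * 5, outcomes))  # 0.25

# Lower is better: a perfect forecaster scores 0.
print(brier_score([1, 0, 0, 1, 0], outcomes))  # 0.0
```

This makes concrete why BS = 0.29 for individual experts is notable: it is worse than the constant-50% baseline of 0.25.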
@article{benjamin2021oncologist,
	title = {Can Oncologists Predict the Efficacy of Treatments in Randomized Trials?},
	volume = {26},
	rights = {© {AlphaMed} Press 2020},
	issn = {1549-490X},
	url = {https://theoncologist.onlinelibrary.wiley.com/doi/abs/10.1634/theoncologist.2020-0054},
	doi = {10.1634/theoncologist.2020-0054},
	abstract = {Background Decisions about trial funding, ethical approval, or clinical practice guideline recommendations require expert judgments about the potential efficacy of new treatments. We tested whether individual and aggregated expert opinion of oncologists could predict reliably the efficacy of cancer treatments tested in randomized controlled trials ({RCTs}). Methods An international sample of 137 oncologists specializing in genito-urinary, lung and colorectal cancer provided forecasts on primary outcome attainment for five active randomized cancer trials within their sub-specialty; skill was assessed using Brier scores ({BS}), which measure the average squared deviation between forecasts and outcomes. Results 40\% of trials in our sample reported positive primary outcomes. Experts generally anticipated this overall frequency (mean forecast = 34\%). Individual experts on average outperformed random predictions (mean {BS} = 0.29, 95\% C.I. [0.28,0.33] vs. 0.33) but under-performed prediction algorithms that always guessed 50\% ({BS} = 0.25) or that were trained on base rates ({BS} = 0.19). Aggregating forecasts improved accuracy ({BS} = 0.247, 95\% C.I. [0.16, 0.36]). Neither individual experts nor aggregated predictions showed appreciable discrimination between positive and non-positive trials ({AUC} of a receiver operating characteristic curve = 0.52 and 0.43, respectively). Conclusion These findings are based on a limited sample of trials. However, they reinforce the importance of basing research and policy decisions on the results of randomized trials rather than expert opinion or low-level evidence. Implications for Practice Predictions of oncologists, either individually or in the aggregate, did not anticipate reliably outcomes for randomized trials in cancer. Our findings suggest that pooled expert opinion about treatment efficacy is no substitute for randomized trials. 
They also underscore the challenges of using expert opinion to prioritize interventions for clinical trials or to make recommendations in clinical practice guidelines.},
	pages = {56--62},
	journaltitle = {The Oncologist},
	author = {Benjamin, Daniel and Mandel, David R. and Barnes, Tristan and Krzyzanowska, Monika and Leighl, Natasha and Tannock, Ian and Kimmelman, Jonathan},
	urldate = {2020-08-24},
	date = {2021},
	langid = {english},
	keywords = {Forecasting, Bioethics, Cancer, Clinical Trial, Drug Development, Medical Ethics},
}
