Comparing rating paradigms for evidence-based program registers in behavioral health: Evidentiary criteria and implications for assessing programs. Means, S. N.; Magura, S.; Burkhardt, J. T.; Schröter, D. C.; and Coryn, C. L. S. Evaluation and Program Planning, 48:100–116, 2015.
Decision makers need timely and credible information about the effectiveness of behavioral health interventions. Online evidence-based program registers (EBPRs) have been developed to address this need. However, the methods by which these registers determine programs and practices as being “evidence-based” have not been investigated in detail. This paper examines the evidentiary criteria EBPRs use to rate programs and the implications for how different registers rate the same programs. Although the registers tend to employ a standard Campbellian hierarchy of evidence to assess evaluation results, there is also considerable disagreement among the registers about what constitutes an adequate research design and sufficient data for designating a program as evidence-based. Additionally, differences exist in how registers report findings of “no effect,” which may deprive users of important information. Of all programs on the 15 registers that rate individual programs, 79% appear on only one register. Among a random sample of 100 programs rated by more than one register, 42% were inconsistently rated by the multiple registers to some degree.
@article{means_comparing_2015,
	title = {Comparing rating paradigms for evidence-based program registers in behavioral health: {Evidentiary} criteria and implications for assessing programs},
	volume = {48},
	issn = {0149-7189},
	shorttitle = {Comparing rating paradigms for evidence-based program registers in behavioral health},
	url = {http://www.sciencedirect.com/science/article/pii/S0149718914001116},
	abstract = {Decision makers need timely and credible information about the effectiveness of behavioral health interventions. Online evidence-based program registers (EBPRs) have been developed to address this need. However, the methods by which these registers determine programs and practices as being “evidence-based” have not been investigated in detail. This paper examines the evidentiary criteria EBPRs use to rate programs and the implications for how different registers rate the same programs. Although the registers tend to employ a standard Campbellian hierarchy of evidence to assess evaluation results, there is also considerable disagreement among the registers about what constitutes an adequate research design and sufficient data for designating a program as evidence-based. Additionally, differences exist in how registers report findings of “no effect,” which may deprive users of important information. Of all programs on the 15 registers that rate individual programs, 79\% appear on only one register. Among a random sample of 100 programs rated by more than one register, 42\% were inconsistently rated by the multiple registers to some degree.},
	urldate = {2015-02-04},
	journal = {Evaluation and Program Planning},
	author = {Means, Stephanie N. and Magura, Stephen and Burkhardt, Jason T. and Schröter, Daniela C. and Coryn, Chris L. S.},
	year = {2015},
	keywords = {Decision makers, Impacts and effects, Transfer measurement, Methodology, Health and social services, Mixed-methods study, Program evaluation},
	pages = {100--116}
}