Inter-rater reliability of explicit indicators of prescribing appropriateness. Tully, M. P. & Cantrill, J. A. Pharmacy World & Science, 27(4):311–315, August 2005.
OBJECTIVE: To assess the inter-rater reliability of 14 explicit indicators of appropriate long-term prescribing. METHOD: All available data required for the assessment of 59 long-term prescriptions started during a hospital admission for 25 patients were transcribed from the patients' medical records. These transcripts were presented in a standardised format and random order to four raters (two doctors and two pharmacists), who used the indicators to judge the appropriateness of each prescription. Debriefing interviews were held with each rater. An a priori level of acceptable agreement between the raters was set at a weighted kappa of 0.70. RESULTS: There was no apparent difference between pharmacists and doctors across all findings, so data were combined. Two indicators showed poor agreement, three showed moderate agreement, and nine showed substantial or near-perfect agreement, exceeding a weighted kappa of 0.70. There was excellent positive agreement as to which prescriptions were judged appropriate by the indicators, but much worse negative agreement as to which prescriptions were judged inappropriate. In the interviews, the raters remarked on the difficulty of applying explicit indicators when they routinely made implicit judgements about data in the medical records. CONCLUSION: Nine of the indicators achieved the required level of reliability, and the negative agreement levels showed that this was the area requiring greatest improvement in future development of the indicators. Further work needs to be conducted to investigate ways of improving the instructions on how to make explicit judgements and of reducing the need for implicit or subjective assessments.
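For readers unfamiliar with the statistics named in the abstract, the minimal Python sketch below shows how a weighted kappa and the positive/negative agreement proportions are computed for two raters scoring prescriptions as appropriate (1) or inappropriate (0). The ratings are made up for illustration, and the two-rater binary case is a simplification of the study's design (four raters, 14 indicators); nothing here reproduces the paper's data.

# Hypothetical judgements for the same 10 prescriptions by two raters:
# 1 = prescription judged appropriate, 0 = judged inappropriate.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]
rater_b = [1, 1, 1, 1, 1, 1, 0, 1, 0, 0]

# Weighted kappa: chance-corrected agreement (the study's acceptability
# threshold was 0.70). With binary ratings, linear weighting reduces to
# ordinary Cohen's kappa.
kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")

# Proportions of specific agreement: positive agreement counts joint
# "appropriate" judgements; negative agreement counts joint
# "inappropriate" judgements.
both_pos = sum(a == b == 1 for a, b in zip(rater_a, rater_b))
both_neg = sum(a == b == 0 for a, b in zip(rater_a, rater_b))
n_pos_a, n_pos_b = sum(rater_a), sum(rater_b)
n_neg_a, n_neg_b = len(rater_a) - n_pos_a, len(rater_b) - n_pos_b

p_pos = 2 * both_pos / (n_pos_a + n_pos_b)  # positive agreement
p_neg = 2 * both_neg / (n_neg_a + n_neg_b)  # negative agreement

print(f"weighted kappa = {kappa:.2f}")
print(f"positive agreement = {p_pos:.2f}, negative agreement = {p_neg:.2f}")

With these hypothetical ratings the sketch prints a kappa of about 0.52, with positive agreement near 0.86 but negative agreement near 0.67, mirroring the paper's pattern of strong agreement on appropriate prescriptions and weaker agreement on inappropriate ones.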
@article{tully_inter-rater_2005,
	title = {Inter-rater reliability of explicit indicators of prescribing appropriateness},
	volume = {27},
	issn = {0928-1231},
	doi = {10.1007/s11096-005-2453-y},
	abstract = {OBJECTIVE: To assess the inter-rater reliability of 14 explicit indicators of appropriate long-term prescribing.
METHOD: All available data required for the assessment of 59 long-term prescriptions started during a hospital admission for 25 patients were transcribed from the patients' medical records. These transcripts were presented in a standardised format and random order to four raters (two doctors and two pharmacists), who used the indicators to judge the appropriateness of each prescription. Debriefing interviews were held with each rater. An a priori level of acceptable agreement between the raters was set at a weighted kappa of 0.70.
RESULTS: There was no apparent difference between pharmacists and doctors across all findings, so data were combined. Two indicators showed poor agreement, three showed moderate agreement, and nine showed substantial or near-perfect agreement, exceeding a weighted kappa of 0.70. There was excellent positive agreement as to which prescriptions were judged appropriate by the indicators, but much worse negative agreement as to which prescriptions were judged inappropriate. In the interviews, the raters remarked on the difficulty of applying explicit indicators when they routinely made implicit judgements about data in the medical records.
CONCLUSION: Nine of the indicators achieved the required level of reliability, and the negative agreement levels showed that this was the area requiring greatest improvement in future development of the indicators. Further work needs to be conducted to investigate ways of improving the instructions on how to make explicit judgements and of reducing the need for implicit or subjective assessments.},
	language = {eng},
	number = {4},
	journal = {Pharmacy World \& Science},
	author = {Tully, Mary P. and Cantrill, Judith A.},
	month = aug,
	year = {2005},
	pmid = {16228630},
	keywords = {Continuity of Patient Care, Drug Prescriptions, Drug Utilization, Hospital Administration, Humans, Medical Records, Observer Variation, Quality Indicators, Health Care, Reproducibility of Results},
	pages = {311--315}
}
