Palmer, R. H. Using health outcomes data to compare plans, networks and providers. International Journal for Quality in Health Care, 10(6):477–483, December 1998.
@article{palmer_using_1998,
title = {Using health outcomes data to compare plans, networks and providers},
volume = {10},
issn = {1353-4505},
abstract = {PURPOSE: To analyze the challenge of using health outcomes data to compare plans, networks and providers.
ANALYSIS: Different questions require different designs for collecting and interpreting health outcomes data. When evaluating the effectiveness of treatments, tests or other technologies, the question is 'what processes improve health outcomes?' For this purpose, the strongest evidence comes from a double-blind randomized controlled trial. In program evaluations, the question is 'what is the impact of this policy and related programs on health outcomes?' For this purpose, we may be able to randomize subjects, but are more likely to have a quasi-experimental or an epidemiological design. When we compare plans, networks and providers for quality improvement purposes, the question is 'do these specific plans perform differently from one another?' or 'are these specific plans improving their performance over time?' We want to isolate for study the effects attributable to specific plans. Designs that yield strong evidence cannot be applied because we lack experimental control.
CONCLUSIONS: When we already have strong evidence linking specific processes of care with specific outcomes, comparing process data may reveal more about performance of plans, networks and providers than comparing outcomes data. Comparisons of process data are easier to interpret and more sensitive to small differences than comparisons of outcomes data. Outcomes data are most useful for tracking care given by high volume providers over long periods of time, targeting areas for quality improvement and for detecting problems in implementation of processes of care.},
language = {eng},
number = {6},
journal = {International Journal for Quality in Health Care},
author = {Palmer, R. H.},
month = dec,
year = {1998},
pmid = {9928586},
keywords = {Benchmarking, Health Care Reform, Humans, Managed Care Programs, Managed Competition, Outcome Assessment (Health Care), Process Assessment (Health Care), Randomized Controlled Trials as Topic, Research Design, United States},
pages = {477--483}
}