Hausman, J. (2012). Contingent Valuation: From Dubious to Hopeless. Journal of Economic Perspectives, 26(4):43–56.
@article{hausmanContingentValuationDubious2012,
  title = {Contingent Valuation: From Dubious to Hopeless},
  author = {Hausman, Jerry},
  date = {2012-11},
  journaltitle = {Journal of Economic Perspectives},
  volume = {26},
  pages = {43--56},
  issn = {0895-3309},
  doi = {10.1257/jep.26.4.43},
  url = {https://doi.org/10.1257/jep.26.4.43},
  abstract = {Approximately 20 years ago, Peter Diamond and I wrote an article for this journal analyzing contingent valuation methods. At that time Peter's view was contingent valuation was hopeless, while I was dubious but somewhat more optimistic. But 20 years later, after millions of dollars of largely government-funded research, I have concluded that Peter's earlier position was correct and that contingent valuation is hopeless. In this paper, I selectively review the contingent valuation literature, focusing on empirical results. I find that three long-standing problems continue to exist: 1) hypothetical response bias that leads contingent valuation to overstatements of value; 2) large differences between willingness to pay and willingness to accept; and 3) the embedding problem, which encompasses scope problems. The problems of embedding and scope are likely to be the most intractable; indeed, I believe that respondents to contingent valuation surveys are often not responding out of stable or well-defined preferences, but are essentially inventing their answers on the fly, in a way which makes the resulting data useless for serious analysis. Finally, I offer a case study of a prominent contingent valuation study done by recognized experts in this approach, a study that should be only minimally affected by these concerns but in which the answers of respondents to the survey are implausible and inconsistent.

[Excerpt: Conclusion] The controversy over contingent valuation studies often follows a predictable pattern. A contingent valuation study is designed and carried out, with much talk about how methodology has strengthened over time. When the results are announced, critics point out potentially severe problems, like hypothetical bias and overstatement, disagreements between willingness to pay and willingness to accept, and problems of scope or embedding. Supporters then respond that perhaps this particular study wasn't well designed, and that there are ways to make adjustments, and that it would be wrong to conclude from one study that the enterprise of contingent valuation is fundamentally flawed. Then the next study arrives and is criticized and defended in the same way. For those of us who have criticized a number of contingent valuation studies, it feels as if proponents of contingent valuation retreat to the position that all studies shown to be inaccurate are examples of poor practice rather than any inherent flaw. But despite all the positive-sounding talk about how great progress has been made in contingent valuation methods, recent studies by top experts continue to fail basic tests of plausibility.

I expect that if contingent valuation respondents had been asked about Prince William Sound (where the Exxon Valdez ran aground), another group was asked about Prince Andrew Sound (fictitious) after being told that Prince William Sound had been saved, and a third group was asked about Prince William Sound and Prince Andrew Sound together, the combined response would not be much different from the individual responses, so that the sum of the individual responses would be significantly greater than the combined response. When contingent valuation studies can routinely pass Diamond-Hausman adding-up tests, I am willing to reconsider my conclusion of little or no progress over the past 20 years in solving the most important problems with contingent valuation. But even if that event occurs, contingent valuation would still face problems like how to address the upward bias in responses and how to build a framework for cost-benefit analysis in a setting where the data show a gulf between willingness to pay and willingness to accept.

I do not expect these problems to be resolved, so in my view "no number" is still better than a contingent valuation estimate. Moreover, as the discussion of Australian Copyright Tribunal (2006) showed, other pieces of evidence can be brought to bear on goods that are not directly valued in the market. For example, in environmental damage situations, the method of "habitat equivalency analysis" relies on a group of trustees appointed through government or the courts to analyze what expenditures are needed to restore the environment (Damage Assessment and Restoration Program 2006). The political process can also provide outcomes. As Diamond and I wrote in our 1994 essay in this journal (pp. 58-59), "the choice is between relying on Congress after doing a contingent valuation study and relying on Congress without doing such a contingent valuation study."

My theme is that unless or until contingent valuation studies resolve their long-standing problems, they should have zero weight in public decision-making.},
  keywords = {bioeconomy,complexity,controversial-monetarisation,economics,ecosystem-services,multi-criteria-decision-analysis,multiplicity,science-policy-interface,science-society-interface},
  number = {4}
}