With a Little Help from an Anchor: Discussion and Evidence of Anchoring Effects in Contingent Valuation
| Article code | Publication year | English article length | Persian translation |
|---|---|---|---|
| 10612 | 2006 | 18-page PDF | available on order |
The English article contains approximately 9,734 words.
Publisher: Elsevier - ScienceDirect
Journal: The Journal of Socio-Economics, Volume 35, Issue 5, October 2006, Pages 836–853
Abstract
Contingent valuation enjoys increasing popularity among transport, environmental and health economists, though the method has often been criticized. We discuss the main points of controversy, with a focus on anchoring effects. The available literature indicates that higher ambiguity, lower familiarity with, relevance of, or involvement in the problem, a more trustworthy source, and a more plausible bid tend to be associated with stronger anchoring effects. In two studies among informal caregivers we found that respondents use all sorts of anchors when stating their value of time. We end with a discussion of the substantive significance of CV for economics.
Conclusion
Like many others before us, we found that respondents use all sorts of anchors when stating a value. This poses a problem for economists, as it indicates that elicited preferences may not be fully exogenous and stable. Respondents may have difficulty coming up with a "true" value themselves, or find it hard to put a monetary value on something they do not normally value monetarily, and may consequently accept proxy-good costs or a starting bid as a reasonable valuation of their time. As discussed, higher ambiguity, lower familiarity with, relevance of, or involvement in the problem, a more trustworthy source and a more plausible bid will be associated with a higher tendency to accept such anchors as a candidate response, and thus with stronger anchoring effects. The researcher who provides a reasonable bid to respondents therefore hooks them to an anchor that is hard to ignore and easy to accept: when you make an offer they cannot refuse, you may expect to encounter a lot of "yea-saying". But does this imply we should no longer use the discrete-choice format? Not necessarily. As argued, without starting bids people will have a more difficult time coming up with a reasonable valuation. This leads to two problems. First, as emphasized by the results of this study, not providing a bid decreases the response rate considerably. Of course, one may wonder whether the additional response generated by the elicitation format is very reliable: respondents who would not have stated a time value in an open-ended format may engage in "yea-saying", while those who would have responded may go astray as a result of "starting point bias". Second, not providing an anchor does not mean that no anchor is applied. Having and (especially) paying a housekeeper was observed to be an influential anchor in our study, but others perhaps remained unobserved.
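The "starting point bias" described above can be sketched with a small simulation. This is purely illustrative and not from the paper: the lognormal value distribution and the linear `anchor_weight` pull toward the bid are hypothetical assumptions used only to show the mechanism.

```python
import random

def mean_stated_wtp(starting_bid, anchor_weight, n=10_000, seed=1):
    """Mean open-ended WTP when each stated value is pulled linearly
    toward the starting bid: stated = (1 - w) * true + w * bid.
    (Illustrative model; weight and value distribution are assumed.)"""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        true_value = rng.lognormvariate(2.0, 0.5)  # hypothetical value of time
        total += (1 - anchor_weight) * true_value + anchor_weight * starting_bid
    return total / n

# Same population (same seed), different starting bids:
low = mean_stated_wtp(starting_bid=5.0, anchor_weight=0.4)
high = mean_stated_wtp(starting_bid=15.0, anchor_weight=0.4)
# With anchor_weight = 0.4, the two means differ by 0.4 * (15 - 5) = 4.0,
# even though the underlying "true" values are identical.
```

Under this toy model, changing only the starting bid shifts the estimated mean value, which is exactly why the choice between exogenous and endogenous anchors matters.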
The discussion may therefore not be whether or not anchoring occurs, but rather whether the use of exogenous anchors (e.g., starting bids) is more acceptable than endogenous anchors (here, the wage rate of a housekeeper). One may argue that it is "better" to provide a reasonable anchor and allow people with strong preferences to deviate, than to lose many respondents and have the remaining respondents use their own (uncontrolled and perhaps irrelevant) anchors. On the other hand, in situations with a prominent and plausible anchor, such as the housekeeper in the case of valuing unskilled informal care tasks, this endogenous anchor may be a good proxy for the (minimum) value of time inputs. The potential error from using these proxy-good costs directly as the value, and thus bypassing the preference elicitation altogether, may then be minimal. The researcher then directly imputes the value of the anchor that the larger part of the respondents would probably have used anyway (without loss of respondents). Then again, it may be difficult to assess beforehand which anchor is most prominent or plausible to respondents, and it remains a matter of debate whether proxy-good costs can be accepted as a valid measure of value. Bateman et al. (1997) argue that if the evidence that anchoring effects lead to observed divergences in value for the same commodity across elicitation formats is robust, as it seems to be, these different values should perhaps not be attributed to a bias from elicitation formats that will disappear once the subject gains sufficient information or experience, but rather be seen as a fundamental property of human decision processes. Psychologists and behavioral theorists have argued that lay preferences are constructed in response to stimuli rather than revealed; "each response mode makes certain information, certain concepts, or certain decision-making heuristics particularly salient to the individual" (Bateman et al., 1997).
If so, different CV elicitation formats will generate different values, and the economic case for preferring any one health technology over others may depend considerably upon whichever elicitation format happens to have been used (Cookson, 2000, Ryan et al., 2004 and Frew et al., 2004). This means resource allocation may depend on whether WTP or WTA has been used, and on how this value was retrieved. How should we proceed? Is there an acceptable way out of the complexity of eliciting values? On the choice between WTP and WTA, we suggest following Bromley's (1995) advice: the choice depends on the perspective from which the good or service is to be valued, buyer or seller. Regarding elicitation formats, many authors are of the opinion that there are no compelling arguments for supporting any one format as being theoretically superior to any other, and as long as this is the case we should continue to research the optimal, unbiased method of eliciting individuals' preferences (e.g., Ryan et al., 2004, Olsen, 1997 and Bateman et al., 1997). Some authors have suggested that a possible way forward would be to expand the response format from dichotomous to polychotomous choice. Instead of forcing respondents to make a choice between accepting or rejecting the bid, one or more intermediate options could be offered that allow respondents to express any ambivalence or preference uncertainty; for instance, "maybe yes" or a range from "definitely no" to "definitely yes" (Liljas and Blumenschein, 2000 and Kartman et al., 1997). Using only the real (definitely) yes-responses could yield WTP/WTA values that are much closer to the true values. This method, however, needs further investigation. Olsen (1997) suggested a combination of contingent valuation and contingent ranking (when valuing multiple goods), with the ranking serving as a sort of consistency test for the valuation exercise.
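The suggested polychotomous format can likewise be sketched. This is not from the cited studies: the threshold of 0.7 times the bid for an uncertain "maybe yes", like the value distribution, is a hypothetical assumption used only to show that counting every "yes" inflates acceptance relative to counting "definitely yes" alone.

```python
import random

def acceptance_rate(bid, definitely_only, n=10_000, seed=2):
    """Share of respondents counted as accepting `bid`. Respondents whose
    value falls just below the bid give an uncertain 'maybe yes', which is
    counted as acceptance unless definitely_only is True. (Illustrative.)"""
    rng = random.Random(seed)
    count = 0
    for _ in range(n):
        true_value = rng.lognormvariate(2.0, 0.5)  # hypothetical value of time
        if true_value >= bid:
            count += 1  # a genuine "definitely yes"
        elif true_value >= 0.7 * bid and not definitely_only:
            count += 1  # an uncertain "maybe yes" counted as yes
    return count / n

p_all = acceptance_rate(bid=10.0, definitely_only=False)
p_definitely = acceptance_rate(bid=10.0, definitely_only=True)
# Keeping only the "definitely yes" answers yields the lower, arguably
# more conservative, acceptance rate.
```

In this toy setup, restricting the count to firm acceptances screens out the ambivalent responses that a dichotomous format would fold into "yes".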
Still, even with more refined methods of value elicitation, anchoring and other biases are bound to be a persistent phenomenon in contingent valuation. It thus remains doubtful whether responses to CV surveys can be used as reliable measures of individual preferences. The fundamental question then is, as posed for instance by Fischhoff ("Value elicitation: is there anything in there?") and Diamond and Hausman ("Contingent valuation: is some number better than no number?"), whether economists should resort to CV at all. What we expect (or hope) from CV, and whether or not we consider possible anchoring effects to be a problem, depends on our beliefs regarding people's ability to hold and express stable and exogenous preferences. If we adhere to the philosophy of articulated values, we only need to worry about people understanding our questions too well (in this case, "strategic bias"). However, if we really believe people have articulated preferences, the strategies that have been proposed for calibration of responses appear peculiar: what is the rationale for overriding consumer sovereignty? If we believe people hold only basic values, we have to be pragmatic. If we feel our survey and population can meet the conditions favorable to a thorough inferential process from basic values, we can opt for a careful design in combination with calibration and sensitivity analysis. We may also choose not to do a CV survey, not to express people's preference for the good or service in monetary units, and to report the potential welfare impacts in some other way. Somewhere in between the two, we might use proxy or opportunity costs as an approximation, bearing in mind the objections discussed before. In the end, much of the available evidence suggests that most of the time people either do not hold articulated values or are not able to express them, and that survey design and analytical techniques cannot make up for the bias-sensitive inferential process from more basic values.
Given the absence of (stable) preferences, or the difficulty of eliciting them, contingent valuation does not measure what it claims to measure, and responses are often inconsistent with economic theory (Diamond and Hausman, 1994). In that sense, economists should ask themselves to what extent, and how legitimately, they are hooked onto their own anchor, the homo economicus. Our discussion emphasizes the serious doubts raised before about the validity and reliability of CV, and calls for reservation in using CV as the sole measure of welfare change to advise policy. For some, some number may be better than no number; but they will always have us wondering what's in there.