Response construction in consumer behavior research
Article code | Publication year | English article pages
---|---|---
1767 | 2005 | 6-page PDF
Publisher: Elsevier - Science Direct
Journal: Journal of Business Research, Volume 58, Issue 3, March 2005, Pages 348–353
English Abstract
To date, researchers have been relatively unsuccessful in accounting for a substantial proportion of the variance in the measures of consumer behavior that have been investigated. It is posited here that one of the primary reasons for this lack of success is that most studies of consumer behavior use self-reports—answers or responses to research questions—that are often very labile. It is further posited that responses to research questions are not generally revealed (retrieved directly from memory) but rather are constructed at the time a question is asked and answered. Because they are derived from processes that are inherently constructive, self-reports are susceptible to a variety of contaminating influences that collectively constrain the ability of researchers to explain or predict consumer behavior. Several suggestions are offered for addressing response construction processes and their effects.
English Introduction
More than two decades ago, Jacoby (1978) authored a scathing “state-of-the-art review” of consumer behavior research. Jacoby began his review, which coincidentally received the prestigious Harold H. Maynard Award from the American Marketing Association for its contribution to marketing theory, by stating that “too large a proportion of the consumer (including marketing) research literature is not worth the paper it is printed on or the time it takes to read” (p. 87). A major theme throughout his review was that researchers had produced relatively little substantive knowledge of consumer behavior.

If “substantive knowledge” can be equated with “variance accounted for,” it would appear that Jacoby (1978) was correct in his assessment of consumer behavior knowledge produced. Following an analysis of 70 different behavioral data sets (including but not limited to consumer behavior data sets), Cote and Buckley (1987) found that, of the variance accounted for in a variety of construct measures, less than 42% was due to the traits studied; the remainder was accounted for by the methods employed and by error. Although Peterson and Jolibert's (1995) meta-analysis of 1520 country-of-origin effects revealed that, on average, the presence of a country-of-origin cue accounted for 26% of the variance in perceptions and purchase intentions, their results appear to be somewhat of an anomaly. In a general meta-analysis of the variance accounted for in consumer behavior experiments over the time period 1970–1982 (which included the publication of Jacoby's 1978 review), Peterson et al. (1985) found that, across 118 independent experiments containing 1036 effects, on average, 5% of the variance in the dependent variables was accounted for by experimental manipulations. An identical percentage was obtained by Wilson and Sherrell (1993) in their meta-analysis of the effect of message source manipulation on persuasibility. More recently, a meta-analysis of 580 survey-based regression analyses (a majority of which were carried out in consumer behavior studies) conducted by the author covering the period 1964–1994 revealed that, on average, the variance accounted for in a dependent variable by an independent variable was slightly less than 1% (the average zero-order correlation coefficient was .08). Thus, in general, the amount of variance accounted for in measures of consumer behavior would seem to be relatively minor.

A question then arises as to why this is so. The amount of variance accounted for is a function of many factors, including the theory employed (or the lack of theory), the research procedures and techniques utilized, the individuals and populations studied, and even the phenomena and constructs investigated. However, there is another plausible explanation for the minimal variance typically accounted for in consumer behavior research. This explanation is based on the data collected and analyzed in consumer behavior research.
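As a point of arithmetic behind the last figure cited: for a single independent variable, the proportion of variance accounted for is the square of the zero-order correlation coefficient, so the reported average correlation of .08 works out to

$$ r^{2} = (0.08)^{2} = 0.0064 \approx 0.6\% $$

which is the "slightly less than 1%" noted above.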
English Conclusion
The objective of this article is really quite modest. The article simply represents an attempt to focus attention on one of the reasons researchers have been relatively unsuccessful in precisely explaining or consistently predicting various consumer behavior phenomena. In particular, the article posits that, with rare exceptions, answers to questions asked in consumer behavior research studies (and, indeed, in behavioral studies generally) tend to be constructed rather than revealed or directly retrieved from memory. If this article does nothing more than alert producers and interpreters of consumer behavior research to the existence and implications of response construction, it will be considered a successful endeavor.

Response construction is analogous to other cognitive phenomena in that it is not directly observable. There is, however, substantial indirect evidence that most question responses are indeed constructed. Consequently, the notion of response construction has major implications for consumer behavior research.

An obvious implication is that researchers should not generally expect to account for, say, a majority of the variance in the measures of consumer behavior investigated. (Although an argument might be made that research accounting for a large proportion of the variance in measures of consumer behavior is either trivial or the findings obvious and therefore uninteresting, such an argument ignores the reality of response construction as a determinant of a study outcome.) Thus, for example, assessments of the reliability of a multi-item scale should take into account the constructive nature of responses to individual scale items. In light of the existence of response construction, very high internal consistency scale reliability estimates (those close to 1.0) should be viewed with suspicion, not awe, and what is an “acceptable” reliability coefficient should probably be rethought. This is particularly so for the constructs frequently studied in consumer behavior research—attitudes, emotions, attributions, and so forth.

Even as theories of consumer behavior and research methodologies become more sophisticated and ever more capable of finer distinctions, as long as self-reports or responses to questions serve as the primary source of information about consumer behavior, a lacuna in knowledge will remain. Because responses are constructed, they are susceptible to a myriad of contaminating influences (many of which are likely yet undiscovered). What is needed is systematic research, conceptual as well as empirical, that will at least establish the general boundary conditions of response construction. In this regard, Hilton (1995) forcefully argues that a “constructionist perspective” specifically requires an explicit recognition of the role of context when obtaining verbal reports.

Moreover, because question responses are susceptible to contaminating influences, increased attention should be given to these influences when eliciting responses. Systematic research along the lines of that conducted by Blair and Burton (1987) and Menon (1997) on asking behavioral frequency questions is needed across a variety of consumer behavior phenomena. Incorporating experimental treatments into a research investigation to evaluate the possible effects of questionnaire structure, mode of data collection, and question form on responses should become standard practice. For instance, employing different questionnaire versions as treatments should be routine in basic research. One benefit of such a practice is that it permits both a practical and a statistical assessment of the stability of the phenomenon or construct being investigated. Fortunately, if recent consumer behavior and marketing research publications are an indication of an incipient trend, there is a renewal of interest in topics that can be subsumed under the rubric of response construction.

A second implication of response construction is that researchers need to incorporate measures other than self-reports of consumer behavior phenomena into their investigations. Recently, Winer (1999) has argued in favor of an increased use of scanner data in consumer behavior research to corroborate findings based on self-reports. Other researchers have espoused the use of nonverbal data when studying consumer behavior. Although it is not clear exactly what nonverbal data should be collected and analyzed, attempts should be made to substitute, supplement, and complement self-report data in future investigations. For example, Peterson et al. (1995) studied the effects of nonverbal voice characteristics, such as rate of speaking, average pause duration, and fundamental frequency contour, as complements to self-reports.

Response construction, as both a reflection of and a consequence of cognitive information processing, is worthy of investigation in and of itself, not as merely a contributor to measurement unreliability or as a methodological artifact. The notion of response construction should minimally serve as the basis of a framework that will facilitate the organization, integration, and application of methodological as well as substantive findings from numerous and disparate disciplines. Until that happens, the explanatory and predictive power of consumer behavior research will remain low.
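To make the reliability point raised above concrete, note that the article does not single out a specific statistic; taking Cronbach's alpha as the typical internal consistency estimate is an assumption here. For a scale of $k$ items,

$$ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{i}}{\sigma^{2}_{\text{total}}}\right) $$

where $\sigma^{2}_{i}$ is the variance of item $i$ and $\sigma^{2}_{\text{total}}$ is the variance of the summed scale score. The coefficient rises toward 1.0 as items become more highly intercorrelated; under the response construction argument, some of that intercorrelation may reflect shared context and question-form effects at the moment of answering rather than a stable underlying trait, which is why estimates very near 1.0 warrant suspicion rather than awe.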