Download ISI English Article No. 2042
Persian Translation of Article Title

Investigating the accuracy of self-reports of brand usage behavior

English Title
Investigating the accuracy of self-reports of brand usage behavior
Article Code | Publication Year | Pages (English)
2042 | 2013 | 9-page PDF
Source

Publisher: Elsevier - Science Direct

Journal : Journal of Business Research, Volume 66, Issue 2, February 2013, Pages 224–232

Translated Keywords
Self-report surveys; panel data; purchase frequencies; brand usage; claimed data
English Keywords
Article Preview

English Abstract

This paper increases understanding of the accuracy of consumers' self-reports about using brands and categories. The researchers select television viewing as the category of usage, first because robust panel data are available for validating the claimed (i.e. self-reported) data, and second because watching television and purchasing fast-moving consumer goods have similar underlying structures in consumer behavior (Ehrenberg, 1969 and Goodhardt et al., 1975). The results show that light users (viewers) are the main source of error at both the brand (program) and category (total television viewing) levels. At brand level, the data show underestimation of once-only events, which suggests that those who engage in a behavior infrequently either forget that the event has occurred or do not form a representation of the event in memory. At category level, light users tend to generalize their responses to reflect the regularity of the behavior, which manifests in fewer non-users in claimed data. Regardless of the measurement level, the main questioning challenge is getting less frequent users to accurately report that an event has occurred. The paper provides recommendations for brand researchers on how to minimize the errors caused by responses from light users, which will increase the accuracy of usage metrics overall.

English Introduction

Consumer surveys are a common data source in academic and industry research. Steenkamp, de Jong, and Baumgartner (2010) report that 30% of all empirical articles in the Journal of Marketing and Journal of Marketing Research from 1996 to 2005 employ surveys. Surveys are also the most common method of gathering data in commercial market research, with global survey research valued at $18.9 billion in 2010 (CASRO: The Voice & Values of Research, 2011). Despite their common use, survey data have limitations. The key limitation is the reliance on respondents to remember, and report on, their own behavior retrospectively. This reliance on respondent recall introduces the potential for respondents to engage in activities such as telescoping, projecting, and omission when giving responses (East and Uncles, 2008 and Tourangeau, 2000). These factors lead to errors in consumers' recalled responses, and these errors bring into question the degree to which survey responses reflect real-world behavior. They are especially problematic when measuring routine consumer behavior such as purchasing of categories or brands.

Purchasing is an extremely common, and important, area of questioning in survey research. Wind and Lerner (1979, p. 46) state that "marketing research, if it is to be of practical value, must ensure the reliability and validity of the most basic measures — the measure of past usage behavior." Research on past usage behavior collects consumer buying metrics. Two such metrics are how many consumers buy a category or a brand, referred to as penetration, and how many times they have bought, referred to as frequency. Consumer usage metrics have several purposes. The first is to generate market estimates for brand buying and market-share calculations. This need for estimates is particularly relevant in markets where scanner data or household panels that record actual purchases are not available, such as impulse categories or emerging markets.
The second use of the data is to segment survey responses into categories based on usage levels (light to heavy), which can be used to screen suitable respondents for questions about future loyalty or intent to recommend (such as in Reichheld, 2003), which are asked of a brand's customers only. The third use is as a dependent variable in marketing studies. A reliable dependent variable is of particular relevance to brand equity researchers, given the considerable evidence that past consumer buying and usage behavior affect consumers' awareness, perceptions, and attitudes toward the brand (Barnard and Ehrenberg, 1990 and Barwise and Ehrenberg, 1987). To control for this bias, researchers need accurate measurement of consumers' usage of brands. Therefore, accurately collecting brand usage behavior is important for making sure academic and commercial studies use robust variables in analyses. A brand manager's performance assessments and a researcher's modeling quality can rely on these survey responses being good representations of real consumer behavior. Ideally, a researcher would capture consumer buying from longitudinal records of purchases collected in panel or scanner data. However, prohibitive costs (Lee, Hu, & Toh, 2000), a lack of technology infrastructure, and various category-buying anomalies mean that this option is often not feasible. Researchers also often require other consumer-based brand equity (CBBE) variables, such as awareness, perceptions, and attitudes (Christodoulides and de Chernatony, 2010 and Keller, 2003), alongside usage behaviors; however, panel companies are reluctant to tax panelists further by surveying them to collect CBBE metrics. Therefore, many researchers and marketers are, and will continue to be, reliant on consumers' self-reported responses to measure buying behavior.
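The two usage metrics introduced above, penetration and frequency, can be computed directly from respondent-level purchase counts. A minimal sketch follows; the respondent IDs and purchase counts are invented for illustration and do not come from the study:

```python
# Hypothetical survey responses: purchases of one brand per respondent
# over the measurement period (invented data for illustration).
claimed_purchases = {"r1": 0, "r2": 1, "r3": 0, "r4": 3, "r5": 1, "r6": 0}

# Counts for respondents who bought the brand at least once.
buyer_counts = [n for n in claimed_purchases.values() if n > 0]

# Penetration: the share of consumers who bought the brand at all.
penetration = len(buyer_counts) / len(claimed_purchases)

# Frequency: the average number of purchases among those who bought.
frequency = sum(buyer_counts) / len(buyer_counts)

print(penetration)  # 0.5 (3 of 6 respondents bought)
print(frequency)    # 5 purchases across 3 buyers ≈ 1.67
```

The same two formulas apply whether the counts come from claimed survey answers or from panel records, which is what makes the two sources directly comparable.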
Prior literature (e.g., Hu et al., 1996, Lee et al., 2000, Ram and Hyung-Shik, 1990 and Wind and Lerner, 1979) reports that claimed survey data are accurate in estimating rank-order statistics, but tend to under- or overestimate actual purchase frequencies. Nevertheless, as noted by Ram and Hyung-Shik (1990), there is little in-depth empirical evidence in the area; most of the research simply compares averages, ignoring the heterogeneity in respondents' behavior (Rust, Lemon, & Zeithaml, 2004) and assuming that the same type of error dominates everyone's responses. This study addresses the issue by examining and comparing the underlying distributions of responses from claimed (self-reported) survey data and panel data, to identify where, how, and with whom errors are concentrated. The researchers conduct a pilot study in a supermarket confectionery category to test the questioning approaches. The main study takes place in a category where consumer metrics are comprehensively and officially recorded: television viewing. The analysis compares consumer responses obtained from an online survey with those reported by OzTAM, the industry body officially tasked with providing television audience measurement in Australia. The testing is at the overall television viewing (category) and program (brand) levels. The next section discusses the inaccuracies in claimed data caused by memory biases.
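A toy example shows why comparing whole distributions, rather than only means, matters: offsetting errors can leave the averages identical while the extremes disagree. The numbers below are invented for illustration, not the study's data:

```python
from collections import Counter

# Hypothetical nights of television watched last week for the same
# eight respondents: as claimed in a survey vs. as recorded by a
# metered panel (invented numbers for illustration).
claimed = [1, 1, 3, 4, 5, 7, 2, 3]
actual  = [0, 1, 3, 5, 5, 7, 1, 4]

mean_claimed = sum(claimed) / len(claimed)
mean_actual = sum(actual) / len(actual)

# A mean-only comparison reports no error at all...
print(mean_claimed, mean_actual)  # 3.25 3.25

# ...but the full frequency distributions disagree at the extremes:
# the claimed data contain no zeros, hiding the non-viewer and the
# error concentrated among light viewers.
print(Counter(actual)[0] - Counter(claimed)[0])  # 1 missing non-viewer
```

Tabulating the two distributions side by side (here with `Counter`) is what lets the analyst see with whom the error is concentrated, not just that it exists.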

English Conclusion

The major contribution of this research is in identifying a source of aggregate-level error in brand metrics that prior studies neglect. This discovery suggests a new avenue for further investigating and solving the inaccuracy problem in self-reports. As a methodological contribution, the findings highlight the importance of comparing the underlying distributions of frequencies, rather than simply comparing means (as in Brennan, Chan, Hini, & Esslemont, 1996), which assumes that the same type of error dominates everyone's responses. This research is the first to study the distribution of responses in this depth. Such an approach allows not only the identification and quantification of error, but also the identification of avenues for testing that will help explain, as well as reduce, brand usage recall error. By looking at the underlying distribution of frequencies, the researchers uncover that it is the light buyers or viewers who are the least accurate in their responses. One may question the importance of light buyers or viewers, as they engage in the behavior of interest infrequently. However, this group often constitutes the majority of each brand's customer base, and so is of importance to brand managers and researchers (Anschuetz, 1997 and Sharp, 2010). Therefore, obtaining accurate usage information from light (occasional) buyers is vital to any marketing study. Brand managers should be cautious when looking at claimed usage responses from light brand buyers and should, to some extent, control the way the data provider collects the responses. Further, drawing on the evidence, the authors give recommendations to marketers and researchers about the questioning methods that provide more accurate answers. The discussion of those questioning methods at the category and brand levels follows. Total television viewing behavior is more frequent than program viewing behavior; therefore, the recollection period for total viewing behavior is only one week.
The distributions reveal under-reporting at the extremes of zero and seven nights, and over-reporting at three to six nights. The error at zero likely reflects the sheer regularity of television viewing behavior, which leads to generalizing: light television viewers assume that they watched television at least one night last week (as per Tourangeau, Groves, Kennedy, & Yan, 2009). This error is particularly common among respondents aged 16–34. This age group has a high proportion of light television viewers and, due to their life stage, probably has a less predictable social life than older groups. This finding implies that providing specific cues about the day, date, or location should reduce the tendency to generalize and increase the accuracy of recollections from this segment (as per Tourangeau et al., 2000). In contrast, heavy television viewers under-report viewing and assume that they did not watch television at least two or three nights last week. This error was only evident in those aged 35 and over. There are three possible reasons for this error. The first is social desirability bias: respondents do not want to admit that they watch television every night of the week. To reduce this form of error, the literature suggests the use of forgiving wording, e.g. starting a question with "many people watch television each day of the week" (Tourangeau et al., 2000). The second potential source of error is generalization, but in the reverse direction to light viewers. Heavy viewers might assume at least one night out of the house in the last week, when this behavior does not happen every week. As with reducing generalizing error for light television viewers, more precise cues, such as prompting for the specific night, should improve accuracy. The third potential source of error is the tendency to forward-telescope (Sudman & Bradburn, 1973) for unusual events.
As respondents aged 35 and over are more likely to have children, and therefore to spend more time at home, going out is unusual for them. Respondents may therefore have been remembering and reporting events that happened before the target week of measurement. If this error is due to forward telescoping, then prompting respondents to recall the activities they did on the nights in question, rather than prompting with the specific behavior, will increase the accuracy of responses. Further testing can separate these three sources of error. In contrast, program viewing is less frequent than total television viewing behavior; therefore, the data recollection period for program viewing is eight weeks. For program watching behavior, the errors are predominantly over-reporting at zero and under-reporting at one viewing. This result reveals that light program viewers (those who watch only one episode) are the main source of inaccuracy in claimed data at brand level, due to forgetting one-off, unimportant events. This finding is consistent with the concept of retrieval failure, where the encoding of infrequent events in memory is too weak for easy retrieval at a later point in time. If a once-off event is unusual or rare, recall is easier, as the encoding is deep (Tourangeau et al., 2000). However, once-only viewing of a television program in a two-month period is unlikely to have a major impact on memory, particularly given the number of programs available to view. Therefore, the challenge is to increase the probability of retrieval. Retrieval of weakly encoded items in memory can be enhanced using additional context cues. In the case of television viewing, providing respondents with an additional cue, such as the day, date, and/or time of viewing, may increase the accuracy of reporting among light viewers.
Other options to increase the vividness of an event are to decrease the timeframe or to ask specifically about the most recent event (as per Allison, 1985). However, these solutions are not ideal for light users: the narrower the timeframe, the less likely a light user will have something to remember. The concentration of error around light viewers at program level is consistent across gender, and for respondents aged 35 and over. However, considerable inconsistency exists in the responses of those aged under 35, with over-reporting at zero viewing for some programs and under-reporting for others. Therefore, academics and practitioners should be cautious about self-reports from this segment. Concerning the relationship between demographics and recall, and in line with the hypothesis, errors are relatively consistent for both males and females. However, unlike previous research, the errors in television viewing claims are not greater for respondents in the 55-and-over age group. This finding suggests that an important link exists between the source of error and low frequency of usage. Older viewers are more likely to be heavy television viewers, and so tend to have better memories for this particular category (Sharp et al., 2009). In addition to highlighting the value of examining the full distribution of responses, the authors find it important to provide the respondent with the full distribution of potential responses. Relying on respondents to write or type in numbers leads to errors due to use of the availability heuristic, whereby respondents either calculate a number by estimating their frequency per week and multiplying it by the timeframe, or provide the frequency that is most prevalent in their memory (Tourangeau et al., 2000).
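The rate-times-timeframe shortcut just described can be sketched in a couple of lines; the weekly rate and eight-week timeframe below are hypothetical numbers chosen only to illustrate the arithmetic:

```python
# Availability-heuristic style answer: a respondent who cannot recall
# individual viewing events instead takes a typical weekly rate and
# scales it up to the timeframe asked about (hypothetical numbers).
typical_nights_per_week = 3
timeframe_weeks = 8

estimated_viewings = typical_nights_per_week * timeframe_weeks
print(estimated_viewings)  # 24: a round "rate x timeframe" figure,
                           # not a recollection of actual events
```

Such calculated answers cluster on round multiples of the weekly rate, which is one reason the paper recommends presenting the full range of response options instead of a free-entry number box.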