Increasing web survey response rates in innovation research: An experimental study of static and dynamic contact design features
| Article Code | Publication Year | English Article Length |
|---|---|---|
| 2352 | 2013 | 14-page PDF |
Publisher : Elsevier - Science Direct
Journal : Research Policy, Volume 42, Issue 1, February 2013, Pages 273–286
English Abstract
Web surveys have become increasingly central to innovation research but often suffer from low response rates. Based on a cost–benefit framework and the explicit consideration of heterogeneity across respondents, we consider the effects of key contact design features such as personalization, incentives, and the exact timing of survey contacts on web survey response rates. We also consider the benefits of a “dynamic strategy”, i.e., changing features of survey contacts over the survey life cycle. We explore these effects experimentally using a career survey sent to over 24,000 junior scientists and engineers. The results show that personalization increases the odds of responding by as much as 48%, while lottery incentives with a high payoff and a low chance of winning increase the odds of responding by 30%. Furthermore, changing the wording of reminders over the survey life cycle increases the odds of a response by over 30%, while changes in contact timing (day of the week or hour of the day) did not have significant benefits. Improvements in response rates did not come at the expense of lower data quality. Our results provide novel insights into web survey response behavior and suggest useful tools for innovation researchers seeking to increase survey participation.
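Note that these effects are reported as odds ratios, which are easy to misread as percentage-point changes in the response rate. A minimal Python sketch of the conversion, assuming an illustrative 20% baseline response rate (an assumption for exposition, not a figure from the paper):

```python
def apply_odds_ratio(baseline_rate: float, odds_ratio: float) -> float:
    """Response rate implied by scaling the baseline odds by odds_ratio."""
    odds = baseline_rate / (1 - baseline_rate)   # probability -> odds
    new_odds = odds * odds_ratio                 # e.g., 1.48 for a 48% increase
    return new_odds / (1 + new_odds)             # odds -> probability

# Assumed 20% baseline response rate (illustrative, not from the paper).
print(round(apply_odds_ratio(0.20, 1.48), 3))  # personalization  -> 0.27
print(round(apply_odds_ratio(0.20, 1.30), 3))  # lottery incentive -> 0.245
```

Under this assumed baseline, a 48% increase in the odds raises the response rate from 20% to roughly 27%, not by 48 percentage points.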
English Introduction
Scholars of science and innovation increasingly employ survey data from individual scientists and engineers as well as from administrators and managers. Although many of the early and most influential surveys were conducted by national agencies such as the National Science Foundation in the United States or various national statistical offices in Europe, there has been a sharp increase in the number of independent survey efforts, especially online surveys. For example, in the past twelve months there have been more than twenty articles published in Research Policy that employ survey data, nearly half of which were administered online. Part of the reason behind the growing trend toward online surveys is that they can be conducted at relatively low cost and within a shorter time frame than conventional paper-based or telephone surveys. In addition, it has become quite easy to obtain email contact information for large samples of scientists and engineers by extracting such information from publications, patents, résumés, university websites or similar sources (cf. Bruneel et al., 2010, Fini et al., 2010 and Haeussler, 2011).

Despite the important role of surveys in innovation studies, relatively little attention has been given to the challenges of achieving high response rates. Survey participation is a particularly acute issue for web surveys, which tend to suffer from lower response rates than other survey modes, especially as low survey costs lead to “oversurveying” (cf. Couper, 2000, Fricker et al., 2005, Kaplowitz et al., 2004 and Rogelberg and Stanton, 2007). For example, while short and direct surveys involving phone follow-ups can achieve relatively high response rates of 40–70% (Brostrom, 2010 and Van Looy et al., 2011), more detailed online surveys often exhibit lower response rates of around 10–25%. Low response rates, in turn, reduce sample size and statistical power. Moreover, low response rates may also lead to nonresponse bias and affect the validity of survey results irrespective of the sample size. As a consequence, there is a need to better understand web survey response behavior and to develop techniques to increase web survey response rates.

We contribute to the study of innovation by examining how contact design features such as personalization, incentives, and the timing of survey invitations affect response rates among scientists and engineers and by deriving recommendations for survey researchers. We first review prior work on the drivers of response rates and present a generalized cost–benefit framework that incorporates heterogeneity across respondents. We then examine the effectiveness of various contact design features using a sample of over 24,000 scientists and engineers who were invited to respond to a survey on their organizational environment, work activities, and their career choices. To examine causal effects, we randomly assigned potential respondents to 25 experimental conditions that differed systematically with respect to various contact design features.

This study extends prior work on web survey response rates in several ways. First, we consider not only design parameters that were relevant in mail surveys, but also features that reflect new opportunities provided by web surveys such as the exact timing of survey contacts.
Second, in addition to design features of survey contacts at any particular point in time (“static design features”), we consider several “dynamic design features” that capture aspects of the survey strategy over time, including the number of reminders, the delay between reminders, and changes in design features over the survey life cycle. Finally, much of the prior literature on survey response rates has used household or general population samples. It is not clear whether the resulting insights generalize to scientists and engineers, who may differ from the general population with respect to characteristics such as their interest in research, internet use, or work schedules. Thus, our findings based on a sample of scientists and engineers should be particularly relevant for survey researchers working in the areas of science and innovation.

Our results suggest several design features that significantly increase response rates, but we also show that other features have little to no impact on response rates. As such, our results provide novel insights into the web survey response behavior of scientists and engineers and provide survey researchers with guidance regarding where to focus their survey design efforts. In addition, this paper may also be of interest to readers of web-survey-based studies who seek more background on this important and increasingly utilized methodology.
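For readers unfamiliar with how a randomized contact-design experiment of this kind is typically analyzed, the following is a hedged Python sketch using logistic regression. The variable names (`personalized`, `lottery`), the simulated data, and the assumed baseline are illustrative, not the paper's actual dataset or code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 24000  # sample size of the same order as the study's

# Randomly assign two illustrative binary contact-design features.
df = pd.DataFrame({
    "personalized": rng.integers(0, 2, n),
    "lottery": rng.integers(0, 2, n),
})

# Simulate responses under assumed odds ratios of 1.48 and 1.30
# around an assumed baseline odds of 0.25 (a 20% response rate).
log_odds = (np.log(0.25)
            + np.log(1.48) * df["personalized"]
            + np.log(1.30) * df["lottery"])
p = 1 / (1 + np.exp(-log_odds))
df["responded"] = (rng.random(n) < p).astype(int)

# Logistic regression of response on the assigned features;
# exponentiated coefficients are the estimated odds ratios.
fit = smf.logit("responded ~ personalized + lottery", data=df).fit(disp=0)
print(np.exp(fit.params))
```

Because the features are randomly assigned, the exponentiated coefficients can be read as causal odds ratios of the kind reported in the abstract.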
English Conclusion
Online surveys offer significant cost and speed advantages over conventional paper-based surveys. However, response rates tend to be low, limiting statistical power and raising concerns about sample selection bias and representativeness. We develop a generalized cost–benefit framework that explicitly considers heterogeneity in respondents’ preferences for various design features. Building on this framework, we discuss potential effects of static contact design features, as well as of a “dynamic strategy” that systematically varies design features over the survey life cycle. We tested the effectiveness of various design parameters by inviting over 24,000 junior scientists and engineers to participate in a survey on their organizational context, work activities, and career choices. To allow for causal inferences, we employed an experimental approach and randomly assigned subjects to conditions that differed with respect to static and dynamic design features.

Before we summarize our results and conclude with recommendations for survey researchers, it is important to consider the generalizability of our findings. Our sample included scientists and engineers working in the United States, and our results may not necessarily generalize to other populations. For example, while the timing of survey invitations did not have much of an effect in our study, it may be important in general population samples that include individuals who do not have regular internet access. Thus, while our results should be particularly valuable for survey researchers working in the area of science and innovation, future research is needed to examine the effectiveness of the various contact design features in other types of samples. More generally, readers seeking guidance in their survey efforts should consider how their samples compare to ours and think carefully about potential differences in response patterns. The generalized cost–benefit framework outlined in the first part of this paper will be helpful in that effort.

Table 5 summarizes our findings and provides concrete recommendations for researchers seeking to increase response rates to web surveys. Our observations regarding significant benefits of some design parameters suggest effective levers to increase response rates. Our findings that other factors, such as the timing of survey invitations, matter little are also important; all survey researchers have to make decisions regarding these factors, and our results suggest that they may usefully focus their time and effort on optimizing other design parameters that have greater impacts on survey participation.
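One compact way to read the cost–benefit framework (our own notation, not the paper's) is as a threshold decision in which respondent $i$ participates when the perceived benefits of the contact design exceed its costs:

$$
\text{respond}_i(d) \;=\; \mathbb{1}\big[\, B_i(d) - C_i(d) > 0 \,\big],
$$

where $d$ collects the contact design features (personalization, incentives, timing) and the benefit and cost functions $B_i(\cdot)$ and $C_i(\cdot)$ vary across respondents. Heterogeneity in these functions is what makes a dynamic strategy attractive: varying $d$ over the survey life cycle can cross the response threshold for respondents whom any single static design would miss.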