Download ISI English Article No. 5791
Persian Translation of the Article Title

کمکی که به رسمیت شناخته نشده است: غفلت زیانبار از سیستم های پشتیبانی تصمیم گیری

English Title
Help that is not recognized: Harmful neglect of decision support systems
Article Code: 5791
Publication Year: 2012
English Article Length: 10 pages (PDF)
Source

Publisher: Elsevier - ScienceDirect

Journal: Decision Support Systems, Volume 54, Issue 1, December 2012, Pages 719–728

Persian Translation of Keywords
سیستم های پشتیبانی تصمیم گیری - ارزیابی کاربر - عملکرد واقعی - قطع - مشکلات ساختار ضعیف - خلاقیت
English Keywords
Decision support systems - User evaluation - Actual performance - Disconnect - Ill-structured problems - Creativity

English Abstract

Decision support systems (DSSs) aim to enhance the performance of decision makers, but to do so DSSs have to be adopted and used. Technology acceptance research shows that user evaluations (i.e., beliefs, perceptions, and attitudes) are key drivers of adoption and use. This article first presents evidence from the literature suggesting that the link between user evaluations of DSSs and actual performance may be weak, or sometimes even negative. The authors then present two empirical studies in which they found a serious disconnect between user evaluations and actual performance. If user evaluations do not accurately reflect performance, then this may lead to harmful neglect of performance-enhancing DSSs. The article concludes with a discussion of interventions that may alleviate this problem.

English Introduction

Decision support systems (DSSs) are IT-enabled tools that aim to enhance the effectiveness and efficiency of managerial and professional decision making for ill-structured or weakly structured problems [33]. There is a wide variety of decision support systems (see [25] and [26]), including passive DSSs that provide the user with compiled information only and active DSSs that provide specific solutions or recommendations. In order to enhance the decision-maker's performance, DSSs have to be adopted and used [56]. According to generally accepted models of technology acceptance [10] and [11], technology diffusion [53], and information system success [13] and [14], user evaluations (i.e., perceptions, beliefs, and attitudes) are key drivers of DSS adoption and use. However, evidence in the literature suggests that what users achieve with DSSs, i.e., their actual performance, does not always correspond with what users perceive, i.e., their evaluations of the DSSs. When performance-enhancing DSSs are not used because the intended users do not recognize the added value or objective quality of the system, we have a situation of harmful neglect. The disconnect between user evaluations of DSSs and actual performance may partly explain the low adoption and usage rate of DSSs in practice [8] and [38].

To facilitate systematic research into DSS evaluation, Rhee and Rao [52] recently proposed a general framework that is applicable to a wide variety of DSSs. In this framework, they explicitly distinguish between DSS performance as perceived by the user, on the one hand, and the user's actual performance with the DSS, on the other.

In the present article, we investigate the link between user evaluations of DSSs and actual performance. The prevalent assumption in the DSS literature is that “if users give a system ‘high marks’, then it must be improving their performance” [22: p. 1827]. However, many studies in psychology and other domains challenge this assumption because they have found evidence that human perception and judgment are subject to biases. The hypothesized connection between user evaluations of DSSs and actual performance has not been researched extensively [22], but there is (indirect) evidence of a potential disconnect. For example, in an experimental study, Lilien et al. [39] found that participants who had access to a database-oriented decision support system made objectively better decisions than those with access to an Excel spreadsheet only, but their subjective evaluations of both the decision outcomes and the decision process were not significantly different. As Lilien et al. [39: p. 233] note, “we find a surprising disconnect between objective performance measures that are favorable and subjective evaluation measures that are mixed or unfavorable”. Van Bruggen et al. [61] reported similar findings. In their simulation study, users of a high-quality DSS (with an error between the DSS outcomes and the actual outcomes that was set to 3%) performed much better than users of a medium-quality DSS (with an error that was set to 23%), but they were not more confident about the quality of their decisions. As user evaluations of DSSs are frequently used in research and practice, this apparent lack of connection with objective performance is of great concern and motivates our research.

The remainder of this article is organized as follows. We first discuss possible reasons for a disconnect between user evaluations of DSSs and actual performance and then present existing evidence from the literature. Next, we present our research framework and discuss two empirical studies in which we investigated the link between user evaluations of DSSs and actual performance. Both studies show a clear discrepancy between what users perceive and what they actually achieve with DSSs. This should concern both researchers and practitioners, as it may lead to wrong decisions regarding the adoption and use of DSSs. A neglect of performance-enhancing DSSs, for example, may hamper a company's competitive position and should be avoided. The article concludes with a discussion of interventions that can help users form more accurate evaluative judgments of DSSs.

English Conclusion

7.1. Substantive findings

User evaluations of decision support systems are commonly used in research and practice, but it is not clear whether, and when, these evaluations accurately reflect objective performance. In this article, we first examined the literature for evidence on the connection between user evaluations of DSSs and actual performance. We identified sixteen DSS studies that reported both user evaluation measures and actual performance measures and classified them into four categories: “Rightful Conviction” (7 studies), “Harmful Neglect” (6 studies), “Seductive Illusion” (2 studies) and “Wise Abstention” (1 study). Of the studies that reported a positive effect of the DSS on actual performance, almost half reported neutral or negative user evaluations. This is alarming, as it suggests that users often fail to recognize the performance-enhancing potential of DSSs.

The results of the two empirical studies presented in this article add to this bleak picture. We failed to find significant positive correlations between user evaluations of the DSSs and actual performance in either study. We even found significantly negative correlations, meaning that improvements in actual performance were associated with less favorable evaluations of the DSS in question. Our findings imply that if users were to follow their own perceptions, effective DSSs may not be adopted and used (harmful neglect) or ineffective DSSs may be adopted and used (seductive illusion). In our studies, the mind-mapper DSS (Study 1) and the rule-based DSS (Study 2) would be adopted and used based on their favorable user evaluations, even though these DSSs did not increase the quality of the solution, speed up the decision-making process, or improve productivity. In Study 1, the stimulus-provider DSS that enhanced performance most was the least likely to be used in the future. In Study 2, the analogy-based DSS enhanced actual performance, but it was perceived as less useful than just a paper document. Based on the process data of Study 2, it seems that readily observable indicators, such as usage time and usage intensity, drive user evaluations more than actual performance does. Neglecting performance-enhancing DSSs or using dysfunctional DSSs may eventually hurt the profitability of an organization.

From a research perspective, the observed disconnect between user evaluations of DSSs and actual performance calls for great care when using user evaluations as indicators or proxies of actual performance. As demonstrated in this article, what users actually achieve with a DSS may not always correspond with what they perceive. In other words, subjective and objective measures of DSS performance may produce very different results (cf. [4]). Although including different measures of the same construct may enhance content validity in meta-analytical studies of the TAM or similar models [14] and [59], it may weaken statistical validity when the measures are disconnected. For example, in their meta-analysis, Petter and McLean [49] reported a weak link between the constructs “system use” and “user satisfaction”, which they argued might be due to the fact that “use was measured as actual use, self-reported use, depth of use, and importance of use” [49: p. 164].
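As a rough illustration of the classification scheme and the correlation check described above, the following Python sketch maps a study's observed performance effect and average user evaluation onto the four categories and computes a Pearson correlation between subjective and objective scores. The numbers and variable names are hypothetical and only mimic the kind of disconnect reported here; they are not data from, or the analysis used in, the studies.

```python
# Hypothetical illustration of the four-category scheme and the
# evaluation-performance correlation; not the authors' analysis or data.
from scipy.stats import pearsonr

def categorize(performance_effect: float, user_evaluation: float) -> str:
    """Map a study's DSS effect on actual performance and its average
    user evaluation (both centered at 0) onto one of the four categories."""
    if performance_effect > 0:
        return "Rightful Conviction" if user_evaluation > 0 else "Harmful Neglect"
    return "Seductive Illusion" if user_evaluation > 0 else "Wise Abstention"

# Made-up per-participant scores: objective performance gain vs. subjective rating.
performance_gain = [0.8, 1.2, 0.5, 1.0, 0.9, 0.3]
evaluation_score = [-0.4, -0.9, 0.2, -0.6, -0.1, 0.5]

r, p = pearsonr(evaluation_score, performance_gain)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # a negative r mirrors the disconnect

print(categorize(performance_effect=1.0, user_evaluation=-0.5))  # Harmful Neglect
```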
7.2. Limitations and further research

The participants in our studies were business school students. Students have been shown to be good surrogates for real managers in experimental studies [51]. Moreover, these students are prospective users of DSSs in their future roles as consultants and product, brand or general managers. Nonetheless, it would be worthwhile to replicate our findings in a field setting with practitioners.

The moderate inter-rater reliability coefficients for the creativity ratings in Study 2 may be another limitation. However, according to Finke et al. [19: p. 41], inter-rater agreements for creativity ratings are typically lower than in other fields, because for creativity “judges should disagree to some extent, as a reflection of their varying backgrounds and expertise”. In their creativity studies, Finke et al. [19: p. 41] found inter-rater reliability coefficients that “typically ranged from .5 to .6”. Using hard, objective performance measures (e.g., sales) would, of course, be better (see [56]), but that would require the actual implementation of all generated campaign proposals, which is not feasible.

An interesting question for further research is whether the observed disconnect between user evaluations of DSSs and actual performance depends on the area of management. Are areas with less structured decision problems, such as marketing, strategy and new product development, more prone to this phenomenon? Several studies listed in the “Harmful Neglect” quadrant of Fig. 1 deal with marketing problems, and our empirical studies deal with problems that require creative solutions. In such domains, it is not easy to judge the objective quality of a solution. It would be interesting to study whether a disconnect also exists in domains with more structured problems, such as inventory, transportation and scheduling problems.

7.3. Interventions

Now that we know that the user's recognition of the performance-enhancing potential of a DSS does not always come naturally, an interesting question for further research is: what kind of interventions or strategies can we deploy in such situations? In our studies, user evaluations of the performance-enhancing DSSs appeared to be insufficient to stimulate their adoption and (continued) use. Our findings provide some clues for devising interventions that could raise the rate of adoption and use of the studied DSSs and, hence, help to better exploit their potential for enhancing the creative output of employees. Two potentially effective strategies come to mind.

(1) Tell success stories. To alleviate the “lack of connection” problem, stories based on (in-company) experiments or field studies (see, for example, [17]) that demonstrate the positive effects of DSSs on creative performance could be used. It also seems important to warn users that such performance improvements may be difficult to assess immediately and that the contribution of a DSS may only become evident after a period of extended use.

(2) Use efficiency gains as bait. As mentioned earlier, it is generally easier to assess efficiency gains (i.e., reduced cognitive effort or time saved) than to assess improvements in decision quality [39]. In Study 2, users tended to evaluate the DSS more positively when they were able to construct a solution more quickly after using the DSS. Such efficiency gains could be emphasized to stimulate use, which may eventually enhance decision quality through that use.

Accurate assessments of DSS performance are essential if DSSs are to contribute to managerial decision making.
In terms of further research, it is therefore important to study the conditions that facilitate or hinder users in forming accurate evaluative judgments of DSS performance. This research agenda should also include a systematic investigation of potential moderators of the relationship between user evaluations and actual performance, such as the type of problems and the experience of the user. This will help to design effective interventions that facilitate the adoption and use of performance-enhancing DSSs in practice.
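As a purely illustrative sketch of what such a moderator analysis might look like, the following Python snippet tests an interaction term in a regression on simulated data; the variable names (evaluation, performance, experience) and the data-generating pattern are assumptions for illustration, not measures from the reported studies.

```python
# Simulated-data sketch of a moderation analysis: does user experience
# moderate the link between user evaluations and actual performance?
# Variable names and the data-generating pattern are assumptions only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=1)
n = 200
experience = rng.integers(0, 2, size=n)    # 0 = novice, 1 = experienced user
performance = rng.normal(size=n)           # standardized performance gain
# Assumed pattern: evaluations track performance only for experienced users.
evaluation = 0.6 * experience * performance + rng.normal(scale=1.0, size=n)

df = pd.DataFrame({"evaluation": evaluation,
                   "performance": performance,
                   "experience": experience})

# The performance:experience interaction term captures the moderation effect.
model = smf.ols("evaluation ~ performance * experience", data=df).fit()
print(model.summary().tables[1])
```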