Framing, decision-aid systems, and culture: exploring influences on fraud investigations
Publisher: Elsevier - Science Direct
Journal: International Journal of Accounting Information Systems, Volume 13, Issue 4, December 2012, Pages 316–333
We conducted an experiment to investigate the influence of the framing of reports, the type of decision-aid system, and the cultural background of the decision maker on the intention to investigate fraud. We compared decisions made from reports generated by automated and manual systems to explore whether automated systems exacerbated or ameliorated the framing bias. We also explored whether the cultural background of the participants—Americans and Mexicans—influenced the decision. Results indicated that the influence of system type and framing is culturally dependent. When the framing highlights the possibility of the results being incorrect, people take a more cautious approach, and the intention to investigate fraud is lower than when the framing highlights the probability of the results being correct. Automated systems appear to ameliorate the framing bias in the American sample and preserve it in the Mexican sample. The reason for the different impact of automated systems appears to lie in how Americans and Mexicans perceive decision-aid systems. Americans are less likely to trust automated systems, and more likely to trust manual systems, than Mexicans. Mexicans, on the other hand, rely more on automated systems and rate their reputation more highly than Americans do.
Fraud is a growing concern among business leaders and regulators, with a steady increase reported in the number and magnitude of cases (ACFE, 2011). Recent estimates place global fraud losses at $2.9 trillion (ACFE, 2011), compelling the development of more effective methods to prevent fraud. Technology has been instrumental in this effort through the development of automated systems that can continuously monitor information. However, we have limited knowledge of how these automated systems influence decision making (O'Donnell and David, 2000). This paper investigates whether the design characteristics of reports generated by automated decision-aid systems (ADS) influence decision making. We study the framing of the report generated by the system and its impact on the decision whether to investigate fraud. We also explore the cultural background of the decision maker to extend this investigation to cross-cultural settings.

Automated decision-aid systems are increasingly used to support fraud prevention and detection because they are capable of analyzing large amounts of data. Data mining programs can identify disgruntled employees based on the patterns in their email exchanges, and being a disgruntled employee is an early indicator of potential fraud (Holton, 2009 and ACFE, 2011). The practice of mining employees' emails has increased because email communications have been used to demonstrate misconduct, as in the Enron case. Email communications are an abundant source of data that can be used not only ex post as evidence, but also ex ante to identify discontented employees, allowing companies to initiate focused investigations to prevent fraud (Pemble, 2003a and Pemble, 2003b). In addition to analyzing large amounts of data, some automated systems may help individuals overcome certain cognitive biases in decision making (Rose, 2002).
Yet, it is unclear whether the use of ADS exacerbates or mitigates specific cognitive biases such as framing. Therefore, we conduct an experiment to explore the influence of the framing of the report on decision making. The way the report is framed may cause managers to undertake costly and unnecessary investigations; conversely, managers may disregard necessary investigations because of the framing of the report. Failing to initiate a timely fraud investigation defeats the purpose of preventive mechanisms to deter fraud. The final aspect of this research is a cross-cultural setting to explore the influence of the cultural background of the decision maker. Studies spanning multiple countries have become more relevant as firms move toward multinational systems implementations (Biehl, 2007).

Prior research on the influence of framing and ADS on decision making has produced mixed results: there is evidence that framing interacts (Cheng and Wu, 2010) and does not interact (Brown and Jones, 1998 and Perrin et al., 2001) with automated decision-aid systems. This study contributes to the literature by exploring the interaction between the framing of reports, the use of decision-aid systems, and the culture of the decision maker. The results of this research suggest that the framing of reports interacts with the use of automated decision-aid systems, depending on the culture of the decision maker. To further investigate framing and the use of ADS across cultures, our study also explores the relationship between cultural values and reliance on, trust in, and perceived reputation of manual and automated systems.

This paper is organized as follows. First, we review the literature on framing, decision-aid systems, and culture. We then develop our research model and present the research questions. Next, we explain the experimental procedure and present the results of the data analysis.
In the final section, we discuss the results and draw conclusions from the study. We also discuss the limitations of the study and potential areas for future research.
Conclusion
We conduct an experiment to investigate the influence of the design characteristics of decision-aid systems on decision making. In particular, we focus on the framing of the report generated by a decision-aid system (automated or manual) and investigate these effects in two different cultures (American and Mexican). Participants indicate their intention to initiate a fraud investigation based on a report, generated by a decision-aid system, that identifies potential fraud perpetrators. Our findings indicate that the influence of automated decision-aid systems and framing is culturally dependent. The results from this study indicate that the framing of the report (describing the probability of correctly or incorrectly identifying a potential fraud perpetrator) influences the intention to investigate fraud. When the framing highlights the possibility of the results being incorrect, people take a more cautious approach, and the intention to investigate fraud is lower than when the framing highlights the probability of the results being correct. This effect was observed for Americans using a report generated by a manual system (but not an automated system) and for Mexicans using a report generated by an automated system (but not a manual system).

The literature on framing and information systems shows conflicting evidence on whether automated decision-aid systems exacerbate or ameliorate the framing bias. Studies on automated decision-aid systems conducted with American samples show no influence of framing (Brown and Jones, 1998 and Perrin et al., 2001), but a study conducted with a Taiwanese sample shows a significant influence of framing (Cheng and Wu, 2010). These conflicting findings might be explained by the cultural background of the participants; the literature suggests that the influence of framing might be culturally dependent (Hens and Wang, 2011). Our findings confirm a three-way interaction between framing, automated decision aids, and culture.
In the American sample, the framing of the report generated by an automated system does not affect the intention to initiate a fraud investigation. For the Mexican sample, however, the framing of the automated system's report does influence the intention to investigate fraud: Mexicans are more likely to investigate fraud when the report is framed as 'correct identification' than when it is framed as 'incorrect identification.'

Our study also contributes to the literature by exploring the relationship between cultural values and perceptions of decision-aid systems. The reason for the different impact of automated decision-aid systems appears to lie in how Americans and Mexicans perceive them. Our findings indicate that Mexicans rely more on automated systems than Americans do. Mexicans also rate the reputation of automated systems more highly than Americans and are more likely to trust them. Conversely, Mexicans rate the perceived reputation of manual systems lower and are less likely to trust manual systems than Americans. Our results indicate that these perceptions are related to the cultural value of self-enhancement, which emphasizes self-interest based on power and achievement, and a desire for prestige, social status, and achievement. Americans score higher than Mexicans in self-enhancement, and this emphasis on self-interest may lead to a more cautious approach toward automated decision aids.

Overall, these findings stress the need to further explore the impact of automated decision-aid systems on cognitive biases. Our findings suggest that reports indicating the possibility of incorrect results lead people to take a more cautious approach. Framing may increase people's awareness of the fallibility of systems and of the possibility of wasting resources by initiating a fraud investigation that is not needed.
If managers want to take a cautious approach, the decision-aid system could be configured to report the probability of producing incorrect results. Our findings indicate that Americans and Mexicans use the reports generated by automated decision-aid systems differently: Mexicans and Americans differ in their perceived reputation of, and trusting intention toward, automated and manual systems. These results suggest that people who rate the perceived reputation and trusting intention of automated systems highly may tend to rely more on the results generated by the system. Although we observed this effect, the specific way in which culture influences the results remains unclear. Differences in cultural values, or other characteristics not measured in this study, might influence the results observed. For instance, given that the context of the study is systems that identify potential fraud perpetrators, societal attitudes toward fraud might have influenced the results. More research is needed to identify the precise way in which culture influences the use of automated decision-aid systems.

Findings from this study should be interpreted in light of the following limitations. First, as in any experimental design, there is a trade-off between internal and external validity; the generalizability of our findings may be limited, as the motivation of the participants might have been lower than in a business setting. Second, using students as surrogates for managers might compromise the generalizability of our findings; we attempt to mitigate this drawback by selecting participants who are part-time graduate students working in a business environment. Third, prior research has found that automated decision-aid systems influence the decision making of non-experts (Arnold et al., 2004).
Although the current study investigated only participants who are inexperienced in fraud investigation, an interesting extension could include experts in fraud investigation to contrast the influence of automated decision aids and framing between experienced and inexperienced fraud investigators. Despite these limitations, this study contributes to theory and practice. From a theory-development perspective, the lack of a unique effect related to the use of automated decision-aid systems points to the need for theory that can explain the circumstances under which automated decision-aid systems exacerbate or ameliorate biases, in particular the framing bias. It also points to the need to include culture as a factor influencing the use of automated decision-aid systems. From a practical perspective, the different behaviors observed in the American and Mexican participants indicate that managers should consider the complexity of multinational implementations of automated decision-aid systems.