Download English ISI Article No. 70
Persian Translation of the Article Title

Financial versus non-financial information: The impact of information organization and presentation in a Balanced Scorecard

English Title
Financial versus non-financial information: The impact of information organization and presentation in a Balanced Scorecard
Article Code: 70
Publication Year: 2010
Length: 14 pages (English PDF)
Source

Publisher: Elsevier - Science Direct

Journal : Accounting, Organizations and Society, Volume 35, Issue 6, August 2010, Pages 565–578

Translated Keywords (Persian)
Non-financial, Balanced Scorecard
English Keywords
Accounting, Organizations and Society, Balanced Scorecard, Non-financial
Article Preview

English Abstract

This paper investigates how the organization and presentation of performance measures affect how evaluators weight financial and non-financial measures when evaluating performance. We conduct two experiments in which participants act as senior executives charged with evaluating two business-unit managers. Performance differences between the business units are contained either in the financial category or in one of three non-financial categories. Specifically, the first experiment studies how organizing measures in a Balanced Scorecard (BSC) format affects performance evaluations. Our results show that when the performance differences are contained in the financial category, evaluators who use a BSC format place more weight on financial-category measures than evaluators using an unformatted scorecard. Conversely, when performance differences are contained in the non-financial categories, whether measures are organized into a BSC format or into an unformatted scorecard has no impact on the evaluation. The second experiment shows that when performance markers are added to the scorecards (i.e., +, −, and = signs for above-target, below-target, and on-target performance), evaluators who use a BSC format weight measures in any category containing a performance difference more heavily than evaluators using an unformatted scorecard. Our findings suggest that firms should carefully consider how to present and organize measures to achieve the intended effect on performance evaluations.

English Introduction

Kaplan and Norton (1992) originally introduced the Balanced Scorecard (BSC) to overcome problems that result from a sole focus on financial measures. A BSC enables financial performance measures (grouped into a single financial category) and non-financial performance measures (grouped into non-financial categories including customer, internal business process, and learning and growth) to be displayed in combination. In practice, the format of performance scorecards varies significantly across firms (Lohman, Fortuin, & Wouters, 2004). Some firms organize their measures into BSC performance categories, while others simply provide a general list of measures. How results are presented in a scorecard also varies. Many firms show only target levels and actual results, while other firms supplement this information with performance markers (i.e., +, −, =) or qualitative signs (e.g., red, yellow, and green indicators) to indicate more explicitly the status of the actual results in relation to the target levels (e.g., Malina et al., 2007, Malina and Selto, 2001 and Merchant and Van der Stede, 2007). Despite the prevalence of these different formats in practice, little work has been done on how variations in scorecard format affect performance evaluations.

In this study, we examine how variations in, first, the organization (i.e., BSC versus unformatted scorecard) and, second, the presentation of measures (i.e., the use of markers) affect how evaluators weight financial and non-financial measures in performance evaluations. Prior studies have primarily focused on the finding that, when firms use both common measures (i.e., measures common across multiple units) and unique measures (i.e., measures unique to particular units) for their business units, evaluators ignore the unique measures (Lipe & Salterio, 2000). Solutions to this problem have also been explored (Libby et al., 2004 and Roberts et al., 2004). Many firms, however, use similar scorecards that contain only measures common to all business units (e.g., Malina & Selto, 2001). In such cases, presentation formats and features may well affect how evaluators weight financial and non-financial information in performance evaluations.

To investigate these issues, we present two experiments that extend the basic setup of Lipe and Salterio (2002). Lipe and Salterio (2002) study how information organization (i.e., organizing measures into a BSC as opposed to an unformatted list) affects the performance evaluation of two business-unit managers. They consider, however, only the case in which the performance differences between the two business units (i.e., consistent above-target performance for one business unit and consistent below-target performance for the other) are located in the non-financial category of customer measures. They show that evaluators using a BSC weigh these measures less heavily than evaluators viewing the same measures in an unformatted scorecard. Our first experiment extends Lipe and Salterio's work by examining whether the effect of how the measures are organized depends on which type of category (financial or non-financial) contains the performance differences between business units. We predict that information organization will have a greater effect on evaluations when performance differences appear in the financial category.

We base this prediction on the performance-measurement and psychology literatures, which suggest both that people are heavily led by financial outcomes and that how people use a BSC to process information may lead these users to place more weight on financial performance measures than users of an unformatted scorecard. We use a 2 × 4 design, manipulating how information is organized (i.e., in a BSC or an unformatted scorecard) when performance differences between two business units are located in either the financial category or one of three non-financial categories. We qualify the results of Lipe and Salterio (2002) by showing that a BSC only “increases” the weight evaluators attach to performance differences when these differences are located in the financial category. We find that when performance differences are located in one of the three non-financial categories, information organization has no effect. We thus also observe no decrease in how measures are weighted for the customer category, which is the only case comparable to that of Lipe and Salterio (2002). We attribute this latter finding to differences in design choices, which we explain in ‘Methods and results’.

Increasing the weight evaluators place on financials may not always be the effect firms hope to achieve by using a BSC instead of an unformatted list of measures. Therefore, our second experiment examines whether the use of markers (i.e., +, −, and = signs for above-target, below-target, or on-target performance) offers a counterbalancing effect. The design of Experiment 2 is similar to that of Experiment 1 except that we add performance markers to the scorecards’ results. We hypothesize, and find, that when supplemented with markers, performance differences on measures of any category, be it financial or non-financial, are always weighted more heavily in a BSC than in an unformatted scorecard.

Our research contributes to the literature in several ways. First, prior results on the use of financial and non-financial measures are still inconclusive (Luft and Shields, 2001 and Schiff and Hoffman, 1996). Although the BSC has gained prominence in accounting research as a way of integrating financial and non-financial performance measures (Hoque & James, 2000), we show a consequence of organizing the measures into the BSC categories that may well be unintended if firms adopt a BSC to stimulate the use of non-financials. Our finding in Experiment 1 that a BSC only increases the weight evaluators assign to the financial category, leaving non-financial categories unaffected, adds a new issue to the BSC literature, which to date has focused on the problem of common versus unique measures.

Second, we show how different presentation formats can produce different processing strategies (Payne, 1982 and Schkade and Kleinmuntz, 1994). In Experiment 1, we show that grouping and labeling measures (i.e., in a BSC), as opposed to leaving measures unlabeled and in no particular order (i.e., in an unformatted scorecard), helps evaluators identify financials more easily and may activate their beliefs in the relative importance of financials. As a result, a BSC format increases an evaluator’s basic tendency to weight financial measures more heavily than non-financial measures. Experiment 2 shows that performance markers in a BSC can also direct an evaluator’s attention to other non-financial categories that contain important performance differences. In this case, BSC users, compared with users of an unformatted scorecard, give more weight to any category (financial and non-financial alike) that shows consistently good or bad performance.

These findings have important practical implications for the many firms that use the BSC as a tool to evaluate and reward managers (Kaplan and Norton, 1996 and Liedka et al., 2008). If evaluators assimilated all measures without bias, then the format of a scorecard would not matter. However, because format, in fact, appears to have a strong impact on how evaluators assimilate measures, firms should carefully consider how they display these measures. Given that managers’ behavior is driven by the weights placed on the performance measures (e.g., Ittner et al., 2003 and Smith, 2002), formatting can thus have far-reaching consequences for the firm.
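To make the two format manipulations described above concrete, the following minimal Python sketch shows one way to represent the same set of common measures either grouped and labeled by the four BSC categories or as an unformatted list, and how the +, −, and = markers used in Experiment 2 can be derived by comparing actual results with target levels. This sketch is not taken from the paper: the measure names, targets, and results are hypothetical, and it assumes a simple rule that treats an exactly-equal result as on-target.

```python
# A minimal, illustrative sketch (not from the paper): representing the two
# scorecard formats discussed above (a BSC grouped into the four Kaplan &
# Norton categories versus an unformatted, ungrouped list) and deriving the
# +/-/= performance markers used in Experiment 2.
# All measure names, targets, and results below are hypothetical.
from dataclasses import dataclass

BSC_CATEGORIES = [
    "Financial",
    "Customer",
    "Internal business process",
    "Learning and growth",
]

@dataclass
class Measure:
    name: str
    category: str   # one of BSC_CATEGORIES
    target: float
    actual: float

    def marker(self) -> str:
        """'+' for above-target, '-' for below-target, '=' for on-target results."""
        if self.actual > self.target:
            return "+"
        if self.actual < self.target:
            return "-"
        return "="

def bsc_format(measures):
    """Group and label measures by BSC category (the 'BSC format' scorecard)."""
    return {cat: [m for m in measures if m.category == cat] for cat in BSC_CATEGORIES}

def unformatted(measures):
    """The same measures as a plain, unlabeled list (the 'unformatted' scorecard)."""
    return list(measures)

# Hypothetical business-unit results: above target on the financial measures,
# on target everywhere else.
unit_a = [
    Measure("Return on sales (%)", "Financial", 20.0, 23.0),
    Measure("Sales growth (%)", "Financial", 5.0, 6.5),
    Measure("Repeat purchases (%)", "Customer", 60.0, 60.0),
    Measure("Order error rate (%)", "Internal business process", 2.0, 2.0),
    Measure("Employee training hours", "Learning and growth", 40.0, 40.0),
]

# Print the BSC-format view, with a marker next to each actual result.
for category, rows in bsc_format(unit_a).items():
    print(category)
    for m in rows:
        print(f"  {m.name:<28} target {m.target:>6}  actual {m.actual:>6}  {m.marker()}")
```

In the experiments, the same underlying measures appear in both views; only the grouping, labeling, and (in Experiment 2) the presence of the markers differ between conditions.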

English Conclusion

Our paper studies how variations in the format of scorecards and the presentation of measures therein affect how evaluators weight financial versus non-financial information in performance evaluations. Experiment 1 shows that when performance differences are located in the financial category, BSC users place more weight on financial measures than do users of an unformatted scorecard. In contrast, when performance differences are located in one of the non-financial categories, the type of scorecard used (i.e., a BSC versus an unformatted scorecard) does not affect performance evaluations. Experiment 2, however, demonstrates that with the addition of performance markers, organizing measures into a BSC increases the weight evaluators attach to performance differences located on both financial and non-financial measures. Ultimately, performance differences on non-financial measures receive the greatest weight in evaluations when presented in a marked BSC.

We extend the results of Lipe and Salterio (2002) in two important ways. First, we show that organizing information in a BSC, compared to an unformatted scorecard, can increase (rather than decrease) the weight evaluators attach to a particular category of performance measures, especially when performance differences are located in the financial category. Because a BSC simplifies the task of identifying the financial measures and assessing them in combination, it can reinforce the evaluator’s tendency to rely more heavily on the financial measures. Second, in Experiment 2, we show that, when we add performance markers to the scorecards, a BSC can increase an evaluator’s attention toward any type of category therein that contains a performance difference, be it financial or non-financial.

Our findings have important practical implications. Some firms use a BSC to emphasize the leading non-financial indicators of firm value. Subtle changes in the presentation of information in a BSC (such as adding performance markers) can offer a solution to firms that want to use a BSC to increase the weight evaluators assign to such indicators of firm value. Without performance markers, business-unit managers may react negatively to the use of a BSC for fear that evaluators will not fully incorporate these non-financials into their evaluations (see Ittner et al., 2003 and Malina and Selto, 2001).

Our study also offers some opportunities for further research. First, prior studies (e.g., Banker et al., 2004 and Lipe and Salterio, 2000) have shown that evaluators favor common and general measures over unique and strategy-linked measures. One important suggestion for studies that focus on this problem of common versus unique measures is to explore whether unique non-financial measures are more easily ignored than unique financial measures in a BSC format, because evaluators tend to focus more strongly on financial measures when measures are organized in a BSC format. Second, while our experiment employed students who had received instruction in the BSC, it would be interesting to explore how certain presentation features in a BSC affect more experienced managers, whose knowledge of, for example, measurement properties and causal relationships across measures is more developed (Bonner & Lewis, 1990). This might cause them to focus less intensely on financials. Prior work has, however, shown that experienced managers also face cognitive processing limitations (Shanteau, 1988 and Shanteau, 1992) similar to those of less knowledgeable evaluators (Dilla & Steinbart, 2005a). Simple changes to the presentation of information, like performance markers, might therefore also help them deal better with a large set of measures.

Third, we located similar performance differences between the two business units in each of the four BSC perspectives. Future work, however, can study how participants weight performance information when the business units themselves are less distinguishable on a specific BSC category. For example, one business unit might score well in the financial category, whereas the other might score well on a non-financial category. In addition, one might spread excellent performance across multiple categories. It would then be interesting to study how different presentation formats facilitate the processing of performance information.

Fourth, the weights evaluators attach to different types of performance measures may well depend on strategy (as well as the information provided about that strategy) and other factors in the operating environment (see, e.g., Banker et al., 2004, Lillis and van Veen-Dirks, 2008, Van Veen-Dirks, 2006 and Van Veen-Dirks, 2010). Future research can disentangle how information about such factors interacts with the organization and presentation of performance measures.

Finally, researchers can explore the use of other presentation features, such as graphs, traffic lights, or aggregations of measures in formulas (Cardinaels, 2008, Dilla and Steinbart, 2005b and Roberts et al., 2004). Certainly, if a particular firm has derived a set of measures that are known to drive firm value, it is important that evaluators use these measures in their evaluations and, consequently, that business-unit managers use these measures in their daily decisions (Feltham and Xie, 1994 and Holmstrom and Milgrom, 1991). We therefore support continued research into how different types of scorecards, as well as other factors in the evaluation process, inhibit or stimulate such use.