Download ISI English Article No. 4124
Article Title

Feedback-labelling synergies in judgmental stock price forecasting
Article Code / Publication Year / Number of Pages
4124 / 2004 / 12 pages (PDF)
Source

Publisher: Elsevier - Science Direct

Journal: Decision Support Systems, Volume 37, Issue 1, April 2004, Pages 175–186

Keywords

Forecasting, Judgment, Feedback, Calibration, Stock price, Contextual information

English Abstract

Research has suggested that outcome feedback is less effective than other forms of feedback in promoting learning by users of decision support systems. However, if circumstances can be identified where the effectiveness of outcome feedback can be improved, this offers considerable advantages, given its lower computational demands, ease of understanding and immediacy. An experiment in stock price forecasting was used to compare the effectiveness of outcome and performance feedback: (i) when different forms of probability forecast were required, and (ii) with and without the presence of contextual information provided as labels. For interval forecasts, the effectiveness of outcome feedback came close to that of performance feedback, as long as labels were provided. For directional probability forecasts, outcome feedback was not effective, even if labels were supplied. Implications are discussed and future research directions are suggested.

English Introduction

Forecasting and decision support systems are partly systems for learning. One of their objectives is to improve management judgment by fostering understanding and insights and by allowing appropriate access to relevant information [16]. Feedback is the key information element of systems that are intended to help users to learn. By providing managers with timely feedback, it is hoped that they will learn about the deficiencies in their current judgmental strategies and hence enhance these strategies over time.

When a system is being used to support forecasting, feedback can be provided in a number of forms [6] and [10]. The simplest form is outcome feedback, where the manager is simply informed of the actual outcome of an event that was being forecasted. Performance feedback provides the forecaster with a measure of his or her forecasting accuracy or bias. Process feedback involves the estimation of a model of the forecaster's judgmental strategy. By feeding this model back to the forecaster, it is hoped that insights will be gained into possible ways of improving this strategy. Finally, task properties feedback delivers statistical information on the forecasting task (e.g. it may provide statistical measures of trends or correlations between the forecast variable and independent variables).

Most of the research literature on management judgment under uncertainty suggests that outcome feedback is less effective than other forms in promoting learning (e.g. [6] and [33]). For example, much research into the accuracy of judgmental forecasts has found that forecasters tend to focus too much on the latest observation (e.g. the latest stock value), which will inevitably contain noise. The result is that they see evidence of new, but false, systematic patterns in the latest observation [31] and overreact to it. Because outcome feedback draws attention to the latest observation, it exacerbates this tendency. This means that a long series of trials may be needed to distinguish between the systematic and random elements of the information received by the forecaster [31]. In contrast, by averaging results over more than one period (or over more than one series if cross-sectional data is being used), other forms of feedback are likely to reduce the attention that is paid to the most recent observation and to filter out the noise from the feedback. For example, performance feedback may be presented in the form of the mean forecast error or, in the case of categorical forecasts, the percentage of forecasts that were correct.
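
To make the distinction concrete, here is a minimal sketch, in Python, of how the two simplest feedback types might be computed; the function names and sample data are hypothetical illustrations, not taken from the paper.

```python
# Hypothetical sketch contrasting outcome feedback (a single realised
# value) with performance feedback (accuracy averaged over periods).

def outcome_feedback(actual: float) -> str:
    # Outcome feedback: simply report the realised value of the event.
    return f"Actual outcome: {actual:.2f}"

def performance_feedback(forecasts: list[float], actuals: list[float]) -> str:
    # Performance feedback: summarise accuracy over several periods,
    # which averages out the noise in any single observation.
    mae = sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)
    # Percentage of correct up/down calls relative to the previous
    # actual value, as an example of a categorical accuracy measure.
    correct = sum(
        (f1 > a0) == (a1 > a0)
        for f1, a0, a1 in zip(forecasts[1:], actuals, actuals[1:])
    )
    pct = 100 * correct / (len(actuals) - 1)
    return f"Mean absolute error: {mae:.2f}, directional hit rate: {pct:.0f}%"

forecasts = [10.2, 10.8, 11.1, 10.9]
actuals = [10.5, 10.6, 11.4, 11.0]
print(outcome_feedback(actuals[-1]))             # single-period outcome
print(performance_feedback(forecasts, actuals))  # multi-period summary
```
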
However, if conditions could be found where outcome feedback does encourage learning as efficiently (or nearly as efficiently) as other forms of feedback, then this would yield considerable benefits to users and designers of support systems. This is because outcome feedback overcomes, or at least reduces, various shortcomings of the other forms. Firstly, outcome feedback is easier to provide and is likely to be more easily understood by the forecaster. Conversely, the provision of performance feedback, for instance, can involve difficult choices on which performance measure to provide: each measure will only relate to one aspect of performance, but providing several measures may confuse the forecaster. Moreover, some measures may be difficult to comprehend and will therefore require that the forecaster is trained in their use. Process feedback will require the identification of cues that the forecaster is assumed to be using, with no guarantee that these cues have really been used. Also, multicollinearity in these cues means that there will be large standard errors associated with the estimates of the weights that the forecaster is attaching to the cues. Task properties feedback requires regular statistical patterns in past data. By definition, these characteristics are often absent in tasks where management judgment is preferred to statistical methods.

Secondly, when judgments are being made in relation to a single variable over time, outcome feedback will not be contaminated by old observations when circumstances are changing. Because performance and process feedback are measured over a number of periods, they may lag behind changing performance or changes in the strategies being used by the forecaster. Also, several periods must elapse before a meaningful measure of performance, or a reliable model of the judgmental process, can be obtained. For cross-sectional data, outcome feedback can be provided for each variable and, as such, is not merely an average of potentially different performances (or strategies) on different types of series. Furthermore, a reasonably large number of judgments over different series are required in order to obtain reliable estimates of performance or a reliable estimate of the process model.

As we discuss below, there are some indications in the literature of situations that may be favourable to outcome feedback. These relate to (i) the nature of the forecast that is required, and (ii) the type of information that is supplied with the feedback; in particular, whether the past history of the forecast variable is accompanied by an informative label. This paper describes an experiment that was used to investigate the effects of these factors in an important application area: stock price forecasting. Financial forecasting is an area where human judgment is particularly prevalent [8], [35] and [45], and the specific role of judgment in forecasting stock prices has itself received particular attention from the research community (see [7], [21], [26], [32], [33], [34], [41] and [48]). The paper compares the effectiveness of outcome feedback under different conditions with that of performance feedback. Performance feedback was used as the benchmark because, of the other feedback types, it is likely to be the most relevant to financial forecasting and most acceptable to forecasters.

The paper is structured as follows. First, a literature review is used to explain why outcome feedback may be more effective when particular types of forecasts are required and why feedback type and label provision might be expected to have interactive effects. Then details of the experiment are discussed, followed by analysis and discussion. The paper concludes with suggestions for further research.

English Conclusion

This research examined the effects of performance and outcome feedback on judgmental forecasting performance conditional on (i) the availability of contextual information provided in the form of labels and (ii) the form in which the forecast was expressed. Using stock prices as the forecast variables of interest, the current study employed judgmental prediction intervals and probability forecasts as formal expressions conveying the forecasters' uncertainties.

Earlier work utilizing prediction intervals in other domains has indicated that assessors typically provide narrow intervals [15], [19], [20], [29], [30], [36], [40] and [47]. Our findings from the initial experimental sessions confirm earlier results in that the participants' intervals enveloped the realized value less frequently than the desired level (i.e. 90% for the current study). In response to recurrent feedback, however, subjects were able to widen their intervals, attaining significant improvements after two feedback sessions. In particular, subjects receiving interval calibration feedback secured hit rates very close to 90% in the third session, followed by the outcome-feedback groups with significantly improved, but still trailing, hit rates. This is consistent with Hammond's [14] assertion that learning through outcome feedback requires more trials than other forms of feedback, as judges seek to distinguish between the systematic and random components of the outcome information.

While these findings highlight the effectiveness of interval calibration feedback in reducing interval overconfidence, they also show that simple outcome feedback is most effective when labels are provided. Indeed, by the third session, calibration in the outcome feedback–labels condition was approaching that of the calibration-feedback conditions. As hypothesised earlier, this may have resulted from an increased propensity of subjects to consider the characteristics of the entire time series pattern, rather than just the most recent value, when they were provided with company-specific labels. For the calibration-feedback group, this beneficial effect may already have been achieved by the feedback itself, so that the specific labels brought no added benefits to the task. This implies that in a task where only interval forecasts are required, the benefits of outcome feedback referred to earlier (e.g. ease of provision and adaptability to new conditions) may outweigh its slightly worse performance as an aid to improving calibration, as long as specific labels are provided.

Analysis of the directional probability forecasts also shows that, even though the calibration-feedback participants displayed quite poor calibration in their forecasts of session 1, detailed feedback immediately enhanced their performance in sessions 2 and 3. In contrast, the outcome-feedback subjects maintained a relatively more uniform calibration performance throughout the sessions. These results are in agreement with Lim and O'Connor's [24] findings with point forecasts. These authors suggest that individuals may feel overconfident about their ability to acquire all the information they need from the time series anyway, leading them to disregard any new negative outcome feedback. Our findings suggest that this unwarranted confidence may persist with outcome feedback, but may be overcome if detailed performance feedback is provided.
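
The two calibration notions used above can be made concrete with a short sketch: the hit rate of 90% prediction intervals (the fraction of realised prices enveloped by the stated intervals) and the mean Brier score as a summary accuracy measure for directional probability forecasts. This is an illustrative reconstruction under invented data, not the authors' analysis code.

```python
# Illustrative calibration measures for the two forecast formats
# discussed above; all data below are invented for demonstration.

def interval_hit_rate(intervals, actuals):
    # Fraction of realised values falling inside the stated intervals.
    # Well-calibrated 90% intervals should score near 0.90; overconfident
    # (too narrow) intervals score lower, as in the early sessions above.
    hits = sum(lo <= a <= hi for (lo, hi), a in zip(intervals, actuals))
    return hits / len(actuals)

def mean_brier_score(probs, outcomes):
    # Mean Brier score for directional probability forecasts: probs are
    # stated probabilities that the price will rise, outcomes are 1 if
    # it rose and 0 otherwise. Lower is better; 0.25 matches a constant
    # "50% chance" forecast.
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

intervals = [(98, 104), (101, 109), (103, 110)]  # stated 90% intervals
actuals = [103, 111, 107]                        # realised prices
probs = [0.7, 0.6, 0.8]                          # stated P(price rises)
outcomes = [1, 0, 1]                             # realised directions
print(f"Interval hit rate: {interval_hit_rate(intervals, actuals):.2f}")
print(f"Mean Brier score:  {mean_brier_score(probs, outcomes):.3f}")
```
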
In contrast to the results on interval forecasting, outcome feedback cannot therefore be recommended as an aid to learning when directional probability forecasts are required, even if labels are provided. In fact, no significant effects of label information on directional probability forecasting performance were found. One potential explanation could be that feedback was given preeminent importance, leading participants to overlook contextual factors like stock identities. Another explanation could relate simply to the inherent difficulty of converting contextual information into financial prices [27]. A final explanation could stem from the design of this study. In particular, all the participants knew they were forecasting stock prices; subjects in the no-labels group did not know which particular stocks were being forecast, while the other participants knew the stock names. Subjects indicated that, when no specific contextual information was given, they did not attempt to identify the particular stocks, but rather tried to base their forecasts on the price movements they could detect as well as their general expectations about the stock market.

Given that this experiment was conducted in a highly volatile setting (i.e. prior to national elections), it could be that the wide swings in prices preempted any effects that knowledge of stock names could potentially have had on assessors' reactions to feedback. In fact, our analyses clearly reveal the prevailing effects of forecasting session on predictive performance. Taken together, these findings attest to the impact of market volatility on the quality of judgmental predictions, regardless of the elicitation format utilized. Future research investigating the influence of environmental factors like volatility is clearly needed to enhance our understanding of judgmental forecasting.

Post-experimental interviews indicated that all participants found the task very appealing, yet highly difficult. Overall, subjects who were given calibration feedback expected better probability forecasting performance compared to subjects receiving outcome feedback. Provision of performance feedback appeared to intensify the focus on performance, leading assessors to closely track their accomplishments across sessions and raising their performance expectations. It is also worth noting that the participants not given label information found it more difficult to make probability forecasts. Although no differences in difficulty were expressed for the interval forecasts, assessments of probabilities were perceived to be easier when stock names were supplied.

These accounts suggest "feedback inquiry" [4], [5], [18] and [42] as a promising extension of the current research. That is, if participants were to decide on the timing and the type of feedback they would like to access (if any), would there be any resulting differences in forecasting accuracy, and how would the availability of contextual information affect these considerations? Further studies investigating the effects of differing contextual cues [38] and other types of feedback, like task properties feedback [37], can be expected to enhance our understanding of the processes involved in judgmental forecasting. Such work may particularly benefit from employing participants with varying levels of expertise [46] and studying combined or group judgments [36] and [43].
Future research exploring forecasters' use of information and feedback will also be instrumental in designing effective forecast support systems that address users' concerns [12] and [49]. Financial settings, with their intrinsically complex, information-rich and dynamic contexts, provide ideal platforms for pursuing these issues. This complexity, coupled with forecasters' persistent demand for greater predictive accuracy, means that financial forecasting remains an interesting and potent challenge for decision support systems research.