Probability forecasting and central bank accountability
| Article code | Publication year | English article | Persian translation | Word count |
|---|---|---|---|---|
| 23131 | 2006 | 12-page PDF | Available to order | Not calculated |
Publisher : Elsevier - Science Direct
Journal : Journal of Policy Modeling, Volume 28, Issue 2, February 2006, Pages 223–234
The paper studies probability forecasts of inflation and GDP issued by monetary authorities. Such forecasts can contribute to central bank transparency and reputation building. Principal-agent problems muddy the usual argument for using scoring rules to motivate probability forecasts; their use to evaluate forecasts, however, remains valid. Public comparison of forecasting results with those of a "shadow" committee helps promote reputation building and thus serves the motivational role. The Brier score and its Yates partition for the Bank of England's forecasts are compared with those of a group of non-bank experts.
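To make the evaluation criterion concrete, here is a minimal Python sketch of the Brier score for multi-category probability forecasts such as the MPC's inflation-range forecasts. The forecast and outcome values below are invented for illustration and are not data from the paper.

```python
import numpy as np

def brier_score(forecasts, outcomes):
    """Mean squared distance between the forecast probability vector
    and the 0/1 outcome indicator vector, averaged over occasions."""
    forecasts = np.asarray(forecasts, dtype=float)  # shape (T, K)
    outcomes = np.asarray(outcomes, dtype=float)    # shape (T, K), one-hot
    return np.mean(np.sum((forecasts - outcomes) ** 2, axis=1))

# Three hypothetical quarterly forecasts over K = 3 inflation ranges
# (below target, near target, above target)
f = [[0.2, 0.5, 0.3],
     [0.1, 0.6, 0.3],
     [0.3, 0.4, 0.3]]
# Realized range each quarter, one-hot coded
o = [[0, 1, 0],
     [0, 0, 1],
     [0, 1, 0]]
print(brier_score(f, o))  # lower is better; 0 = perfect
```

A forecaster who always assigns probability 1 to the realized range scores 0; uniform forecasts over K ranges score (K-1)/K per occasion, so the score rewards both calibration and sharpness.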
For years, the conduct of monetary policy by central bankers was a mystery to the general public. Central bankers built reputations by making decisions under conditions of confidentiality. Arguments supporting a higher degree of transparency have recently persuaded monetary authorities to be more open about policymaking decisions, to the point that some make their forecasts of key variables public. Intensifying the public's response to monetary policy changes is among the potential gains of increased transparency (Svensson, 1997; Woodford, 2003). The Bank of England (BoE) is one of the few central banks that actually publish inflation forecasts.1 The Monetary Policy Committee (MPC) of the BoE has been issuing density forecasts of inflation, also called "fan charts," on a quarterly basis in its Inflation Report since August 1997, and output growth forecasts since November 1997. In addition, the BoE has published probabilistic forecasts of these two key variables from a quarterly survey of undisclosed external forecasters, averaging their responses for each range of the probability distribution.

In this paper, we evaluate the probability forecasts of the MPC and those of the group of undisclosed external forecasters using the Brier score and its partition, the latter originally suggested by Yates (1982). Our purpose is to demonstrate that ex post evaluations of the probability forecasts of both the MPC and an alternative "shadow" committee offer valuable information that is not available from reports on the MPC alone.2 A humorous (slightly edited) epigraph summarizing a conversation between person "A" and person "B" in Granger and Newbold (1986, p. 265) illustrates our suggestion well: "A: How is your spouse? B: Compared to what?" Comparing the central bank's probability forecasts with those of a competent "shadow" expert will help induce forecasting "soundness" through reputation building and learning.
Analyzing both forecasters' predictive performance appeals to the forecast-competition argument suggested in the Granger and Newbold quote above. Recognizing the incentive-compatible feature of the Brier score, we considered (and later ruled out) using the Brier score in a contract between the government and the central bank, in the spirit of Persson and Tabellini (1993, 1999, 2000) and Walsh (1995, 1998). Because of ambiguities in central banking discussed in McCallum (1999) and Blinder (1998), this possibility was abandoned.3 Among these ambiguities is determining whether it is the principal (Parliament or Congress) or the agent (central bank) who has the stronger incentive to try to boost real output in the short run by creating "surprise inflation." Clements (2004) also calculates the Brier score of the MPC forecasts. This paper differs from his in that we apply the Yates decomposition to extract meaningful information about the forecaster's beliefs. We find that the MPC is upwardly biased, placing larger probabilities on the high-inflation state, thereby preventing the less conservative members of the Committee from gaining approval for interest rate cuts. These results are consistent with Pagan (2003), Wallis (2003, 2004), and Clements (2004). The Yates partition shows that the MPC's inflation forecasts do not sort, or discriminate between events that occur and events that do not, as well as the "shadow" forecasters' do. On the other hand, the MPC's GDP forecasts sort (distinguish between events that ultimately obtain and events that do not) about as well as the "shadow" committee's. The remainder of the paper is divided into three sections. Section 2 provides an overview of probabilistic forecasting concepts.
Section 3 presents empirical results using the evaluation methods on the density forecasts of the MPC and an external group of forecasters on UK inflation and GDP. Section 4 concludes the paper.
Conclusion (English)
The Monetary Policy Committee of the Bank of England has been issuing quarterly probabilistic forecasts of inflation since 1997. This paper considers two issues related to probability forecasts by a central bank: motivation and evaluation. Since problems with explicit monetary payoffs, rewards, or penalties tied to the probability forecast and its subsequent realization appear non-trivial in the context of the central bank's forecasting problem, we suggest that reputation building be promoted by comparing the forecasting results of the monetary committee with those of a group of monetary experts (a "shadow" committee). Through explicit competition between the two groups, learning, and adapting, the central bank can evolve toward a more informative and transparent monetary policy. To promote accountability, optimal forecast evaluation becomes an issue. While some studies use techniques designed to evaluate point forecasts when studying the MPC forecasts (e.g., Pagan, 2003), others have applied calibration-based evaluation methods (Clements, 2004; Wallis, 2003, 2004). Although calibration procedures are more appropriate than point-forecast techniques, given the probabilistic nature of the published forecasts, calibration fails to take into account the forecaster's ability to sort between events that occurred and events that did not. This paper suggests using the Brier score and its Yates partition to evaluate probability forecasts. We suggest that publicly reporting the forecasters' performance with these methods can help alleviate the central bank's accountability problem and, potentially, bolster monetary policy's stabilization features. The Brier score encompasses both the calibration and the resolution of the forecast. We argue that it is important to evaluate a central bank in terms of its accuracy in matching probabilities with ex post relative frequencies (calibration) and in sorting events (resolution).
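The claim that the Brier score encompasses both calibration and resolution can be illustrated with the standard reliability-resolution-uncertainty decomposition (Murphy-style, a close relative of the Yates partition used in the paper). The sketch below bins a binary forecast series on its distinct forecast values; the numbers are invented for illustration, not the paper's data.

```python
import numpy as np

def murphy_decomposition(f, o):
    """Exact decomposition of the binary Brier score into
    reliability - resolution + uncertainty, binning on the
    distinct forecast probability values."""
    f, o = np.asarray(f, float), np.asarray(o, float)
    n, obar = len(f), o.mean()
    rel = res = 0.0
    for fk in np.unique(f):
        mask = f == fk
        nk, ok = mask.sum(), o[mask].mean()
        rel += nk * (fk - ok) ** 2 / n   # calibration: forecast vs. bin frequency
        res += nk * (ok - obar) ** 2 / n # resolution: bin frequency vs. base rate
    unc = obar * (1 - obar)              # outcome uncertainty (base-rate variance)
    return rel, res, unc

# Hypothetical forecasts of "inflation above target" and 0/1 outcomes
f = np.array([0.8, 0.8, 0.3, 0.3, 0.3])
o = np.array([1, 1, 0, 1, 0])
rel, res, unc = murphy_decomposition(f, o)
bs = np.mean((f - o) ** 2)
# Identity: bs == rel - res + unc
```

A forecaster can be well calibrated (low reliability term) yet uninformative (zero resolution) by always issuing the base rate, which is exactly why the paper argues calibration alone is insufficient.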
The Yates partition of the Brier score allows analysts to study the forecasters' ability to sort events, ex ante, into two groups: those that ultimately obtain and those that do not. Applying these methods to the inflation and output growth forecasts of the MPC and the surveyed forecasters (the "shadow" committee), we found two substantive results. First, both the MPC and the other forecasters show a large responsiveness to information unrelated to the forecasted variable. Second, our results suggest the MPC has somewhat hedged its inflation forecasts, or engaged in "wishful thinking," shading them against a high (perhaps even moderate) inflation outcome. In this paper, we used the forecasts of other economic forecasters, available from the Bank of England, as our "shadow" committee of alternative forecasters. Our use of this set of forecasts was for illustration only. In practice, we suggest that membership on the "shadow" committee be given careful consideration. In particular, our set of other forecasters does not necessarily share the same information set as the MPC; important conditioning information held by the MPC was not necessarily held by our set of other forecasters. The "shadow" committee should be made aware of major policy conditionals so that the subsequent comparison of probabilistic forecasting performance is credible.
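The sorting ability discussed above can be made concrete with the covariance form of the partition on which Yates (1982) builds: for a binary forecast f and outcome d, the Brier score equals Var(d) + Var(f) + (mean bias)^2 - 2 Cov(f, d). The covariance term captures sorting: it is large only when high probabilities are attached to events that occur. This is a simplified sketch of the idea with invented data, not a full implementation of the Yates partition used in the paper (which further splits the forecast variance).

```python
import numpy as np

def covariance_partition(f, d):
    """Covariance partition of the binary Brier score:
    BS = Var(d) + Var(f) + bias^2 - 2*Cov(f, d),
    an algebraic identity using population moments (ddof=0)."""
    f, d = np.asarray(f, float), np.asarray(d, float)
    var_d = d.var()                       # outcome uncertainty
    var_f = f.var()                       # forecast variability
    bias2 = (f.mean() - d.mean()) ** 2    # squared overall bias
    cov_fd = np.mean((f - f.mean()) * (d - d.mean()))  # sorting ability
    return var_d, var_f, bias2, cov_fd

# Hypothetical forecasts of a binary event and its realizations
f = np.array([0.8, 0.8, 0.3, 0.3, 0.3])
d = np.array([1, 1, 0, 1, 0])
var_d, var_f, bias2, cov_fd = covariance_partition(f, d)
bs = np.mean((f - d) ** 2)
# Identity: bs == var_d + var_f + bias2 - 2 * cov_fd
```

A forecaster with zero covariance is merely adding noise: forecast variability then raises the score without any compensating sorting, which is the behavior the partition is designed to expose.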