Monetary Policy in a Data-Rich Environment
Article code | Publication year | English article length
---|---|---
24950 | 2003 | 22 pages (PDF)

Publisher : Elsevier - Science Direct
Journal : Journal of Monetary Economics, Volume 50, Issue 3, April 2003, Pages 525–546
English Abstract
Most empirical analyses of monetary policy have been confined to frameworks in which the Federal Reserve is implicitly assumed to exploit only a limited amount of information, despite the fact that the Fed actively monitors literally thousands of economic time series. This article explores the feasibility of incorporating richer information sets into the analysis, both positive and normative, of Fed policymaking. We employ a factor-model approach, developed by Stock and Watson ("Forecasting Inflation", Journal of Monetary Economics, 1999; "Diffusion Indices", Journal of Business & Economic Statistics, 2002), that permits the systematic information in large data sets to be summarized by relatively few estimated factors. With this framework, we reconfirm Stock and Watson's result that the use of large data sets can improve forecast accuracy, and we show that this result does not seem to depend on the use of finally revised (as opposed to "real-time") data. We estimate policy reaction functions for the Fed that take into account its data-rich environment and provide a test of the hypothesis that Fed actions are explained solely by its forecasts of inflation and real activity. Finally, we explore the possibility of developing an "expert system" that could aggregate diverse information and provide benchmark policy settings.
English Introduction
Monetary policy-makers are inundated by economic data. Research departments throughout the Federal Reserve System, as in other central banks, monitor and analyze literally thousands of data series from disparate sources, including data at a wide range of frequencies and levels of aggregation, with and without seasonal and other adjustments, and in preliminary, revised, and "finally revised" versions. Nor is exhaustive data analysis performed only by professionals employed in part for that purpose; observers of Alan Greenspan's chairmanship, for example, have emphasized his own meticulous attention to a wide variety of data series (Beckner, 1996). The very fact that central banks bear the costs of analyzing a wide range of data series suggests that policy-makers view these activities as relevant to their decisions. Indeed, recent econometric analyses have confirmed the longstanding view of professional forecasters that the use of a large number of data series may significantly improve forecasts of key macroeconomic variables (Stock and Watson, 1999, 2002; Watson, 2000). Central bankers' reputations as data fiends may also reflect motivations other than minimizing average forecast errors, including multiple and shifting policy objectives, uncertainty about the correct model of the economy, and the central bank's political need to demonstrate that it is taking all potentially relevant factors into account.

Despite this reality of central bank practice, most empirical analyses of monetary policy have been confined to frameworks in which the Fed is implicitly assumed to exploit only a limited amount of information. For example, the well-known vector autoregression (VAR) methodology, used in many recent attempts to characterize the determinants and effects of monetary policy, generally limits the analysis to eight macroeconomic time series or fewer. Small models have many advantages, including most obviously simplicity and tractability. However, we believe that this divide between central bank practice and most formal models of the Fed reflects at least in part researchers' difficulties in capturing the central banker's approach to data analysis, which typically mixes the use of large macroeconometric models, smaller statistical models (such as VARs), heuristic and judgmental analyses, and informal weighting of information from diverse sources.

This disconnect between central bank practice and academic analysis has, potentially, several costs. First, by ignoring an important dimension of central bank behavior and the policy environment, econometric modeling and evaluation of central bank policies may be less accurate and informative than it otherwise would be. Second, researchers may be foregoing the opportunity to help central bankers use their extensive data sets to improve their forecasting and policymaking. It thus seems worthwhile for analysts to try to take into account the fact that in practice monetary policy is made in a "data-rich environment". This paper is an exploratory study of the feasibility of incorporating richer information sets into the analysis, both positive and normative, of Federal Reserve policy-making. Methodologically, we are motivated by the aforementioned work of Stock and Watson.
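The diffusion-index idea behind these forecast improvements can be made concrete with a minimal sketch: extract a handful of principal-component factors from a large, standardized panel and regress future inflation on those factors and its own lags. The sketch below assumes a balanced NumPy panel with no missing observations; the names `panel` and `inflation`, the forecast horizon, and the lag length are illustrative rather than the specification used in the paper.

```python
import numpy as np

def extract_factors(panel, n_factors):
    """Approximate static factors by principal components.

    panel: T x N array of predictors (balanced, no missing values).
    Returns the T x n_factors matrix of estimated factor scores.
    """
    # Standardize each series so that scale differences do not drive the factors.
    z = (panel - panel.mean(axis=0)) / panel.std(axis=0, ddof=1)
    # Principal components via SVD of the standardized panel.
    u, s, vt = np.linalg.svd(z, full_matrices=False)
    return u[:, :n_factors] * s[:n_factors]

def diffusion_index_forecast(inflation, factors, horizon=12, n_lags=4):
    """h-step-ahead OLS forecast of inflation from current factors and own lags."""
    T = len(inflation)
    rows, targets = [], []
    for t in range(n_lags - 1, T - horizon):
        rows.append(np.concatenate(([1.0], factors[t],
                                    inflation[t - n_lags + 1:t + 1])))
        targets.append(inflation[t + horizon])
    X, y = np.asarray(rows), np.asarray(targets)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    # Forecast from the most recent observation.
    x_T = np.concatenate(([1.0], factors[-1], inflation[-n_lags:]))
    return x_T @ beta
```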
Following earlier work on dynamic factor models, Stock and Watson have developed dimension-reduction schemes, akin to traditional principal components analysis, that extract key forecasting information from "large" data sets (i.e., data sets for which the number of data series may approach or exceed the number of observations per series). They show, in simulated forecasting exercises, that their methods offer potentially large improvements in the forecasts of macroeconomic time series, such as inflation. From our perspective, the Stock–Watson methodology has several additional advantages. First, it is flexible, in the sense that it can potentially accommodate data of different vintages, at different frequencies, and of different spans, thus replicating the use of multiple data sources by central banks. Second, their methodology offers a data-analytic framework that is clearly specified and statistically rigorous but remains agnostic about the structure of the economy. Finally, although we do not take advantage of this feature here, their method can be combined with more structural approaches to improve forecasting still further (Stock and Watson, 1999).

The rest of our paper is structured as follows. Section 2 extends the research of Stock and Watson by further investigating the value of their methods in forecasting measures of inflation and real activity (and, by extension, the value of those forecasts as proxies for central bank expectations). We consider three alternative data sets: first, a "real-time" data set, in which the data correspond closely to what was actually observable by the Fed when it made its forecasts; second, a data set containing the same time series as the first but including only finally revised data; and third, a much larger, and revised, data set based on that employed by Stock and Watson (2002). We compare forecasts from these three data sets with each other and with historical Federal Reserve forecasts, as reported in the Greenbook. We find, in brief, that the scope of the data set (the number and variety of series included) matters very much for forecasting performance, while the use of revised (as opposed to real-time) data seems to matter much less. We also find that "combination" forecasts, which give equal weight to our statistical forecasts and Greenbook forecasts, can sometimes outperform Greenbook forecasts alone.

In Section 3 we apply the Stock–Watson methodology to conduct a positive analysis of Federal Reserve behavior. Specifically, we estimate monetary policy reaction functions, or PRFs, which relate the Fed's instrument (in this article, the fed funds rate) to the state of the economy, as determined by the full information set. Our interest is in testing formally whether the Fed's reactions to the state of the economy can be accurately summarized by a forward-looking Taylor rule of the sort studied by Batini and Haldane (1999), Clarida et al. (1999), and Forni and Reichlin (1996), among others; or whether, as is sometimes alleged, the Fed responds to variables other than expected real activity and expected inflation. We show here that application of the Stock–Watson methodology to this problem provides both a natural specification test for the standard forward-looking PRF and a nonparametric method for studying sources of misspecification.

Section 4 briefly considers whether the methods employed in this paper might eventually prove useful to the Fed in actual policy-making.
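The specification test mentioned above can be illustrated with a simple regression sketch: estimate a forward-looking, interest-rate-smoothing rule for the funds rate and, optionally, add the estimated factors as extra regressors. The code below is a schematic OLS version under those assumptions, not the paper's actual estimation procedure; all function and argument names are illustrative.

```python
import numpy as np

def estimate_prf(funds_rate, exp_inflation, exp_activity,
                 extra_factors=None, smoothing_lags=1):
    """OLS sketch of a forward-looking policy reaction function (PRF).

    funds_rate_t = c + rho(L) * funds_rate_{t-1} + b_pi * E_t[inflation]
                   + b_y * E_t[activity] + gamma' * factors_t + e_t
    With extra_factors=None this is a baseline Taylor-type rule; passing the
    estimated factors adds them as additional regressors.
    """
    T = len(funds_rate)
    cols = [np.ones(T - smoothing_lags)]                      # intercept
    for lag in range(1, smoothing_lags + 1):                  # lagged funds rate(s)
        cols.append(funds_rate[smoothing_lags - lag:T - lag])
    cols.append(exp_inflation[smoothing_lags:])               # expected inflation
    cols.append(exp_activity[smoothing_lags:])                # expected real activity
    if extra_factors is not None:                             # T x k factor matrix
        cols.extend(extra_factors[smoothing_lags:].T)
    X = np.column_stack(cols)
    y = funds_rate[smoothing_lags:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return beta, residuals
```

Comparing the fit with and without `extra_factors` gives an informal check of whether the central bank appears to respond to information beyond its inflation and real-activity forecasts; the formal test developed in the paper is more elaborate.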
In particular, one can imagine an "expert system" that receives data in real time and provides a consistent benchmark estimate of the implied policy setting. To assess this possibility, we conduct a counterfactual historical exercise, in which we ask how well monetary policy would have done if it had relied mechanically on Stock–Watson forecasts and some simple policy reaction functions. Perhaps not surprisingly, though our expert system performs creditably, it does not match the record of human policy-makers. Nevertheless, the exercise provides some interesting results, including the finding that the inclusion of estimated factors in dynamic models of monetary policy can mitigate the well-known "price puzzle", the common finding that changes in monetary policy seem to have perverse effects on inflation. Section 5 concludes by discussing possible extensions of this research.
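A rough picture of such a counterfactual exercise is a loop that, at each date, feeds factor-based forecasts into a fixed, smoothed Taylor-type rule and records the implied funds-rate path for comparison with the actual one. The parameter values in the sketch below are illustrative placeholders, not the rules evaluated in the paper.

```python
import numpy as np

def expert_system_path(exp_inflation, exp_gap, initial_rate,
                       pi_star=2.0, r_star=2.0, a_pi=0.5, a_y=0.5, rho=0.8):
    """Counterfactual benchmark path from a smoothed, forward-looking Taylor-type rule.

    exp_inflation and exp_gap are factor-based forecasts of inflation and real
    activity; the coefficients and targets here are illustrative placeholders.
    """
    path = np.empty(len(exp_inflation))
    prev = initial_rate                               # start from the observed funds rate
    for t in range(len(path)):
        target = (r_star + exp_inflation[t]
                  + a_pi * (exp_inflation[t] - pi_star) + a_y * exp_gap[t])
        path[t] = rho * prev + (1 - rho) * target     # partial adjustment (smoothing)
        prev = path[t]
    return path
```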
English Conclusion
Positive and normative analyses of Federal Reserve policy can be enhanced by the recognition that the Fed operates in a data-rich environment. In this preliminary study, we have shown that methods for data-dimension reduction, such as those of Stock and Watson, can allow us to incorporate large data sets into the study of monetary policy. A variety of extensions of this framework are possible, of which we briefly mention only two.

First, the estimation approach used here identifies the underlying factors only up to a linear transformation, making economic interpretation of the factors themselves difficult. It would be interesting to be able to relate the factors more directly to fundamental economic forces. To identify unique, interpretable factors, more structure would have to be imposed in estimation. One simple, data-based approach consists of dividing the data set into categories of variables and estimating the factors separately within these categories. In the spirit of structural VAR modeling, imposing some "weak theory" restriction on the multivariate dynamics of the factors could then identify the factors. A more ambitious alternative would be to combine the atheoretic factor-model approach with an explicit theoretical macromodel, interpreting the factors as shocks to the model equations. If the model is identified, the restrictions that its reduced form places on the factor-model estimation would be sufficient to identify the factors.

A second extension would address the large VAR literature on the identification of monetary policy shocks and their effects on the economy (Christiano et al., 2000). A key question in this literature is whether policy "shocks" are well and reliably identified. Our approach, by using large cross-sections of real-time data, should provide more accurate estimates of the PRF residual. Additionally, the comparison of real-time and finally revised data provides a useful way of identifying policy shocks, as the Fed's response to mismeasured data is perhaps the cleanest example of a policy shock. Finally, as we have mentioned, the factor structure allows for the estimation of impulse response functions (measuring the dynamic effects of monetary policy changes) for every variable in the data set, not just the small set of variables included in the VAR. We expect to pursue these ideas in future research.
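The last point, computing impulse responses for every series in the panel, can be sketched under simplifying assumptions: with a static factor structure X_t ≈ ΛF_t and a reduced-form VAR(1) in the factors, the responses of the factors map into responses of each underlying series through the loadings. The sketch below illustrates only that mapping; it leaves the policy shock unidentified and is not the estimation strategy proposed in the paper, and the one-lag VAR is chosen purely for brevity.

```python
import numpy as np

def factor_var_irfs(factors, loadings, horizon=24):
    """Map factor-VAR impulse responses into responses of every observed series.

    factors:  T x k matrix of estimated factors.
    loadings: N x k matrix from the observation equation X_t ~ loadings @ F_t.
    Returns an array of shape (horizon, N, k): the response of each of the N
    series to a unit innovation in each factor.
    """
    # Reduced-form VAR(1) coefficient matrix for the factors, estimated by OLS.
    Y, Z = factors[1:], factors[:-1]
    A = np.linalg.lstsq(Z, Y, rcond=None)[0].T          # k x k transition matrix
    irf = np.empty((horizon, loadings.shape[0], A.shape[0]))
    psi = np.eye(A.shape[0])                            # factor response at h = 0
    for h in range(horizon):
        irf[h] = loadings @ psi                         # pass through the loadings
        psi = A @ psi                                   # propagate one more period
    return irf
```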