Estimating a forward-looking monetary policy rule: a time-varying parameter model using ex post data
|Article code||Publication year||English paper||Persian translation||Word count|
|26101||2006||18-page PDF||Available by order||Not computed|
Publisher: Elsevier - Science Direct
Journal: Journal of Monetary Economics, Volume 53, Issue 8, November 2006, Pages 1949–1966
In this paper, we consider estimation of a time-varying parameter model for a forward-looking monetary policy rule by employing ex post data. A Heckman-type (1976. The common structure of statistical models of truncation, sample selection, and limited dependent variables and a simple estimator for such models. Annals of Economic and Social Measurement 5, 475–492) two-step procedure is employed in order to deal with endogeneity in the regressors. This allows us to econometrically take into account the changing degrees of uncertainty associated with the Fed's forecasts of future inflation and GDP gap when estimating the model. Even though such uncertainty does not enter the model directly, we achieve efficiency in estimation by employing the standardized prediction errors for inflation and GDP gap as bias correction terms in the second-step regression. We note that no other empirical literature on monetary policy deals with this important issue. Our empirical results also reveal new aspects not found previously in the literature: the history of the Fed's conduct of monetary policy since the early 1970s can in general be divided into three subperiods, the 1970s, the 1980s, and the 1990s. The conventional division of the sample into pre-Volcker and Volcker–Greenspan periods could mislead the empirical assessment of monetary policy.
Since the seminal work by Taylor (1993), various versions of backward-looking and forward-looking Taylor rules for U.S. monetary policy have been estimated by many empirical macroeconomists. Based on subsample analyses, Judd and Rudebusch (1998), Clarida et al. (2000), and Orphanides (2004) show that the Fed's interest rate policy has changed since 1979. Cogley and Sargent (2001, 2003) and Boivin (2001) report significant time variation in the policy response to the state of the economy, within the framework of time-varying parameter models. By applying Hamilton's (1989) Markov-switching models, Sims (2001) and Sims and Zha (2006) argue that time-varying variance of the shocks is more important than time-varying coefficients in modeling the monetary policy rule. Focusing on estimation of a Taylor-rule-type forward-looking monetary policy rule, the literature offers two alternative approaches depending on the data set employed. One approach, undertaken by Orphanides (2001, 2004), is to use the Fed's historical real-time forecast data, known as "Greenbook data." If these real-time forecasts are made under the assumption that the nominal federal funds rate will remain constant within the forecasting horizon, there is no endogeneity problem in the policy rule equation. Thus, the use of real-time forecast data allows one to straightforwardly extend the basic model to incorporate time-varying coefficients and to employ the conventional Kalman filter. Such an attempt has recently been made by Boivin (2006). Another approach, undertaken by Clarida et al. (2000), is to use ex post data and explicitly estimate the Fed's expectation process. An instrumental variables (IV) or generalized method of moments (GMM) estimation procedure is applied, since the future economic variables used as regressors in the policy rule equation are correlated with the disturbance terms.
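For reference, the original backward-looking rule of Taylor (1993) mentioned above can be sketched in a few lines. The function name and calibration below follow Taylor's illustrative parameterization and are not taken from the paper summarized here; forward-looking variants replace the realized inflation and output gap with forecasts.

```python
# A minimal sketch of Taylor's (1993) original backward-looking rule:
#   i_t = r* + pi_t + 0.5*(pi_t - pi*) + 0.5*gap_t
# with Taylor's calibration r* = 2 and pi* = 2 (all in percent).

def taylor_rule(inflation: float, gdp_gap: float,
                r_star: float = 2.0, pi_star: float = 2.0,
                a_pi: float = 0.5, a_gap: float = 0.5) -> float:
    """Prescribed nominal federal funds rate, in percent."""
    return r_star + inflation + a_pi * (inflation - pi_star) + a_gap * gdp_gap


# With 3% inflation and a 1% output gap, the rule prescribes a 6% funds rate.
print(taylor_rule(3.0, 1.0))  # 6.0
```

A response coefficient on inflation greater than one (here 1 + a_pi = 1.5) is what the literature calls satisfying the Taylor principle, a property the subsample estimates discussed below revolve around.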
However, extending the basic model to incorporate time-varying coefficients would not be as straightforward as in Boivin (2006), and no such attempts have been made so far. With the endogeneity problem that results from using ex post data, a conventional IV or GMM estimation procedure cannot be readily applied to a time-varying parameter model. In this paper, we consider estimation of a Taylor-rule-type forward-looking monetary policy rule that allows for time-varying parameters (TVPs), by employing ex post data. In doing so, we extend Kim's (2006) TVP-with-endogenous-regressors model in at least two directions. First, the model is extended to deal with the nonlinearity that results from the Fed's interest rate smoothing. Second, it deals with heteroscedasticity in the disturbance terms of the monetary policy rule, as emphasized by Sims (2001) and Sims and Zha (2006). The endogeneity problem is solved by employing the Heckman-type (1976) two-step procedure, with bias correction terms in the second step. An important feature of the proposed estimation procedure is that it allows us to econometrically take into account the changing degrees of uncertainty associated with the Fed's forecasts of future economic conditions. An inflation forecast of 5%, for example, would be associated with much higher uncertainty during the 1970s than during the 1980s or 1990s. Even though such uncertainty does not enter the model directly, we achieve efficiency in estimation by employing the standardized prediction errors for inflation and GDP gap as bias correction terms in the second-step regression. As argued by Orphanides (2001), estimating a forward-looking monetary policy rule using ex post data, which were not available at the time the policy was made, may distort the picture of the historical conduct of monetary policy. However, the use of real-time data, as in Orphanides (2004) or Boivin (2006), also has its drawbacks.
For example, if the real-time forecasts are not made under the assumption that the nominal federal funds rate will remain constant within the forecasting horizon, they induce an endogeneity problem in the monetary policy rule equation. The main focus of this paper is not to weigh the advantages and disadvantages of ex post data against those of real-time data. Rather, this paper focuses on handling the endogeneity issue that results from the use of ex post data, within the framework of a time-varying response of the Fed to future economic conditions. Incorporating the changing degree of uncertainty about future economic conditions in the estimation of the monetary policy rule is an additional important issue.
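The two-step logic described above can be illustrated with a stripped-down, constant-parameter sketch: a first-stage regression of the forward-looking regressors on instruments, whose standardized prediction errors then enter the second-stage policy-rule regression as bias-correction (control-function) terms. This is only a schematic of the idea under simplifying assumptions; the paper's actual estimator additionally handles time-varying parameters, interest rate smoothing, and heteroscedasticity, and the function and variable names here are illustrative.

```python
# Illustrative constant-parameter sketch of a Heckman-type (1976) two-step
# control-function estimator, not the paper's actual TVP estimator.
import numpy as np


def heckman_two_step(y, X_endog, Z):
    """Step 1: project the endogenous regressors X_endog (e.g. expected
    inflation and GDP gap) on instruments Z by OLS and form standardized
    prediction errors. Step 2: regress the policy rate y on a constant,
    the first-stage fitted values, and the standardized prediction errors
    as bias-correction terms. Returns the second-step OLS coefficients."""
    # Step 1: first-stage OLS, fitted values, and standardized residuals.
    B, *_ = np.linalg.lstsq(Z, X_endog, rcond=None)
    fitted = Z @ B
    resid = X_endog - fitted
    std_resid = resid / resid.std(axis=0, ddof=1)  # standardized prediction errors
    # Step 2: include the standardized errors as bias-correction regressors.
    W = np.column_stack([np.ones(len(y)), fitted, std_resid])
    coef, *_ = np.linalg.lstsq(W, y, rcond=None)
    return coef
```

Because the first-stage residuals are included as regressors, the second-step coefficients on the fitted values are purged of the endogeneity bias that a naive OLS regression of the rate on ex post realizations would suffer.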
Conclusion
This paper provides efficient estimation of a forward-looking monetary policy rule with the Fed's time-varying responses to expected future macroeconomic conditions. Unlike the existing literature, we econometrically take into account the changing nature of uncertainty associated with the Fed's forecasts of future economic conditions, as a byproduct of applying the Heckman-type (1976) two-step procedure to deal with the endogeneity problem in the regressors of the model. Heteroscedasticity in the disturbance terms of the policy rule equation is also explicitly taken into account. Our empirical results also reveal some new aspects not found in the existing literature. Focusing on the response of the Fed to expected future inflation and GDP gap, the whole sample can be divided into three subperiods: the 1970s, the 1980s, and the 1990s. Notice that the usual practice is to divide the whole sample into two: the pre-Volcker (pre-1979) period and the Volcker–Greenspan (post-1979) period. However, dividing the sample in this way could misrepresent the empirical assessment of the Fed's historical conduct of monetary policy. The latter half of the 1970s was a period during which the Fed mainly focused on the stabilization of real economic activity. This policy, combined with the misperception of potential GDP, could have destabilized the economy during the 1970s. During the 1980s, however, the Fed mainly focused on stabilizing inflation: the probability that the response of the federal funds rate to inflation is greater than one remained close to 1, even though it decreased somewhat in the 1990s. Furthermore, during the 1980s, the Fed's response to the GDP gap decreased considerably. This policy might have stabilized inflation at a lower level. Once inflation had been stabilized at a lower level, the Fed could pay more attention to stabilizing real economic activity from the early 1990s onward.
This is why the Fed's response to the GDP gap was higher than ever, and statistically different from zero, during most of the 1990s. One potential drawback of our approach is that the use of ex post data, which were not available at the time of policy making, could distort the empirical results on the historical conduct of the Fed's policy. Thus, it would be worthwhile to investigate how the results in this paper would change if our approach were modified to handle real-time data. We leave this as a future research topic.