Download English ISI Article No. 24992
Persian Translation of the Article Title

Robust monetary policy with competing reference models

English Title
Robust monetary policy with competing reference models
Article Code: 24992
Publication Year: 2003
Number of Pages: 31 pages (PDF)
Source

Publisher: Elsevier - Science Direct

Journal: Journal of Monetary Economics, Volume 50, Issue 5, July 2003, Pages 945–975

Translated Keywords
Model uncertainty - Robust control - Optimal control - Bayesian control
English Keywords
Model uncertainty, Robust control, Optimal control, Bayesian control
Article Preview

English Abstract

The existing literature on robust monetary policy rules has largely focused on the case in which the policymaker has a single reference model while the true economy lies within a specified neighborhood of the reference model. In this paper, we show that such rules may perform very poorly in the more general case in which non-nested models represent competing perspectives about controversial issues such as expectations formation and inflation persistence. Using Bayesian and minimax strategies, we then consider whether any simple rule can provide robust performance across such divergent representations of the economy. We find that a robust outcome is attainable only in cases where the objective function places substantial weight on stabilizing both output and inflation; in contrast, we are unable to find a robust policy rule when the sole policy objective is to stabilize inflation. We analyze these results using a new diagnostic approach, namely, by quantifying the fault tolerance of each model economy with respect to deviations from optimal policy.

English Introduction

Most studies of the problem of formulating monetary policy under uncertainty about the true structure of the economy have followed Brainard (1967) in focusing on the case in which the policymaker has a single reference model and the true economy lies within a specified neighborhood of this model. In recent work, for example, Hansen and Sargent (2002) provide a rigorous treatment of robust control in the face of uncertainty about the data-generating process, or DGP, of the exogenous disturbances. Giannoni (2001, 2002) characterizes rules that are robust to uncertainty about the estimated parameters, while Onatski and Stock (2002) and Onatski and Williams (2002) analyze the robustness of simple rules when the behavioral equations of the model are subject to misspecification errors; these papers also consider uncertainty about the shock process. Finally, Svensson (1997) and Giannoni and Woodford (2003) have emphasized that the optimal targeting rule for a given model has a representation that is invariant to known changes in the shock process and contend that this is the primary sense in which a proposed rule should be robust.1

In this paper, we analyze the robustness of policy rules when non-nested models represent competing perspectives about controversial issues such as expectations formation and inflation persistence.2 Such an approach was initially advocated by McCallum (1988) and seems consistent with the aims of Taylor (1993a), whose simple policy rule was intended to yield reasonable macroeconomic stability under a wide range of assumptions about the “true” structure of the economy.3 One interpretation of this approach, suggested by Patrick Minford, is related to the decision-making of a policymaking committee. Each member of the committee holds to a particular view of the behavior of the economy, represented by a macro model. A robust rule is one that, although not exactly optimal for any member of the committee, yields outcomes that are acceptable to all members of the committee. A nonrobust rule, in contrast, is one that performs very poorly in at least one of the committee members’ models and thus interferes with the building of a consensus view of policy.

We consider three distinct macroeconomic models, two of which have been scrutinized in the robust control literature. First is a benchmark version of the New Keynesian model (henceforth denoted the NKB, for New Keynesian Benchmark), which has been studied by Hansen and Sargent (2002), Giannoni (2001, 2002), and Giannoni and Woodford (2002b); this model has purely forward-looking specifications for price setting and aggregate demand and exhibits no intrinsic persistence.4 In contrast, the macroeconometric model of Rudebusch and Svensson (1999) has purely backward-looking structural equations and very high intrinsic persistence; this model (henceforth denoted as the RS model) served as the benchmark in the analysis of Onatski and Stock (2002) and Onatski and Williams (2002). Our third model—taken from Fuhrer (2000) and denoted as the FHP, for Fuhrer-habit-persistence, model—utilizes rational expectations but exhibits substantial intrinsic persistence of aggregate spending and inflation. In all three models, the short-term nominal interest rate is assumed to be the monetary policy instrument. Throughout the analysis, we assume that the policymaker's objective is to minimize a weighted sum of the unconditional variances of the inflation rate, the output gap, and the change in the short-term nominal interest rate.
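As a point of reference, the objective described above can be written as a weighted sum of unconditional variances. The rendering below is schematic; the weight symbols λ_y and λ_Δi are placeholders rather than the paper's own notation.

```latex
% Schematic policymaker loss: weighted sum of the unconditional variances of
% inflation, the output gap, and the first difference of the policy rate.
% The weights \lambda_y and \lambda_{\Delta i} are placeholder symbols, not
% necessarily the notation used in the paper.
\mathcal{L} = \operatorname{Var}(\pi_t) + \lambda_y \operatorname{Var}(y_t)
            + \lambda_{\Delta i} \operatorname{Var}(i_t - i_{t-1})
```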
We begin by demonstrating that the robust control rules proposed in the literature are not necessarily very robust to model uncertainty; that is, a rule obtained from a given reference model may perform very poorly in other models. This potential pitfall of robust control was anticipated by Sargent (1999), who noted that the perturbations of the exogenous shock process only comprise a fairly restrictive set of potential model misspecifications, because the perturbed shocks still feed through the system just as in the reference model. Thus, while the approach of Giannoni and Woodford (2002a) yields a policy rule which is invariant to the characteristics of the shock process, the optimal control rule does embed the structure of endogenous relationships of the reference model, and hence we find that such rules may generate poor or even disastrous outcomes when implemented in another model with markedly different endogenous relationships. More generally, our results suggest that focusing on specification errors or parameter uncertainty in the neighborhood of a particular reference model may dramatically understate the true degree of model uncertainty.5 For example, Giannoni (2001) quantifies the parameter uncertainty of the NKB model by using the estimated standard errors of Amato and Laubach (2003), and obtains rules that involve a very high degree of interest rate smoothing. Unfortunately, we find that such “super-inertial” rules typically yield very poor performance in the presence of substantial intrinsic persistence (as in FHP) and generate dynamic instability under the assumption of adaptive expectations (as in RS).6 Evidently, the degree of uncertainty due to sampling variation is relatively small in comparison with the uncertainty associated with various choices about model specification, estimation technique, etc.

Next, using Bayesian and minimax methods, we investigate the extent to which simple policy rules can provide robust performance across all three competing reference models. In particular, we focus on the class of 3-parameter rules in which the short-term nominal interest rate is adjusted in response to its own lagged value as well as to the current output gap and inflation rate. For a given choice of objective function weights, we determine the policy parameters that minimize the average loss across the three models (the Bayesian strategy with flat prior beliefs about the accuracy of the three models), and then we determine the parameters that minimize the maximum loss across the three models (the minimax strategy). Using a similar approach, Levin et al. (1999) have shown that first-difference rules—that is, rules with a coefficient of unity on the lagged interest rate—provide robust performance across a fairly wide range of rational expectations models.7 However, Sargent (1999) has noted that those “comforting” results might primarily reflect the relative proximity of the models, and in fact, Rudebusch and Svensson (1999) find that first-difference rules (and super-inertial rules) typically generate dynamic instability in the RS model. Thus, as Taylor (1999) concludes, the remaining challenge has been to identify rules that yield robust performance in both forward- and backward-looking models. We find that simple rules incorporating a moderate degree of interest rate smoothing yield remarkably robust performance in all three reference models as long as the loss function places nontrivial weight on stabilizing both output and inflation.
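To make the Bayesian and minimax selection criteria concrete, the following sketch illustrates how the coefficients of a 3-parameter rule could be chosen by minimizing either the flat-prior average loss or the worst-case loss across competing models. It is not the paper's code: the three loss functions are hypothetical placeholders standing in for the unconditional-variance losses implied by the NKB, RS, and FHP models, and the grid search stands in for whatever numerical optimizer the authors used.

```python
# Illustrative sketch (not the paper's code): choosing the coefficients of a
# 3-parameter rule  i_t = rho*i_{t-1} + alpha*pi_t + beta*y_t  by the Bayesian
# (flat-prior average loss) and minimax criteria across competing models.
# The loss functions below are hypothetical placeholders; in the paper each
# would be the variance-based loss implied by the NKB, RS, and FHP models.
import itertools

def loss_nkb(rho, alpha, beta):   # placeholder for the NKB model's loss
    return (rho - 1.0)**2 + (alpha - 1.5)**2 + (beta - 0.5)**2

def loss_rs(rho, alpha, beta):    # placeholder for the RS model's loss
    return 5.0 * rho**2 + (alpha - 2.0)**2 + (beta - 1.0)**2

def loss_fhp(rho, alpha, beta):   # placeholder for the FHP model's loss
    return (rho - 0.5)**2 + (alpha - 1.0)**2 + (beta - 0.8)**2

MODELS = (loss_nkb, loss_rs, loss_fhp)

def grid():
    """Coarse grid over (rho, alpha, beta) on [0, 2]^3."""
    steps = [i / 20 for i in range(0, 41)]
    return itertools.product(steps, steps, steps)

def bayesian_rule():
    """Rule minimizing the average loss across models (flat prior)."""
    return min(grid(), key=lambda p: sum(m(*p) for m in MODELS) / len(MODELS))

def minimax_rule():
    """Rule minimizing the worst-case loss across models."""
    return min(grid(), key=lambda p: max(m(*p) for m in MODELS))

if __name__ == "__main__":
    print("Bayesian (flat prior):", bayesian_rule())
    print("Minimax:              ", minimax_rule())
```

The two criteria differ only in how the model-specific losses are aggregated before minimization: an average under the Bayesian strategy, a maximum under the minimax strategy.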
In contrast, under strict inflation targeting, there is no simple rule that yields robust performance across all three models. Finally, we interpret these results using a new diagnostic approach, namely, by analyzing the fault tolerance of each model economy with respect to deviations from optimal policy (sketched schematically after the outline below). For example, when the loss function assigns substantial weight to both output and inflation volatility, we find that the NKB model exhibits a very high degree of fault tolerance: although the optimal rule for this model is super-inertial, the use of a rule with moderate inertia does not cause a severe deterioration in stabilization performance. The RS model exhibits much less fault tolerance; that is, the loss function has much greater curvature, especially with respect to deviations in the interest rate smoothing parameter (which has an optimal value close to zero). Nevertheless, while super-inertial rules generate dynamic instability in this model, rules with moderate policy inertia perform nearly as well as the optimal rule.

The remainder of this paper proceeds as follows. Section 2 describes the key properties of the three competing models. Section 3 documents the lack of robustness of rules designed to work well in the neighborhood of a specific model. Section 4 describes the performance of simple rules obtained by applying Bayesian and minimax methods to the set of competing models. Section 5 defines measures of fault tolerance and then uses these tools to interpret our results. Section 6 extends the analysis to incorporate a number of other macroeconomic models. Finally, Section 7 summarizes our conclusions and considers directions for further research.
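The fault-tolerance diagnostic referenced above can be rendered schematically as the loss deterioration in a given model when one parameter of the policy rule is moved away from that model's optimum; the notation below is illustrative rather than the paper's own.

```latex
% Schematic fault-tolerance measure for model j (illustrative notation):
% the increase in model j's loss when rule parameter \theta_k is perturbed
% away from the rule \theta_j^{*} that is optimal for that model, holding
% the remaining parameters at their optimal values.
FT_j(\theta_k) = \mathcal{L}_j\bigl(\theta_{j,1}^{*}, \ldots, \theta_k, \ldots, \theta_{j,K}^{*}\bigr)
               - \mathcal{L}_j\bigl(\theta_j^{*}\bigr)
```

A model is highly fault tolerant when this quantity remains small over a wide range of the perturbed parameter, and intolerant when the loss rises steeply or the model becomes dynamically unstable.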

English Conclusion

Although an extensive literature has considered the problem of obtaining a policy rule that is robust to modifications of a specific reference model, our analysis indicates that the robustness of such rules may be somewhat illusory, because policymakers actually face a much greater degree of model uncertainty. Thus, a more promising approach is to consider a range of distinct reference models, and to identify rules that provide robust performance across these models. Our results also highlight the advantages of considering the “fault tolerance” of each competing reference model as a means of characterizing and interpreting the conditions under which a robust policy outcome is attainable.

The main finding from our model-based analysis is positive: it is possible to find policy rules that perform very well in a wide range of macro models as long as the policymaker cares about both inflation and output variability. Or, put differently, the members of a policymaking committee who share similar preferences for stabilizing fluctuations in inflation, output, and interest rates, but who have quite different views of the dynamic behavior of the economy, can relatively easily come to a mutually acceptable compromise over the design of monetary policy. Only in the case where policymakers are indifferent to fluctuations in output do the models lack fault tolerance, and as a result finding a mutually agreeable policy becomes problematic.

In future research, it will be useful to extend this approach in several directions. Throughout our analysis, we have assumed that the policymaker observes all macroeconomic variables, including latent variables such as the natural rates of output, unemployment, and interest, without error. But, as Staiger et al. (1997), Orphanides and van Norden (2002), Laubach and Williams (2003), and others have documented, natural rates tend to be poorly measured, especially in real time. A natural extension of this paper would be to incorporate natural rate mismeasurement into the analysis and to derive policy rules that are robust to both model and natural rate uncertainties.18 We have also assumed that the parameters of the reference models are known with certainty. It is relatively straightforward to extend this approach to allow for parameter uncertainty in computing the losses in each model, using either Bayesian or robust control approaches.19 We have also assumed that the policymakers’ objective function, in terms of the variability of output, inflation, and interest rates, is known and invariant across the models. In the context of models with well-specified household optimization problems, the welfare maximization problem can be approximated by the loss function used in this paper. The relative weights in the loss function, however, depend on the structure and parameters of the particular model. Levin and Williams (2003) examine the link between parameter uncertainty and uncertainty about the weights and structure of the objective function, and the implications of these cross-equation restrictions for monetary policy. Finally, we have assumed that policymakers never update their beliefs about the relevance of the competing reference models. An open question is the design of robust policy when the policymaker gradually obtains additional knowledge about the true structure of the economy.