Download ISI English Article No. 27455
Persian Translation of the Article Title

A Bayesian approach to optimal monetary policy with parameter and model uncertainty

English Title
A Bayesian approach to optimal monetary policy with parameter and model uncertainty
Article Code: 27455
Publication Year: 2011
English Article Length: 27-page PDF
Source

Publisher : Elsevier - Science Direct (الزویر - ساینس دایرکت)

Journal : Journal of Economic Dynamics and Control, Volume 35, Issue 12, December 2011, Pages 2186–2212

Persian Translation of Keywords
Monetary policy, Bayesian analysis, statistical decision theory, quantitative policy modeling
English Keywords
Monetary policy, Bayesian analysis, Statistical decision theory, Quantitative policy modeling
Article Preview
A Bayesian approach to optimal monetary policy with parameter and model uncertainty

English Abstract

This paper undertakes a Bayesian analysis of optimal monetary policy for the U.K. We estimate a suite of monetary-policy models that include both forward- and backward-looking representations as well as large- and small-scale models. We find an optimal simple Taylor-type rule that accounts for both model and parameter uncertainty. For the most part, backward-looking models are highly fault tolerant with respect to policies optimized for forward-looking representations, while forward-looking models have low fault tolerance with respect to policies optimized for backward-looking representations. In addition, backward-looking models often have lower posterior probabilities than forward-looking models. Bayesian policies therefore have characteristics suitable for inflation and output stabilization in forward-looking models.

English Introduction

Central bankers frequently emphasize the importance of uncertainty in shaping monetary policy (e.g. see Greenspan, 2004 and King, 2004). Uncertainty takes many forms. The central bank must act in anticipation of future conditions, which are affected by shocks that are currently unknown. In addition, because economists have not formed a consensus about the best way to model the monetary transmission mechanism, policy makers must also contemplate alternative theories with distinctive operating characteristics. Finally, even economists who agree on a modeling strategy sometimes disagree about the values of key parameters. Central bankers must therefore also confront parameter uncertainty within macroeconomic models.

A natural way to address these issues is to regard monetary policy as a Bayesian decision problem. As noted by Brock et al. (2003), a Bayesian approach is promising because it seamlessly integrates econometrics and decision theory. Thus, we can use Bayesian econometric methods to assess various sources of uncertainty and incorporate the results as an input to a decision problem. Our aim in this paper is to consider how monetary policy should be conducted in the face of multiple sources of uncertainty, including model and parameter uncertainty as well as uncertainty about future shocks. We apply Bayesian methods root and branch to a suite of macroeconomic models estimated on U.K. data, and we use the results to devise a simple, optimal monetary-policy rule.

1.1. The method in more detail

Just to be clear, we take two shortcuts relative to a complete Bayesian implementation. First, we neglect experimentation. Under model and/or parameter uncertainty, a Bayesian policy maker has an incentive to vary the policy instrument in order to generate information about unknown parameters and model probabilities. In the context of monetary policy, however, a number of recent studies suggest that experimental motives are weak and that 'adaptive optimal policies' (in the language of Svensson and Williams, 2008a) well approximate fully optimal, experimental policies. Because of that, and also because many central bankers are averse to experimentation, our goal is to formulate an optimal non-experimental rule.

We also restrict attention to a simple rule, i.e. one involving a relatively small number of arguments as opposed to the complete state vector. This is for tractability as well as for transparency. For a Bayesian decision problem with multiple models, the fully optimal decision rule would involve the complete state vector of all the models under consideration. That would complicate our calculations a great deal. Some economists also argue that simple rules constitute more useful communication tools. For example, Woodford (1999) writes that "a simple feedback rule would make it easy to describe the central bank's likely future conduct with considerable precision, and verification by the private sector of whether such a rule is actually being followed should be straightforward as well." Thus, we restrict policy to follow Taylor-like rules.

With those simplifications in mind, our goal is to choose the parameters of a Taylor rule to minimize expected posterior loss. Suppose $\phi$ represents the policy-rule parameters and that $l_i(\phi,\theta_i)$ represents expected loss conditional on a particular model $i$ and a calibration of its parameters $\theta_i$. Typically $l_i(\phi,\theta_i)$ is a discounted quadratic loss function that evaluates uncertainty about future shocks.
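To fix notation, a minimal illustration of what $\phi$ and $l_i(\phi,\theta_i)$ might look like is sketched below. The rule arguments and loss weights ($\rho$, $\phi_\pi$, $\phi_y$, $\beta$, $\lambda_y$, $\lambda_{\Delta i}$) are illustrative assumptions rather than the paper's exact specification, although the conclusion does confirm that the loss penalizes inflation, the output gap, and the change in the interest rate.

```latex
% Illustrative forms only -- not the paper's exact specification.
\begin{align}
  i_t &= \rho\, i_{t-1} + (1-\rho)\bigl(\phi_\pi \pi_t + \phi_y y_t\bigr),
      \qquad \phi = (\rho,\ \phi_\pi,\ \phi_y), \\
  l_i(\phi,\theta_i) &= \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t
      \bigl(\pi_t^2 + \lambda_y\, y_t^2 + \lambda_{\Delta i}\, (\Delta i_t)^2\bigr).
\end{align}
```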
One common approach in the literature is to choose $\phi$ to minimize $l_i(\phi,\theta_i)$. This delivers a simple optimal rule for a particular model and calibration, but it neglects parameter and model uncertainty. To incorporate parameter uncertainty within model $i$, we must first assess how much uncertainty there is. This can be done by simulating the model's posterior distribution, $p(\theta_i \mid Y, M_i)$, where $M_i$ indexes model $i$ and $Y$ represents current and past data on variables relevant for that model. Methods for Bayesian estimation of DSGE models were pioneered by Schorfheide (2000) and Smets and Wouters (2003) and are reviewed by An and Schorfheide (2007). If model $i$ were the only model under consideration, expected loss would be

(1)  $l_i(\phi) = \int l_i(\phi,\theta_i)\, p(\theta_i \mid Y, M_i)\, d\theta_i.$

This integral might seem daunting, but it can be approximated by averaging across draws from the posterior simulation. Assuming evenly weighted draws from the posterior, expected loss is

(2)  $l_i(\phi) \approx N^{-1} \sum_{j=1}^{N} l_i(\phi,\theta_i^j),$

where $N$ represents the number of Monte Carlo draws and $\theta_i^j$ is the $j$th draw for model $i$. A policy rule robust to parameter uncertainty within model $i$ can be found by choosing $\phi$ to minimize $l_i(\phi)$.

This is a step forward, but it still neglects model uncertainty. To incorporate multiple models, we attach probabilities to each and weigh their implications in accordance with those probabilities. Posterior model probabilities depend on prior beliefs and on each model's fit to the data. Suppose that $p(M_i)$ is the policy maker's prior probability on model $i$, that $p(\theta_i \mid M_i)$ summarizes his prior beliefs about the parameters of that model, and that $p(Y \mid \theta_i, M_i)$ is the model's likelihood function. According to Bayes' theorem, the posterior model probability is

(3)  $p(M_i \mid Y) \propto p(Y \mid M_i)\, p(M_i),$

where

(4)  $p(Y \mid M_i) = \int p(Y \mid \theta_i, M_i)\, p(\theta_i \mid M_i)\, d\theta_i$

is the marginal likelihood or marginal data density. The latter can also be approximated numerically using output of the posterior simulation; see An and Schorfheide (2007) for details. To account for model uncertainty, we average $l_i(\phi)$ across models using posterior model probabilities as weights,

(5)  $l(\phi) = \sum_{i=1}^{m} l_i(\phi)\, p(M_i \mid Y).$

A policy rule robust to both model and parameter uncertainty can be found by choosing $\phi$ to minimize $l(\phi)$. This decision problem might seem complicated, but because the problem is modular it can be solved numerically without much trouble. The main simplification follows from the fact that the econometrics can be done separately for each model and also separately from the decision problem.
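The modularity just described can be made concrete with a short sketch. The code below is a minimal, hypothetical illustration of Eqs. (2) and (5): it averages a loss over posterior parameter draws within each model, weights the per-model averages by posterior model probabilities, and minimizes the result over the rule coefficients. The model names, draw counts, probabilities, and especially the toy stand-in for $l_i(\phi,\theta_i)$ (which in the paper requires solving each model under the candidate rule) are placeholders, not anything taken from the paper.

```python
# Minimal sketch of the modular Bayesian policy calculation in Eqs. (1)-(5).
# Only the averaging structure follows the text; all inputs are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Posterior parameter draws theta_i^j for each model (placeholders; in practice
# these come from the separate per-model MCMC estimation step).
posterior_draws = {
    "backward_looking": rng.normal([0.3, 0.5], 0.05, size=(500, 2)),
    "forward_looking":  rng.normal([0.7, 0.2], 0.05, size=(500, 2)),
}
# Posterior model probabilities p(M_i | Y), proportional to marginal likelihood
# times prior (Eqs. 3-4); here simply assumed numbers.
model_probs = {"backward_looking": 0.4, "forward_looking": 0.6}

def loss_given_model(phi, theta):
    """Stand-in for l_i(phi, theta_i): in the paper this is a discounted quadratic
    loss obtained by solving model i under the Taylor-type rule phi. A toy
    quadratic in (phi - theta) is used here purely so the script runs end to end."""
    return float(np.sum((phi - theta) ** 2))

def expected_loss_model(phi, draws):
    """Eq. (2): Monte Carlo average of the loss over posterior parameter draws."""
    return np.mean([loss_given_model(phi, theta) for theta in draws])

def bayesian_expected_loss(phi):
    """Eq. (5): probability-weighted average of per-model expected losses."""
    return sum(model_probs[m] * expected_loss_model(phi, posterior_draws[m])
               for m in posterior_draws)

# Choose the rule coefficients phi to minimize expected posterior loss.
result = minimize(bayesian_expected_loss, x0=np.array([1.5, 0.5]),
                  method="Nelder-Mead")
print("Optimal Bayesian rule coefficients:", result.x)
print("Expected posterior loss:", result.fun)
```

Note how the econometric step (producing draws and model probabilities) is entirely separate from the decision step (the minimization), which is exactly the separation the text emphasizes.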
1.2. Sketch of previous literature

Our work follows and builds on many previous contributions. As mentioned above, one is the body of work estimating dynamic general equilibrium models using Bayesian methods. This literature has exploded in recent years and includes numerous applications to monetary policy. A second closely related literature concerns forecast-model averaging. This research was initiated by Bates and Granger (1969) and is now widely regarded as representing best practice in forecasting. Amongst others, recent contributions to the frequentist literature include Clements and Hendry (1998, 2002) and Newbold and Harvey (2002), while examples of Bayesian forecast averaging include Diebold and Pauly (1987), Jacobson and Karlsson (2004), and Kapetanios et al. (2008). Our work is distinct from this in that we are interested not only in forecasting but also in solving a decision problem. Of course, forecasting is an input to our decision problem, but it is not an end in itself. For that reason, we concern ourselves with structural macroeconomic models.

Another important precursor is Brock et al. (2003, 2007). They also emphasize the importance of accounting for model and parameter uncertainty in policy design, and they describe a variety of Bayesian and frequentist approaches for integrating econometrics and policy design. Our framework follows directly from one of their proposals. They also investigate the robustness of Taylor rules within a class of backward-looking models à la Rudebusch and Svensson (1999). Cogley and Sargent (2005) apply the ideas of Brock et al. to investigate how model uncertainty affected U.S. monetary policy during the Great Inflation. For tractability, Cogley and Sargent adopt two shortcuts, restricting the model set to a trio of very simple Phillips-curve models and neglecting parameter uncertainty within each model. In our application, we expand the model set to include forward-looking new Keynesian models, and we explicitly account for parameter uncertainty. Whereas Cogley and Sargent's (2005) work is a positive exercise, our paper follows Brock et al. (2003, 2007) and concentrates on normative questions.

Other routes to robustness include those of McCallum (1988) and Hansen and Sargent (2007). McCallum pioneered an informal version of model averaging, deprecating policy rules optimized with respect to a single model and advocating rules that work well across a spectrum of models. Much of Taylor's (1999) volume on monetary-policy rules can be read as an application of McCallum's ideas. Recent applications include Levin and Williams (2003) and Levin et al. (2003, 2005). We embrace McCallum's approach and extend it by providing Bayesian underpinnings. We want to forge a tighter link between this literature and the literature on Bayesian estimation of DSGE models. Our hope is that a more formal assessment of uncertainties will pay off in policy design.

Hansen and Sargent (2007) develop yet another approach to model uncertainty. They specify a single, explicit benchmark model, surround it with an uncountable cloud of alternative models whose entropy relative to the benchmark model is bounded, and find an optimal rule by solving a minimax problem over that set of models. In contrast, we work with a small number of explicit models and assume that policy makers entertain no other possibilities. Our approach no doubt understates the true degree of model uncertainty by excluding a priori a large number of potential alternatives. Despite this shortcoming, we think the Bayesian approach is useful because it is more explicit about the relative probabilities of models within the suite.

1.3. Outline

The paper is organized as follows. Section 2 describes our suite of models, emphasizing their distinctive characteristics and features of the posterior that are most salient for monetary policy. Section 3 reports posterior model weights, and Section 4 presents our main results.
There we describe an optimal Taylor rule and illustrate how it works in the various submodels.

English Conclusion

This paper executes a Bayesian analysis of optimal monetary policy for the U.K. Our method takes into account model and parameter uncertainty as well as uncertainty about future shocks and outcomes. We examine a suite of models that have received a lot of attention in the monetary-policy literature, including versions of the Rudebusch–Svensson (1999) model, the Smets–Wouters (2007) model, the Bernanke et al. (1999) model, and the small-open-economy model of Gali and Monacelli (2005). We estimate each model using Bayesian methods and calculate posterior model probabilities. Then we compute the coefficients of a simple rule that minimizes expected losses, where expectations incorporate uncertainty about shocks, parameters, and models, and where losses are defined as a weighted sum of the unconditional variances of inflation, the output gap, and the change in the interest rate. Since our methods are modular, adding new models to the suite is straightforward. Indeed, because of its modular nature, it would be possible to extend this research through a network of decentralized modeling groups.

Several conclusions emerge from our analysis. First, the rule that is optimal within each model differs substantially across models. Our best estimates of the RS model suggest there is little intrinsic inflation inertia. Since that model is backward looking and shocks dissipate quickly on their own, the optimal RS rule is passive and seeks mainly to minimize interest-rate volatility. Indeed, for two versions of the RS model, the model-specific optimal policy approximates a pure nominal-interest peg. At the other end of the spectrum, the policy optimal for the BGG model is approximately equivalent to an inflation-only Taylor rule. Our estimates of the BGG model find little evidence of inflation inertia. Because this is a forward-looking model, the optimal BGG rule responds very aggressively to deviations of inflation from its target, with little response to other variables. The SW-optimal rule approximates a first-difference rule for the nominal interest rate with high long-run response coefficients on inflation and output. This follows from the fact that the SW model features both sticky prices and sticky wages as well as large and persistent cost-push shocks, thus presenting a more challenging policy tradeoff. Finally, in the small-open-economy model, the central bank can simultaneously stabilize the output gap and producer prices (though not consumer prices). As a result, expected loss is significantly lower than in the SW model. Like the other forward-looking models, the optimal rule calls for a high long-run coefficient on inflation.

Second, the forward-looking models have low fault tolerance with respect to policies designed for the backward-looking models. Those policies either violate the Taylor principle or barely satisfy it, with long-run inflation response coefficients just above 1. Outcomes in the forward-looking models are poor in either case. In contrast, the backward-looking models have high fault tolerance with respect to policies designed for forward-looking models. In this respect, results for the U.K. contrast sharply with those for the U.S. One of the main challenges for the U.S. is to find a rule that works well for both forward- and backward-looking models. Backward-looking models typically imply a high degree of intrinsic inflation persistence when estimated with U.S. data.
Policy rules that succeed in stabilizing inflation in forward-looking models often result in excessive output variability in backward-looking models, while gradualist rules well adapted to a backward-looking environment permit more inflation variability in forward-looking models than one might like. For the U.S., finding a rule well adapted to both environments is difficult. For the U.K., this turns out not to be an issue because backward-looking models estimated with U.K. data for the inflation-targeting period involve little intrinsic persistence. Thus, rules that work well for forward-looking models also work well in our backward-looking models. Hence optimal rules bear a closer resemblance to those for forward-looking models than would be the case for the U.S.

In two of the three suites, the backward-looking model has a low probability weight. Since it is also highly fault tolerant, it has virtually no influence on the optimal Bayesian policy. In those suites, the SW model has a high probability weight, and the optimal Bayesian policy resembles the SW-optimal policy, with a slight hedge in the direction of policies appropriate for the other forward-looking models. Relative to the SW-optimal policy, the Bayesian policy improves outcomes substantially in the other forward-looking models at the cost of a slight deterioration in outcomes in the SW model. In the third suite, the backward-looking model has a probability weight of 0.8, and the forward-looking models collectively have a weight of 0.2. Despite that, the optimal Bayesian policy differs substantially from the policy that is optimal for the backward-looking model, which violates the Taylor principle. Since we assign an infinite loss to indeterminate outcomes, our Bayesian policy maker shies away from the RS-optimal rule, seeking first and foremost a rule that guarantees determinacy in all the models. Within that family, s/he strikes a balance between performance in the various models. The optimal Bayesian policy in this case is a Taylor rule with modest interest smoothing, a long-run inflation response around 1.5, and virtually no reaction to output or output growth.

Our version of the Smets–Wouters model is the only one that has substantial posterior probability in all three suites. It ranks first with a probability weight of 0.8 in suites 1 and 2, and it comes in second with a weight of 0.16 in suite 3. This result is interesting because the Smets–Wouters model is sometimes criticized for being profligately parameterized. Since Bayesian model probabilities reward fit but penalize heavily parameterized models, it was not clear to us going into the project whether our methods would prefer simpler or more complex versions of the new Keynesian model. At least in our examples, improving fit turns out to be more important than maintaining parsimony. Our version of the Smets–Wouters model is more streamlined than the original, and it is possible that streamlining is important for obtaining a high model weight. Whether the model should be streamlined more or less is an interesting open question.