Download ISI English Article No. 26234
Article title

Monetary policy under model and data-parameter uncertainty
Article code: 26234 | Publication year: 2007 | Length: 19 pages (PDF)
Source

Publisher: Elsevier - Science Direct

Journal: Journal of Monetary Economics, Volume 54, Issue 7, October 2007, Pages 2083–2101

Keywords

Taylor rule, Uncertainty, Non-reduction of two-stage lotteries

English abstract

Empirical Taylor rules are much less aggressive than those derived from optimization-based models. This paper analyzes whether accounting for uncertainty across competing models and (or) real-time data considerations can explain this discrepancy. It considers a central bank that chooses a Taylor rule in a framework that allows for an aversion to the second-order risk associated with facing multiple models and measurement-error configurations. The paper finds that if the central bank cares strongly enough about stabilizing the output gap, this aversion leads to significant declines in the coefficients of the Taylor rule even if the central bank's loss function assigns little weight to reducing interest rate variability. Furthermore, a small degree of aversion can generate an optimal rule that matches the empirical Taylor rule.
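For reference, a minimal sketch of the objects discussed above, using standard forms from this literature rather than the paper's exact specification: the Taylor rule and the central bank's loss function are typically written as

\[ i_t = g_\pi \pi_t + g_y y_t, \qquad \mathcal{L} = \mathrm{Var}(\pi_t) + \lambda\, \mathrm{Var}(y_t) + \nu\, \mathrm{Var}(\Delta i_t), \]

where \(i_t\) is the nominal interest rate, \(\pi_t\) inflation, \(y_t\) the output gap, and \(\lambda\) and \(\nu\) the weights on output-gap stabilization and on the variability of interest-rate changes. "Less aggressive" rules are rules with smaller response coefficients \(g_\pi\) and \(g_y\); the notation here is illustrative only.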

English introduction

Since Taylor (1993), the monetary policy literature has increasingly focused on characterizing desirable monetary policy in terms of simple interest rate rules. Typically, given some structural model linking inflation, output and the interest rate, the central bank sets policy according to an interest rate feedback rule so as to minimize a weighted average of inflation, output and interest rate variability. The result of that exercise is often puzzling, however: unless the central bank assigns an implausibly high weight to controlling the variability of changes to the interest rate, the parameters of optimal policy rules call for much stronger responses to inflation and output than those estimated from historical data (Rudebusch, 2001). What can explain this apparent reluctance of policy-makers to act aggressively?

One important branch of this literature argues that attenuated policy is the result of policy-makers facing uncertainty, whether about the model parameters or about the data. Since Brainard (1967), it has generally been accepted that parameter uncertainty can lead to less aggressive policy. However, most studies that formally incorporate parameter uncertainty into the central bank's decision process find that it has a negligible effect on policy (Estrella and Mishkin, 1999; Peersman and Smets, 1999). Similarly, although considerable differences can exist between real-time and final estimates of inflation and the output gap (Orphanides, 2001), various authors have found that sensible degrees of measurement error do not lead to a high enough attenuation of the policy rule parameters (Rudebusch, 2001).

The papers cited above assume no model uncertainty: although the central bank is unsure about the model parameters or fears data uncertainty, it is confident that the structure of the model is the right one for policy-making. A related strand of the literature examines whether a direct concern for model uncertainty can help to explain why policy-makers may prefer less aggressive policy. Much of that literature assumes that policy-makers have one good reference model for setting policy but are concerned about uncertain possible deviations from it. They therefore use a robust control approach (Hansen and Sargent, 2004; Onatski and Stock, 2002) to design policy rules that resist deviations from their particular reference model. But what if the central bank is uncertain between competing reference models of the economy (Levin and Williams, 2003)?

This paper considers a central bank that finds various models of the economy plausible. The problem of the central bank is to choose a Taylor rule that performs reasonably well given its difficulty in choosing among the competing models. How can the central bank choose such a rule? The literature proposes two main approaches: the central bank can take a worst-case approach if it values robustness, or it can use a Bayesian criterion (Brock et al., 2003) if it values good average performance. In this paper, the central bank achieves a trade-off between average performance and robustness by using a decision-making framework that exhibits the non-reduction of two-stage lotteries (Segal, 1990; Klibanoff et al., 2005; Ergin and Gul, 2004).
What this means in my context is that, when dealing with model uncertainty, the central bank distinguishes between two distinct kinds of risk: a first-order risk (or within-model risk), which arises given a particular model and its stochastic properties, and a second-order risk (or across-model risk), which is associated with the multiplicity of models. It is the central bank's attitude towards the across-model risk (i.e., its degree of aversion to the across-model risk) that determines the extent to which it wants to trade off average performance for robustness. Indeed, the framework nests both the Bayesian and the worst-case approach as special cases: the worst-case approach is the limiting case in which the central bank's degree of aversion to the across-model risk tends to positive infinity, while the Bayesian approach is the special case in which the degree of aversion is zero. A positive and finite degree of aversion to the across-model risk therefore implies a trade-off between average performance and robustness.

Specifically, the policy problem I analyze in this paper considers a central bank that views the models of Fuhrer and Moore (1995) and Rudebusch and Svensson (1999), together with the version of the New Keynesian model in Woodford (1999) and Giannoni (2000) (see also Clarida et al., 1996; Goodfriend and King, 1997; McCallum and Nelson, 1999), as three plausible models of the economy. My objective is to analyze how the chosen rule changes as I vary the central bank's degree of aversion to the across-model risk. I find that policy becomes less aggressive as I increase the degree of aversion to the across-model risk. Moreover, if the central bank cares strongly enough about stabilizing the output gap, this aversion generates important declines in the coefficients of the Taylor rule even when the central bank's loss function gives little weight to reducing interest rate variability.

I then extend the policy problem to one in which the central bank is not only uncertain about the three competing models but also considers data uncertainty to be important. The central bank accounts for data uncertainty by modeling the measurement-error processes for the output gap and inflation, while also recognizing that it is uncertain about the parameters of those processes (henceforth referred to as data-parameter uncertainty). The decision-making framework is extended to incorporate a second-order risk when evaluating policy across models and parameter configurations. I find that, in the presence of data-parameter uncertainty as defined above, an increase in aversion to model and data-parameter uncertainty can generate an optimal Taylor rule that matches the empirically observed Taylor rule.

I interpret the economic significance of the degree of aversion to model and data-parameter uncertainty by relating it to the proportional premium that the central bank would pay to be indifferent between facing model and data-parameter uncertainty and achieving the average loss of the models and data-parameter configurations for sure. I find that a small degree of aversion is enough to generate an optimal Taylor rule that matches the empirical Taylor rule.

The rest of this paper is organized as follows. Section 2 describes a general framework for handling model and parameter uncertainty. Section 3 analyzes the monetary policy problem under model uncertainty. Section 4 expands the policy problem to analyze concerns about data uncertainty and data-parameter uncertainty. Section 5 provides an interpretation of the economic significance of the aversion parameter, and Section 6 concludes.
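A minimal sketch of the decision criterion described above, in illustrative notation that follows the spirit of Klibanoff et al. (2005) rather than the paper's exact formulation: suppose the central bank entertains models (or model and data-parameter configurations) \(m = 1, \ldots, M\) with weights \(p_m\), and a candidate Taylor rule \(g\) implies an expected loss \(\mathcal{L}_m(g)\) within configuration \(m\). Aversion to the second-order (across-model) risk can be captured by choosing \(g\) to minimize

\[ \sum_{m=1}^{M} p_m\, \phi\bigl(\mathcal{L}_m(g)\bigr), \qquad \phi \text{ increasing and convex, e.g. } \phi(x) = e^{\theta x}, \ \theta \ge 0. \]

With \(\theta = 0\) (a linear \(\phi\)) this reduces to the Bayesian average \(\sum_m p_m \mathcal{L}_m(g)\); as \(\theta \to \infty\) it approaches the worst-case criterion \(\max_m \mathcal{L}_m(g)\); a positive, finite \(\theta\) trades off average performance against robustness, which is the case emphasized in the paper.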

English conclusion

The first part of this paper considered a central bank that faces model uncertainty because it finds three non-nested models of the economy to be plausible: Woodford and Giannoni's forward-looking model (WG), Rudebusch and Svensson's empirical backward-looking model (RS), and Fuhrer and Moore's contracting model (FM). The central bank accounts for its model uncertainty in a framework that exhibits the non-reduction of two-stage lotteries developed by Segal (1990), Klibanoff et al. (2005), and Ergin and Gul (2004). For the purposes of this paper, this means that the central bank treats model risk as a risk distinct from the first-order risk that arises from the stochastic properties of a particular model. Similar to Klibanoff et al. (2005), I interpret the central bank's attitude to this second-order risk as its attitude towards model uncertainty. The central bank's policy problem is to choose a Taylor rule that works reasonably well in all models given its degree of aversion to model uncertainty.

I have attempted to answer the question: does model-uncertainty aversion make policy more or less aggressive? Given my model set, I find that an aversion to model uncertainty indeed makes policy less aggressive. If, in addition, the central bank assigns a higher weight to output stabilization relative to interest rate variability control, an aversion to model uncertainty generates quantitatively important declines in the coefficients of the Taylor rule. This occurs even if the central bank assigns little weight to controlling the variability of changes to the interest rate in its loss function. This result is interesting because many authors have argued that one of the reasons a central bank prefers less aggressive policy responses is that it assigns a high weight to interest rate variability control in its loss function. In contrast, I find that, in the presence of model uncertainty, this is not necessary. Model uncertainty can still lead to less aggressive policy when the central bank cares strongly about controlling interest rate variability relative to output stabilization, but the declines are quantitatively less important. Even though an aversion to model uncertainty can generate quite important declines in the coefficients of the Taylor rule, for the various central bank preferences that I considered it still leads to optimal Taylor rules that are more aggressive (especially in the response to inflation) than empirically observed Taylor rules.

The second part of this paper considered a central bank that faces not only model uncertainty but also noisy data. I found that, when the central bank faces both model uncertainty and uncertainty about the parameters of the measurement-error processes of inflation and the output gap, an increase in the degree of aversion leads to an optimal Taylor rule that matches empirically estimated Taylor rules.

Finally, I interpreted the economic significance of the degree of model-uncertainty aversion by relating it to the proportional premium that the central bank would pay to be indifferent between facing model uncertainty and achieving the average loss of the models for sure. I found that a small degree of aversion is enough to generate an optimal Taylor rule that matches the empirical Taylor rule; this degree of aversion is small in the sense that it corresponds to a small premium.
This result suggests that it is economically sensible to claim that an aversion to model and data-parameter uncertainty is at least part of the reason why the central bank may, in practice, prefer more attenuated Taylor rules.
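As a closing illustration of the premium interpretation (again in hypothetical notation, not necessarily the paper's exact definition): with configuration weights \(p_m\), losses \(\mathcal{L}_m\), and an aversion function \(\phi\) as sketched after the introduction, the proportional premium \(k\) solves

\[ \phi^{-1}\!\Bigl( \sum_m p_m\, \phi(\mathcal{L}_m) \Bigr) = (1 + k) \sum_m p_m\, \mathcal{L}_m, \]

i.e., the central bank is indifferent between facing the uncertainty across configurations and accepting the average loss inflated by the fraction \(k\) for sure. The paper's claim is that a \(k\) that is small in this sense already delivers Taylor-rule coefficients in line with empirical estimates.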