Download ISI English article No. 24594
Article title

Robust monetary policy with misspecified models: Does model uncertainty always call for attenuated policy?
Article code | Year of publication | English article length
24594 | 2001 | 39 pages (PDF)
Source

Publisher: Elsevier - Science Direct

Journal: Journal of Economic Dynamics and Control, Volume 25, Issues 6–7, June 2001, Pages 911–949

Keywords

Model uncertainty, Robust control, Monetary policy

Abstract

This paper explores Knightian model uncertainty as a possible explanation of the considerable difference between estimated interest rate rules and optimal feedback descriptions of monetary policy. We focus on two types of uncertainty: (i) unstructured model uncertainty reflected in additive shock error processes that result from omitted-variable misspecifications, and (ii) structured model uncertainty, where one or more parameters are identified as the source of misspecification. For an estimated forward-looking model of the US economy, we find that rules that are robust against uncertainty, the nature of which is unspecifiable, or against one-time parametric shifts, are more aggressive than the optimal linear quadratic rule. However, policies designed to protect the economy against the worst-case consequences of misspecified dynamics are less aggressive and turn out to be good approximations of the estimated rule. A possible drawback of such policies is that the losses incurred from protecting against worst-case scenarios are concentrated among the same business cycle frequencies that normally occupy the attention of policymakers.

Introduction

Recent articles have uncovered a puzzle in monetary policy: Interest-rate reaction functions derived from solving optimization problems call for much more aggressive responsiveness of policy instruments to output and inflation than do rules estimated with US data. What explains the observed lack of aggressiveness — the attenuation — of policy? Three distinct arguments have been advanced to explain the observed reluctance to act aggressively. The first is that it is simply a matter of taste: policy is slow and adjusts smoothly in response to shocks because central bankers prefer it that way, either as an inherent taste or as a device to avoid public scrutiny and criticism (see, e.g., Drazen, 2000, Chapter 10). The second argues that partial adjustment in interest rates aids policy by exploiting private agents’ expectations of future short-term rates to move long-term interest rates in a way that is conducive to monetary control (see, e.g., Goodfriend, 1991; Woodford, 1999; Tetlow and von zur Muehlen, 2000). The third contention is that attenuated policy is the optimal response of policymakers facing uncertainty in model parameters, in the nature of stochastic disturbances, in the data themselves given statistical revisions, and in the measurement of latent state variables such as potential output, the NAIRU, and the steady-state real interest rate. Blinder (1998), Estrella and Mishkin (1998), Orphanides (1998), Rudebusch (1998), Sack (1998a), Smets (1999), Orphanides et al. (2000), Sack and Wieland (2000), Wieland (1998), and Tetlow (2000) all support this general argument, following the line of research that began with Brainard (1967). The present paper is concerned with this third explanation for policy attenuation.

There is no unanimity on this third line of argument, however. Chow (1975) and Craine (1979) demonstrated long ago that uncertainty can lead to the opposite result of more aggressive policy than in the certainty equivalence case — or what we might dub anti-attenuation. Söderström (1999a) provides an empirical example of such a case. Moreover, possible deficiencies in the Brainard-style story are hinted at in the range of uncertainties required in papers by Sack (1998a) and Rudebusch (1998) to come even close to explaining observed policy behavior. Lastly, time-variation in uncertainty can, in some circumstances, lead to anti-attenuation of policy, as shown by Mercado and Kendrick (1999).

The concept of model uncertainty underlying the papers cited above is Bayesian in nature: A researcher faces a well-defined range of possibilities for the true economy over which he or she must formulate a probability distribution function (Easley and Kiefer, 1988). All of the usual laws from probability theory can be brought to bear on such questions. These are problems of risk, and risks can be priced. More realistically, however, central bankers see themselves as facing far more profound uncertainties. They seem to view the world as so complex and time varying that the assignment of probability distributions to parameters or models is impossible. Perhaps for this reason, no central bank is currently committed to a policy rule (other than an exchange rate peg). In acknowledgment of this view, this paper starts from a conception of uncertainty in the sense of Knight, wherein probability distributions for parameters or models cannot be articulated.
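For readers unfamiliar with the Brainard-style attenuation logic invoked above, a one-equation static sketch conveys the result (the notation is illustrative, not the paper's):

    y = \beta u + \varepsilon, \qquad \beta \sim (\bar{\beta}, \sigma_\beta^2), \quad \varepsilon \sim (0, \sigma_\varepsilon^2),
    \min_u \; \mathbb{E}(y - y^*)^2 = (\bar{\beta} u - y^*)^2 + \sigma_\beta^2 u^2 + \sigma_\varepsilon^2
    \;\Longrightarrow\; u^* = \frac{\bar{\beta}\, y^*}{\bar{\beta}^2 + \sigma_\beta^2} \;<\; \frac{y^*}{\bar{\beta}} = u^{CE} \quad (\bar{\beta} > 0,\ \sigma_\beta^2 > 0,\ y^* > 0).

With Bayesian uncertainty about the policy multiplier, the optimal response is scaled down relative to the certainty-equivalent response; the Chow (1975) and Craine (1979) results cited above show that this scaling-down need not survive in richer dynamic settings.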
We consider two approaches to model uncertainty that differ in the nature of the specification errors envisioned and in the robustness criterion applied to the problem. One approach treats errors as manifested in arbitrarily serially correlated shock processes, in addition to the model's normal stochastic disturbances. This formulation, called unstructured model uncertainty, follows in the tradition of Caravani and Papavassilopoulos (1990) and Hansen et al. (1999), among others. A second approach puts specific structure on misspecification errors in selected parameters of a model. It is possible, for example, to analyze the effect on policy of the worst possible one-time shift in one or more parameters. Alternatively, misspecification in model lag structures could be examined. Such unmodeled dynamics will affect robust policy. The seminal economics paper in this area of structured model uncertainty is Onatski and Stock (2000).

The inability to characterize risk in probability terms compels the monetary authority to protect against worst-case losses, to play a mental game against nature, as it were. In the case of unstructured uncertainty, the solution to the game is an H∞ problem or, in a related special case, an ℓ1 problem that minimizes absolute deviations of targets. In the case of structured uncertainty, the monetary authority ends up choosing a reaction function that minimizes the chance of model instability. In both cases, the authority adopts a bounded ‘worst-case’ strategy, planning against nature's ‘conspiring’ to produce the most disadvantageous parameterization of the true model.

With the exception of Hansen and Sargent (1999b), Kasa (2000) and Giannoni (2000), robust decision theory has been applied solely to backward-looking models. Hansen and Sargent (1999b) and Kasa (2000) derive policies under the assumption of unstructured uncertainty, while Giannoni (2000) solves a problem with structured uncertainty, wherein policies are derived subject to uncertainty bounds on selected parameters of the model. In a series of papers, Rustem lays out a minimax strategy for choosing among rival models; see, e.g., Rustem (1988).

In this paper, we break new ground in that we consider a number of cases of unstructured as well as structured uncertainty, doing so for an estimated forward-looking model and with a particular real-world policy issue in mind. Also, unlike Hansen and Sargent (1999b) and Kasa (2000), but like Giannoni (2000) and Onatski and Stock (2000), we derive robust simple policy rules, similar in form to the well-known Taylor (1993) rule. Our analysis differs from Giannoni (2000) in that it is less parametric, relies on numerical techniques, is amenable to the treatment of larger models, and covers unstructured as well as structured uncertainty.

The rest of this paper unfolds as follows. In Section 2, we introduce structured and unstructured perturbations as a way of modeling specification errors to a reference model considered to be the authority's best approximation to the true but unknown model. We define a number of Stackelberg games that differ according to the central bank's assessment of the bounds on uncertainty and its loss function. To analyze the specific questions at hand, in Section 3 we estimate a small forward-looking macro model with Keynesian features. The model is a form of contracting model, in the spirit of Taylor (1980) and Calvo (1983), and is broadly similar to that of Fuhrer and Moore (1995a).
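To make the min-max game against nature concrete, the following sketch is a deliberately stripped-down static example; the model, parameter values, and variable names are assumptions for illustration and are not taken from the paper. A policymaker chooses a single response while nature picks the policy multiplier anywhere inside a Knightian bound, and the worst-case choice is compared with the certainty-equivalent and Brainard-style Bayesian responses.

    # Illustrative sketch only: a toy static min-max ("game against nature")
    # policy problem under structured Knightian uncertainty.  The model,
    # parameter values, and variable names are hypothetical, not the paper's.
    import numpy as np

    y_star = 1.0    # target for the goal variable
    a_hat = 0.5     # point estimate of the policy multiplier
    delta = 0.3     # Knightian bound: true multiplier lies in [a_hat - delta, a_hat + delta]
    sigma_e = 0.5   # standard deviation of the additive shock

    def expected_loss(u, a):
        """Expected squared deviation of y = a*u + e from the target y_star."""
        return (a * u - y_star) ** 2 + sigma_e ** 2

    def worst_case_loss(u, a_grid):
        """Nature picks the multiplier inside the bound that maximizes the loss."""
        return max(expected_loss(u, a) for a in a_grid)

    a_grid = np.linspace(a_hat - delta, a_hat + delta, 201)
    u_grid = np.linspace(0.0, 6.0, 2001)

    # Certainty-equivalent response: act as if the point estimate were the truth.
    u_ce = y_star / a_hat

    # Brainard-style Bayesian response: treat the bound as if it were a uniform
    # prior (variance delta**2 / 3), purely for comparison.
    u_bayes = a_hat * y_star / (a_hat ** 2 + delta ** 2 / 3.0)

    # Robust min-max response: minimize the worst-case loss over the bound.
    u_robust = min(u_grid, key=lambda u: worst_case_loss(u, a_grid))

    print(f"certainty-equivalent response: {u_ce:.2f}")
    print(f"Brainard (Bayesian) response:  {u_bayes:.2f}")
    print(f"robust min-max response:       {u_robust:.2f}")

In this symmetric static setting the min-max response coincides with the certainty-equivalent one while the Bayesian response is attenuated, previewing the point made below that worst-case robustness by itself need not deliver attenuated policy; the outcome depends on what is allowed to be misspecified.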
Section 4 provides our results, leaving Section 5 to sum up and conclude. To presage the results, although we are able to produce a robust rule that is nearly identical to the estimated rule, it is not clear how much of the issue this resolves. Robustness, per se, cannot explain attenuated policy. As others have found, when policy is robust against a combination of shock and misspecification errors, monetary policy becomes even more reactive than the linear quadratic optimal rule. Indeed, we observe a seeming inverse relationship between reactiveness and the degree of structure imposed on uncertainty. At one extreme, unstructured uncertainty justifies the most reactive set of rules. At the other extreme, heavily attenuated policies are generated in cases where substantial structure is imposed on model uncertainty. In particular, if the monetary authority chooses a policy that is robust only to misspecification of the lag structure of the model, the optimal interest rate reaction rule becomes very similar to the estimated rule. When the only criterion is robustness to misspecifications of the lagged output coefficients in the aggregate demand equation, the robust rule and the estimated rule are practically identical. Therefore, one might cautiously conclude that policy has historically been conducted with special concern about ill-understood future effects of current actions.

Conclusion

We began this paper by reflecting on a puzzle: if monetary policy seeks to minimize output and inflation fluctuations, how does one explain the fact that historical interest-rate responses to these two indicators have been far more muted than suggested by optimal policy rules? We have found, as others have, that optimal linear-quadratic rules derived in the absence of model uncertainty are indeed more reactive than rules estimated on data for the United States.

Stabilizing a monetary economy is a difficult job. The authority has but one instrument and usually at least two targets. The instrument works with a lag. Moreover, the authority faces an economy that is constantly changing, resulting in profound uncertainties regarding estimated structural parameters. Can such uncertainties explain the observed attenuation of policy?

Our results suggest that the answer is yes and no. We did find rules that protect against a class of specification errors, modeled as structured perturbations to a reference model, that resemble the estimated rule. However, we also found that robust policy rules that seek to guard against very general forms of misspecification are even more reactive than the linear-quadratic rule.

The robust rule that comes closest to approximating the estimated rule is one that seeks to guarantee a minimum level of stability against worst-case specification errors in the dynamics of aggregate demand. It follows that one possible interpretation of Fed behavior over the last twenty years is that the observed attenuation in policy was motivated by distrust of the estimated degree of output persistence. This motivation arises in large part because the aggregate demand function determines the dominant root — and hence the stability — of the model. Given that the aggregate demand function is specified in terms of excess demand — meaning output relative to potential output — the literature on mismeasurement of potential output is pertinent here. Work by Orphanides (1998), Smets (1999), Orphanides et al. (2000), and Tetlow (2000), among others, shows that potential output can be badly mismeasured and that correcting the measurement error can take a long time. Such errors could easily show up as mismeasured persistence in an aggregate demand function.

But if uncertainty of a particular structure can explain observed Fed behavior, what can be said about more generalized uncertainty? Our results suggest a hierarchy of policy responses, measured in terms of attenuation or anti-attenuation, indexed against the assumed degree of structure in Knightian uncertainty: the greater the structure placed on the uncertainty, the more likely policy attenuation is to arise. At the same time, however, the more structure is assumed in the perturbations the authority faces, the larger the losses that are borne if the robustness turns out to have been unnecessary.
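As a footnote to the stability argument above, a small numerical sketch can show how the persistence of an aggregate demand relation moves the dominant root of the closed-loop system toward, and eventually past, the unit circle. This is a backward-looking toy, not the paper's estimated forward-looking model; every equation and coefficient here is an assumption chosen for illustration.

    # Toy backward-looking illustration (not the paper's model) of how demand
    # persistence governs the dominant root, and hence stability, of the
    # closed-loop system.  All coefficients are hypothetical.
    import numpy as np

    phi_k = 0.2   # combined effect of the policy rule on demand (assumed)
    rho2 = -0.2   # second-lag coefficient of the demand relation (assumed)

    def dominant_root(rho1):
        """Largest root modulus of y_t = (rho1 - phi_k)*y_{t-1} + rho2*y_{t-2}."""
        companion = np.array([[rho1 - phi_k, rho2],
                              [1.0, 0.0]])
        return max(abs(np.linalg.eigvals(companion)))

    # Sweep the first-lag persistence coefficient over a band around its point
    # estimate and flag where the closed-loop system loses stability.
    for rho1 in np.arange(0.8, 1.51, 0.1):
        root = dominant_root(rho1)
        status = "stable" if root < 1.0 else "UNSTABLE"
        print(f"rho1 = {rho1:.1f}   dominant root = {root:.3f}   {status}")

Under these assumed numbers, a sufficiently large upward misspecification of the persistence coefficient carries the dominant root to the unit circle; this is the sense in which the dominant root of the demand block governs the stability that a robust rule seeks to guarantee.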