Endogenous monetary policy with unobserved potential output
|Article code||Publication year||English article||Persian translation||Word count|
|25795||2005||33-page PDF||Order||Not calculated|
Publisher : Elsevier - Science Direct
Journal : Journal of Economic Dynamics and Control, Volume 29, Issue 11, November 2005, Pages 1951–1983
This paper characterizes monetary policy when policymakers are uncertain about the extent to which fluctuations in output and inflation are due to changes in potential output or to cyclical demand and cost shocks. Our results suggest an explanation for the inflation of the 1970s and the price stability of the 1990s. It is shown that: (1) policy is likely to be excessively loose for some time when there is a large decrease in potential output, in comparison to a full information benchmark; (2) retrospective policy errors, and errors in forecasting potential output and the output gap, are generally serially correlated; (3) the increase in the Fed's conservativeness between the 1970s and the 1990s implies that the information problem had greater consequences in the former period.
A stabilizing role for monetary policy hinges on some notion of ‘potential output’, a non-observable economic variable that is central for the determination of the target level of output. The conduct of monetary policy therefore requires that the central bank estimate, and continually update, its measure of potential output. Kuttner (1992, 1994) was among the first to raise the issue of the quantitative importance of uncertainty about potential output for real-time policymaking. He examined the difficulties inherent in real-time estimation of potential output and suggested that situations requiring policy actions might not be immediately recognizable because of signal extraction errors arising under imperfect information. This policy implication is central for Orphanides (2001, 2003a, 2003b), who reports evidence of a significant (real-time) overestimation of potential output during the oil shocks of the 1970s. Enlightening documentation of the ex-post downward revisions of potential output appears in the Economic Report of the President (1979, Chart 7, pp. 72–76), reported below, which vividly illustrates the magnitude and persistence of the revisions (Fig. 1). Orphanides argues that, by leading to a monetary policy stance which turned out to be, with the benefit of hindsight, excessively loose, the real-time overestimation of potential output aggravated inflation at the time. Somewhat symmetrically, the strong productivity gains recorded in the United States during the second half of the 1990s raise the possibility that the greater-than-expected increases in potential output could have allowed a less restrictive monetary policy stance than the one initially implied by real-time estimates of the output gap and inflation.
The hypothesized relevance of imperfect information may shed new light on monetary policy ‘errors’ during the 1970s and raises an important question about the extent to which such retrospective policy mistakes can be avoided in the future. If the errors were due to poor forecasting procedures or to an inefficient specification of the ‘policy rule’, a likely answer to this question is yes. But if, given the available real-time information, policy was as efficient as possible, the likely answer is no. Assessing the extent to which retrospective policy mistakes are due to ‘bad policies’ rather than to ‘bad luck’ requires a model which identifies optimal monetary policy under imperfect information. The availability of such a benchmark is essential for evaluating the extent to which (retrospective) policy mistakes were avoidable in real time. This paper takes a step in this direction by analyzing such a benchmark model. We embed the real-time information problem into a simple macroeconomic model by assuming that the central bank cannot perfectly distinguish (not even ex-post) between fluctuations in inflation and output that are due to shocks to potential output and those that are due to higher-frequency demand and cost shocks. We label this inevitable confusion the ‘information problem’, IP in brief.1 To isolate the effects of uncertainty that arise from the IP, we adopt a specification that features the certainty equivalence property of optimal policy, so that the form of the policy function, in terms of optimal forecasts of relevant variables, is invariant to uncertainty. But the mapping from real-time information into those forecasts and, therefore, into macroeconomic outcomes does depend on uncertainty. The main purpose of the paper is to study how the IP influences the dynamics of these outcomes, in particular output and inflation, in comparison to a full information benchmark.
The results show that, given the structure of information, some policy decisions that are judged ex-post to be ‘mistakes’ may be unavoidable in real time, even if the central bank uses the best forecasting procedures. These retrospective mistakes are normally small during periods in which changes in potential output are small. But during periods characterized by unusually large changes in the long-run level of output, policy mistakes in a given direction are likely to be large and to persist for some time.2 The evidence in Orphanides (2001) supports the view that monetary policy during the 1970s was excessively loose, since a reduction in potential output was interpreted for some time as a negative output gap. This paper provides analytical foundations for this mechanism within a stylized backward-looking model and identifies conditions under which the IP leads monetary policy to be systematically looser than under perfect information in periods of large reductions in potential output, and to be overly restrictive relative to this benchmark in periods of large expansions in potential output. The intuitive reason is that, even when they filter available information in an optimal manner, policymakers, as well as the public at large, detect changes in potential output only gradually. When there is a large decrease in potential output, as was the case in the 1970s, policymakers interpret part of this reduction as a negative output gap due to insufficient demand and loosen monetary policy too much in comparison to a benchmark without the IP. Thus, in periods of large decreases in potential output, inflation accelerates partly because of the relatively expansionary monetary policy stance.
Conversely, when – as may have happened in the US during the 1990s – a ‘new economy’ raises the level of potential output, inflation subsides partly because policymakers interpret some of the increase in output as a positive output gap, so that policy is tighter than under perfect information.3 The paper shows that, even when real-time information is processed efficiently and monetary policy is chosen optimally, the forecast errors in real-time estimates of potential output and of the output gap are normally serially correlated, even in the population. In general, this serial correlation is induced by shocks to potential output as well as to the cyclical components of output. The paper identifies conditions under which the bulk of the serial correlation is due to shocks to potential output. In particular, it shows that, when the variance of shocks to potential output is relatively small, most of the measured serial correlation is due to innovations to potential output. Interestingly, retrospective evidence about forecast errors in potential output during the 1970s and the 1980s is consistent with these implications (Orphanides, 2003b). As a consequence of the serial correlation in those errors, monetary policy also appears in retrospect to be systematically biased in one direction. In summary, the paper provides a simple unified framework for understanding some of the reasons for both the inflation of the 1970s and the remarkable price stability of the 1990s and shows, by means of simulations, that similar mechanisms operate in the presence of more elaborate lag structures. It illustrates how the speed of learning by policymakers and the deviations of policy from an ideal full-information benchmark depend on the stochastic structure of various economic shocks. Identification of such conditions is a necessary first step for gauging empirically whether imperfect information is quantitatively important.
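The gradual-learning mechanism just described can be illustrated with a minimal sketch. This is not the paper's actual model: it is a deterministic, steady-state Kalman-style update in which the central bank only observes total output and revises its real-time estimate of potential output by a fixed gain. The function name, the gain value, and the size of the shock are all illustrative assumptions; the point is only that, after a one-time permanent drop in potential output, the real-time overestimation errors are all of the same sign and shrink only gradually, mimicking the serial correlation described above.

```python
# Illustrative sketch (NOT the paper's model): after a permanent drop in
# potential output, a fixed-gain (steady-state Kalman-style) filter produces
# real-time overestimation errors that are same-signed and decay gradually.

def simulate_learning(drop=5.0, gain=0.3, periods=12):
    """Potential output falls permanently by `drop` at t=0; the central bank
    starts from the pre-shock level and updates its estimate each period
    toward observed output. Returns the sequence of real-time errors
    (estimate minus true potential), all values illustrative."""
    potential = 100.0 - drop   # true potential after the permanent shock
    estimate = 100.0           # real-time estimate, still at pre-shock level
    errors = []
    for _ in range(periods):
        observed = potential                       # cyclical noise omitted for clarity
        estimate += gain * (observed - estimate)   # fixed-gain filter update
        errors.append(estimate - potential)        # real-time overestimation
    return errors

errors = simulate_learning()
# Every error is positive (potential is overestimated throughout) and the
# errors decay geometrically, i.e. they are strongly serially correlated.
```

With a gain of 0.3, the error after the first update is 5.0 × 0.7 = 3.5 and each subsequent error is 70% of the previous one, so the overestimation persists for many periods, which is the sense in which retrospective policy, fed by these estimates, stays biased in one direction for some time.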
Finally, the paper argues that the IP is likely to have been less important during the 1990s than during the 1970s, for two reasons: in the 1990s, the Fed was more conservative, and its evaluation of the uncertainties surrounding potential output was more realistic. The paper is organized as follows. Section 2 presents a simple model of endogenous monetary policy in the presence of imperfect information about the origins of fluctuations in output and characterizes optimal monetary policy in this environment. The consequences for the behavior of real interest rates, inflation and the output gap, in comparison to their full information counterparts, are analyzed in Section 3. Section 4 develops the real-time optimal forecast of potential output and shows that forecast errors of real-time estimates of potential output and of the output gap are serially correlated. Section 5 discusses reasons supporting the view that retrospective policy errors were smaller during the 1990s than during the 1970s. This is followed by concluding remarks. Extensive analytical derivations and simulations are relegated to appendices.
English conclusion
This paper provides a unified explanation to account for part of the inflation of the 1970s and for part of the remarkable price stability of the 1990s. This is accomplished by showing that, even if monetary policy is optimal and forecasts of potential output are efficient, large permanent changes in potential output trigger excessively loose monetary policy when those changes are negative and excessively tight policy when the changes are positive. But the paper also shows that even if the positive shocks to potential output during the 1990s were similar in absolute value to the negative shocks of the 1970s, there is reason to believe that policy was excessively loose in the 1970s to a greater extent than it was excessively tight during the 1990s. This conclusion is based on two presumptions. The first is that the Fed was relatively more conservative in the Rogoff (1985) sense in the 1990s than in the 1970s. For the economic structure postulated in the paper, a higher degree of conservativeness reduces the difference between the imperfect and the full information policy at any given level of the error in forecasting potential output. The second is that a more realistic evaluation of uncertainties surrounding potential output enabled the Fed to learn faster and more accurately about changes in potential output during the 1990s than during the 1970s, so that its policy was nearer to the full information benchmark. The framework of the paper also leads to two wider conclusions that are likely to transcend the particular model used to illustrate them. The first is that even if monetary policy is chosen optimally and even if, given the stochastic structure of shocks, available information is used as efficiently as possible, retrospective policy errors are unavoidable. During periods in which changes in potential output are moderate these errors are neither very important, nor persistent. As a consequence, they do not draw much attention ex-post. 
But during periods following large sustained changes in potential output, retrospective policy errors appear, with the benefit of hindsight, to be large and to exhibit substantial serial correlation. This makes them noticeable and draws public attention. Thus, even central banks that forecast and behave optimally may sometimes be judged retrospectively as having committed serious policy errors. But, since they had behaved efficiently at the time, it does not follow that (given the information structure) such errors can be avoided in the future. This mechanism is quantitatively more important at sufficiently small values of the variance of innovations to potential output. Obviously, this does not necessarily mean that policy and forecasting procedures during the 1970s were as efficient as possible at the time. The point, however, is that the ex-post identification of policy errors is not sufficient to conclude that such errors were avoidable in real time. A challenge facing policymakers and economists is to distinguish between avoidable (in real time) and unavoidable policy errors. We believe that models in the spirit of the one analyzed here, where policy is consistent with the economic structure and information is processed efficiently, can pave the way towards a better understanding of this issue. The second conclusion is that, with the exception of extreme cases, the fact that in the wake of large and sustained changes in potential output policymakers commit serious errors in forecasting potential output does not imply that noisy but optimally devised forecasts of potential output should not be used as indicator variables for monetary policy. In order to focus on the consequences of imperfect information in isolation we have used a backward looking economic structure that abstracts from the effects of expectations. 
Simulations (not shown) that we conducted with the Ehrmann and Smets (2003) model of the Euro area, which features both forward- and backward-looking terms, suggest that some of our results carry over to this more elaborate framework while others are modified. In particular, retrospective errors in the conduct of monetary policy still exhibit serial correlation and, following a positive shock to potential output, the interest rate overshoots its full information counterpart for a while. But, after a while, the interest rate undershoots the full information benchmark. In addition, the simulations suggest that the effects of imperfect information do not differ by much between a discretionary and a commitment regime. We do not know the extent to which these new results depend on the particular parameters in the Ehrmann and Smets model rather than on the introduction of expectations per se. Discrimination between those two views could be helped by further analytical work on an imperfect information model that features both forward- as well as backward-looking terms. This is left for future work.