Monetary policy and aggregate volatility
| Article code | Publication year | English article | Persian translation | Word count |
|---|---|---|---|---|
| 26772 | 2009 | 18-page PDF | available on order | 11722 words |
Publisher : Elsevier - Science Direct (الزویر - ساینس دایرکت)
Journal : Journal of Monetary Economics, Volume 56, Supplement, 15 October 2009, Pages S1–S18
Discretionary conduct of monetary stabilization policy can increase real and nominal aggregate volatility by arbitrary amounts when firms pay limited attention to aggregate shocks. A conservative central banker with a stronger preference for price stability eliminates the commitment problem, thereby reducing output and price volatility and giving rise to a policy-induced ‘Great Moderation’. Increased focus on price stability facilitates firms’ information processing and aligns their expectations better with policy decisions. This ‘coordination effect’ reduces aggregate real and nominal volatility. Consistent with empirical evidence, the moderation manifests itself through reduced residual variance in vector autoregressions (VARs) involving macroeconomic variables.
The aim of this paper is to present a simple monetary policy model in which increased emphasis on price stability by a discretionary central bank can give rise to arbitrarily large reductions in the variance of aggregate output and inflation. Conversely, the model suggests that overly ambitious attempts to stabilize the real economy via discretionary monetary policy actions can result in very large increases in the volatility of real and nominal variables. With regard to the first point, the model thus suggests a causal link between two major macroeconomic events that are widely believed to have taken place around 1980 in a number of developed economies: (1) following the inflation experience of the 1970s, many central banks seem to have increasingly focused on ensuring price stability.1 In the U.S. this policy shift is typically associated with the appointment of Paul Volcker as chairman of the Federal Reserve and the subsequently implemented disinflation program; (2) the volatility of aggregate output and inflation fell significantly around the beginning of the 1980s, a fact generally referred to as the ‘Great Moderation’ and first documented in McConnell and Perez-Quiros (2000) and Blanchard and Simon (2001). Conversely, and with regard to the second point, the model suggests that the recently observed increase in aggregate real and nominal volatility may be attributable to a renewed shift in the objectives of monetary policy towards greater stabilization of the real economy. Clearly, it is still too early to say whether such a shift has actually occurred and whether the recent volatility increase is more than a temporary phenomenon. The paper therefore focuses on the aggregate attenuation resulting from greater emphasis on price stability, although the proposed mechanism also works in the reverse direction. The monetary model presented is a standard rational expectations model with maximizing firms and consumers.
The key new feature of the model is that firms are assumed to face constraints on the amount of information they can process about aggregate shocks and about policy decisions. This follows recent work by Sims (2003), which stresses the scarcity of information in decision making, based on the observation that processing and incorporating information into decisions is not a costless process.2 This paper emphasizes the information processing problems of price setting firms, as these appear particularly relevant for questions related to the conduct of monetary policy.3 The presence of information processing frictions implies that the quality of firms’ information about their profit maximizing price is endogenous and depends, amongst other things, on the conduct of monetary policy. Specifically, monetary policies that give rise to large volatility of firms’ profit maximizing price also make it harder for firms to track the precise value of their truly optimal price and thereby give rise to larger information processing errors. These processing errors increase the variability of firms’ information sets and lead to a misalignment between private sector decisions and the actual policy stance. Since these misalignments are unpredictable for policymakers, they end up amplifying the nominal and real volatility in the economy. In the present setting, discretionary maximization of social welfare by the monetary authority is shown to generate excessively volatile monetary policy decisions compared to the fully optimal policy with commitment. Volatility of monetary policy induces volatility of profit maximizing prices and this leads, via the channels just described, to excessive real and nominal volatility in the aggregate economy.
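The link between policy volatility and processing errors can be sketched with the standard rational-inattention result for Gaussian signals: a firm with Shannon capacity κ bits per observation tracks its profit-maximizing price q with noise whose variance is proportional to the variance of q itself. The following is a minimal sketch; the capacity formulation and all symbol names are illustrative assumptions, not taken from the paper's appendix.

```python
def processing_noise_var(sigma_q2: float, kappa: float) -> float:
    """Variance of the processing noise in a firm's signal about its
    profit-maximizing price q, for a Gaussian prior with variance
    sigma_q2 and Shannon capacity kappa (bits per observation).
    Derived from I = 0.5 * log2(prior var / posterior var) <= kappa,
    which gives noise variance sigma_q2 / (2**(2*kappa) - 1)."""
    return sigma_q2 / (2 ** (2 * kappa) - 1)

# More activist policy -> more volatile optimal price q -> noisier
# information sets, one-for-one in the variance of q.
for sigma_q2 in (1.0, 4.0):
    print(sigma_q2, processing_noise_var(sigma_q2, kappa=1.0))
```

Note that the noise variance scales linearly with the variance of q, which is why policies that make the profit-maximizing price more volatile mechanically enlarge firms' processing errors.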
The commitment problem emerges because discretionary policy fails to incorporate the amount of information noise it generates: the variance of the information processing noise is a function of the average volatility of policy decisions in response to shocks, while the discretionary policy problem consists of determining the strength of the policy reaction to a specific shock realization; since the latter contributes little (nothing, with continuous shock distributions) to the overall variance of policy, it is rational to ignore it under discretionary maximization. The commitment problem is therefore more pronounced in economies in which firms can process information rather well, because achieving any desired real effect then requires a larger amount of variation in the policy instrument.4 Therefore, if over time firms become better at processing information, say due to technological progress in information and communication technologies, the response of discretionary monetary policy is to increase the variability of the policy instrument. This gives rise to increased aggregate variability and larger processing errors, which more than offsets the reduction in processing errors resulting from the increased capacity to process information. This in turn suggests that the increased volatility of the 1970s in the U.S., when compared to the 1960s, may be partly the result of an increasingly severe monetary commitment problem. The paper shows that appointing a ‘conservative central banker’ à la Rogoff (1985), who places greater emphasis on price stability, reduces the volatility of policy decisions and, for an appropriate weight on price stability, allows discretionary maximization to replicate the optimal commitment policy.5 With a conservative central banker the monetary policy response to shocks is less activist, which reduces the variance of firms’ optimal price and thereby their processing errors.
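The discretion-versus-commitment logic above can be illustrated with a deliberately stylized scalar loss, not the paper's model: the policy response a to a shock carries a benefit term minimized at a = 1, plus a processing-noise cost proportional to a², since noise variance scales with the variance of the profit-maximizing price. Discretion treats the noise variance as a given constant, so the second term drops out of its first-order condition; commitment internalizes it. The loss function and the value of c are assumptions made purely for illustration.

```python
def realized_loss(a: float, c: float) -> float:
    """Stylized ex-post loss of a policy response a: a stabilization
    benefit term (a - 1)**2 plus a processing-noise cost c * a**2,
    reflecting that noise variance grows with policy volatility."""
    return (a - 1.0) ** 2 + c * a ** 2

c = 0.5                       # assumed noise-cost weight
a_discretion = 1.0            # ignores how a affects noise variance
a_commitment = 1.0 / (1 + c)  # minimizes the full loss, so responds less

print(realized_loss(a_discretion, c))
print(realized_loss(a_commitment, c))
```

The commitment response is less activist (a = 2/3 rather than 1) and achieves a strictly lower realized loss, mirroring the paper's point that the discretionary policymaker rationally ignores a cost that is nonetheless borne in equilibrium.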
The resulting increased coordination between policy decisions and firms’ pricing decisions is shown to unambiguously lower aggregate price and output volatility, independently of the model parameterization. This mechanism will be referred to as the ‘coordination effect’. The paper also analyzes the model-implied vector autoregressive (VAR) dynamics for output, prices and the monetary policy instrument. Interestingly, a marginal improvement in monetary policy (less activist policy) can leave the autoregressive coefficients unchanged, including those describing the VAR’s ‘policy equation’, while manifesting itself via reduced variance of the VAR residuals. This suggests that it is quite possible that some of the findings of the empirical VAR literature, e.g., Canova and Gambetti (2008), Primiceri (2005), or Sims and Zha (2006), are consistent with the notion that the Great Moderation is partly the result of improvements in monetary policy. Even more strikingly, estimating VARs on model-generated data from monetary regimes with and without a conservative central banker, and exchanging the (correctly identified) monetary reaction functions across the estimated VARs, leaves the resulting output and price processes unchanged. This holds even though, in the model, the volatility differences are exclusively due to a change in the conduct of monetary policy. This finding is of interest because empirical findings of this kind have occasionally been interpreted as suggesting that monetary policy is an unlikely explanation for the observed ‘Great Moderation’. More generally, the model suggests that the variability of private sector information sets (information processing errors) can be an important source of ‘fundamental’ shocks entering the residuals of empirical VARs, and that the volatility of these information sets may be crucially influenced by the conduct of policy.
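The claim that a policy improvement can show up only in the VAR residual variance, not in the autoregressive coefficients, is easy to reproduce in miniature. The sketch below uses a univariate AR(1) as a stand-in for a full VAR, with assumed parameter values: the same persistence coefficient is estimated from a high-volatility and a low-volatility shock regime, and only the residual variance shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(rho: float, sigma: float, T: int = 20000) -> np.ndarray:
    """Simulate y_t = rho * y_{t-1} + sigma * e_t, e_t ~ N(0, 1)."""
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + sigma * rng.standard_normal()
    return y

def ols_ar1(y: np.ndarray) -> tuple[float, float]:
    """OLS slope of y_t on y_{t-1} and the residual variance."""
    x, z = y[:-1], y[1:]
    rho_hat = (x @ z) / (x @ x)
    resid = z - rho_hat * x
    return rho_hat, resid.var()

# Same 'policy equation' coefficient, smaller residuals post-moderation.
for sigma in (1.0, 0.5):  # assumed pre- and post-moderation shock scales
    rho_hat, v = ols_ar1(simulate_ar1(0.9, sigma))
    print(f"sigma={sigma}: rho_hat={rho_hat:.3f}, resid var={v:.3f}")
```

An econometrician who only inspects the estimated coefficients would see no regime change here; the moderation lives entirely in the residual variance, which is the paper's point about interpreting the empirical VAR evidence.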
This is consistent with the empirical findings of Galí and Gambetti (2008), who report that in the U.S. the observed volatility reduction is largely due to a substantial fall in the volatility of non-technology shocks. Obviously, the model (trivially) predicts that lower price and output volatility can occur without any changes in monetary policy, e.g., following a reduction in the variance of standard shocks (shocks that are not processing errors). Therefore, the model is equally consistent with the notion that the findings of the empirical VAR literature are simply the result of reduced shock variance.

The paper is structured as follows. It starts in Section 2 with a brief overview of the related literature. Section 3 then presents a simple static version of the model with imperfectly informed firms and derives a linear–quadratic approximation to the monetary policy problem. After introducing firms’ information processing constraints in Section 4, Section 5 derives the monetary policy implications. In particular, it is shown how the presence of information processing constraints causes discretionary monetary policy to generate excessive aggregate volatility and how increased focus on price stability reduces volatility. Section 7 extends the static model to an infinite horizon economy. It derives and discusses the model-implied VAR dynamics for output, prices, and the monetary policy instrument and compares them to stylized facts documented in the empirical literature. A conclusion briefly summarizes. All proofs are contained in the web appendix to this article.
Conclusion
Taking into account the endogeneity of decision makers’ information structures appears to have important implications for the conduct of monetary policy and stabilization policy more generally. Stabilization policies become counter-productive if the achievement of the stabilization goals causes optimal private sector decisions to become very volatile. Volatility of private sector decisions considerably complicates the information processing problems faced by private agents, so that their decisions become contaminated by large and unpredictable noise components which may end up increasing aggregate volatility. In empirical applications these processing errors would show up as a fundamental source of randomness, although their variance ultimately depends on the conduct of stabilization policy.