Download English-language ISI article No. 26399
Article title (English)

Improving monetary policy models
Article code: 26399
Year of publication: 2008
Length: 16 pages (PDF)
Source

Publisher: Elsevier - Science Direct

Journal: Journal of Economic Dynamics and Control, Volume 32, Issue 8, August 2008, Pages 2460–2475

Keywords
Econometric models, Central bank models, Bayesian inference
Article preview: Improving monetary policy models

English abstract

If macroeconomic models are to be useful in policy-making, where uncertainty is pervasive, the models must be treated as probability models, whether formally or informally. Use of explicit probability models allows us to learn systematically from past mistakes, to integrate model-based uncertainty with uncertain subjective judgment, and to bind data-based forecasting together with theory-based projection of policy effects. Yet in the last few decades policy models at central banks have steadily shed any claims to being believable probability models of the data to which they are fit. Here we describe the current state of policy modeling, suggest some reasons why we have reached this state, and assess some promising directions for future progress.

English introduction

Fifty years ago most economists thought that Tinbergen's original approach to macromodeling, which consisted of fitting many equations by single-equation OLS and assembling them into a multiple-equation model, had been shown to be internally inconsistent and an inadequate basis for scientific progress in macroeconomics. The basic point, made at length by Haavelmo (1944), is that because in economics our theories do not make exact predictions, they can never be proved inadequate simply by showing that they make prediction errors. In order to allow models to be compared and improved, they must be formulated as probability models. That is, they must characterize the probability distribution of observations, rather than simply make point predictions. A model can then be judged on where observed data fall in the distribution the model predicts. For macroeconomic models, this means they must be probability models of the joint behavior of the time series they are meant to explain. If we use models that do not produce distributions, or do not produce reliable distributions, for the data, then in comparing the models or assessing how well a given model is doing, we are forced to rely on informal judgements about what errors are so big as to cast doubt on the model, or about what metric to use in comparing records of forecast errors for two models.

If we intend to use the models in decision-making we have to go beyond Haavelmo's proposal to use frequentist hypothesis testing as a way to detect false models and progress toward true models. Hypothesis testing, and indeed all of the apparatus of frequentist inference, fails to connect to the problem of making decisions under uncertainty. The frequentist approach to inference insists on a distinction between unknown ‘parameters’, which are never given probability distributions, and random variables, which are given distributions. The random variables are supposed, at least in principle, to be objects whose patterns of variation could be repeatedly observed, like repeated rolls of the dice or repeated forecast errors. Parameters are supposed to have single values. But a macroeconomic model is not complete, for decision-making purposes, unless it characterizes all sources of uncertainty, including the fact that we do not know parameter values. This means that attempts to limit probability statements to areas of uncertainty where the frequentist interpretation of probability is useful cannot be adequate. We need to think of our probability models as characterizing uncertainty from all sources and as capable of integrating uncertain information from sources other than the data – one aspect of what is sometimes called ‘judgment’.

Most economists have learned a frequentist approach to inference and may think that frequentist inference does in fact characterize uncertainty about parameters. But this is an illusion. Frequentist data analysis often reports standard errors of estimates or confidence intervals for parameters. These are reported because they appear to satisfy a need to make probability statements about unknown parameters. But the standard errors describe the variability of the estimators, not the distribution of the unknown parameters, and the probabilities associated with confidence intervals are not probabilities that the unknown parameter is in the interval, based on the observed data, but instead probabilities that a similarly constructed interval would contain the true parameter if we repeatedly constructed such intervals.
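To make that last distinction concrete, the short simulation below checks the coverage of a textbook 95% confidence interval for a normal mean. This is an illustrative sketch with made-up parameter values, not material from the paper: the "95%" describes how often the procedure's interval captures the true value across repeated samples, which is a different statement from the probability that the parameter lies in the one interval computed from the data at hand.

```python
# Minimal sketch (illustration only, hypothetical values): the 95% of a
# confidence interval is a repeated-sampling coverage rate for the procedure,
# not a probability statement about the parameter given one observed sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu_true, sigma, n, n_reps = 1.0, 2.0, 30, 10_000

covered = 0
for _ in range(n_reps):
    y = rng.normal(mu_true, sigma, size=n)
    se = y.std(ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(0.975, df=n - 1)
    lo, hi = y.mean() - t_crit * se, y.mean() + t_crit * se
    covered += (lo <= mu_true <= hi)

print(f"coverage across repeated samples: {covered / n_reps:.3f}")  # close to 0.95
# For any single realized interval, the frequentist framework assigns no
# probability that mu_true lies inside it; that statement requires treating
# the parameter as uncertain, i.e. a posterior distribution given the data.
```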
These probability statements about estimators and randomly fluctuating intervals are in many cases approximately the same as probability statements about the distribution of unknown parameters given the data. But there are situations where they are in sharp conflict with any reasonable probability statements about unknown parameters. One of those situations is where we are modeling highly persistent time series – exactly the situation most macroeconomic modelers find themselves in all the time. In an empirical macromodel there are many a priori uncertain parameters and the data on their own only incompletely resolve the a priori uncertainty. Since frequentist approaches refuse to put probability distributions on parameters, they cannot be helpful in blending uncertain a priori beliefs about parameters with information in the sample, yet in the context of macromodeling this is always essential. This can lead to a priori beliefs that are in fact uncertain being imposed as if they were deterministically known. Then frequentist measures of uncertainty fail to reflect a major component of actual uncertainty.

Of course people use inexact but non-probabilistic models all the time, and they choose among such models, improve them over time, and use them for decision making, with no explicit reference to probabilities. If they are doing these things well, though, they are behaving as if the models were probabilistic and as if they were using Bayesian methods in using the models for decision-making. A policy-maker facing uncertainty contemplates a range of possible actions, with the consequences of those actions uncertain. She should consider the desirability or undesirability of each of the consequences, and also the likelihood that each of the consequences will occur. She should not choose an action that is dominated, in the sense that there is another available action that, regardless of what consequence emerges, surely produces better results than the chosen action. With this simple set of criteria for good decision making, we can conclude that she will be acting as if she is weighting uncertain consequences by probabilities and choosing an action that produces the best expected outcome. This is basically the same argument as the one that shows that a producer choosing inputs efficiently will be acting as if he is minimizing costs for some vector of prices or shadow prices of inputs. The probabilities for the uncertain decision maker are playing the same role as shadow prices of inputs for the producer. Just as we are ready to suppose that producers can minimize costs without knowing calculus and calculating marginal costs explicitly, we should be ready to accept that good decision makers will weight uncertain prospects as they make choices, with the weights behaving like probabilities.

The explicit language of probability is not needed when decisions are made in small groups and when model construction and use are carried out by the same small group, though even there it may be useful in organizing thought. But when a large staff must communicate about these issues, or when researchers write articles or reports for a broad audience of potential users, having a clear framework for discussing uncertainty is necessary, and probability is the only internally consistent and complete language for that discussion.
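As a concrete illustration of the blending of uncertain prior beliefs with sample information described above, the sketch below works through the simplest conjugate case, a normal prior combined with a normal likelihood, where the posterior mean is a precision-weighted average of the prior mean and the sample mean. The setup and all numbers are my own illustrative assumptions, not anything specified in the paper.

```python
# Minimal sketch (hypothetical values): combining an uncertain prior belief
# about a parameter theta with sample information via a normal-normal model.
import numpy as np

rng = np.random.default_rng(1)

# Assumed prior belief: theta ~ N(0.5, 0.2^2); assumed data: y_i ~ N(theta, 1).
prior_mean, prior_sd = 0.5, 0.2
data_sd = 1.0
y = rng.normal(0.8, data_sd, size=20)      # simulated sample

prior_prec = 1.0 / prior_sd**2             # precision = 1 / variance
data_prec = len(y) / data_sd**2
post_prec = prior_prec + data_prec         # precisions add under conjugacy
post_mean = (prior_prec * prior_mean + data_prec * y.mean()) / post_prec
post_sd = post_prec ** -0.5

print(f"prior:     mean={prior_mean:.3f}, sd={prior_sd:.3f}")
print(f"data only: mean={y.mean():.3f}, sd={data_sd / np.sqrt(len(y)):.3f}")
print(f"posterior: mean={post_mean:.3f}, sd={post_sd:.3f}")
# Imposing the prior belief as if it were known exactly would shrink the
# reported uncertainty below what the decision maker actually faces.
```

The resulting posterior is exactly the kind of object a policy-maker can use to weight uncertain consequences by probability and compare expected outcomes of alternative actions, in the sense described in the paragraph above.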
Being explicit about uncertainty is difficult and time is limited, so we cannot expect that discussions of policy, models and data will always or even usually take place in a complete, formal probability framework. But we should recognize that such a framework is what our actual procedures should aim at approximating.

The recent history of central bank macroeconomic modeling has seemed to ignore entirely the need for explicit probability modeling. The models that are in actual use as frameworks for discussion in the policy-making process have abandoned the theoretical framework of the Cowles Foundation approach that Haavelmo's ideas set in motion. They have not replaced it with another probability-modeling framework, but rather with a reincarnation of the single-equation fitting approach of Tinbergen. There is no attempt to construct a joint likelihood for the observed time series, and no attempt to assess whether the model's own structure can support the single-equation methods used to estimate it. No model-generated measures of uncertainty play any important role in policy discussions. This is not, apparently, a principled rejection of probability modeling in favor of some other principled paradigm for combining data with judgment in policy-making. The models still do come with standard errors attached to parameters, developed by frequentist methods that ignore the multivariate model structure; and there is no claim that this is good procedure or even any explicit apology for its incoherence.
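For reference, what "constructing a joint likelihood for the observed time series" amounts to can be written compactly; the notation below is generic and mine, not taken from the paper.

```latex
% Joint density of the observed vector time series y_1,...,y_T given the
% model's parameter vector \theta, factored one period at a time:
\[
  p(y_1,\dots,y_T \mid \theta)
    \;=\; p(y_1 \mid \theta)\,\prod_{t=2}^{T} p\bigl(y_t \mid y_{t-1},\dots,y_1,\,\theta\bigr).
\]
% Bayesian inference combines this likelihood with a prior p(\theta):
\[
  p(\theta \mid y_1,\dots,y_T) \;\propto\; p(y_1,\dots,y_T \mid \theta)\, p(\theta).
\]
% Fitting each equation separately never writes down the joint density on the
% left, so the resulting model cannot deliver system-wide measures of uncertainty.
```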

English conclusion

It is a little discouraging that the biggest recent model reform, BEQM, which was worked on in part after the Smets and Wouters proof of concept, represents possibly the most complete turning away from probability-based inference of any large central bank model. On the other hand, there is apparently interest at the Bank of England in attempting a probability-based approach to BEQM or a variant of it, and there is active research toward building probability-based models at many central banks and other policy institutions. Developments in computational power and in statistical and economic theory seem to be coming together to promise a period of rapid progress in policy modeling.