Download ISI English Article No. 28670
Article Title

Indirect inference and calibration of dynamic stochastic general equilibrium models
Article Code: 28670 | Publication Year: 2007 | Length: 34 pages (PDF)
Source

Publisher: Elsevier - Science Direct

Journal: Journal of Econometrics, Volume 136, Issue 2, February 2007, Pages 397–430

Keywords

Calibration, Indirect inference, Structural models, Real business cycle, Asset pricing

Abstract

We advocate in this paper the use of a sequential partial indirect inference (SPII) approach, in order to account for the calibration practice in which dynamic stochastic general equilibrium (DSGE) models are studied only through their ability to reproduce some well-chosen moments. We stress that, despite a lack of statistical formalization, the controversial calibration methodology addresses a genuine issue: the consequences of misspecification in highly nonlinear and dynamic structural macro-models. We argue that a well-driven SPII strategy may be seen as a rigorous calibrationist approach, one that captures both the advantages of calibration (accounting for structural, "a-statistical" ideas) and those of the inferential approach (a precise appraisal of loss functions and conditions of validity). This methodology should be useful for the empirical assessment of structural models such as those stemming from real business cycle theory or the asset pricing literature.

Introduction

Dynamic stochastic general equilibrium (DSGE) models are the common framework of new classical macroeconomics, with the ambition to provide structural microfoundations for macroeconomics. However, this ambition comes at a price. Nobody can believe that DSGE models provide a descriptively realistic account of the economic process. "Of course, the model is not 'true'" (Lucas, 1987), and this is probably why the advent of DSGE models has led new classical macroeconomics to turn to calibration methods as an alternative to classical econometrics, with its apparatus of estimation and testing. The endorsement of calibration as an alternative to estimation, and the related endorsement of verification as an alternative to statistical tests, may lead to the conclusion that "the new classical macroeconomics is now divided between calibrators and estimators" (Hoover, 1995). However, some econometricians claim that considering, as Lucas (1987) and Kydland and Prescott (1991) do, that "the specification errors being committed are of sufficient magnitude as to make conventional estimation and testing of dubious value" is simply a misunderstanding of econometrics, since "traditional model building never proceeded under the assumption that any model was true" (Kim and Pagan, 1995). The approach we advocate in this paper lies between these two extreme views: that the unrealistic features of DSGE models should lead one to eschew orthodox econometrics altogether, or that calibrators simply fail to understand that traditional econometrics "never proceeded under the assumption that any model was true". On the contrary, we think that econometricians have something to learn from calibrators, and we try to go further in the research program put forward by Hansen and Heckman (1996): "model calibration and verification can be fruitfully posed as econometric estimation and testing problems". We argue, in contrast to the "never" claim above, that more often than not econometric practice is seriously flawed by a maintained assumption of model truth. The recent resurgence in popularity of maximum likelihood (MLE) approaches to DSGE models shows precisely that many econometricians still consider MLE the best thing to do, at least when it is tractable. However, econometric theory offers no compelling argument in favor of MLE in the case of misspecified models. Of course, the properties of MLE under misspecification, also called quasi- or pseudo-maximum likelihood (QMLE), have been well known since White (1982) and Gouriéroux et al. (1984). The former stresses that QMLE converges towards a pseudo-true value of the unknown parameters and that its asymptotic variance is no longer given by the usual Cramér-Rao bound but must be replaced by the so-called sandwich formula; the latter characterizes the very restrictive assumptions under which the pseudo-true value coincides with the true unknown value. In other words, not only does QMLE fail to deliver an efficient asymptotic variance but, even worse, it selects a pseudo-true value of the unknown parameters which may be quite different from the one associated with an economically meaningful loss function. The econometrician's hopeless search for a well-specified parametric model (a "quest for the Holy Grail", as dubbed by Monfort (1996)) and the associated efficient estimators remains popular even when MLE becomes intractable due to highly nonlinear dynamic structures involving latent variables.
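To make the QMLE discussion concrete, here is a minimal sketch (a toy AR(1) illustration of our own, not an example from the paper) of the White (1982) sandwich covariance: a Gaussian pseudo-likelihood is fitted to data generated with fat-tailed shocks, and the robust variance A⁻¹BA⁻¹ is contrasted with the naive information-matrix formula that would be valid only if the model were true.

```python
# Sketch: QMLE of a deliberately misspecified Gaussian AR(1) pseudo-likelihood.
# The true shocks are Student-t, so the QMLE converges to a pseudo-true value
# and its variance takes the sandwich form A^{-1} B A^{-1} (White, 1982).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T = 2000
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.6 * y[t - 1] + rng.standard_t(df=4)  # fat tails: Gaussian lik. is wrong

def obs_loglik(theta, y):
    """Per-observation Gaussian log-likelihood contributions."""
    rho, log_sigma = theta
    sigma2 = np.exp(2.0 * log_sigma)
    e = y[1:] - rho * y[:-1]
    return -0.5 * (np.log(2.0 * np.pi * sigma2) + e ** 2 / sigma2)

res = minimize(lambda th: -np.sum(obs_loglik(th, y)),
               x0=np.array([0.0, 0.0]), args=(), method="BFGS")
theta_hat = res.x

# Numerical per-observation scores for the outer-product matrix B
h = 1e-5
S = np.zeros((T - 1, 2))
for j in range(2):
    tp, tm = theta_hat.copy(), theta_hat.copy()
    tp[j] += h
    tm[j] -= h
    S[:, j] = (obs_loglik(tp, y) - obs_loglik(tm, y)) / (2.0 * h)

Hinv = res.hess_inv              # BFGS approximation to A^{-1} (Hessian of -loglik)
B = S.T @ S                      # sum of outer products of scores
cov_sandwich = Hinv @ B @ Hinv   # robust: valid under misspecification
cov_naive = Hinv                 # "information matrix" formula: valid only if true
```

On the simulated data the two covariance matrices differ noticeably, which is exactly the point of the sandwich correction discussed above.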
The efficiency properties of the "efficient method of moments" (EMM, Gallant and Tauchen, 1996), or more generally of the generalized method of moments (GMM, Hansen, 1982), the simulated method of moments (SMM, Duffie and Singleton, 1993) and indirect inference (II, Gouriéroux et al., 1993) when the set of moment conditions is sufficiently large to span the likelihood scores, are often advocated as if the likelihood score were well specified. Actually, not only should one not forget that we are most often dealing with a pseudo-score, but the resort to simulation requires even more care, since the likely misspecified structural parametric model is used as the simulator. This paper is a contribution to the econometric literature that has "attempted to tame calibration and return it to the traditional econometric fold" by interpreting "calibration as a form of estimation by simulation" (Hoover, 1995), along the lines of Manuelli and Sargent (1988), Gregory and Smith (1990), Canova (1994) and Bansal et al. (1995). However, even more focus is put here on the likely severe misspecification of structural models stemming from the DSGE literature. This leads us to an explicit account of calibrators' recommendations, while showing that they can be made compatible with a well-established approach to econometrics. In other words, we aim at delineating a coherent methodology able to combine the advantages of the inferential approach (estimation, confidence sets and specification testing) with those of the calibration approach, which correspond, in our opinion, to consistent estimation of some structural parameters of interest and to robust prediction and induction despite misspecification of the structural model. The contributions of this paper are threefold. First, we point out that asymptotic variance formulas for any kind of simulated moment-based method (SMM, EMM or II) must incorporate sandwich-type formulas, both for the choice of efficient weighting matrices and for the asymptotic variance of the estimators. Forgetting this correction is even more detrimental than for QMLE, since two kinds of sandwich formulas must be taken into account: one for the data generating process (DGP) and one for the simulator, which differs from the DGP in case of misspecification. Moreover, since only endogenous variables are simulated, correct formulas for asymptotic variance matrices require a specific treatment of exogenous variables. In this respect, we extend the Gouriéroux et al. (1993) theory of II to the case of a possibly misspecified simulator. As with QMLE, misspecification may not only invalidate standard asymptotic variance formulas but, more importantly, may lead the econometrician to consistently estimate a pseudo-true value which has nothing to do with the true unknown value of the parameters of interest. The second contribution of this paper is to put forward the encompassing-test methodology as a way to focus SMM, or more generally II, estimators on the consistent estimation of the true unknown value $\theta_1^0$ of a subset $\theta_1$ of the full set $\theta=(\theta_1,\theta_2)$ of structural parameters. While a fully parametric model, that is, a family of probability distributions indexed by $\theta=(\theta_1,\theta_2)$, is needed to get a simulator, there is no hope of finding economic theoretical underpinnings for such parametric DSGE models, which cannot be more than a crude idealization of the economic process.
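As a concrete illustration of the II logic discussed above, the following sketch (a toy AR(1) example of our own; the model, auxiliary parameters and tuning constants are assumptions, not taken from the paper) matches instrumental parameters estimated on observed data with their averages over paths simulated from the structural model, in the spirit of Gouriéroux et al. (1993).

```python
# Minimal indirect-inference sketch: estimate structural parameters by matching
# auxiliary (instrumental) parameters computed on the data with those computed
# on simulated paths from the structural model.
import numpy as np
from scipy.optimize import minimize

T, S = 1000, 10  # sample size and number of simulated paths

def simulate(theta, T, seed):
    """Structural simulator: a toy AR(1) with autoregressive and scale parameters."""
    rho, sigma = theta
    g = np.random.default_rng(seed)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + sigma * g.standard_normal()
    return y

def beta_hat(y):
    """Instrumental parameters: OLS AR(1) coefficient and residual variance."""
    rho = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
    return np.array([rho, np.var(y[1:] - rho * y[:-1])])

y_obs = simulate((0.7, 1.0), T, seed=123)  # stand-in for the observed data
b_obs = beta_hat(y_obs)

def ii_objective(theta):
    # Fixed seeds = common random numbers, keeping the criterion smooth in theta
    b_sim = np.mean([beta_hat(simulate(theta, T, seed=s)) for s in range(S)], axis=0)
    d = b_obs - b_sim
    return d @ d  # identity weighting; an efficient choice needs sandwich-type matrices

res = minimize(ii_objective, x0=np.array([0.5, 0.5]), method="Nelder-Mead")
```

The identity weighting used here is deliberate: as the text stresses, the "efficient" weighting matrix is itself a sandwich-type object once misspecification is acknowledged.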
Unfortunately, the moment-matching strategy of estimation is an indirect approach to inference about the structural parameters $\theta$: it goes through a binding function $\beta(\theta)$ relating the structural parameters $\theta$ to some instrumental parameters $\beta$ which can be directly estimated from their sample counterparts. Note that, in this respect, the II approach to nonlinear, analytically intractable structural models is nothing but an extension of the old indirect least-squares approach to linear simultaneous equations models. In our nonlinear and misspecified structural context, it is unfortunately highly hazardous to expect a consistent estimator of the true unknown value $\theta_1^0$ of a subset $\theta_1$ when solving with respect to $\theta_1$ a sample (and possibly simulation-based) counterpart of the equations $\beta(\theta_1,\theta_2^*)=\beta^0$, where $\beta^0$ denotes the true unknown value of the instrumental parameters $\beta$ (by definition easy to estimate) but $\theta_2^*$ is only a pseudo-true value of $\theta_2$. The necessary condition, namely $\beta^0=\beta(\theta_1^0,\theta_2^*)$, means precisely that the structural model, albeit misspecified, encompasses the instrumental one. The encompassing requirement typically means that, if we do not want to proceed under the maintained assumption that the structural model is true, we must be parsimonious with respect to the number of moments to match, or more generally with respect to the scope of macroeconomic evidence captured by the instrumental model as parameterized by $\beta$, for instance the coefficients of a vector autoregression. This is at odds with the efficiency goal advocated by Bansal et al. (1995) in endorsing the EMM approach to calibration: "if a structural model is to be implemented and evaluated on statistical criteria i.e. one wants to take seriously statistical test and inference, the structural model has to face all empirically relevant aspects of the data". We are rather inclined to think, on the contrary, like Prescott (1983) that "if any observation can be rationalized with some approach, then that approach is not scientific", or at least like Lucas (1980) that "insistence on the 'realism' of an economic model subverts its potential usefulness in thinking about reality". Economic reality may be interestingly captured by the parameters of interest $\theta_1$, while there is no hope of finding the Holy Grail of a fully parametric true model indexed by $(\theta_1,\theta_2)$. Then, as often stressed by calibrators, it is important to have in mind a hierarchy of moments, with first place given to some specific $\beta$s such as unconditional means, variances and correlations, rather than more sophisticated characteristics of conditional probability distributions. The key point is that, while a true parametric model defining a true unknown value $(\theta_1^0,\theta_2^0)$ would by definition ensure the necessary encompassing condition whatever the dimension of $\beta$ (even, in the limit, an infinite-dimensional vector $\beta$ of auxiliary parameters, as for EMM), the equations $\beta(\theta_1,\theta_2^*)=\beta^0$ will characterize the true unknown $\theta_1^0$, whatever the misspecification about $\theta_2$, only if we have chosen a convenient instrumental model which does not capture what goes wrong in the paths simulated from the structural model endowed with the fictitious value $(\theta_1,\theta_2^*)$ of the structural parameters. This is why we advocate in this paper the partial indirect inference (PII) approach.
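In the notation above, the full and partial encompassing conditions, and the resulting partial moment-matching estimator, can be written compactly (the weighting matrix $\Omega$ and the calibrated value $\bar\theta_2$ are our shorthand, not necessarily the paper's exact notation):

```latex
% Full encompassing: the misspecified structural model, at the true theta_1^0
% and the pseudo-true theta_2^*, reproduces every instrumental parameter:
\[
  \beta^0 = \beta(\theta_1^0, \theta_2^*).
\]
% Partial encompassing: only a well-chosen subvector beta_1 is reproduced,
% which is enough to characterize theta_1^0 despite misspecification in theta_2:
\[
  \beta_1^0 = \beta_1(\theta_1^0, \theta_2^*),
  \qquad
  \hat{\theta}_{1,T} = \arg\min_{\theta_1}
  \bigl(\hat{\beta}_{1,T} - \beta_1(\theta_1, \bar{\theta}_2)\bigr)'\,
  \Omega\,
  \bigl(\hat{\beta}_{1,T} - \beta_1(\theta_1, \bar{\theta}_2)\bigr).
\]
```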
PII is well suited to cases of partial encompassing, in which only a subset of the encompassing equations $\beta(\theta_1^0,\theta_2^*)=\beta^0$ appears to be fulfilled. By restricting ourselves to such a subset, we may have to renounce complete identification of the vector $\theta$ of structural parameters. In contrast to a narrow view of econometric identification, this is typically acceptable insofar as the underidentification only concerns some "pseudo-parameters" $\theta_2$, that is, quantities known to be poorly related to economic reality as captured by our structural model. Then, as calibrators do, we propose to fix the value of these unidentified parameters at some "reasonable" levels. These "calibrated" values are needed to perform the simulations that determine the binding function, but they do not contaminate the subset of equations for which the encompassing property turns out to be fulfilled. In other words, we find a rationale for the calibration practice within a well-founded econometric methodology. A good reason not to apply neutral moment matching to identify all the parameters is that it is only along some selected dimensions that we may hope to get meaningful quantitative assessments from our structural model. For example, as Hansen and Heckman (1996) remind us, some "particular time series frequencies could be deemphasized in adopting an estimation criterion because misspecification of a model is likely to contaminate some frequencies more than others (Hansen and Sargent, 1993)". By still seeking econometric identification of all the structural parameters $\theta_1$ and $\theta_2$, the econometrician runs the risk of contaminating the estimation of the parameters of interest $\theta_1$ with the likely misspecification of the part of the model concerning $\theta_2$. Strikingly, our PII extension of the Gouriéroux et al. (1993) theory of II fully concurs, even in its terminology, with Hoover's characterization of the Lucas (1980) and Prescott (1983) "discipline of the calibration method": it "comes from the paucity of free parameters (…) in some sense, the calibration method would appear to be a kind of indirect estimation". More precisely, we claim that it is because the estimation of structural models is generally "indirect", in the sense that it works through a binding function relating structural parameters to instrumental ones, that calibration matters for pinning down some "key parameters" $\theta_2$ from the calibrator's knowledge rather than from an orthodox moment-matching procedure. These parameters are key because they define components of $\theta_2$ which would prevent us from getting full encompassing, and thus from consistently estimating the parameters of interest $\theta_1$, were their estimation contaminated by the identification of $\theta_2$. A third contribution of this paper is to propose a sequential approach to PII, in order to accommodate not only the calibration step but also the verification step of common empirical practice for DSGE models. More precisely, we agree with calibrators that specification tests should focus only on the reproduction of the stylized facts the structural model is meant to reproduce. But our additional discipline amounts to a second step of specification testing, once the parameters of interest $\theta_1$ have, hopefully, been consistently estimated in a first step by matching moments simulated with a possibly calibrated $\theta_2$.
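A minimal sketch of the PII idea (our own toy instantiation, assuming an AR(1) structural model; nothing here reproduces the paper's applications): the first-order autocorrelation identifies $\theta_1=\rho$ whatever the calibrated value of $\theta_2=\sigma$ and whatever the shock distribution, so matching that single moment yields a consistent estimator despite misspecification.

```python
# PII sketch: theta2 (the shock scale) is calibrated, not estimated, and only
# the instrumental moment believed to satisfy encompassing (the autocorrelation)
# is matched; the contaminated moment (the variance) is left out of the criterion.
import numpy as np
from scipy.optimize import minimize_scalar

T, S = 2000, 10
rng = np.random.default_rng(7)

# "True" economy: AR(1) with rho = 0.7 and fat-tailed shocks (misspecification)
y_obs = np.zeros(T)
for t in range(1, T):
    y_obs[t] = 0.7 * y_obs[t - 1] + rng.standard_t(df=4)

def simulate(rho, sigma, T, seed):
    g = np.random.default_rng(seed)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + sigma * g.standard_normal()  # Gaussian simulator
    return y

def autocorr(y):
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

SIGMA_CALIBRATED = 1.0  # theta2 fixed a priori; the (wrong) scale is harmless here
b1_obs = autocorr(y_obs)  # the single instrumental moment we trust

def pii_objective(rho):
    b1_sim = np.mean([autocorr(simulate(rho, SIGMA_CALIBRATED, T, s)) for s in range(S)])
    return (b1_obs - b1_sim) ** 2

rho_hat = minimize_scalar(pii_objective, bounds=(0.0, 0.99), method="bounded").x
# rho_hat is consistent for 0.7 although both the shock distribution and the
# calibrated sigma are "wrong": the matched moment depends on neither of them.
```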
The second step of simulated moment matching (or of minimization of any economically meaningful loss function) with respect to these previously calibrated components aims at keeping the degree of misspecification at a reasonable level, that is, at ensuring that there is no gross inability of the structural model to reproduce the economically meaningful moments. Since the procedure has two steps, we call it sequential partial indirect inference (SPII). In our opinion, this two-step simulated moment-matching methodology remains exactly true to the calibrators' point of view: reproducing some dimensions of interest under the constraint that some structural parameters of interest are consistently estimated. It is precisely because the requirement of consistency is maintained that the two steps are disentangled, whatever the cost in terms of efficiency of a two-step estimation procedure. Of course, if the structural model were well specified, a one-step joint estimator of $\theta_1$ and $\theta_2$ would be preferable. The aim of roughly reproducing the broad economic reality of interest must not make us run the risk of inconsistently estimating the crucial structural parameters; otherwise, the approach would be purely data-based. We claim on the contrary (see e.g. our reinterpretation below of the Mehra and Prescott (1985) equity premium puzzle exercise) that consistent estimation of a few structural parameters is a binding constraint for calibrators. The second step of verification, as we perform it, is consistent with Canova's (1994) interpretation of the calibration practice. The question asked is: "Given that the model is false, how true is it?". As already mentioned, this paper is far from being the first to address the issue of a statistical appraisal of the calibration methodology. However, only a few papers have focused on the consequences of misspecification in simulated moment matching. While intriguing Bayesian approaches to the calibration of misspecified models have been proposed by Canova (1994), DeJong et al. (1996), Geweke (1999) and Schorfheide (2000), we argue that SPII is the convenient way to accommodate misspecification from a frequentist point of view. The paper is organized as follows. In Section 2, the issues of interest and the general framework to address them are defined through some template examples from the calibration literature. The statistical theory of PII is set up in Section 3. Section 4 is devoted to sequential extensions of PII, and Section 5 concludes.
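The two-step logic can be sketched as follows (again a toy instantiation of our own, with assumed moments: the autocorrelation for step one, the variance for step two, and excess kurtosis as an informal verification statistic).

```python
# Two-step SPII sketch: step 1 pins down theta1 by PII with theta2 calibrated;
# step 2 revisits theta2 by matching the remaining moment of interest, holding
# the step-1 estimate fixed; a leftover gap serves as informal verification.
import numpy as np
from scipy.optimize import minimize_scalar

T, S = 2000, 10
rng = np.random.default_rng(11)
y_obs = np.zeros(T)
for t in range(1, T):
    y_obs[t] = 0.7 * y_obs[t - 1] + rng.standard_t(df=4)  # fat tails: misspecified

def simulate(rho, sigma, T, seed):
    g = np.random.default_rng(seed)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + sigma * g.standard_normal()
    return y

autocorr = lambda y: (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

# Step 1: PII for theta1 = rho, with theta2 = sigma calibrated at 1.0
def step1(rho, sigma_cal=1.0):
    sim = np.mean([autocorr(simulate(rho, sigma_cal, T, s)) for s in range(S)])
    return (autocorr(y_obs) - sim) ** 2

rho_hat = minimize_scalar(step1, bounds=(0.0, 0.99), method="bounded").x

# Step 2: re-match the economically meaningful moment (variance) in sigma,
# holding the consistently estimated rho_hat fixed
def step2(sigma):
    sim = np.mean([np.var(simulate(rho_hat, sigma, T, s)) for s in range(S)])
    return (np.var(y_obs) - sim) ** 2

sigma_hat = minimize_scalar(step2, bounds=(0.1, 5.0), method="bounded").x

# Verification: how badly does the fitted simulator miss a moment it was never
# asked to match? (Excess kurtosis is zero under Gaussian shocks.)
kurt = lambda y: np.mean((y - y.mean()) ** 4) / np.var(y) ** 2 - 3.0
gap = kurt(y_obs) - np.mean([kurt(simulate(rho_hat, sigma_hat, T, s)) for s in range(S)])
```

Keeping the two steps separate mirrors the discipline described above: step 2 may sacrifice efficiency, but it cannot undo the consistency secured in step 1.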

Conclusion

The SPII methodology proposed in this paper aims at reconciling the calibration and verification steps of the calibrationist approach with their econometric counterparts, that is, estimation and testing procedures. We propose a general framework of multistep estimation and testing:

• First, for a given (calibrated) value $\bar\theta_{22}$ of some nuisance parameters, a consistent asymptotically normal estimator $\hat\theta_{1,T}^{S_1}(\bar\theta_{22})$ of the vector $\theta_1$ of parameters of interest is obtained by partial indirect inference. A pseudo-true value $\bar\theta_{21}$ of some other nuisance parameters may also be consistently estimated by the same token.

• Second, overidentification of the vector $(\theta_1,\theta_{21})$ of structural parameters by the selected instrumental moments $\beta_1$ provides a specification test of the pair (structural model, instrumental model).

• Finally, the verification step, including a statistical assessment of the calibrated value $\bar\theta_{22}$, can be performed through another instrumental model $N_\psi$.

The proposed formalization enables us to answer most of the common statistical criticisms of the calibration methodology, insofar as one succeeds in splitting the model into some true identifying moment conditions and some nominal assumptions. The main message is twofold. First, acknowledging that any structural model is misspecified, while aiming at producing consistent estimators of the true unknown value of some parameters of interest as well as robust predictions, one should rely, as informally advocated in calibration exercises, on parsimonious and well-chosen dimensions of interest. Second, in so doing, it may be the case that simultaneous joint estimation of the true unknown value of the parameters of interest and of the pseudo-true value of the nuisance parameters is impossible. In this context, one should resort to the two-step procedure that we call sequential partial indirect inference (SPII). This basically introduces a general loss function, and again corresponds to a statistical formalization of common practice in calibration exercises using previous estimates and a priori selection.
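Schematically, and in the notation of the text (the arrangement below is our own summary, not a formal result from the paper):

```latex
% Multistep SPII framework, schematic summary
\begin{itemize}
  \item Step 1 (PII, with $\theta_{22}$ calibrated at $\bar\theta_{22}$):
        $\hat\theta_{1,T}^{S_1}(\bar\theta_{22}) \xrightarrow{p} \theta_1^0$,
        and $\hat\theta_{21,T} \xrightarrow{p} \bar\theta_{21}$ (a pseudo-true value).
  \item Step 2 (specification test): overidentification of $(\theta_1,\theta_{21})$
        by the instrumental moments $\beta_1$ yields a test of the pair
        (structural model, instrumental model).
  \item Step 3 (verification): the calibrated $\bar\theta_{22}$ is assessed
        through a second instrumental model $N_\psi$.
\end{itemize}
```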