Download English ISI Article No. 28662

English Title
Evaluation of dynamic stochastic general equilibrium models based on distributional comparison of simulated and historical data
Article code: 28662 | Publication year: 2007 | Length: 25 pages (PDF)
Source

Publisher: Elsevier - Science Direct

Journal: Journal of Econometrics, Volume 136, Issue 2, February 2007, Pages 699–723

English Keywords
Real business cycles, Output, Empirical distribution, Simulated models, Model selection

Abstract

We take as a starting point the existence of a joint distribution implied by different dynamic stochastic general equilibrium (DSGE) models, all of which are potentially misspecified. Our objective is to compare “true” joint distributions with those generated by given DSGE models. This is accomplished by comparing the empirical joint distributions (or confidence intervals) of historical and simulated time series. The tool draws on recent advances in the theory of the bootstrap, on Kolmogorov-type testing, and on other work on the evaluation of DSGE models aimed at comparing the second order properties of historical and simulated time series. We begin by fixing a given model as the “benchmark” model, against which all “alternative” models are to be compared. We then test whether at least one of the alternative models provides a more “accurate” approximation to the true cumulative distribution than does the benchmark model, where accuracy is measured in terms of distributional square error. Bootstrap critical values are discussed, and an illustrative example is given, in which it is shown that alternative versions of a standard DSGE model, in which calibrated parameters are allowed to vary slightly, perform equally well. On the other hand, there are stark differences between models when the shocks driving them are assigned implausible variances and/or distributional assumptions.
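The distributional square error idea above can be sketched in a few lines: measure how far the empirical CDF of simulated data sits from that of the historical record. This is a simplified, univariate stand-in for the paper's procedure (the Gaussian samples, sample sizes, and variance choices below are illustrative assumptions, not the paper's setup, and no parameter estimation error is accounted for):

```python
import numpy as np

def ecdf_sq_error(historical, simulated):
    """Mean squared distance between two empirical CDFs, evaluated on the
    pooled sample points; a simplified analogue of distributional square error."""
    grid = np.sort(np.concatenate([historical, simulated]))
    f_hist = np.searchsorted(np.sort(historical), grid, side="right") / len(historical)
    f_sim = np.searchsorted(np.sort(simulated), grid, side="right") / len(simulated)
    return float(np.mean((f_hist - f_sim) ** 2))

rng = np.random.default_rng(0)
historical = rng.normal(0.0, 1.0, 500)        # stand-in for the historical record
sim_plausible = rng.normal(0.0, 1.0, 5000)    # model with a plausible shock variance
sim_implausible = rng.normal(0.0, 3.0, 5000)  # same model, implausible shock variance
print(ecdf_sq_error(historical, sim_plausible) < ecdf_sq_error(historical, sim_implausible))
```

The model whose simulated distribution is closer to the historical one attains the smaller error, mirroring the abstract's finding that implausible shock variances produce stark distributional differences.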

Introduction

In this paper, we merge recent econometric advances in bootstrapping time series and Kolmogorov-type testing with recent developments in the evaluation of dynamic stochastic general equilibrium (DSGE) models. This is accomplished via the construction of a new tool for comparing the empirical joint distribution of historical time series with the empirical distribution of simulated time series. Since the seminal papers by Kydland and Prescott (1982), Long and Plosser (1983), and King et al. (1988a, b), substantial attention has been given to the problem of reconciling the dynamic properties of data simulated from DSGE models, and in particular from real business cycle (RBC) models, with the historical record. A partial list of advances in this area includes: (i) the examination of how well RBC-simulated data reproduce the covariance and autocorrelation functions of actual time series (see e.g. Watson, 1993); (ii) the comparison of DSGE and historical spectral densities (see e.g. Diebold et al., 1998a); (iii) the evaluation of the difference between the second order time series properties of vector autoregression (VAR) predictions and out-of-sample predictions from DSGE models (see e.g. Schmitt-Grohe, 2000); (iv) the construction of Bayesian odds ratios for comparing DSGE models with unrestricted VAR models (see e.g. Chang et al., 2002, and Fernandez-Villaverde and Rubio-Ramirez, 2004); (v) the comparison of historical and simulated data impulse response functions (see e.g. Cogley and Nason, 1995); (vi) the formulation of “reality” bounds for measuring how close the density of a DSGE model is to the density associated with an unrestricted VAR model (see e.g. Bierens and Swanson, 2000); and (vii) loss function based evaluation of DSGE models (Schorfheide, 2000).

The papers listed above are mainly concerned with the issue of model evaluation. Another strand of the literature is instead mainly concerned with providing alternatives to calibration (see e.g. 
DeJong et al., 2000 for a Bayesian perspective in which prior distributions are constructed around calibrated structural parameters). In most of the above papers, the issue of singularity (i.e. when the number of variables in the model is larger than the number of shocks) is circumvented by considering only a subset of variables for which a non-degenerate distribution exists.

Our work is closest to the first strand of the literature. In particular, our paper adds to the model evaluation literature by introducing a measure of the “goodness of fit” of RBC models that applies standard notions of Kolmogorov distance and draws on advances in the theory of the bootstrap. The papers cited above primarily address the case in which the objective is to test for the correct specification of some aspect of a given candidate model. In the case of DSGE models, however, it is usually crucial to account for the fact that all models are approximations, and so are misspecified. For this reason, the testing procedure that we develop evaluates the relative degree of misspecification of a group of competing models by comparing empirical distribution functions of historical data with those of DSGE-simulated data. The DSGE models of interest in our context are simulated using both calibrated parameters (with calibrated values suggested by KPR, i.e. King et al., 1988a, b, for example) and parameters estimated from actual data, along the lines of Christiano (1988) and Christiano and Eichenbaum (1992).

One important feature of our approach is that we begin by fixing a given DSGE model as the “benchmark” model, against which all “alternative” models are compared. The comparison is done using a distributional generalization of White's (2000) reality check, which assesses whether at least one of the alternative models provides a more “accurate” approximation to the true cumulative distribution than does the benchmark model.
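The benchmark-versus-alternatives logic of a reality-check style comparison can be illustrated with a toy statistic: the largest improvement in distributional loss that any alternative achieves over the benchmark. Everything below is an illustrative assumption (the loss is a simple empirical-CDF squared error, the samples are Gaussian stand-ins); the paper's actual test also handles parameter estimation error and time dependence:

```python
import numpy as np

def cdf_loss(historical, simulated):
    """Squared distance between empirical CDFs, evaluated at the historical points."""
    grid = np.sort(historical)
    f_h = np.arange(1, len(grid) + 1) / len(grid)
    f_s = np.searchsorted(np.sort(simulated), grid, side="right") / len(simulated)
    return float(np.mean((f_h - f_s) ** 2))

def reality_check_stat(historical, sim_benchmark, sim_alternatives):
    """Max over alternatives of (benchmark loss - alternative loss); a large
    positive value suggests some alternative approximates the true CDF better."""
    base = cdf_loss(historical, sim_benchmark)
    return max(base - cdf_loss(historical, s) for s in sim_alternatives)

rng = np.random.default_rng(1)
hist = rng.normal(0, 1, 400)                # stand-in for historical data
benchmark = rng.normal(0, 2, 4000)          # benchmark with a misspecified shock variance
alternatives = [rng.normal(0, 1, 4000),     # a better-specified alternative
                rng.normal(0, 3, 4000)]     # a worse alternative
stat = reality_check_stat(hist, benchmark, alternatives)
```

A positive statistic indicates that at least one alternative beats the benchmark in this loss; whether it is *significantly* positive is what the bootstrap critical values decide.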
One key element of our approach is that we measure “accuracy” in terms of square error, as in Corradi and Swanson (2005a). We also outline the relationship between our measure of accuracy and the Kullback–Leibler Information Criterion (KLIC). DSGE model evaluation based on KLIC measures of accuracy is considered by Fernandez-Villaverde and Rubio-Ramirez (2004) and Chang et al. (2002). For extensions of the methodology in this paper to the case of predictive density and conditional confidence interval accuracy evaluation, the reader is referred to Corradi and Swanson (2005b, c).

As mentioned above, our statistic is based on a comparison of historical and simulated distributions. The limiting distribution of the statistic is a functional of a Gaussian process with a covariance kernel that reflects the contribution of parameter estimation error. This limiting distribution is thus not nuisance parameter free, and critical values cannot be tabulated. In order to obtain valid asymptotic critical values, we suggest two block bootstrap procedures, each of which depends on the relative rate of growth of the actual and simulated sample sizes. In addition, we circumvent the issue of singularity by considering a subset of variables (and their lagged values) for which a non-singular distribution exists.

Our testing framework can be used to address questions of the following sort: (i) For a given DSGE model, what is the relative usefulness of different sets of calibrated parameters for mimicking different dynamic features of output growth? (ii) Given a fixed set of calibrated parameters, what is the relative performance of DSGE models driven by shocks with different marginal distributions?
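Since the critical values cannot be tabulated, they must be simulated; a minimal sketch of how moving-block bootstrap critical values could be obtained for a time series statistic is below. The block length, replication count, and the toy statistic (the sample mean) are illustrative assumptions; the paper's two procedures additionally account for parameter estimation error and for the ratio of actual to simulated sample sizes:

```python
import numpy as np

def block_resample(series, block_len, rng):
    """One overlapping (moving-block) bootstrap resample: concatenate randomly
    chosen blocks, so short-run dependence within each block is preserved."""
    n = len(series)
    n_blocks = -(-n // block_len)  # ceiling division
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    return np.concatenate([series[s:s + block_len] for s in starts])[:n]

def bootstrap_critical_value(series, statistic, block_len=25, reps=499,
                             alpha=0.10, seed=0):
    """Empirical (1 - alpha) quantile of the statistic across bootstrap resamples."""
    rng = np.random.default_rng(seed)
    draws = [statistic(block_resample(series, block_len, rng)) for _ in range(reps)]
    return float(np.quantile(draws, 1.0 - alpha))

rng = np.random.default_rng(2)
y = rng.normal(0, 1, 600)  # stand-in for an observed time series
cv = bootstrap_critical_value(y, statistic=np.mean)
```

The test then rejects when the statistic computed on the actual data exceeds `cv`; the block structure is what lets the resamples mimic the time series dependence of the data.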
In order to illustrate how the proposed testing framework can be used, we consider the RBC model of Christiano (1988), characterized by flexible labor supply, capital depreciation, and two shocks: a permanent shock affecting technology and a transitory shock affecting preferences. Data are then simulated, and various versions of the model are compared in terms of their ability to reproduce the joint distribution of current output, lagged output, current hours worked, and lagged hours worked. The illustrations suggest that the methodology outlined in this paper provides a useful additional tool for examining the relevance of different RBC models vis-à-vis how well competing models capture the dynamic structural characteristics of the historical record.

The rest of the paper is organized as follows. Section 2 outlines the testing framework, describes the test statistic, and shows that the limiting distribution of the statistic is a zero mean Gaussian process with a covariance kernel that reflects both the contribution of parameter estimation error and the time series structure of the data. This is all done under the assumption that all models may be misspecified. In Section 3, the construction of bootstrap critical values is outlined, and the first order validity of the block bootstrap is established under two different assumptions on the limit of the ratio of the actual sample size to the simulated sample period. An empirical illustration is given in Section 4, and concluding remarks are gathered in Section 5. All proofs are collected in the appendix.

Conclusion

In this paper we propose a test for comparing the joint distributions of historical time series with those simulated under a given DSGE model, via a distributional generalization of White's (2000) reality check, in which we assess whether competing models are more accurate approximations to the “true” distribution than a given benchmark model, in a squared error sense. Two empirical versions of the block bootstrap are used to construct valid asymptotic critical values. Finally, an illustrative example is given in which the testing approach is applied to an RBC model. It is found that RBC models are quite sensitive to distributional assumptions and to the magnitude of the error variances, but are less sensitive to small changes in primitive parameter values.