General-to-specific modelling of exchange rate volatility: a forecast evaluation
|Article code||Year of publication||Pages (English PDF)|
|8349||2010||23|
The English article contains approximately 14,476 words.
Publisher: Elsevier - Science Direct
Journal: International Journal of Forecasting, Volume 26, Issue 4, October–December 2010, Pages 885–907
The general-to-specific (GETS) methodology is widely employed in the modelling of economic series, but less so in financial volatility modelling, due to its computational complexity when many explanatory variables are involved. This study proposes a simple way of avoiding this problem when the conditional mean can appropriately be restricted to zero, and undertakes an out-of-sample forecast evaluation of the methodology applied to the modelling of the weekly exchange rate volatility. Our findings suggest that GETS specifications perform comparatively well in both ex post and ex ante forecasting as long as sufficient care is taken with respect to the functional form and the way in which the conditioning information is used. Also, our forecast comparison provides an example of a discrete time explanatory model being more accurate than the realised volatility ex post in 1-step-ahead forecasting.
Exchange rate variability is an issue of great importance to both businesses and policymakers. Businesses use volatility models as tools in their risk management and as inputs in derivative pricing, whereas policymakers use them to acquire knowledge about the impact of economic factors on exchange rate variability for informed policymaking. Most volatility models are highly non-linear, and thus require complex optimisation algorithms for their empirical application. For models with few parameters and few explanatory variables, this may not pose insurmountable problems. However, as the number of parameters and explanatory variables increases, the resources required for reliable estimation and model validation multiply. Indeed, this may even become an obstacle to the application of certain econometric modelling strategies, as was argued by Granger and Timmermann (1999) and McAleer (2005), for example, regarding automated general-to-specific (GETS) modelling of financial volatility. GETS modelling is particularly well suited to explanatory econometric modelling, since it provides a systematic framework for statistical economic hypothesis testing, model development and model (re-)evaluation, and the methodology is relatively popular among large-scale econometric model developers and proprietors. However, since the initial model formulation typically requires many explanatory variables, this poses challenges for computationally complex models at the outset. The recent developments by Doornik (2009) and Hendry, Johansen, and Santos (2008) might be a step towards overcoming some of the computational challenges associated with the maximum likelihood estimation of financial models when many variables are included in the variance specification. However, this is still to be investigated, since their work is on the conditional mean using ordinary least squares estimation.
Meanwhile, in this study we overcome the computational challenges traditionally associated with the application of the GETS methodology in the modelling of financial volatility by modelling volatility within an exponential model of variability (EMOV), where the variability is defined as the squared returns. The parameters of interest can therefore be estimated consistently with ordinary least squares (OLS) under rather weak assumptions. This setup implies that the conditional mean is restricted to zero, but enables us in return to apply GETS to a general specification, with, in our case, a constant and 24 regressors, including lags of the log of squared returns, an asymmetry term, a skewness term, seasonality variables, and economic covariates. Compared with models belonging to the autoregressive conditional heteroscedasticity (ARCH) and stochastic volatility (SV) classes, we estimate and simplify our specification with little effort, and obtain a parsimonious encompassing specification with uncorrelated homoscedastic residuals and relatively stable parameters. Moreover, our out-of-sample forecast evaluation suggests that GETS specifications can be particularly valuable in conditional forecasting, as long as sufficient care is taken as to where and how the conditioning information enters, since the ex post EMOV specification performs particularly well. Another contribution of this study is a note qualifying the evaluation of explanatory economic models of financial volatility against estimates based on continuous time theory. Highly simplified, the return volatility forecasting literature can be divided into two parts: before and after the highly influential publication of Andersen and Bollerslev (1998).
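The key computational point above is that, once the conditional mean is restricted to zero, the log of squared returns can be regressed on its own lags (and any covariates) by plain OLS, with no non-linear optimisation. The following is a minimal numerical sketch of that idea, using simulated returns and only two lags; the paper's general specification instead has a constant and 24 regressors, and the small constant added before taking logs is an assumption of this sketch, used to guard against log of zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated weekly returns; a real application would use exchange rate data
n = 500
r = rng.standard_normal(n) * 0.01

# EMOV-style regressand: log of squared returns (variability in logs).
# The small constant eps is a numerical guard, not part of the model.
eps = 1e-12
y = np.log(r**2 + eps)

# Regress y_t on a constant and p of its own lags by ordinary least squares
p = 2  # two lags only for illustration
X = np.column_stack(
    [np.ones(n - p)] + [y[p - k - 1 : n - k - 1] for k in range(p)]
)
target = y[p:]
beta, *_ = np.linalg.lstsq(X, target, rcond=None)

# Fitted log-variability, and the implied (positive) variability level
fitted = X @ beta
var_hat = np.exp(fitted)
```

Because the specification is linear in the log of squared returns, dropping or adding regressors during a GETS simplification search only means re-running `lstsq` on a different column subset, which is what makes the search computationally cheap compared with ARCH or SV likelihoods.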
Although in-sample estimates suggest the widespread presence of ARCH, asymmetry effects, jumps, volume effects, and so on in financial returns volatility, models that include these effects tend to explain a very small proportion of the return variability out-of-sample (see Poon & Granger, 2003, for a review of the literature). Andersen and Bollerslev (1998) argued that this is because the standard estimates of volatility are very noisy, and suggested instead that forecasts of volatility should be evaluated against high frequency ex post estimates, for example the realised volatility (sums of intra-period squared returns). Andersen and Bollerslev (1998) were not the first to put forward this explanation and solution; nevertheless, they had the greatest impact. Subsequently, the general view that has emerged is that discrete time models of financial volatility should be evaluated against estimates derived from continuous time theory, not against the return variability (for example squared returns); see inter alia Andersen, Bollerslev, and Lange (1999), Andersen, Bollerslev, Diebold, and Labys (2003), Andersen, Bollerslev, and Meddahi (2005), Andersen, Bollerslev, Christoffersen, and Diebold (2006), and Hansen and Lunde (2005, 2006). As a consequence, little if any role is left for the residuals to play, directly or indirectly, in the forecast evaluation. This is the direct opposite of the GETS methodology, where the analysis of the residuals plays a key role in model evaluation and comparison, since any empirical model is a highly simplified representation of the data generating process. Here we qualify the view that discrete time models of financial volatility should be evaluated against estimates derived from continuous time theory. Specifically, we argue that this is particularly inappropriate in the evaluation of explanatory economic models of financial volatility. The rest of the paper is divided into four sections.
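The contrast above is between two ex post measures of the same period's volatility: the squared period return, and the realised volatility built from intra-period returns. A short sketch of both, on simulated data with illustrative dimensions (5 periods of 48 intra-period returns each; both numbers are arbitrary choices for this example):

```python
import numpy as np

rng = np.random.default_rng(1)

# 5 periods, each with 48 intra-period returns (shapes are illustrative)
periods, m = 5, 48
intra = rng.standard_normal((periods, m)) * 0.001

# Realised volatility: sum of intra-period squared returns, per period
rv = (intra**2).sum(axis=1)

# The noisier alternative: the squared period return
# (intra-period returns summed first, then squared)
period_sq = intra.sum(axis=1) ** 2
```

Both quantities estimate the same underlying variability, but `rv` averages over many intra-period observations and is therefore far less noisy, which is Andersen and Bollerslev's (1998) argument for evaluating forecasts against realised volatility rather than squared returns.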
The next section gives a brief exposition of the GETS methodology, explains why evaluations against high-frequency estimates based on continuous time theory in a sense run counter to the GETS methodology, and presents the EMOV and its relationships to the more common ARCH and SV models. We then present the data and empirical models in Section 3, while Section 4 contains the results of the ex post and ex ante out-of-sample forecast exercises. The ex post evaluation is of special interest in the current context. The GETS methodology is particularly well suited to the development of explanatory models, which are useful for conditional forecasting and scenario analysis more generally, and the accuracy of the ex post forecasts is an indication of the usefulness of the methodology for these purposes. Finally, in the last section we conclude and provide suggestions for further research.
Conclusion (English)
This study has evaluated the out-of-sample forecast accuracy of models of weekly NOK/EUR volatility derived by means of the GETS methodology. The results suggest that such models produce unbiased ex post and ex ante forecasts, and that they perform comparatively well at all horizons. In particular, the explanatory GETS EMOV specification ranks first at all horizons up to 6 weeks ahead for ex post forecasting, which is indicative of its usefulness for conditional forecasting and scenario analysis more generally, while for ex ante forecasting the GETS EMOV models rank first up to 2 weeks ahead and still fare comparatively well thereafter. However, our results also suggest that the GETS specification search by itself does not guarantee good forecasting models. Care is needed with respect to the functional form, and to how and where the conditioning information enters the mean and variance specifications. The rigour with which the GETS methodology is implemented might also be a factor. Another result of interest in our comparison is that explanatory ex post models are capable of providing better 1-week-ahead predictions of squared returns than the ex post forecasts of realised volatility. Our findings suggest several lines for further research. First, the generality of our results must be established: is GETS modelling of financial volatility useful at frequencies higher than weekly, which typically exhibit more volatility persistence? Does it carry over to other exchange rates and other financial assets? Second, contrary to the assertions of Granger and Timmermann (1999) and McAleer (2005), automated GETS modelling of financial volatility can readily be implemented, and should be investigated more fully. Finally, one drawback of our approach (the EMOV framework) is that the conditional mean is restricted to zero, which means that predictability in the direction of exchange rate changes cannot be exploited.
One interesting line of research would therefore be to make use of multi-step least squares estimators of conditional heteroscedasticity models, to avoid the numerical issues and problems associated with the GETS modelling of volatility, possibly combined with the procedures of Hendry and Krolzig (2005) and Hendry et al. (2008), in order to handle efficiently the many variables in the initial general unrestricted model (GUM).