Model averaging in risk management with an application to futures markets
Article code | Publication year | English article length
---|---|---
721 | 2009 | 26 pages (PDF)
Publisher: Elsevier - Science Direct
Journal: Journal of Empirical Finance, Volume 16, Issue 2, March 2009, Pages 280–305
English Abstract
This paper considers the problem of model uncertainty in the case of multi-asset volatility models and discusses the use of model averaging techniques as a way of dealing with the risk of inadvertently using false models in portfolio management. Evaluation of volatility models is then considered and a simple Value-at-Risk (VaR) diagnostic test is proposed for individual as well as ‘average’ models. The asymptotic as well as the exact finite-sample distributions of the test statistic, dealing with the possibility of parameter uncertainty, are established. The model averaging idea and the VaR diagnostic tests are illustrated by an application to portfolios of daily returns on six currencies, four equity indices, four ten-year government bonds and four commodities over the period 1991–2007. The empirical evidence supports the use of ‘thick’ model averaging strategies over single models or Bayesian type model averaging procedures.
English Introduction
Multivariate models of conditional volatility are of crucial importance for optimal asset allocation, risk management, derivative pricing and dynamic hedging. However, their use in practice has been rather limited, particularly in the case of portfolios with a large number of assets. There are only a few published empirical studies that consider the performance of multivariate volatility models involving a large number of assets, and for operational reasons most of these studies focus on highly restricted versions of the multivariate generalized autoregressive conditional heteroscedastic (GARCH) model of Bollerslev (1986). The risk associated with possible model misspecification could then be sizeable. Also, for risk management purposes, the main focus is often on the tail behavior of the predictive density of the asset returns, and not simply on obtaining the ‘best’ approximating volatility model. This in turn implies that a unified treatment of empirical portfolio analysis requires shifting the focus from a statistical to a decision-theoretic framework for model evaluation.

This paper provides an integrated econometric approach to portfolio optimization subject to a Value-at-Risk (VaR) constraint in the presence of model uncertainty, and to the associated risk monitoring problem. We focus on uncertainty about multivariate volatility models and abstract from return prediction uncertainty, which has already been addressed extensively in the literature. One of the main contributions of the paper is to solve the mean-variance optimization problem subject to the VaR constraint when a probabilistic average of several models is used to take account of model uncertainty. This optimization strategy assumes the existence of conditional return volatilities, but allows the conditional distribution of returns to be non-Gaussian. The various practical issues involved in the implementation of such a strategy are discussed and evaluated in the context of an empirical application.

Many variants of the multivariate GARCH model have been proposed in the literature. These include the conditionally constant correlation (CCC) model of Bollerslev (1990), the RiskMetrics specifications popularized by J.P. Morgan (1996) and used predominantly by practitioners, the orthogonal GARCH model of Alexander (2001), and the dynamic conditional correlation (DCC) model advanced by Engle (2002). Recent surveys are provided in Bauwens et al. (2003) and McAleer (2005). Multivariate stochastic volatility (SV) models have also been considered in the literature, with reviews by Ghysels et al. (1995) and Shephard (2005). We consider models frequently used by practitioners together with many models recently proposed in academic papers, and evaluate their empirical performance within a decision-theoretic framework.

The highly restricted nature of the multivariate volatility models advanced in the literature could present a high degree of model uncertainty, which ought to be recognized at the outset. This is particularly important since, due to data limitations and operational considerations, it is not possible to subject these models to rigorous statistical testing. Application of model selection procedures also involves additional risks that are difficult to assess a priori. This is especially true when the number of assets is moderately large, and it might well be that no single model choice would be satisfactory in practice.
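As a concrete illustration of the simplest of the specifications mentioned above, the following Python sketch implements a RiskMetrics-style exponentially weighted moving average (EWMA) covariance filter and a one-day portfolio VaR under a conditional Gaussian assumption. This is a minimal sketch rather than the paper's estimation procedure; the decay factor of 0.94, the function names and the simulated data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def ewma_covariance(returns, lam=0.94):
    """RiskMetrics-style exponentially weighted covariance filter.

    returns : (T, n) array of daily asset returns
    lam     : decay factor (0.94 is the value popularized by J.P. Morgan for
              daily data; treat it here as an illustrative default)
    Returns the (n, n) conditional covariance estimate for day T+1.
    """
    cov = np.cov(returns, rowvar=False)        # initialize with the sample covariance
    for r in returns:
        r = r[:, None]                         # column vector of day-t returns
        cov = lam * cov + (1.0 - lam) * (r @ r.T)
    return cov

def gaussian_portfolio_var(weights, cov, alpha=0.01):
    """One-day portfolio VaR under a conditional Gaussian assumption,
    reported as a positive loss quantile."""
    sigma_p = np.sqrt(weights @ cov @ weights)
    return -norm.ppf(alpha) * sigma_p

# Illustrative usage with simulated returns (not the paper's futures data set).
rng = np.random.default_rng(0)
simulated = rng.normal(scale=0.01, size=(1000, 4))
cov_hat = ewma_covariance(simulated)
w = np.full(4, 0.25)                           # equal-weighted portfolio
print("1% one-day VaR:", gaussian_portfolio_var(w, cov_hat))
```

A non-Gaussian variant, for example with Student t innovations as considered in parts of the paper's empirical work, would replace the Gaussian quantile with the appropriately scaled t quantile.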
This paper considers model averaging as a risk diversification strategy for dealing with model uncertainty, and provides a detailed application of recent developments in model averaging techniques to multi-asset volatility models. Frequently used model selection criteria are the Akaike Information Criterion (AIC) and the Schwarz Bayesian Information Criterion (SBC). However, such two-step select-then-forecast procedures are subject to the pre-test (selection) bias problem and tend to under-estimate the uncertainty that surrounds the forecasts. Of course, the use of model averaging techniques in econometrics is not new and dates back to the work of Granger and Newbold (1977) on forecast combination. However, this literature focuses on combining point forecasts and does not address the problem of combining forecast probability distribution functions, which is the relevant problem in risk management.

Concerning model evaluation, standard forecast evaluation techniques that focus on metrics such as the root mean square forecast error (RMSFE) also run into difficulties when applied to volatility models. Since volatility is not directly observable, it is often proxied by the square of daily returns or, more recently, by measures constructed from intra-daily returns, known as realized volatility (see, for example, Andersen et al. (2003)). In multi-asset contexts the use of standard metrics such as RMSFE is further complicated by the need to select weights to be attached to errors in forecasts of individual asset volatilities and their cross-volatility correlations, and the choice of such weights is not innocuous in a multivariate framework (see Pesaran and Skouras (2002)).

Here we develop a simple criterion for the evaluation of alternative volatility forecasts by examining the Value-at-Risk (VaR) performance of their associated portfolios. Our test, which can be applied to individual as well as to average models, belongs to the class of so-called unconditional coverage tests, the most important case of which is the Kupiec (1995) binomial test. In contrast to the existing literature, though, we formally establish both the asymptotic and the exact finite-sample distribution of our test statistic. Further, we provide formal conditions that ensure that the asymptotic distribution of the familiar VaR diagnostic test statistic does not depend on the sampling variability associated with parameter estimation. Conditional coverage tests (see Christoffersen (1998)) and density forecast tests (Crnkovic and Drachman (1997) and Berkowitz (2001)) could also be adapted to our model averaging framework, although the related distribution theory would need to be established. For a review of existing approaches to the evaluation of VaR estimates see Andersen et al. (2006). The VaR-based diagnostic tests developed in this paper can be used both for risk monitoring of a given portfolio and for the construction of optimal (in the VaR sense) portfolios.

The remainder of the paper is organized as follows: the decision problem that underlies the VaR analysis is set out in Section 2. Section 3 provides a brief outline of the different types of multivariate volatility models considered in the paper. Several approaches to model averaging are reviewed and discussed in Section 4. Section 5 introduces the Value-at-Risk (VaR) diagnostic test and establishes its finite-sample as well as its asymptotic distribution.
Section 6 provides a detailed empirical analysis using daily returns for eighteen futures contracts covering equity indices, government bonds, exchange rates and commodities over the period 2 January 1991 to 11 July 2007. Section 7 concludes with a summary of the main results and suggestions for future research. The mathematical proofs and a description of the multivariate volatility models are provided in three appendices.
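To fix ideas on the unconditional coverage test discussed above, the sketch below checks whether the frequency of VaR exceedances over an evaluation sample of length T1 is consistent with the nominal tail probability, using both the exact Binomial distribution and the standard Normal approximation. This is a generic Kupiec-type check rather than the paper's exact statistic; the function name and the two-sided p-value convention are illustrative choices.

```python
import numpy as np
from scipy.stats import binom, norm

def var_exceedance_test(portfolio_returns, var_forecasts, alpha=0.01):
    """Unconditional coverage check of a sequence of VaR forecasts.

    portfolio_returns : realized portfolio returns over the evaluation sample (length T1)
    var_forecasts     : matching one-day-ahead VaR forecasts, stated as positive loss quantiles
    alpha             : nominal tail probability of the VaR forecasts

    Returns the exceedance count, the exact Binomial p-value, and the
    asymptotic standard-Normal z statistic with its p-value.
    """
    exceed = np.asarray(portfolio_returns) < -np.asarray(var_forecasts)  # loss beyond forecast VaR
    T1 = exceed.size
    k = int(exceed.sum())

    # Exact finite-sample version: under correct coverage, k ~ Binomial(T1, alpha).
    p_exact = min(1.0, 2.0 * min(binom.cdf(k, T1, alpha), binom.sf(k - 1, T1, alpha)))

    # Asymptotic version: the standardized exceedance frequency is approximately N(0, 1).
    z = (k / T1 - alpha) / np.sqrt(alpha * (1.0 - alpha) / T1)
    p_asym = 2.0 * norm.sf(abs(z))
    return k, p_exact, z, p_asym
```

The same check can be applied to an ‘average’ model by feeding it the VaR forecasts implied by the combined predictive density.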
English Conclusion
This paper considers the problem of model uncertainty in the context of multivariate volatility models and notes that it is particularly important given the highly restrictive nature of the models used in practice. To deal with model uncertainty we advocate the use of model averaging techniques, where an ‘average’ model is constructed by combining the predictive densities of the models under consideration, using a set of weights that reflect the models’ relative in-sample performance. We consider ‘thick’ modelling as well as (approximate) Bayesian modelling frameworks. Second, the paper proposes a simple decision-based model evaluation technique that compares volatility models in terms of their Value-at-Risk performance. The proposed test is applicable to individual as well as to average models, and can be used in a variety of contexts. Under mild regularity conditions, the test is shown to have a Binomial distribution when the evaluation sample (T1) is finite and T0 (the estimation sample) is sufficiently large. The proposed test converges to a standard Normal variate provided T1/T0 + 1/T1 → 0, a condition also encountered in the forecast evaluation literature that uses the root mean square error as an evaluation criterion, as discussed in West (1996). The proposed VaR test is invariant to the portfolio weights and is shown to be consistent under departures from the null hypothesis. The Binomial version of the VaR test could have important applications in the credit risk literature, where evaluation samples are typically short.

In the empirical application we experimented with AIC and SBC weights and found that, due to the large sample sizes available, they led to very similar results, with the selected models often totally dominating the rest. The model most often selected by both criteria turned out to be the TDCC model. In the out-of-sample evaluation, only the TDCC model managed to pass the VaR diagnostic tests. Interestingly enough, the simplest of all the data filters used in this paper, namely the Equal Weighted Moving Average filter, also performed well, doing better than the other data filters as well as the O-GARCH specifications. In general, the ‘thick’ modelling approach turned out to be the most reliable within the class of models and model averaging strategies that we considered. Thick model averaging strategies consistently had low VaR exceedance frequencies (relative to most single models), whilst retaining high information ratios. Overall, the only strategy that was not rejected by our VaR diagnostic tests was the equal-weighted average model based on the top 25 models (ranked by AIC) and assuming Student t innovations with 7 degrees of freedom.

Finally, while model averaging provides a useful alternative to the two-step model selection strategy, it is nevertheless subject to its own form of uncertainty, namely the choice of the space of models to be considered and their respective weights. It is therefore important that applications of model averaging techniques are investigated for their robustness to such choices. In the case of our application it would clearly be desirable to also consider other forms of multivariate volatility models, which could be the subject of future research.
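As a minimal sketch of the weighting schemes referred to in the conclusion, the snippet below computes AIC-based model-averaging weights and a ‘thick’ equal-weighted average of VaR forecasts over the top-ranked models. The array shapes, function names and the default of 25 models are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def aic_weights(log_liks, n_params):
    """Akaike-type model-averaging weights from in-sample log-likelihoods."""
    aic = -2.0 * np.asarray(log_liks, dtype=float) + 2.0 * np.asarray(n_params, dtype=float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

def thick_average_var(var_forecasts, log_liks, n_params, top_k=25):
    """'Thick' averaging: equal weights over the top_k models ranked by AIC.

    var_forecasts : (n_models, T1) array of per-model VaR forecasts
    Returns the equal-weighted average VaR forecast of the top_k models.
    """
    aic = -2.0 * np.asarray(log_liks, dtype=float) + 2.0 * np.asarray(n_params, dtype=float)
    best = np.argsort(aic)[:top_k]     # smallest AIC values rank highest
    return np.asarray(var_forecasts)[best].mean(axis=0)
```

SBC weights follow the same pattern with the 2k penalty replaced by k·log(T0). Averaging the VaR forecasts directly is a simplification adopted here for brevity; the paper combines the models' predictive densities and derives the VaR of the combined distribution.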