Download ISI English Article No. 777
Persian Translation of the Article Title

# Efficient evaluation of multidimensional time-varying density forecasts, with applications to risk management

English Title
Efficient evaluation of multidimensional time-varying density forecasts, with applications to risk management
Article Code: 777 | Publication Year: 2012 | Length: 10-page PDF
Source

Publisher : Elsevier - Science Direct (الزویر - ساینس دایرکت)

Journal : International Journal of Forecasting, Volume 28, Issue 2, April–June 2012, Pages 343–352

Keywords
Multivariate density forecast evaluation - Probability integral transformation - Multidimensional Value at Risk - Monte Carlo simulation

#### Abstract

We propose two simple evaluation methods for time-varying density forecasts of continuous higher-dimensional random variables. Both methods are based on the probability integral transformation for unidimensional forecasts. The first method tests multinormal densities and relies on the rotation of the coordinate system. The advantages of the second method are not only its applicability to arbitrary continuous distributions, but also the evaluation of the forecast accuracy in specific regions of its domain, as defined by the user’s interest. We show that the latter property is particularly useful for evaluating a multidimensional generalization of the Value at Risk. In both simulations and an empirical study, we examine the performances of the two tests.

#### Introduction

The evaluation of the accuracy of forecasts occupies a prominent place in the finance and economics literature. However, most of this body of literature (e.g., Diebold & Lopez, 1996) focuses on the evaluation of point forecasts, rather than interval or density forecasts. The driving force behind this focus is that, until recently, point forecasts appeared to serve the requirements of forecast users well. However, there is increasing evidence that a more comprehensive approach is needed. One example is the Value at Risk (VaR), which is defined as the maximum loss on a portfolio over a certain period of time that can be expected with a certain probability. When the returns are normally distributed, the VaR of a portfolio is a simple function of the variance of the portfolio. In this case, normality justifies the use of point forecasts for the variance. However, when the return distribution is non-normal, as is now the general consensus, the VaR of a portfolio is determined not just by the portfolio variance, but by the entire conditional distribution of returns. More generally, decision making under uncertainty with an asymmetric loss function and non-Gaussian variables involves density forecasts (see Guidolin and Timmermann (2005) and Tay and Wallis (2000) for surveys and discussions of density forecasting applications in finance and economics).

The increasing importance of forecasts of the entire (conditional) density naturally raises the issue of forecast evaluation. Although the relevant literature is developing at a rapid rate, it is still in its infancy. This is somewhat surprising, considering that the crucial tools employed date back a few decades. Indeed, a key contribution by Diebold, Gunther, and Tay (1998) relies on the probability integral transformation (PIT) result from the work of Rosenblatt (1952). Diebold et al. point out that the correct density is weakly superior to all forecasts.
This suggests that the forecasts should be evaluated in terms of their correctness, as this is independent of the loss function. To this end, Diebold et al. (1998) employ the PITs of the univariate density forecasts, which, if accurate, are i.i.d. standard uniform. They measure the forecast accuracy by the distance between the empirical distribution of the PITs and the 45° line, and argue that visual inspection of this distance may provide valuable insights into the deficiencies of the model and possible ways of improving it. Obviously, standard goodness-of-fit tests can be applied to the PITs directly (see Noceti, Smith, & Hodges, 2003, for a comparison of the existing goodness-of-fit tests). Additional tests have been proposed by Anderson, Hall, and Titterington (1994), Bai (2003), Berkowitz (2001), Granger and Pesaran (1999), Hong (2001), Hong and Li (2003), Hong, Li, and Zhao (2007), Li (1996) and Li and Tkacz (2001).

The existing evaluation methods for multidimensional density forecasts (MDF) rely on the advances made in the univariate case. Diebold, Hahn, and Tay (1999) extend the PIT idea to multivariate forecasts by factoring the multivariate probability density function (PDF) into its conditionals and computing the PIT for each conditional. As in the univariate case, the PITs of these forecasts are i.i.d. uniform if the sequence of forecasts is correct. Clements and Smith (2000, 2002) extend Diebold et al.'s (1999) idea and propose two tests based on the product and ratio of the conditionals and marginals. While the latter tests perform well when there is correlation misspecification, they perform worse than the original test of Diebold et al. (1999) when such misspecification is absent. However, both approaches rely on the factorization of each period's forecasts into their conditionals, which may not be practical for some applications (e.g., for numerical approximations of density forecasts).
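The univariate PIT check described above is straightforward to sketch in code. The following is a minimal illustration, not the paper's own procedure: a hypothetical sequence of time-varying Gaussian one-step-ahead forecasts is evaluated by transforming the realizations through the forecast CDFs and testing the resulting PITs for standard uniformity; the series, sample size, and parameter values are all invented for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
T = 500

# Hypothetical time-varying Gaussian one-step-ahead forecast densities.
mu = 0.1 * np.sin(np.arange(T) / 25.0)
sigma = 0.5 + 0.2 * np.cos(np.arange(T) / 40.0)
y = rng.normal(mu, sigma)            # realizations drawn from the true model

# PITs: u_t = F_t(y_t). If the forecast densities are correct,
# the u_t are i.i.d. standard uniform.
u = stats.norm.cdf(y, loc=mu, scale=sigma)

# Distance between the empirical CDF of the PITs and the 45° line,
# summarized here by a Kolmogorov-Smirnov test against U(0, 1).
ks = stats.kstest(u, "uniform")
print(f"KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")
```

If the forecast densities are deliberately misspecified (for instance, by doubling `sigma` when computing the PITs), the KS statistic should grow and the test should reject uniformity, which is the behaviour these evaluation procedures rely on.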
Moreover, these approaches assume that the forecasting model is correct under the null hypothesis. This assumption has important implications for the evaluation tools employed, particularly in relation to parameter estimation uncertainty. Recognising this issue, another strand of the MDF evaluation literature has recently gained momentum. This body of literature allows for dynamic misspecification and/or parameter estimation uncertainty, and includes important contributions by Bai and Chen (2008), Chen and Hong (2010) and Corradi and Swanson (2006b), inter alia. Corradi and Swanson (2006b) construct Kolmogorov-type conditional distribution tests in the presence of both dynamic misspecification and parameter estimation uncertainty. While their testing framework is flexible, it suffers from the fact that the limiting distribution is not free of nuisance parameters, and bootstrapping is needed to obtain valid critical values. Bai and Chen (2008) and Chen and Hong (2010) propose MDF evaluation tests that, under certain conditions, deal with the parameter estimation uncertainty. For example, Bai and Chen (2008) use the K-transformation of Khmaladze (1981) to remove the effect of parameter estimation, so that a distribution-free test can be constructed. However, they still rely on the factorization of the joint density, and only apply this procedure to the multivariate normal and multivariate-t distributions, in which case they obtain closed-form results. We discuss these issues in more detail in Section 3, and refer the interested reader to Corradi and Swanson (2006a) and Mecklin and Mundfrom (2004) for further insights into density forecast evaluation.

Broadly speaking, this paper belongs to the body of literature established by Clements and Smith (2000, 2002) and Diebold et al. (1998, 1999), which does not account for parameter estimation uncertainty.
This approach also dominates the parametric-VaR area of the risk management literature, in which we are mainly interested (see, for example, Gourieroux & Jasiak, 2010, chap. 10). Thus, in the simulations and empirical examples, we ignore the parameter estimation uncertainty and potential dynamic misspecification, but acknowledge that these could be important. Finally, we stress that forecasts may vary over time, making parameter estimation and forecast evaluation based on the laws of large numbers infeasible.

This paper makes two important contributions. Firstly, it proposes two new tests for evaluating multidimensional, time-varying density forecasts, which, like their counterparts, may suffer from parameter estimation error and dynamic misspecification, although they are simpler and more flexible. Secondly, to the best of our knowledge, it is the first to formalise and propose a theoretical framework for testing the accuracy of a multidimensional VaR (MVaR). This framework is particularly important for examining multiple sources of tail risk.

The outline of the remainder of this paper is as follows. In Section 2, we discuss an evaluation procedure for multinormal density forecasts. Section 3 presents a test for arbitrary continuous densities, while Section 4 discusses the results of Monte Carlo simulations and an empirical application of the newly proposed tests. Finally, Section 5 concludes.
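To make the normality remark at the start of this introduction concrete: when returns are Gaussian, the VaR collapses to a one-line quantile formula in the portfolio mean and volatility, which is why a point forecast of the variance suffices in that case. A minimal sketch follows, with purely illustrative numbers; the figures and the 99% level are assumptions for the example, not taken from the paper.

```python
from scipy.stats import norm

# Under normality, VaR at confidence level alpha is a simple function of the
# portfolio mean and volatility: VaR_alpha = sigma * z_alpha - mu.
# Hypothetical daily mean return and volatility (illustrative values only).
mu, sigma = 0.0005, 0.02
alpha = 0.99

z = norm.ppf(alpha)        # standard normal quantile, about 2.326 at 99%
var_99 = sigma * z - mu    # loss not exceeded with probability alpha
print(f"one-day 99% VaR: {var_99:.4f}")
```

For a non-normal return distribution, no such shortcut exists and the quantile must be read off the entire conditional distribution, which is what motivates density forecasts in the first place.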

#### Conclusion

The focus of the forecasting literature has recently shifted to interval and density forecasting. This shift has been motivated by applications in finance and economics, as well as by the realization that density and interval forecasts convey more information than point forecasts. Density forecasts naturally raise the question of evaluation. While efficient evaluation techniques for the univariate case have developed rapidly, the literature on multivariate density forecast evaluation remains limited. Indeed, the PIT test of Diebold et al. (1999) remains the main reference, with extensions having been proposed by Clements and Smith (2000, 2002). One drawback of these approaches is that they rely on the factorization of the PDF into conditionals and marginals, which may prove challenging even for simple functions.

In this paper, we provide flexible and intuitive alternative tests of multivariate forecast accuracy that rely on the univariate PIT idea and avoid the cumbersome decomposition into conditionals and marginals. The framework is particularly important for examining the multiple sources of tail risk encapsulated in the MVaR. We performed Monte Carlo simulations and an empirical case study that demonstrate the application of both procedures.

Finally, regarding the sources of forecast errors, we expect parameter estimation uncertainty to be of secondary importance relative to dynamic misspecification (Chatfield, 1993). However, shedding light on the power of the proposed tests in the presence of forecast inaccuracy requires a formal investigation, which suggests a possible avenue for future research.