Time Series Analysis for Financial Market Meltdowns
|Article code|Publication year|English article|Persian translation|Word count|
|---|---|---|---|---|
|14296|2011|13-page PDF|Available to order|9,764 words|
Publisher : Elsevier - Science Direct (الزویر - ساینس دایرکت)
Journal : Journal of Banking & Finance, Volume 35, Issue 8, August 2011, Pages 1879–1891
There appears to be a consensus that the recent instability in global financial markets may be attributable in part to the failure of financial modeling. More specifically, it is alleged that current risk models have failed to properly assess the risks associated with large adverse stock price behavior. In this paper, we first discuss the limitations of classical time series models for forecasting financial market meltdowns. Then we set forth a framework capable of forecasting both extreme events and highly volatile markets. Based on the empirical evidence presented in this paper, our framework offers an improvement over prevailing models for evaluating stock market risk exposure during distressed market periods.
The forecasting of the future behavior of the price of financial instruments is an essential activity in the implementation of risk management and portfolio allocation. The debate between the financial industry and regulators involves whether the sophisticated mathematical and statistical tools that have been employed in risk management and the valuation of complex financial instruments played a role in the recent crisis. In particular, risk measures such as value-at-risk (VaR) and black-box models for assessing the risks to which institutional investors and regulated financial entities are exposed have been singled out as the culprits (see Turner, 2009 and Sheedy, 2009). It is within this context that we discuss in this paper a market model that is capable of explaining highly volatile periods. We will demonstrate that the proposed model, together with a measure of risk known as the average value-at-risk (AVaR), offers a more reliable risk assessment, particularly during financial crises. Furthermore, we will try to explain how “25-standard-deviation events,” in the words of David Viniar, chief financial officer of Goldman Sachs, can occur. We do so by estimating the probability of market crashes from time series data and showing that this probability depends critically on the distributional assumption. We then compare these probabilities to the “high-standard-deviation events” implied by the normal probability distribution that is typically assumed. In order to obtain a good forecast of the distribution of returns, prediction of future market volatility is critical. Most recent empirical studies have shown that the amplitude of daily returns varies across time. Moreover, there is ample empirical evidence that if volatility is high, it remains high, and if it is low, it remains low. This means that volatility moves in clusters, and for this reason it is important to find a way to explain such observed patterns.
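The sensitivity of such tail probabilities to the distributional assumption is easy to illustrate numerically. The sketch below compares the probability of a "25-standard-deviation" move under the normal distribution and under a heavy-tailed alternative; a Student-t with 3 degrees of freedom stands in here for the heavy-tailed laws the paper actually uses (α-stable and tempered stable), since SciPy evaluates its tail directly.

```python
# Probability of a "25-standard-deviation" daily move under two
# distributional assumptions.  Illustrative only: the Student-t(3)
# is a stand-in for the heavy-tailed innovations discussed in the paper.
from scipy.stats import norm, t

p_normal = norm.sf(25)       # survival function: P(Z > 25) under normality
p_heavy = t.sf(25, df=3)     # same event under a heavy-tailed law

print(f"normal: {p_normal:.3e}")   # astronomically small
print(f"t(3):   {p_heavy:.3e}")    # small, but many orders of magnitude larger
```

Under the normal assumption such an event should essentially never occur in the history of the universe, whereas under a power-law tail it becomes a rare but plausible observation, which is the point of the comparison.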
This behavior, referred to as “volatility clustering,” refers to the tendency of large changes in asset prices (either positive or negative) to be followed by large changes, and small changes to be followed by small changes. The volatility clustering effect can be captured by the autoregressive conditional heteroskedastic (ARCH) and the generalized ARCH (GARCH) models formulated by Engle (1982) and Bollerslev (1986), respectively. However, in this paper we provide empirical evidence suggesting that GARCH models based on the normal distribution would not have performed well in predicting real-world market crashes such as Black Monday (October 19, 1987) and, more recently, the global economic crisis attributable to the subprime mortgage meltdown in 2007 and the Lehman Brothers failure in the second half of 2008. One reason for the poor performance is the assumption that the innovations of the GARCH model are normally distributed. Asset management and pricing models require the proper modeling of the return distribution of financial assets. While the return distribution used in the traditional theories of asset pricing, such as the capital asset pricing model, is the normal distribution, numerous studies that have investigated the empirical behavior of asset returns in financial markets throughout the world reject the hypothesis that asset return distributions are normally distributed. Returns from financial assets show well-defined patterns of leptokurtosis and skewness that cannot be captured by the normality assumption. The non-normal assumption has recently been considered by Sorwar and Dowd (2010), Fajardo and Farias (2010), and Bedendo et al. (2010) to empirically investigate option pricing models. To capture extreme events that cannot be described by the normal distribution, extreme value theory (EVT) has been proposed for measuring financial risk (see, for example, Neftci, 2000; Bali, 2003; and Gupta and Liang, 2005).
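The clustering mechanism behind the GARCH family is simple to reproduce. The following minimal sketch simulates a GARCH(1,1) process, σ²ₜ = ω + α·r²ₜ₋₁ + β·σ²ₜ₋₁, with illustrative parameter values (not estimates from the paper) and normal innovations, and checks that squared returns are autocorrelated while the returns themselves are not — the statistical signature of volatility clustering.

```python
import numpy as np

# Minimal GARCH(1,1) simulation illustrating volatility clustering:
#   sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2.
# Parameter values are illustrative, not estimates from the paper.
def simulate_garch11(n, omega=0.05, alpha=0.10, beta=0.85, seed=0):
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    sigma2 = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    for i in range(n):
        r[i] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * r[i] ** 2 + beta * sigma2
    return r

returns = simulate_garch11(10_000)
# Squared returns are autocorrelated even though returns are (nearly) not:
# that persistence in the squares is the clustering effect.
sq = returns ** 2
acf1 = np.corrcoef(sq[:-1], sq[1:])[0, 1]
print(f"lag-1 autocorrelation of squared returns: {acf1:.3f}")
```

Swapping the `standard_normal` draw for a heavy-tailed innovation (as the paper advocates) changes the tail behavior of `returns` without altering the clustering mechanism itself.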
Recently, Bali (2007) developed a conditional EVT-based VaR estimate and found that it performs better than traditional approaches. The EVT-based VaR is compared with other methods in Marinelli et al. (2007) and Rachev et al. (2010). Although this field offers potential, the EVT approach cannot be applied in a no-arbitrage framework. This is because extreme value distributions and generalized Pareto distributions (like Student-t distributions) do not lead to semi-martingale processes, and therefore it is impossible to find an equivalent martingale measure to price options. On the other hand, enhanced GARCH models with non-normal innovation distributions have been proposed within the no-arbitrage framework. For example, Menn and Rachev (2009) used GARCH models with α-stable and smoothly truncated α-stable innovations for option pricing. A new class of distributions, the tempered stable distributions, has been proposed recently to deal with the drawbacks of the α-stable distribution and has been applied to option pricing within the no-arbitrage framework (see Kim et al., 2008 and Bianchi et al., 2010a). Most importantly, a suitable measure has to be employed to evaluate market risk. The VaR measure has become a standard risk measure in the financial industry, having been adopted by regulators to determine the capital requirements for both banking and trading books (see Kiff et al., 2007). However, the limitations of the VaR measure have been well documented in the academic literature, as well as among regulators and risk managers (see Bookstaber, 2009). Criticisms of this risk measure include: (1) the normal distribution assumption is inadequate for forecasting extreme events, (2) a short sample of historical observations is insufficient to assess the risk one day ahead, and (3) it is difficult to infer future risk from past observed patterns, particularly under stressed scenarios.
In this paper, we address these three criticisms by (1) considering an ARMA-GARCH model with non-normal innovations, (2) estimating the model with a sample of 10 years of daily data and employing a more realistic measure of risk, principally focusing on the negative tail, and (3) backtesting the model during market shocks. By doing so, we hope to provide market participants with more reliable mathematical and statistical tools that can be used to try to understand complex financial market behavior. These tools cannot be used as black boxes; market players have to understand them to avoid financial debacles. The risk measure we use in this study is AVaR, which is the average of VaRs larger than the VaR for a given tail probability. AVaR, also called conditional value-at-risk (CVaR), is a superior risk measure to VaR because it satisfies all axioms of a coherent risk measure and is consistent with the preference relations of risk-averse investors (see Rachev et al., 2007). Closed-form solutions for the AVaR of the α-stable distribution, the skewed-t distribution, and the infinitely divisible distributions (which include the tempered stable distributions) have been derived by Stoyanov et al. (2006), Dokov et al. (2008), and Kim et al. (2010b), respectively. Hence, in this paper, we discuss autoregressive moving average (ARMA) GARCH models with α-stable and tempered stable innovations and then assess the forecasting performance of these models by comparing them to other time series models that assume a normal innovation. We empirically test the performance of these models for the S&P 500 index during stressed financial markets. The dataset includes the following stock market crashes: October 1987, October 1997, the turbulent period around the Asian Crisis in 1998 through 1999, the burst of the “dotcom bubble,” and the recent subprime mortgage crisis together with the Lehman Brothers failure. We present VaR values for the index for all of these periods.
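The relationship between VaR and AVaR described above can be made concrete with a nonparametric sketch: VaR at tail probability ε is the (1−ε)-quantile of the loss distribution, and AVaR is the average of the losses at or beyond that quantile, so AVaR ≥ VaR by construction. This is a simple historical illustration, not the paper's ARMA-GARCH-based forecast.

```python
import numpy as np

# Sketch of sample (historical) VaR and AVaR at tail probability eps.
# AVaR (a.k.a. CVaR / expected shortfall) averages the losses beyond
# the VaR threshold, so it always dominates VaR.
def var_avar(returns, eps=0.01):
    losses = -np.asarray(returns)            # losses are negated returns
    var = np.quantile(losses, 1.0 - eps)     # VaR: (1 - eps)-quantile of losses
    tail = losses[losses >= var]             # losses at or beyond the VaR level
    avar = tail.mean()                       # AVaR: mean of the tail losses
    return var, avar

# Heavy-tailed synthetic daily returns (Student-t(3), scaled) for illustration.
rng = np.random.default_rng(42)
sample = rng.standard_t(df=3, size=10_000) * 0.01
var99, avar99 = var_avar(sample, eps=0.01)
print(f"VaR(1%) = {var99:.4f},  AVaR(1%) = {avar99:.4f}")
```

The heavier the tail of the return distribution, the wider the gap between AVaR and VaR, which is why the two measures diverge most during distressed markets.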
In our backtests of VaR, we evaluate the accuracy of the VaR models. Finally, we present a closed-form solution for the AVaR of the ARMA-GARCH model with tempered stable innovations, and compute AVaR values for the index. The remainder of this paper is organized as follows. ARMA-GARCH models with α-stable and tempered stable innovations are presented in Section 2. In Section 3, we discuss parameter estimation of the ARMA-GARCH models and forecasting of the return distributions for the index for daily returns. The VaR values and the backtesting of the ARMA-GARCH models with Student-t, α-stable, and tempered stable innovations are presented, and the results are then compared to classical models such as the equally weighted moving average model and the ARMA-GARCH model with normal innovations. The closed-form solution of the AVaR measure for the ARMA-GARCH model with tempered stable innovations is presented in Section 4, together with values of the AVaR for the index. In Section 5, we summarize our principal findings. In the Appendix, we briefly review the three tempered stable distributions examined in this paper.
English Conclusion
In this paper, we discussed models with stable and tempered stable innovations, and provided an assessment of their forecasting power relative to other models widely used in the industry. The proposed models are applied to the analysis of the S&P 500 index during highly volatile markets. Our first finding is that the time series models based on the assumption of a normal innovation do not provide a reliable forecast of the future distribution of returns, even if they account for volatility clustering. The time series model based on Student-t innovations is rejected empirically. In particular, our empirical evidence indicates that time series models with stable and tempered stable innovations have better predictive power in measuring market risk compared to standard models based on the normal distribution assumption. We also analyzed the behavior of VaR based on different distributional assumptions. We backtested VaR by considering the last four years of log returns for the S&P 500. Based on the Christoffersen likelihood ratio test, the two normal models investigated were rejected; however, the same test did not reject the three non-normal models investigated. Moreover, we investigated the relative difference between VaR values for the three non-normal models compared to the two normal models. By backtesting and studying relative differences, we concluded that the CTS-ARMA-GARCH model is the best among the five models investigated. Finally, after deriving a closed-form solution for the AVaR of the ARMA-GARCH models with tempered stable innovations, we applied this formula to calculate daily AVaRs for the CTS-ARMA-GARCH models for the last four years of our study period. We found that spreads between VaR (or AVaR) under the normal-ARMA-GARCH model and AVaR under the CTS-ARMA-GARCH model increased one year prior to the crash of 2008. The CTS-based model yields a risk assessment similar to that of the t-model.
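The backtest mentioned above can be sketched in a few lines. Christoffersen's likelihood ratio test combines an unconditional-coverage component with an independence component; the sketch below implements only the coverage component (Kupiec's proportion-of-failures statistic), which asks whether the observed number of VaR violations is consistent with the target tail probability. The example counts are hypothetical, not figures from the paper.

```python
import math

# Unconditional-coverage component of the Christoffersen LR backtest
# (Kupiec's POF test): with n observations and x VaR violations at
# target probability p, LR_uc is asymptotically chi-square with 1
# degree of freedom under correct coverage.
def lr_unconditional_coverage(n, x, p):
    pi_hat = x / n                           # observed violation rate
    def loglik(q):                           # Bernoulli log-likelihood
        return (n - x) * math.log(1.0 - q) + x * math.log(q)
    return -2.0 * (loglik(p) - loglik(pi_hat))

# Hypothetical example: 1000 days at 1% VaR (10 violations expected).
# 11 violations is consistent with correct coverage; 25 is not.
lr_ok = lr_unconditional_coverage(1000, 11, 0.01)
lr_bad = lr_unconditional_coverage(1000, 25, 0.01)
print(f"LR (11 hits) = {lr_ok:.2f},  LR (25 hits) = {lr_bad:.2f}")
```

At the 5% level the chi-square(1) critical value is 3.84, so the first model would pass the coverage test and the second would be rejected, mirroring the rejection pattern the paper reports for the normal versus non-normal models.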
Given the tail properties of the CTS distribution, the capital charge needed by a risk manager who uses a CTS-based model is less than that required by a risk manager who uses a t-based model (or an α-stable model), and at the same time is greater than the capital required by a risk manager who employs a normal-based model. In contrast to a t- or EVT-based model, the CTS assumption allows one to define a market model to price options.