U.S. stock market crash risk, 1926–2010
Publisher : Elsevier - Science Direct
Journal : Journal of Financial Economics, Volume 105, Issue 2, August 2012, Pages 229–259
This paper examines how well alternate time-changed Lévy processes capture stochastic volatility and the substantial outliers observed in U.S. stock market returns over the past 85 years. The autocorrelation of daily stock market returns varies substantially over time, necessitating an additional state variable when analyzing historical data. I estimate various one- and two-factor stochastic volatility/Lévy models with time-varying autocorrelation via extensions of the Bates (2006) methodology that provide filtered daily estimates of volatility and autocorrelation. The paper explores option pricing implications, including for the Volatility Index (VIX) during the recent financial crisis.
What is the risk of stock market crashes? Answering this question is complicated by two features of stock market returns: the fact that conditional volatility evolves over time, and the fat-tailed nature of daily stock market returns. Each issue affects the other. Which returns are identified as outliers depends upon that day's assessment of conditional volatility. Conversely, estimates of current volatility from past returns can be disproportionately affected by outliers such as the 1987 crash. In standard generalized autoregressive conditional heteroskedasticity (GARCH) specifications, for instance, a 10% daily change in the stock market has one hundred times the impact on conditional variance revisions of a more typical 1% move.

This paper explores whether recently proposed continuous-time specifications of time-changed Lévy processes are a useful way to capture the twin properties of stochastic volatility and fat tails. The use of Lévy processes to capture outliers dates back at least to the Mandelbrot (1963) use of the stable Paretian distribution, and many specifications have been proposed, including the Merton (1976) jump-diffusion, the Madan and Seneta (1990) variance gamma, the Eberlein, Keller, and Prause (1998) hyperbolic Lévy, and the Carr, Geman, Madan, and Yor (2002) CGMY process. As all of these distributions assume independently and identically distributed (i.i.d.) returns, however, they are unable to capture stochastic volatility.

More recently, Carr, Geman, Madan, and Yor (2003) and Carr and Wu (2004) have proposed combining Lévy processes with a subordinated time process. The idea of randomizing time dates back at least to Clark (1973). Its appeal in conjunction with Lévy processes reflects the increasing focus in finance – especially in option pricing – on representing probability distributions by their associated characteristic functions. Lévy processes have log characteristic functions that are linear in time.
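The hundredfold figure above follows directly from the squared-return term in the GARCH recursion. A minimal sketch, with illustrative GARCH(1,1) parameter values that are not estimates from the paper:

```python
# Why a 10% daily move has 100x the impact of a 1% move on a GARCH(1,1)
# conditional variance revision: returns enter the recursion squared.
# Parameter values (omega, alpha, beta) are purely illustrative.
def garch_update(h_prev, r, omega=1e-6, alpha=0.08, beta=0.90):
    """GARCH(1,1) recursion: h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}."""
    return omega + alpha * r**2 + beta * h_prev

h0 = 1e-4                                                    # prior variance: 1% daily volatility
rev_small = garch_update(h0, 0.01) - garch_update(h0, 0.0)   # revision from a 1% move
rev_large = garch_update(h0, 0.10) - garch_update(h0, 0.0)   # revision from a 10% move
print(round(rev_large / rev_small))                          # → 100
```

The ratio is exactly (0.10/0.01)^2 = 100 regardless of the parameter values, which is why a single crash-sized observation can dominate GARCH volatility filtration for weeks afterward.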
If the time randomization depends on underlying variables that have an analytic conditional characteristic function, then the resulting conditional characteristic function of time-changed Lévy processes is also analytic. Conditional probability densities, distributions, and option prices can then be numerically computed by Fourier inversion of simple functional transforms of this characteristic function.

Thus far, empirical research on the relevance of time-changed Lévy processes for stock market returns has largely been limited to the special cases of time-changed versions of Brownian motion and the Merton (1976) jump-diffusion. Furthermore, there has been virtually no estimation of newly proposed time-changed Lévy processes solely from time series data. Papers such as Carr, Geman, Madan, and Yor (2003) and Carr and Wu (2004) rely on option pricing evidence to provide empirical support for their approach, instead of providing direct time series evidence. The reliance on options data is understandable. Because the state variables driving the time randomization are not directly observable, time-changed Lévy processes are hidden Markov models, creating a challenging problem in time series econometrics. Using option prices potentially identifies realizations of those latent state variables, converting the estimation problem into the substantially more tractable problem of estimating state space models with observable state variables.

While options-influenced parameter and state variable estimates should be informative under the hypothesis of correct model specification, the objective of this paper is to provide estimates of crash risk based solely upon time series analysis. Such estimates are of interest in their own right, and are useful for testing the central empirical hypothesis in option pricing: whether option prices are, in fact, compatible with the time series properties of the underlying asset, after appropriate risk adjustments.
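The Fourier-inversion step can be sketched in a few lines. For easy verification the characteristic function below is the standard normal's, phi(u) = exp(-u^2/2); a Lévy characteristic function such as the CGMY's would slot into the identical routine. The function names and grid parameters are illustrative choices, not the paper's:

```python
import math

def cf_normal(u):
    """Characteristic function of a standard normal variate."""
    return math.exp(-0.5 * u * u)

def density_by_inversion(x, cf, u_max=40.0, n=4000):
    """Recover f(x) = (1/pi) * int_0^inf cos(u*x) * cf(u) du
    for a real, symmetric characteristic function, via the midpoint rule."""
    du = u_max / n
    total = 0.0
    for k in range(n):
        u = (k + 0.5) * du
        total += math.cos(u * x) * cf(u)
    return total * du / math.pi

# At x = 0 this should recover the N(0,1) density 1/sqrt(2*pi) ≈ 0.39894
print(round(density_by_inversion(0.0, cf_normal), 5))   # → 0.39894
```

The same inversion machinery, applied to conditional characteristic functions that are analytic by construction, is what makes time-changed Lévy models computationally tractable for densities and option prices alike.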
Testing the compatibility hypothesis is more difficult under joint options/time series estimation approaches that are premised upon compatibility. Furthermore, option-based and joint estimation approaches are constrained by the availability of options data only since the 1980s, whereas time series estimation can exploit a longer history of extreme stock market movements. For instance, it has been asserted that deep out-of-the-money index put options appear overpriced, based on their surprisingly large negative returns since the 1987 crash. But all such tests require reliable estimates of downside risk, and it can be difficult to establish whether puts are overpriced based only on post-1987 data. Risk-adjusted time series estimates of conditional distributions can also provide useful real-time valuations of option prices, for comparison with observed option prices. At the end of the paper I compare the options-based Volatility Index (VIX) measure of volatility with time series estimates, during a 2007–2010 period spanning the recent financial crisis.

This paper uses the Bates (2006) approximate maximum likelihood (AML) methodology for estimation of various time-changed Lévy processes over 1926–2006, and for out-of-sample fits over 2007–2010. AML is a filtration methodology that recursively updates conditional characteristic functions of latent variables over time given observed data. Filtered estimates of the latent variables are directly provided as a by-product, given the close link between moments and characteristic functions. The methodology's focus on characteristic functions makes it especially useful for estimating Lévy processes, which typically lack closed-form probability density functions. The paper primarily focuses on the time-changed CGMY process, which nests other Lévy processes as special cases. The approach is also compared with the stochastic volatility processes with and without normally distributed jumps previously estimated in Bates (2006).
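The recursive filtering idea behind such methodologies can be conveyed with a toy discretized Bayes filter over a small variance grid. This is a pedagogical stand-in with made-up grid values and transition probabilities, not the characteristic-function recursion that AML actually uses:

```python
import math

GRID = [0.5e-4, 1e-4, 2e-4, 4e-4]   # hypothetical daily variance states
TRANS = 0.1                          # prob. of drifting to a neighboring state

def normal_pdf(r, var):
    """Gaussian return likelihood given a candidate variance state."""
    return math.exp(-r * r / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def filter_step(prior, r):
    """One predict/update cycle of a discretized Bayes filter."""
    n = len(prior)
    pred = [0.0] * n
    for i, p in enumerate(prior):                 # predict: variance may drift
        pred[i] += (1.0 - 2.0 * TRANS) * p
        pred[max(i - 1, 0)] += TRANS * p
        pred[min(i + 1, n - 1)] += TRANS * p
    post = [pred[i] * normal_pdf(r, GRID[i]) for i in range(n)]
    z = sum(post)                                 # update: weight by likelihood
    return [p / z for p in post]

belief = [0.25] * 4                               # flat prior over the grid
for r in [0.001, 0.002, -0.03]:                   # two calm days, then an outlier
    belief = filter_step(belief, r)
print(belief.index(max(belief)))                  # → 3 (the highest-variance state)
```

After the 3% outlier the posterior mode jumps to the largest variance state, the discrete analogue of filtering a latent volatility state from observed returns; the choice of return likelihood (Gaussian here, fat-tailed in the paper) governs how strongly such outliers move the filtered estimate.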
A concern with any extended data set is the possibility that the data generating process might not be stable over time. Indeed, this paper identifies substantial instability in the autocorrelation of daily stock market returns. Autocorrelation estimates appear to be nonstationary, and peaked at the extraordinarily high level of 35% in 1971 before trending downward to the near-zero values observed since the 1980s. The instability is addressed directly, by treating autocorrelation as another latent state variable to be estimated from observed stock market returns. The paper also uses subsample estimation to test for (and find) apparent instabilities or specification issues in the one-factor volatility process used. Given these issues, I estimate a two-factor concatenated model of volatility evolution, which can be interpreted as a model of parameter drift in the unconditional mean of the one-factor variance process. Finally, I examine the sensitivity of volatility filtration and option prices to the use of different data sets and volatility models.

Overall, the time-changed CGMY process is found to be a slightly more parsimonious alternative to the Bates (2006) approach of using finite-activity stochastic-intensity jumps drawn from a mixture of normals, although the fits of the two approaches are very similar. Interestingly, one cannot reject the hypothesis that stock market crash risk is adequately captured by a time-changed version of the Carr and Wu (2003) log-stable process. That model's implications for upside risk, however, are strongly rejected, with the model severely underpredicting the frequency of large positive outliers.

Section 2 progressively builds up the time series model used in estimation. Section 2.1 discusses basic Lévy processes and describes the processes considered. Section 2.2 discusses time changes, the equivalence to stochastic volatility, and the leverage effect.
Section 2.3 contains further modifications of the model to capture time-varying autocorrelations and day-of-the-week effects. Section 2.4 describes how the model is estimated, using the Bates (2006) AML estimation methodology for hidden Markov models. Section 3 describes the data on excess stock market returns over 1926–2010 and presents parameter estimates, diagnostics, and filtered estimates of latent autocorrelation and volatility. Given results from the diagnostics, I develop and estimate a two-factor variance model in Section 3.7. Section 4 examines option pricing implications, and Section 5 concludes.
Conclusion
This paper provides estimates of the time-changed Carr, Geman, Madan, and Yor (2003) CGMY Lévy process based on stock market excess returns, and compares them to the time-changed finite-activity jump-diffusions previously examined by Bates (2006). I draw the following three conclusions.

First, it is important to recognize the fat-tailed properties of returns when filtering latent variables. Failure to do so makes latent variable estimates excessively sensitive to daily outliers larger than three standard deviations and affects parameter estimates, especially the parameters of the volatility process. However, such major outliers are relatively rare. Conditional volatility estimates from the less fat-tailed distributions [the Heston (1993) stochastic volatility model; the Carr and Wu (2003) log-stable model] diverge substantially from those of other distributions only in the weeks following large outliers.

Second, it is not particularly important which fat-tailed distribution one uses. Estimates of the volatility process parameters and realizations are virtually unchanged across most specifications, while the option pricing implications are virtually identical for all but the deepest out-of-the-money options.

Third, conditional upon no recent outliers, even the Heston stochastic volatility model fits option prices similarly to the jump models for all but deep out-of-the-money options. For these stochastic volatility or stochastic intensity models, the estimated tilt of the volatility smirk for near-the-money options (±2 standard deviations) appears primarily driven by the leverage effect.

I also present evidence of some structural shifts over time in the data generating process. Most striking is the apparently nonstationary evolution of the first-order autocorrelation of daily stock market returns, which rose from near-zero in the 1930s to around 35% in 1971 before drifting down again to near-zero values at the end of the 20th century, and even negative values in the 21st.
The high autocorrelation estimates in the 1960s and 1970s are clearly attributable to a stale-price problem from low stock turnover and are of substantial importance when assessing historical stock market volatility. The paper develops methods of dealing with time-varying autocorrelation, by treating it as an additional latent state variable to be filtered from observed data. Furthermore, the paper develops a nonaffine model (Model 2) of evolving autocorrelation that can nevertheless be easily estimated on time series data. The model generates an affine risk-neutral process for pricing index options and is consistent with the inverse relationship between autocorrelation and volatility found by LeBaron (1992).

Finally, the paper also shows longer-term swings in volatility, which are modeled using a two-factor concatenated volatility model. Estimating a latent variable (the central tendency) underlying another latent variable (spot variance) underlying daily stock market returns is perforce imprecise. Nevertheless, the two-factor model usefully highlights the misleading precision of multi-period forecasts from one-factor variance models. One-factor models erroneously predict tight confidence intervals for implicit volatility at longer maturities, given hypothesized volatility mean reversion to an identifiable mean. The two-factor model estimates spot volatilities and term structures of implicit volatilities more accurately than the one-factor model, although substantial gaps remain on average between observed and estimated at-the-money implicit volatilities. Alternate data sources could yield more accurate assessments of spot variance and of its central tendency: intradaily realized variances, for instance, or the high-low range data examined by Alizadeh, Brandt, and Diebold (2002).
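The two-factor concatenated structure can be sketched as spot variance mean-reverting toward a stochastic central tendency that itself mean-reverts toward a long-run level. The Euler-discretized simulation below assumes CIR-type square-root dynamics with made-up parameter values, not the paper's estimates:

```python
import math
import random

random.seed(0)
DT = 1.0 / 252                    # one trading day
KAPPA_V, KAPPA_TH = 5.0, 0.5      # fast spot reversion, slow central tendency
THETA_BAR = 0.02                  # long-run annualized variance (hypothetical)
SIGMA_V, SIGMA_TH = 0.30, 0.10    # volatility-of-variance parameters

def step(v, th):
    """One Euler step of the two-factor square-root variance model."""
    th_new = th + KAPPA_TH * (THETA_BAR - th) * DT \
        + SIGMA_TH * math.sqrt(max(th, 0.0) * DT) * random.gauss(0.0, 1.0)
    v_new = v + KAPPA_V * (th - v) * DT \
        + SIGMA_V * math.sqrt(max(v, 0.0) * DT) * random.gauss(0.0, 1.0)
    return max(v_new, 0.0), max(th_new, 0.0)

v, th = 0.08, 0.05                # start well above the long-run level
for _ in range(5 * 252):          # five years of daily steps
    v, th = step(v, th)
print(round(v, 4), round(th, 4))  # spot variance tracks the wandering central tendency
```

A one-factor model corresponds to pinning th at THETA_BAR; letting it wander widens long-horizon forecast intervals, which is the sense in which one-factor confidence intervals are misleadingly tight.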
This paper has focused upon daily returns because of its concern with daily crash risk, but the AML methodology can equally be applied to estimating conditional volatilities from those alternative data. Realized variances are noisy signals of latent conditional variance when intradaily jumps are present, indicating the need for filtration methodologies such as AML. Such applications are potential topics for future research.