Econometric Modelling in Finance and Risk Management: An Overview
Article code | Publication year | English article length |
---|---|---|
714 | 2008 | 4-page PDF |
Publisher: Elsevier - Science Direct
Journal: Journal of Econometrics, Volume 147, Issue 1, November 2008, Pages 1–4
Abstract
This paper gives an overview of the sixteen papers included in this special issue, which cover a wide range of topics. These topics include a class of tests for correlation, estimation of realized volatility, the modelling of time series and continuous-time models with long-range dependence, estimation and specification testing of time series models, estimation in a factor model with high-dimensional problems, a finite-sample examination of quasi-maximum likelihood estimation in an autoregressive conditional duration model, and estimation in a dynamic additive quantile model.
Introduction
Significant theoretical, computational and empirical progress has been made over the past two decades in the use of modern time series techniques to analyse financial data that exhibit (possible) nonstationarity, structural breaks and long-range dependence, as well as in the study of univariate and multivariate risk modelling and management. The various models and methods used have been largely based on parametric, nonparametric and semiparametric nonlinear time series models for both continuous and discrete time series processes. Recent developments include new model estimation and specification testing methods for time series data with possible nonstationarity and long-range dependence. Gao (2007) provides a recent survey of such developments in both the econometrics and statistics literature. Li and Racine (2007) discuss various recent developments in nonparametric and semiparametric econometrics, with a number of applications in financial econometrics. Several survey papers have been devoted to a wide variety of nonlinear diffusion models, as well as to univariate and multivariate stochastic volatility models for both discrete and continuous time processes (see, for example, McAleer (2005), Asai et al. (2006), Gao (2007), Allen et al. (in press), and McAleer and Medeiros (2008)).
The special issue of this journal on econometric modelling in finance and risk management focuses on presenting the discussion and application of several recently developed techniques to a wide-ranging audience drawn from both the theoretical and applied fields. The main theme of this special issue is the modelling, management and assessment of risk, with special emphasis on the time series and (ultra) high frequency data aspects of risk, and the econometric analysis and forecasting of risk.
The sixteen papers in this special issue cover this broad topic from various perspectives: the first paper proposes a general class of tests for correlation; three papers examine realized volatility; five papers model discrete time series econometric models and continuous-time volatility models with possible nonstationarity and long-range dependence; four papers examine estimation and specification testing problems using nonparametric and semiparametric methods; one paper proposes a general estimation method to deal with high-dimensional problems in a factor model; one paper discusses the large sample and finite sample properties of quasi-maximum likelihood estimation for the logarithmic autoregressive conditional duration model; and the last paper introduces a dynamic additive quantile model and discusses its financial applicability. In the next section, we provide an overview of econometric modelling in finance and risk management by leading experts in the fields of time series econometrics and financial econometrics.
Conclusion
The first paper in the special issue is by Peter Robinson, who discusses a general class of tests for correlation in time series, spatial, spatio-temporal and cross-sectional data. The author motivates the focus of his attention by reviewing how the computational and theoretical difficulties of point estimation mount as one moves from regularly-spaced time series data, through forms of irregular spacing, to spatial data of various kinds. A broad class of computationally simple tests is justified by specializing Lagrange multiplier tests against parametric departures of various kinds. Their forms are illustrated for several models describing correlation in various kinds of data. The initial focus assumes homoscedasticity, but Robinson also extends the tests, and makes them more robust, by allowing for nonparametric heteroscedasticity.
The next three papers discuss important and topical issues that are encountered in modelling realized volatility for ultra high frequency data. Yacine Ait-Sahalia and Loriano Mancini compare the forecasts of quadratic variation given by Realized Volatility (RV) and Two Scales Realized Volatility (TSRV) computed from high frequency data in the presence of market microstructure noise, under several different assumed dynamics for the volatility process and assumptions regarding the noise. The authors show that TSRV largely outperforms RV, whether the comparison is based on bias, variance, RMSE or out-of-sample forecasting ability. An empirical application to all DJIA stocks confirms the simulation results.
Federico Bandi, Jeffrey Russell and Chen Yang evaluate and compare the quality of several recently proposed estimators in the context of a relevant economic metric, namely profits from option pricing and trading. Using forecasts obtained from alternative volatility estimates, agents price short-term options on the S&P 500 index before trading with each other at average prices. The agents' average profits and the Sharpe ratios of the profits constitute the criteria used to evaluate alternative volatility estimates and their corresponding forecasts. For the data used, the authors find that estimators with superior finite sample mean-squared-error properties generally generate higher average profits and higher Sharpe ratios. Their results confirm that, even from a forecasting standpoint, there is scope for optimising the finite sample properties of alternative volatility estimators, as advocated in their recent research on modelling ultra high frequency data.
Ilze Kalnina and Oliver Linton propose an econometric model that captures the effects of market microstructure factors on a latent price process. In particular, the authors allow for correlation between the measurement error and the return process, and they allow the measurement error process to have diurnal heteroscedasticity. The authors propose a modification of the TSRV estimator of quadratic variation, and then show that this estimator is consistent, with a rate of convergence that depends on the size of the measurement error but is no worse than $n^{-1/6}$. They investigate the finite sample performance of the various proposed implementations in simulation experiments.
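As background for the RV/TSRV comparisons above, the following is a minimal sketch of naive realized variance and of a two-scales estimator in the spirit of TSRV, assuming a regularly spaced grid of log prices contaminated by additive i.i.d. microstructure noise. The function names, the subsampling scheme and all parameter values are illustrative choices, not the exact implementations used in these papers.

```python
import numpy as np

def realized_variance(log_prices):
    """Naive realized variance: sum of squared log returns over the given grid."""
    returns = np.diff(log_prices)
    return np.sum(returns ** 2)

def two_scales_rv(log_prices, K=300):
    """Two-scales realized variance in the spirit of TSRV: average the realized
    variance over K offset sparse subgrids, then subtract a bias correction
    based on the full-grid realized variance, which is dominated by noise."""
    n = len(log_prices) - 1                      # number of full-grid returns
    rv_all = realized_variance(log_prices)       # noise-dominated estimate
    rv_sparse = np.mean([realized_variance(log_prices[k::K]) for k in range(K)])
    n_bar = (n - K + 1) / K                      # average sparse-grid sample size
    return rv_sparse - (n_bar / n) * rv_all      # bias-corrected estimator

# Illustrative use on simulated noisy prices (hypothetical parameter values).
rng = np.random.default_rng(0)
n = 23400                                        # one trading day of 1-second returns
daily_vol = 0.2 / np.sqrt(252)
efficient = np.cumsum(rng.normal(0.0, daily_vol / np.sqrt(n), n + 1))
noisy = efficient + rng.normal(0.0, 5e-4, n + 1) # additive microstructure noise
print(realized_variance(noisy), two_scales_rv(noisy), daily_vol ** 2)
```

On simulated data of this kind, the naive estimate is swamped by the noise term, while the two-scales estimate stays close to the integrated variance, which is the pattern the comparisons above document in much greater generality.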
In the next five papers, nonlinear models with possible nonstationarity and long-range dependence are discussed.
The paper by Richard Baillie and George Kapetanios is motivated by recent evidence that many univariate economic and financial time series have both nonlinear and long memory characteristics. Hence, their paper considers a general nonlinear, smooth transition regime autoregression which is embedded within a strongly dependent, long memory process. A time domain MLE with simultaneous estimation of the long memory, linear AR and nonlinear parameters is shown to have desirable asymptotic properties. The Bayesian and Hannan-Quinn information criteria are shown to provide consistent model selection procedures. The paper also considers an alternative two-step estimator, where the original time series is fractionally filtered using an initial semi-parametric estimate of the long memory parameter. Simulation evidence indicates that the time domain MLE is generally superior to the two-step estimator. The paper also includes applications of the methodology, estimating a fractionally integrated, nonlinear autoregressive ESTAR model for the forward premium and real exchange rates.
The paper by Isabel Casas and Jiti Gao discusses a continuous-time stochastic volatility model with possible long-range dependence. A new econometric estimation method is proposed to deal simultaneously with any possible short-range dependence, intermediate-range dependence and long-range dependence. The estimation method is based on a continuous-time version of the Gauss–Whittle objective function, and finds parameter estimates that minimize the discrepancy between the spectral density and the data periodogram. The asymptotic properties of the proposed estimation method are also established. The proposed estimation method is then implemented using both simulated and real data examples.
Giuseppe Cavaliere and Robert Taylor consider tests for the null hypothesis of (trend) stationarity against the alternative of a change in persistence at some (known or unknown) point in the observed sample, either from I(0) to I(1) behaviour or from I(1) to I(0) behaviour. They show that, in circumstances where the innovation process displays non-stationary unconditional volatility of a very general form, which includes single and multiple volatility breaks as special cases, the ratio-based statistics used to test for persistence change do not have pivotal limiting null distributions. Numerical evidence suggests that this can cause severe oversizing in the tests. In practice, it may therefore be hard to discriminate between persistence change processes and processes with constant persistence that display time-varying unconditional volatility. The authors solve the identified inference problem by proposing wild bootstrap-based implementations of the tests. Monte Carlo evidence suggests that the bootstrap tests perform well in finite samples. An empirical illustration using US price inflation data is provided.
In the paper by Offer Lieberman and Peter Phillips, an infinite-order asymptotic expansion is given for the autocovariance function of a general stationary long-memory process with memory parameter $d \in (-1/2, 1/2)$. The class of spectral densities considered includes, as a special case, the stationary and invertible ARFIMA$(p,d,q)$ model. The leading term of the expansion is of order $O(1/k^{1-2d})$, where $k$ is the autocovariance order, consistent with the well known power-law decay for such processes, and is shown to be accurate to an error of $O(1/k^{3-2d})$. The derivation uses Erdélyi's expansion for Fourier-type integrals when there are critical points at the boundaries of the range of integration, here the frequencies $\{0, 2\pi\}$. Numerical evaluation shows that the expansion is accurate even for small $k$ when the autocovariance sequence decays monotonically, and for moderate to large $k$ in other cases. The approximations are easy to compute across a variety of parameter values and models.
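To make the leading term above concrete, the display below records the familiar power-law form of the autocovariances of a stationary long-memory process whose spectral density behaves like $G\lambda^{-2d}$ near the origin. The constant shown is the standard textbook one for this special case and is offered only as an illustration, not as the exact statement of the Lieberman–Phillips expansion.

```latex
% Power-law decay of the autocovariance \gamma(k) of a stationary long-memory
% process with memory parameter d \in (-1/2, 1/2), assuming the spectral
% density satisfies f(\lambda) \sim G \lambda^{-2d} as \lambda \to 0^{+}
% (illustrative special case):
\gamma(k) \;=\; 2\, G\, \Gamma(1 - 2d)\, \sin(\pi d)\, k^{\,2d-1}
            \;+\; O\!\left(k^{\,2d-3}\right), \qquad k \to \infty .
% The first term is the O(1/k^{1-2d}) leading term described in the text, and
% the remainder matches the quoted O(1/k^{3-2d}) error order.
```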
In the paper by Michael McAleer and Marcelo Medeiros, the authors propose a flexible model to capture nonlinearities and long-range dependence in time series dynamics. The new model is a multiple regime smooth transition extension of the Heterogeneous Autoregressive (HAR) model, which is specifically designed to model the behaviour of the volatility inherent in financial time series. The model is able to describe simultaneously long memory as well as sign and size asymmetries. A sequence of tests is developed to determine the number of regimes, and an estimation and testing procedure is presented. Monte Carlo simulations evaluate the finite sample properties of the proposed tests and estimation procedures. The authors apply the model to several Dow Jones Industrial Average index stocks, using transaction-level data from the Trades and Quotes database covering ten years. Their results find strong support for long memory and for both sign and size asymmetries. Furthermore, the new model, when combined with the linear HAR model, is viable and flexible for the purposes of forecasting volatility.
In the fields of time series econometrics and financial econometrics using nonparametric and semiparametric methods, there are four papers in this special issue. The first paper in this category, by Zongwu Cai and Xian Wang, considers a new nonparametric estimation method for the conditional value-at-risk and expected shortfall functions. Conditional value-at-risk is estimated by inverting the weighted double kernel local linear estimate of the conditional distribution function, and the nonparametric estimator of the conditional expected shortfall is constructed by a plug-in method. The asymptotic normality and consistency of the proposed nonparametric estimators are established at both boundary and interior points for time series data. It is shown that the weighted double kernel local linear conditional distribution estimator not only preserves the good properties of both the double kernel local linear and weighted Nadaraya–Watson estimators, but also has the additional advantage of always being a distribution function that is continuous and differentiable. Moreover, an ad hoc data-driven bandwidth selection method is proposed, based on a nonparametric version of the Akaike information criterion. Finally, an empirical example illustrates the usefulness of the proposed estimators.
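To illustrate the inversion and plug-in ideas just described, here is a deliberately simplified sketch that uses plain Nadaraya–Watson kernel weights for the conditional distribution function rather than the weighted double kernel local linear estimator studied by Cai and Wang. The kernel, the bandwidth, the loss-side convention for value-at-risk and all names are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def conditional_var_es(x_obs, y_obs, x0, alpha=0.05, h=0.5):
    """Estimate the conditional distribution of losses y given x = x0 with
    Nadaraya-Watson kernel weights, invert it at level 1 - alpha to obtain a
    conditional value-at-risk, and plug in a weighted tail average of losses
    beyond the VaR for the conditional expected shortfall."""
    w = gaussian_kernel((x_obs - x0) / h)
    w = w / w.sum()                               # kernel weights at x0
    order = np.argsort(y_obs)
    y_sorted, w_sorted = y_obs[order], w[order]
    cdf = np.cumsum(w_sorted)                     # estimated F(y | x0)
    var_idx = np.searchsorted(cdf, 1.0 - alpha)   # first y with F(y | x0) >= 1 - alpha
    var = y_sorted[var_idx]
    tail = y_sorted >= var
    es = np.sum(w_sorted[tail] * y_sorted[tail]) / np.sum(w_sorted[tail])
    return var, es

# Illustrative use: losses whose scale depends on a lagged volatility proxy x.
rng = np.random.default_rng(1)
x = rng.uniform(0.5, 2.0, 2000)
y = x * rng.standard_t(df=5, size=2000)           # heavy-tailed losses with scale x
print(conditional_var_es(x, y, x0=1.5))
```

The Cai–Wang estimator replaces the simple weights above with the weighted double kernel local linear smoother, which is what provides the boundary performance and the always-a-distribution, continuous and differentiable properties described above.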
In the paper by Jiti Gao and Isabel Casas, the authors propose two new tests for the specification of both the drift and diffusion functions in a class of semiparametric continuous-time financial econometric models. Theoretically, the authors establish asymptotic consistency results for the proposed tests. Practically, a simple selection procedure for the bandwidth parameter involved in each of the proposed tests is established, based on an assessment of the power function of the appropriate test. The proposed method appears to be the first approach of this kind to specification testing in continuous-time financial econometrics. The proposed theory is supported by good small and medium sample results.
The third paper in this category, by Dennis Jansen, Qi Li, Zijun Wang and Jian Yang, uses a flexible semiparametric varying coefficient model specification to examine the role of fiscal policy in US asset markets (namely stocks, and corporate and Treasury bonds). The authors consider two possible roles of fiscal deficits (or surpluses): as a separate direct information variable, and as an (indirect) conditioning information variable indicating binding constraints on monetary policy actions. The results show that the impact of monetary policy on the stock market varies, depending on fiscal expansion or contraction. The impact of fiscal policy on corporate and Treasury bond yields follows similar patterns as in the equity market. The results are consistent with the notion of strong interdependence between monetary and fiscal policies.
The last paper in this category, by Wolfgang Polonik and Qiwei Yao, proposes two new types of nonparametric tests for investigating multivariate regression functions. The tests are based on cumulative sums coupled with either minimum volume sets or inverse regression ideas, and involve no multivariate nonparametric regression estimation. The proposed methods facilitate the investigation of different features, such as situations where the multivariate regression function is (i) constant, (ii) of a bathtub shape, or (iii) in a given parametric form. The inference based on these tests may be further enhanced through associated diagnostic plots. Although the potential use of these ideas is much wider, the authors focus in this paper on inference for multivariate volatility functions; that is, they test for (i) heteroscedasticity, (ii) the so-called 'smiling effect', and (iii) some parametric volatility models. The asymptotic behaviour of the proposed tests is investigated, and their practical feasibility is shown via simulation studies. The authors further illustrate their methods with real financial data.
The last category contains three papers. The first, by David Allen, Felix Chan, Michael McAleer and Shelton Peiris, examines the finite sample properties of the Quasi-Maximum Likelihood Estimator (QMLE) of the Logarithmic Autoregressive Conditional Duration (Log-ACD) model. Tests of the consistency and asymptotic normality of the QMLE for the Log-ACD model under several probability distributions, including a log-normal density, are presented. This is an important issue, as the Log-ACD model is widely used for testing various market microstructure models and effects, and knowledge of the distribution of the QMLE is crucial for drawing valid inferences and for diagnostic checking. The theoretical results developed in the paper are evaluated using Monte Carlo experiments. The experimental results also provide insight into the finite sample properties of the Log-ACD model under different distributional assumptions. Finally, the paper presents two extensions of the Log-ACD model that accommodate asymmetric effects. The practical usefulness of the new models is evaluated using empirical data on Australian stocks.
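For readers unfamiliar with the Log-ACD class, the sketch below simulates one common Log-ACD specification and evaluates the exponential quasi-log-likelihood that a QMLE of the kind examined here would maximise. The particular recursion, starting values and parameter values are hypothetical illustrative choices, and the exact parameterisation and distributions used in the paper may differ.

```python
import numpy as np

def simulate_log_acd(n, omega=0.1, alpha=0.1, beta=0.8, rng=None):
    """Simulate durations x_i = exp(psi_i) * eps_i from a Log-ACD-type model with
    psi_i = omega + alpha * eps_{i-1} + beta * psi_{i-1} and unit-mean
    exponential innovations (one common specification; others use ln x_{i-1})."""
    rng = rng if rng is not None else np.random.default_rng(0)
    psi, eps_prev = omega / (1.0 - beta), 1.0    # start near a stationary level
    x = np.empty(n)
    for i in range(n):
        psi = omega + alpha * eps_prev + beta * psi
        eps_prev = rng.exponential(1.0)
        x[i] = np.exp(psi) * eps_prev
    return x

def exp_quasi_loglik(params, x):
    """Exponential quasi-log-likelihood: treat the innovations as unit
    exponential even if the true density differs (the QMLE idea)."""
    omega, alpha, beta = params
    psi, eps_prev, ll = omega / (1.0 - beta), 1.0, 0.0
    for xi in x:
        psi = omega + alpha * eps_prev + beta * psi
        ll += -psi - xi * np.exp(-psi)           # log density with mean exp(psi)
        eps_prev = xi * np.exp(-psi)
    return ll

durations = simulate_log_acd(5000)
print(exp_quasi_loglik((0.1, 0.1, 0.8), durations))  # hand this to an optimiser
```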
The second paper in this category, by Jianqing Fan, Yingying Fan and Jinchi Lv, proposes a general estimation method to deal with high-dimensional problems in a factor model. The authors examine covariance matrix estimation in an asymptotic framework in which the dimensionality $p$ tends to $\infty$ as the sample size $n$ increases. Motivated by the Arbitrage Pricing Theory in finance, a multi-factor model is employed to reduce dimensionality and to estimate the covariance matrix. The factors are observable, and the number of factors $K$ is allowed to grow with $p$. The authors investigate the impact of $p$ and $K$ on the performance of the model-based covariance matrix estimator. Under mild assumptions, the authors establish convergence rates and asymptotic normality of the model-based estimator, and its performance is compared with that of the sample covariance matrix. The authors then identify situations in which the factor approach improves performance substantially or only marginally. The impacts of covariance matrix estimation on optimal portfolio allocation and portfolio risk assessment are also analysed. The asymptotic results are supported by a comprehensive simulation study.
The last paper, by Christian Gourieroux and Joann Jasiak, introduces the Dynamic Additive Quantile (DAQ) model, which ensures the monotonicity of conditional quantile estimates. The DAQ model is easily estimated and can be used for the computation and updating of Value-at-Risk. An asymptotically efficient estimator of the DAQ model is obtained by maximizing an objective function based on the inverse KLIC measure. An alternative estimator proposed in the paper is consistent, but is generally not fully efficient. Goodness-of-fit tests and diagnostic tools for the assessment of the model are also provided. For purposes of illustration, the DAQ model is estimated from a series of returns on the Toronto Stock Exchange (TSX) market index.
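As a schematic illustration of why an additive specification can deliver monotone, non-crossing conditional quantiles, the display below shows a generic additive quantile form; it is a simplified stand-in for, not the exact parameterisation of, the DAQ model introduced in the paper.

```latex
% Generic additive conditional quantile specification: a_0(x) and the
% nonnegative loadings a_k(x) depend on the conditioning information x, and
% each baseline quantile function Q_k(u) is nondecreasing in u.
q(u \mid x) \;=\; a_0(x) \;+\; \sum_{k=1}^{K} a_k(x)\, Q_k(u),
\qquad u \in (0, 1), \quad a_k(x) \ge 0 .
% A nonnegative combination of nondecreasing functions is nondecreasing, so
% q(u | x) is monotone in u for every x: fitted quantile curves, for example
% Value-at-Risk at different levels u, cannot cross.
```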