A multiple-kernel support vector regression approach for stock market price forecasting
Article code | Publication year | English article length
---|---|---
19424 | 2011 | 10 pages (PDF)
Publisher: Elsevier - Science Direct
Journal: Expert Systems with Applications, Volume 38, Issue 3, March 2011, Pages 2177–2186
Abstract
Support vector regression has been applied to stock market forecasting problems. However, the hyperparameters of the kernel functions usually need to be tuned manually. Multiple-kernel learning was developed to deal with this problem: the kernel matrix weights and the Lagrange multipliers can be derived simultaneously through semidefinite programming. However, that approach is highly demanding in both time and space. We develop a two-stage multiple-kernel learning algorithm by incorporating sequential minimal optimization and the gradient projection method. With this algorithm, the advantages of different hyperparameter settings can be combined and overall system performance improved. Moreover, the user need not specify the hyperparameter settings in advance, so trial-and-error for determining appropriate hyperparameter settings can be avoided. Experimental results, obtained by running on datasets taken from the Taiwan Capitalization Weighted Stock Index, show that our method performs better than other methods.
Introduction
Accurate forecasting of stock prices is an appealing yet difficult activity in the modern business world. Many factors, both economic and non-economic, influence the behavior of the stock market. Therefore, stock market forecasting is regarded as one of the most challenging topics in business. In the past, methods based on statistics were proposed for tackling this problem, such as the autoregressive (AR) model (Champernowne, 1948), the autoregressive moving average (ARMA) model (Box & Jenkins, 1994), and the autoregressive integrated moving average (ARIMA) model (Box & Jenkins, 1994). These are linear models which are, more often than not, inadequate for stock market forecasting, since stock time series are inherently noisy and non-stationary.

More recently, nonlinear approaches have been proposed, such as autoregressive conditional heteroskedasticity (ARCH) (Engle, 1982), generalized autoregressive conditional heteroskedasticity (GARCH) (Bollerslev, 1986), artificial neural networks (ANN) (Hansen and Nelson, 1997, Kim and Han, 2008, Kwon and Moon, 2007, Qi and Zhang, 2008 and Zhang and Zhou, 2004), fuzzy neural networks (FNN) (Chang and Liu, 2008, Oh et al., 2006 and Zarandi et al., 2009), and support vector regression (SVR) (Cao and Tay, 2001, Cao and Tay, 2003, Fernando et al., 2003, Gestel et al., 2001, Pai and Lin, 2005, Tay and Cao, 2001, Valeriy and Supriya, 2006 and Yang et al., 2002). ANN has been widely used for modeling stock market time series due to its universal approximation property (Kecman, 2001). Previous researchers indicated that ANN, which implements the empirical risk minimization principle, outperforms traditional statistical models (Hansen & Nelson, 1997). However, ANN suffers from local minimum traps and from the difficulty of determining the hidden layer size and learning rate. In contrast, SVR, proposed by Vapnik and his co-workers, has a global optimum and exhibits better prediction accuracy due to its implementation of the structural risk minimization principle, which considers both the training error and the capacity of the regression model (Cristianini and Shawe-Taylor, 2000 and Vapnik, 1995).

However, the practitioner has to determine in advance the type of kernel function and the associated kernel hyperparameters for SVR. Unsuitably chosen kernel functions or hyperparameter settings may lead to significantly poor performance (Chapelle et al., 2002, Duan et al., 2003 and Kwok, 2000). Most researchers use trial-and-error to choose proper values for the hyperparameters, which obviously takes a lot of effort. In addition, a single kernel may not be sufficient to solve a complex problem satisfactorily, especially a stock market forecasting problem. Several researchers have adopted multiple kernels to deal with these problems (Bach et al., 2004, Bennett et al., 2002, Crammer et al., 2003, Gönen et al., 2008, Lanckriet et al., 2004, Ong et al., 2005, Rakotomamonjy et al., 2007, Rakotomamonjy et al., 2008, Sonnenburg et al., 2006, Szafranski et al., 2008, Tsang and Kwok, 2006 and Wang et al., 2008). The simplest way to combine multiple kernels is to average them. But giving each kernel the same weight may not be appropriate for the decision process, so the main issue in multiple-kernel combination is determining optimal weights for the participating kernels. Lanckriet et al. (2004) used a linear combination of matrices to combine multiple kernels, as sketched below.
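To make the weighted-combination idea concrete, the following minimal sketch forms a linear combination of several RBF kernel matrices, each built from a different hyperparameter setting. The gamma grid, the uniform weights, and the use of scikit-learn's `rbf_kernel` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def combined_kernel(X, Y, gammas, weights):
    """Weighted sum of RBF kernel matrices, one per hyperparameter setting."""
    K = np.zeros((X.shape[0], Y.shape[0]))
    for gamma, w in zip(gammas, weights):
        K += w * rbf_kernel(X, Y, gamma=gamma)
    return K

# Simple averaging as the baseline: every candidate kernel gets equal weight.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
gammas = [0.1, 1.0, 10.0]                  # assumed grid of RBF widths
weights = np.full(len(gammas), 1.0 / 3.0)  # uniform weights (plain averaging)
K = combined_kernel(X, X, gammas, weights)
```

Determining better-than-uniform values for `weights` is exactly the multiple-kernel learning problem discussed next.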
Lanckriet et al. (2004) transformed the optimization problem into a semidefinite programming (SDP) problem, which, being convex, has a global optimum. However, the amount of time and space required by this method is demanding. Other multiple-kernel learning algorithms include Bach et al., 2004, Sonnenburg et al., 2006, Rakotomamonjy et al., 2007, Rakotomamonjy et al., 2008, Szafranski et al., 2008 and Gönen et al., 2008. These approaches deal with large-scale problems by iteratively applying the sequential minimal optimization (SMO) algorithm (Platt, 1999) to update the Lagrange multipliers and the kernel weights in turn; that is, the Lagrange multipliers are updated with the kernel weights held fixed, and the kernel weights are then updated with the Lagrange multipliers held fixed, alternately. Although these methods are faster than SDP, they are prone to local minimum traps. Multiple-kernel learning based on hyperkernels has also been studied (Ong et al., 2005 and Tsang and Kwok, 2006). Tsang and Kwok (2006) reformulated the problem in second-order cone programming (SOCP) form. Crammer et al. (2003) and Bennett et al. (2002) used boosting methods to combine heterogeneous kernel matrices.

We propose a regression model, integrating multiple-kernel learning and SVR, to deal with the stock price forecasting problem. A two-stage multiple-kernel learning algorithm is developed to optimally combine multiple kernel matrices for SVR. The learning algorithm applies SMO (Platt, 1999) and the gradient projection method (Bertsekas, 1999) iteratively to obtain the Lagrange multipliers and the optimal kernel weights (a sketch of this alternation is given below). With this algorithm, the advantages of different hyperparameter settings can be combined and overall system performance improved. Moreover, the user need not specify the hyperparameter settings in advance, so trial-and-error for determining appropriate hyperparameter settings can be avoided. Experimental results, obtained by running on datasets taken from the Taiwan Capitalization Weighted Stock Index (TAIEX), a stock market index for companies traded on the Taiwan Stock Exchange, show that our method performs better than other methods.

The rest of this paper is organized as follows. Section 2 presents basic concepts of support vector regression. Section 3 describes our proposed multiple-kernel support vector regression approach for stock price forecasting. Experimental results are presented in Section 4. Finally, a conclusion is given in Section 5.
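The sketch below illustrates the two-stage alternation in broad strokes: an SMO-based SVR solver (scikit-learn's libsvm-backed `SVR` on a precomputed kernel, used here as a stand-in for the paper's SMO step) updates the Lagrange multipliers with the kernel weights fixed, and a gradient projection step onto the probability simplex then updates the weights with the multipliers fixed. The learning rate, iteration count, simplex constraint, and gradient expression are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

def project_simplex(v):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1)[0][-1]
    return np.maximum(v - (css[rho] - 1.0) / (rho + 1.0), 0.0)

def two_stage_mkl_svr(X, y, gammas, n_iter=20, lr=0.05, C=1.0, eps=0.1):
    """Alternate an SMO-based SVR solve with a projected gradient step on
    the kernel weights (all numeric settings here are assumptions)."""
    K_list = [rbf_kernel(X, X, gamma=g) for g in gammas]  # candidate kernels
    mu = np.full(len(K_list), 1.0 / len(K_list))          # uniform start
    n = X.shape[0]
    for _ in range(n_iter):
        # Stage 1: kernel weights fixed -> solve SVR (libsvm runs SMO).
        K = sum(w * Kk for w, Kk in zip(mu, K_list))
        svr = SVR(kernel="precomputed", C=C, epsilon=eps).fit(K, y)
        beta = np.zeros(n)
        beta[svr.support_] = svr.dual_coef_.ravel()       # alpha - alpha*
        # Stage 2: multipliers fixed -> the gradient of the optimal dual
        # value w.r.t. mu_k is -0.5 * beta^T K_k beta (Danskin's theorem);
        # take a descent step and project back onto the simplex.
        grad = np.array([-0.5 * beta @ Kk @ beta for Kk in K_list])
        mu = project_simplex(mu - lr * grad)
    # Final refit with the last kernel weights.
    K = sum(w * Kk for w, Kk in zip(mu, K_list))
    svr = SVR(kernel="precomputed", C=C, epsilon=eps).fit(K, y)
    return svr, mu

# Usage on toy data (for illustration only).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)
svr, mu = two_stage_mkl_svr(X, y, gammas=[0.1, 1.0, 10.0])
```

Because the weights live on the simplex, kernels whose hyperparameter settings contribute little are driven toward zero weight automatically, which is what removes the need for manual trial-and-error over a single kernel's hyperparameters.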
Conclusion
We have proposed a multiple-kernel support vector regression approach for stock market price forecasting. A two-stage multiple-kernel learning algorithm is developed to optimally combine multiple kernel matrices for support vector regression. The learning algorithm applies sequential minimal optimization and gradient projection iteratively to obtain the Lagrange multipliers and the optimal kernel weights. With this algorithm, the advantages of different hyperparameter settings can be combined and overall system performance improved. Moreover, the user need not specify the hyperparameter settings in advance, so trial-and-error for determining appropriate hyperparameter settings can be avoided. Experimental results, obtained by running on datasets taken from the Taiwan Capitalization Weighted Stock Index, have shown that our method performs better than other methods.