On Lagrangian support vector regression
| Article code | Publication year | English article length |
|---|---|---|
| 25292 | 2010 | 9 pages (PDF) |

Publisher : Elsevier - Science Direct
Journal : Expert Systems with Applications, Volume 37, Issue 12, December 2010, Pages 8784–8792
English Abstract
Prediction by regression is an important method of solution for forecasting. In this paper an iterative Lagrangian support vector machine algorithm for regression problems is proposed. The method has the advantage that its solution is obtained by taking the inverse of a matrix of order equal to the number of input samples at the beginning of the iteration, rather than by solving a quadratic optimization problem. The algorithm converges from any starting point and does not need any optimization packages. Numerical experiments have been performed on Bodyfat and a number of important time series datasets of interest. The results obtained are in close agreement with the exact solutions of the problems considered, which clearly demonstrates the effectiveness of the proposed method.
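To make the "matrix inverse at the beginning of the iteration" idea concrete, the following NumPy sketch shows a Lagrangian-style fixed-point iteration for a nonnegatively constrained dual of the form min over u ≥ 0 of ½uᵗQu − rᵗu, in the spirit of Mangasarian and Musicant (2001a). Here Q and r stand for the regression dual's positive-definite matrix and linear term as defined in the paper (not reproduced here); the function name, step parameter alpha and stopping rule are illustrative assumptions, not the authors' code.

```python
import numpy as np

def lagrangian_dual_iteration(Q, r, alpha, tol=1e-6, max_iter=1000):
    """Solve min_{u >= 0} 0.5*u'Qu - r'u by the Lagrangian fixed-point scheme
    u_{k+1} = Q^{-1} (r + ((Q u_k - r) - alpha*u_k)_+),
    where (.)_+ sets negative components to zero."""
    Q_inv = np.linalg.inv(Q)      # the only matrix inversion, done once up front
    u = Q_inv @ r                 # starting point; any start is admissible
    for _ in range(max_iter):
        u_next = Q_inv @ (r + np.maximum((Q @ u - r) - alpha * u, 0.0))
        if np.linalg.norm(u_next - u) < tol:
            break
        u = u_next
    return u
```

For a suitably small step parameter (the paper gives the precise condition), iterations of this form converge to the dual solution from any starting point, which is the property the abstract refers to.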
English Introduction
Support vector machine (SVM) methods based on statistical learning theory (Vapnik, 2000) have been successfully applied to many problems of practical importance (Guyon et al., 2002 and Osuna et al., 1997) due to their high generalization performance over other learning methods. It is well known that the standard SVM formulation (Burges, 1998 and Cristianini and Shawe-Taylor, 2000) leads to the solution of a quadratic programming problem with linear inequality constraints, and that this problem has a unique solution. With the combined advantages of generalization performance and a unique solution, SVM is an attractive method for problems of interest.

The goal of a regression problem is to determine the underlying mathematical relationship between the given input observations and their output values. Regression models have been successfully applied in many important fields of study such as economics, engineering and bioinformatics. With the introduction of the ε-insensitive error loss function proposed by Vapnik (2000), SVM methods have been successfully applied to regression problems (Mukherjee, Osuna, & Girosi, 1997; Muller et al., 1999 and Tay and Cao, 2001). Considering the 2-norm error loss function instead of the usual 1-norm, and maximizing the margin with respect to both the orientation and the relative location to the origin of the bounding planes, Mangasarian and Musicant, 2001a and Mangasarian and Musicant, 2001b studied "equivalent" SVM formulations for classification problems. This formulation leads to solving a positive-definite dual problem having only non-negativity constraints on the dual variables. They further studied formulating the machine learning and data mining problem as an unconstrained minimization problem whose objective function is strongly convex, and obtained its solution using a finite Newton method (see Fung and Mangasarian, 2003 and Mangasarian, 2002). Since the objective function is not twice differentiable in this formulation, a smoothing technique was applied and a new SVM formulation called smooth SVM (SSVM) was proposed in Lee and Mangasarian (2001). For an extension of SSVM to ε-insensitive error loss based support vector regression (SVR) problems, see Lee, Hsieh, and Huang (2005). Finally, for the extension of the Active set SVM (ASVM) (Mangasarian & Musicant, 2001b) method, proposed for classification problems, to SVR problems, we refer the reader to Musicant and Feinberg (2004).

Motivated by the study of Lagrangian SVM (Mangasarian & Musicant, 2001a) for classification problems, we propose in this paper a Lagrangian ε-insensitive SVR formulation. The main advantage of our approach in comparison with the standard SVR formulation is that the solution of the problem is obtained by taking the inverse of a matrix at the beginning of the iteration rather than by solving a quadratic programming problem. In order to verify the effectiveness of the proposed method, a number of problems of practical importance are considered. It is observed that the results obtained are in close agreement with the exact solutions of the problems considered.

Throughout this work all vectors are assumed to be column vectors. For any two vectors x, y in the n-dimensional real space Rn, the inner product of the vectors will be denoted by xty, where xt is the transpose of the vector x. When x is orthogonal to y we write x ⊥ y. The 2-norm of a vector x and of a matrix Q will be denoted by ∥x∥ and ∥Q∥ respectively.
For any vector x ∈ Rn, x+ is the vector in Rn obtained by setting all the negative components of x to zero. For matrices M ∈ Rm×n and N ∈ Rn×ℓ, the kernel matrix K of size m × ℓ is denoted by K = K(M, N). The identity matrix of appropriate size is denoted by I and the column vector of ones of dimension m by e.

The paper is organized as follows. In Section 2 the linear and nonlinear SVR formulations for the standard 1-norm and 2-norm are introduced. By considering the Karush–Kuhn–Tucker (KKT) conditions, the Lagrangian SVR algorithm is formulated in Section 3, and its convergence follows from the result of Mangasarian and Musicant (2001a). In Section 4 numerical experiments are performed on the Bodyfat, Mackey–Glass, IBM, Google and Citigroup datasets and their results are compared with the exact solutions. Finally, we conclude our work in Section 5.
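For reference, the ε-insensitive error loss of Vapnik (2000) and the standard 1-norm SVR primal that Section 2 starts from can be written in their usual textbook form (this is the standard formulation, not a quotation from the paper):

```latex
% epsilon-insensitive error loss (Vapnik, 2000)
|y - f(x)|_{\varepsilon} = \max\bigl(0,\; |y - f(x)| - \varepsilon\bigr)

% standard 1-norm SVR primal over m training pairs (x_i, y_i)
\min_{w,\,b,\,\xi,\,\xi^{*}}\ \tfrac{1}{2}\,w^{t}w + C\sum_{i=1}^{m}\bigl(\xi_i + \xi_i^{*}\bigr)
\quad\text{subject to}\quad
\begin{cases}
 y_i - \bigl(w^{t}x_i + b\bigr) \le \varepsilon + \xi_i,\\
 \bigl(w^{t}x_i + b\bigr) - y_i \le \varepsilon + \xi_i^{*},\\
 \xi_i,\ \xi_i^{*} \ge 0,\qquad i = 1,\dots,m.
\end{cases}
```

The 2-norm variant studied by Mangasarian and Musicant replaces the linear penalty on the slacks by a squared penalty, which is what yields the positive-definite, nonnegativity-constrained dual exploited by the Lagrangian iteration.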
English Conclusion
A new iterative Lagrangian support vector regression algorithm is proposed in this paper. The effectiveness of the proposed method is demonstrated by performing numerical experiments on a number of interesting datasets. The algorithm requires, at the start, the inverse of a matrix of order equal to twice the number of input samples, i.e., the number of non-negativity constraints on the dual variables. However, by treating this matrix as a block matrix, the algorithm is reformulated so that the solution is obtained by taking the inverse of a matrix of order equal to the number of input samples. Future work will study the implicit Lagrangian formulation (Mangasarian & Solodov, 1993) for the dual problem considered above.
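The reduction from a matrix of order 2m to one of order m mentioned above is consistent with the standard inverse formula for a symmetric 2 × 2 block matrix; whether the paper uses this exact identity or an equivalent manipulation of the dual blocks, only order-m inverses are required. A sketch, assuming the 2m × 2m dual matrix has the symmetric block form shown:

```latex
\begin{pmatrix} A & B \\ B & A \end{pmatrix}^{-1}
= \frac{1}{2}
\begin{pmatrix}
 (A+B)^{-1} + (A-B)^{-1} & (A+B)^{-1} - (A-B)^{-1} \\
 (A+B)^{-1} - (A-B)^{-1} & (A+B)^{-1} + (A-B)^{-1}
\end{pmatrix}
```

Here A and B are the order-m blocks of the dual matrix, so the full inverse follows from the inverses of A+B and A−B, each of order equal to the number of input samples.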