Download English ISI Article No. 25644
Persian Title

رگرسیون بردار پشتیبانی محدب چند متغیره با برنامه نویسی نیمه معین

Article code: 25644
Publication year: 2012
English article: 8-page PDF
Persian translation: available to order
Word count: not calculated
Purchase the Article
After payment, you can download the article immediately.
English Title
Multivariate convex support vector regression with semidefinite programming
Source

Publisher: Elsevier - Science Direct

Journal: Knowledge-Based Systems, Volume 30, June 2012, Pages 87–94

Keywords
Support vector regression; Shape restriction; Convexity; Semidefinite programming; Linear matrix inequality constraints
Article Preview

English Abstract

As an important nonparametric regression method, support vector regression achieves nonlinear modeling capability through the kernel trick. This paper discusses multivariate support vector regression when the regression function is restricted to be convex. The convex shape restriction is approximated by a series of linear matrix inequality constraints, and the training problem is transformed into a semidefinite programming problem, which is computationally tractable. Extensions to the multivariate concave case, ℓ2-norm regularization, and ℓ1- and ℓ2-norm loss functions are also studied. Experimental results on both toy data sets and a real data set clearly show that, by exploiting this prior shape knowledge, the method achieves better performance than classical support vector regression.

English Introduction

Avoiding strong prior assumptions on functional forms, nonparametric regression methods are powerful tools for data description and exploration. Their major advantage is that they do not require analysts to explicitly specify a parametric structure for the data; instead, the data are allowed to "speak for themselves". Nonparametric regression methods are attracting increasing attention in many areas [45] and [10]. Even though an analyst may not know the exact form of a relationship, he or she usually has some prior knowledge of its shape, such as monotonicity or convexity. Typical examples appear in economics (utility, production or cost functions), medicine (dose-response experiments) and biology (growth curves). For a rational consumer, the utility function is widely recognized to be non-decreasing and concave. By fitting an explicitly specified function, parametric methods can certainly obtain estimates that satisfy prior-known shape restrictions, but they are vulnerable to model specification error. It is widely recognized by nonparametric statisticians that shape-restricted nonparametric regression can better predict the relationship between predictors and responses.

Shape-restricted nonparametric regression dates back to the seminal works [15] and [7]. The first dealt with least squares estimation of a concave function, while the second discussed the estimation of monotone functions. Since then, many results on different shape-restricted nonparametric regression methods have been published. One can refer to [8] and, more recently, [49] for the literature on isotonic (i.e. monotone) regression. In the case of convex or concave regression, statistical properties of concave regression [15], such as consistency, rate of convergence and asymptotic distribution, have been analyzed in [13], [23] and [12]. There is a vast literature on shape-restricted nonparametric regression; the shapes of main concern include monotonicity, convexity, concavity, super-modularity and unimodality. As alternatives to least-squares minimization, many spline, kernel smoothing and wavelet-based techniques have been applied to shape-restricted nonparametric regression.

This paper focuses on estimating a multivariate regression function that is known to be convex or concave. The extension from univariate convex nonparametric regression to the multivariate case is not straightforward. For univariate nonparametric regression, the convex shape restriction only requires the second derivative to be non-negative, whereas in d-dimensional (d ⩾ 2) multivariate nonparametric regression it is equivalent to requiring the Hessian matrix to be positive semidefinite over the whole domain. Though univariate convex nonparametric regression has been extensively researched, there is little work on the multivariate case, except [2], [26], [27], [1], [17] and [33]. Matzkin [26] and [27] considered the case where the variable of interest is discrete. In [2] and [1], the regression function was assumed to be contaminated by errors with specified distributions, so it could be obtained by maximum likelihood estimation. Instead of specifying error distributions, [17] and [33] obtained the regression function by least squares minimization. The most important limitation of the above methods is that they can only produce piecewise-linear surfaces, which are not differentiable at the knots.
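In symbols, the multivariate convexity condition referred to above is the standard second-order characterization for a twice-differentiable function (this restatement is ours, not a formula quoted from the paper):

```latex
% f : \mathbb{R}^d \to \mathbb{R}, twice differentiable on a convex domain \mathcal{X},
% is convex iff its Hessian is positive semidefinite everywhere on \mathcal{X}:
\nabla^2 f(\mathbf{x})
  \;=\; \left[\frac{\partial^2 f(\mathbf{x})}{\partial x_i\,\partial x_j}\right]_{i,j=1}^{d}
  \;\succeq\; 0
  \qquad \text{for all } \mathbf{x} \in \mathcal{X}.
```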
This paper proposes an alternative nonparametric estimation of a multivariate convex or concave regression function, based on support vector regression [35]. Support vector regression belongs to the family of kernel methods [32] and [34], which have found successful applications in many areas such as credit risk evaluation [44] and [21], fraud detection [28] and time series forecasting [47] and [46]. To achieve nonlinear regression, support vector regression forms an optimal linear regression hyperplane through a mapping from the input space to a higher-dimensional feature space. Support vector regression has a great advantage over [2], [1], [17] and [33], because its regression function is smooth and differentiable everywhere. Incorporating qualitative prior shape knowledge into support vector regression has been explored in [29] and [39]. [29] built a monotone least squares support vector machine, which imposed monotonicity-related constraints on every pair of monotone samples. [39] analyzed support vector regression when the derivatives of the regression function are restricted to be bounded, which includes monotone, convex and concave shape restrictions. However, [39] cannot be generalized to the bivariate or multivariate convex and concave cases, which need semidefinite constraints instead of ordinary linear inequality constraints. It is also widely known in the machine learning and neural networks communities that better performance can be achieved by incorporating prior knowledge [19]; literature on prior-knowledge-based support vector machines includes [11], [29], [25] and [20].

This paper employs semidefinite programming to solve convex or concave multivariate support vector regression. Applications of semidefinite programming can also be found in [18]. In our method, the convex shape restriction, approximated by a series of linear matrix inequality constraints at every training point, forces the regression function to be convex or concave. In this way, the method exploits prior shape knowledge about the function between predictors and responses, and we expect this exploitation of qualitative prior knowledge to improve out-of-sample regression performance. The method has two novelties. First, compared with parametric methods, its nonparametric character efficiently avoids model specification error. Second, the additional regularization term enables the method to minimize not only the empirical error, but also the generalization error.

The paper is organized as follows. Section 2 introduces the main idea of solving multivariate convex or concave support vector regression by semidefinite programming. The capability of the method is verified on two artificial data sets and one real data set in Section 3. Other variants, including different loss functions and regularization terms, are analyzed in Section 4. The paper is concluded in Section 5.

Notation: all vectors are column vectors written in boldface lowercase letters, whereas matrices are boldface uppercase, except that the i-th row of a matrix A is denoted A_i. The vectors 0 and 1 are vectors of appropriate dimensions with all components equal to 0 and 1, respectively. I is the identity matrix of appropriate size. The matrix X ∈ R^{N×d} contains all the training samples x_i, i = 1, … , N, as rows, and the vector y ∈ R^N contains the corresponding target values y_i. k: R^d × R^d → R is the kernel function. For A ∈ R^{m×d} and B ∈ R^{n×d} containing d-dimensional sample vectors as rows, the kernel K(A, B) maps R^{m×d} × R^{n×d} to R^{m×n} with K(A, B)_{i,j} = k(A_i, B_j); the kernel matrix K(X, X) is written K for short. For a vector v, diag(v) is the diagonal matrix with the components of v on its diagonal, and for a collection of symmetric matrices A_1, … , A_i, diag(A_1, … , A_i) is the block diagonal matrix with diagonal blocks A_1, … , A_i. For any symmetric square matrix A, A ⪰ 0 means that A is positive semidefinite and A ⪯ 0 means that A is negative semidefinite.
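As a concrete illustration of the kernel-matrix notation K(A, B)_{i,j} = k(A_i, B_j), the short sketch below builds such a matrix with a Gaussian (RBF) kernel; the kernel choice, the bandwidth sigma, and the helper names are our assumptions, since the paper does not fix a specific kernel at this point.

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    """Gaussian (RBF) kernel k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

def kernel_matrix(A, B, k=rbf_kernel):
    """K(A, B)[i, j] = k(A_i, B_j) for row-wise sample matrices A (m x d), B (n x d)."""
    m, n = A.shape[0], B.shape[0]
    K = np.empty((m, n))
    for i in range(m):
        for j in range(n):
            K[i, j] = k(A[i], B[j])
    return K

# The training kernel matrix written K for short in the paper's notation: K = K(X, X).
X = np.random.default_rng(0).normal(size=(5, 2))   # 5 toy samples in R^2
K = kernel_matrix(X, X)
print(K.shape)  # (5, 5)
```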

English Conclusion

This paper discusses multivariate support vector regression under a convex shape restriction. To make the fitted surface convex, a series of linear matrix inequality constraints is imposed at each training point; the motivation is to force the Hessian matrix at each training point to be positive semidefinite. The training problem can then be expressed as a semidefinite program with some linear inequality constraints and N linear matrix inequality constraints. Experimental results on two toy data sets and one real data set show that these linear matrix inequality constraints force the fitted function to be convex at the training points and help to achieve better fitting performance than classical support vector regression. Some variants of the loss functions and regularization terms are also studied.
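For readers who want to experiment with this kind of constraint, the sketch below is a minimal illustration rather than the paper's exact formulation: it fits a Gaussian-kernel expansion f(x) = Σ_i α_i k(x_i, x) to toy data with a regularized squared loss (the kernel choice, bandwidth, squared loss instead of the ε-insensitive loss, the ridge penalty on α, and the cvxpy/SCS toolchain are all our assumptions), while imposing one linear matrix inequality per training point that requires the Hessian of f at that point to be positive semidefinite.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, d, sigma, lam = 30, 2, 1.0, 1e-2

# Toy convex target: y = ||x||^2 plus noise.
X = rng.uniform(-1.0, 1.0, size=(N, d))
y = np.sum(X ** 2, axis=1) + 0.05 * rng.normal(size=N)

def k(a, b):
    """Gaussian kernel k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

def hess_k(a, b):
    """Hessian of x -> k(a, x) evaluated at x = b:
    (k(a, b) / sigma^2) * ((b - a)(b - a)^T / sigma^2 - I)."""
    diff = (b - a).reshape(-1, 1)
    return k(a, b) / sigma ** 2 * (diff @ diff.T / sigma ** 2 - np.eye(d))

K = np.array([[k(X[i], X[j]) for j in range(N)] for i in range(N)])   # N x N kernel matrix
H = [[hess_k(X[i], X[j]) for j in range(N)] for i in range(N)]        # d x d Hessian blocks

alpha = cp.Variable(N)

# Regularized squared loss on f(x_j) = sum_i alpha_i k(x_i, x_j) = (K alpha)_j.
objective = cp.Minimize(cp.sum_squares(K @ alpha - y) + lam * cp.sum_squares(alpha))

# One LMI per training point: the Hessian of f at x_j must be positive semidefinite.
constraints = [sum(alpha[i] * H[i][j] for i in range(N)) >> 0 for j in range(N)]

prob = cp.Problem(objective, constraints)
prob.solve(solver=cp.SCS)
print("status:", prob.status,
      "training RMSE:", np.sqrt(np.mean((K @ alpha.value - y) ** 2)))
```

Swapping the sign of the LMI (⪯ 0 instead of ⪰ 0) would give the concave variant mentioned in the paper; replacing the squared loss with an ε-insensitive loss would bring the sketch closer to a support vector regression objective.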
