New validation methods for improving standard and multi-parametric support vector regression training time
|Article code||Publication year||English article||Persian translation||Word count|
|25654||2012||8-page PDF||Available to order||5806 words|
Publisher: Elsevier - Science Direct
Journal: Expert Systems with Applications, Volume 39, Issue 9, July 2012, Pages 8220–8227
The selection of hyper-parameters in support vector regression algorithms (SVMr) is an essential process in the training of these learning machines. Unfortunately, there is no exact method to obtain the optimal values of the SVMr hyper-parameters. It is therefore necessary to use a search algorithm, and often a validation method, to find the best combination of hyper-parameters. The problem is that the SVMr training time can be huge on large training databases if standard search algorithms and validation methods (such as grid search and K-fold cross validation) are used. In this paper we propose two novel validation methods that reduce the SVMr training time while maintaining the accuracy of the final machine. We show the good performance of both methods in the standard SVMr with 3 hyper-parameters (where the hyper-parameter search is usually carried out by means of a grid search) and also in the extension to multi-parametric kernels, where meta-heuristic approaches such as evolutionary algorithms must be used to look for the best set of SVMr hyper-parameters. In all cases the new validation methods have provided very good results in terms of training time, without affecting the final SVMr accuracy.
The support vector regression algorithm (SVMr) (Smola & Schölkopf, 1998) is a robust methodology in statistical machine learning that has been successfully applied to solve regression problems (He et al., 2008, Lázaro et al., 2005, Mohandes et al., 2004 and Wu et al., 2008). The SVMr uses kernel theory (Smola & Schölkopf, 1998) to increase the quality of regression models and, in most cases, can be solved as a convex optimization problem. Several fast algorithms can be used to carry out the SVMr training, such as the sequential minimal optimization algorithm (Smola & Schölkopf, 1998). In spite of this, the time for training an SVMr model can be very high, because the SVMr performance heavily depends on the choice of several hyper-parameters needed to define the optimization problem and the final SVMr model. Different approaches focus on reducing this hard computation time of the SVMr model: in Guo and Zhang (2007) a method based on reducing the number of samples included in the SVMr training is proposed, and in Zhao, Sun, and Zou (2010) a similar idea is applied to multi-parametric kernel SVMr. In Zhao and Sun (2010) a different methodology is applied, based on approximating the SVMr solution instead of solving the optimization problem exactly. In Ortiz-Garcia, Salcedo-Sanz, Pérez-Bellido, and Portilla-Figueras (2009) an approach to reduce the SVMr training time based on reducing the hyper-parameter search space is proposed. The search for the best set of SVMr hyper-parameters is perhaps the most time-consuming process in SVMr training: since there is no exact method to obtain the optimal set of SVMr hyper-parameters, exhaustive search or meta-heuristic based algorithms are usually applied (Akay, 2009, Hou and Li, 2009 and Wu et al., 2009). In both cases, two processes must be combined to find a good set of SVMr hyper-parameters: a search algorithm and a validation method.
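To make the role of the hyper-parameters concrete, the following minimal sketch (not from the paper) fits a standard RBF-kernel SVMr with scikit-learn, whose solver is an SMO-type algorithm as mentioned above. The data and the values of C, ε and γ are illustrative assumptions; the point is only that all three must be fixed before training.

```python
# Minimal illustration (assumed setup, not the paper's experiments):
# training a standard SVMr requires fixing the three hyper-parameters
# C, epsilon and gamma in advance, which is what motivates the
# search + validation machinery discussed in the text.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))          # toy regression inputs
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)  # noisy target

# Illustrative hyper-parameter choices; in practice these are the
# values a search algorithm + validation method must discover.
model = SVR(kernel="rbf", C=10.0, epsilon=0.05, gamma=0.5)
model.fit(X, y)
print(round(model.score(X, y), 3))  # R^2 on the training data
```

Changing any of C, ε or γ changes both the fitted model and the training cost, which is why a full hyper-parameter search multiplies the training time by the number of combinations tried.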
Perhaps the most widely used search algorithm for obtaining SVMr hyper-parameters is the Grid Search (GS) (Akay, 2009), where the search space of parameters is divided into groups of candidate parameters to be tested (usually, a uniform partition of the search space is considered). This algorithm can be easily implemented; however, it has an important drawback: when the number of hyper-parameter combinations is large, the training time becomes high, even considering only the three standard SVMr hyper-parameters in the search, i.e., C, ϵ and γ. In the case of multi-parametric kernel optimization (Friedrichs and Igel, 2005 and Zhao and Sun, 2011), with N hyper-parameters to be optimized (C, ϵ, γm), the GS approach is computationally unaffordable, and meta-heuristic approaches such as evolutionary algorithms (Eiben & Smith, 2003) are usually employed for this task (Friedrichs and Igel, 2005 and Rojas and Fernández-Reyes, 2005). Independently of the type of algorithm used to carry out the search of the SVMr hyper-parameters, a process is needed to evaluate the goodness of every set of tested parameters: (C, ϵ, γ) in the case of the standard SVMr, or (C, ϵ, γm) in the case of the multi-parametric SVMr. This is the aim of the validation method, which is a crucial part of the training process of an SVMr. A given validation method must select the best values for the vector (C, ϵ, γ) or (C, ϵ, γm) from the training data. A wrong evaluation at this stage could produce over-fitting in the final SVMr model, so that its performance on an independent test set would be poor. Most authors use traditional validation methods such as K-fold cross validation, Leave One Out, Bootstrap, etc., but these validation techniques are not focused on improving the SVMr training time, so the training time can be very high in applications with large training databases.
Note that this high training time is especially important in multi-parametric SVMr since, as mentioned before, the hyper-parameter search in this approach is usually carried out with evolutionary computation or a similar algorithm. In this paper we propose two new validation methods that considerably reduce the training time of the SVMr while maintaining, in most cases, its performance in terms of accuracy. The first new validation technique, called percentage cross validation, is based on splitting the initial training set into two subsets with different percentages of samples, one with N% of the samples and another with the rest (100 − N%), obtaining two models from these subsets and testing each on the complementary set. This process is repeated several times, increasing the value of N at each step. The second method is called generalized predictive cross validation, and it is based on testing the behavior of sub-models which are created using predictions of an input set. We test the proposed cross validation methods on several regression problems extracted from the UCI and StatLib machine learning repositories. Moreover, we solve all the problems considered with both the standard SVMr and the multi-parametric SVMr with evolutionary training, in order to show the goodness of the proposed cross validation methods in both cases. The structure of the rest of the paper is as follows: the next section presents the mathematical foundations of the standard SVMr and the multi-parametric model considered in this paper. Section 3 presents the validation methods proposed as alternatives to traditional methods. K-fold cross validation is also introduced in this section as a reference (standard) algorithm for cross validation. Section 4 presents the performance of the two different SVMr approaches with every validation method on several real regression problems. Finally, Section 5 closes the paper with some final remarks.
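The percentage cross validation idea, as far as it is described above, can be sketched as follows. This is a hedged reconstruction from the text only: the split into N% and (100 − N)% subsets, the two complementary models, and the repetition over increasing N are from the description, while the choice of percentages, the error metric, and the aggregation into a single score are assumptions; the paper should be consulted for the exact procedure.

```python
# Hedged sketch of "percentage cross validation" as described in the
# text: split the training set into an N% subset and its (100 - N)%
# complement, fit one model on each, test each on the other, and repeat
# while increasing N. The percentages, the MSE metric and the averaging
# below are illustrative assumptions, not the paper's exact scheme.
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.svm import SVR

def percentage_cv_score(X, y, params, percentages=(30, 40, 50)):
    errors = []
    for n in percentages:
        split = int(len(X) * n / 100)
        X_a, y_a = X[:split], y[:split]      # N% subset
        X_b, y_b = X[split:], y[split:]      # (100 - N)% subset
        # Two models per percentage, each validated on its complement:
        # only 2 fits per step, versus K fits per candidate in K-fold CV.
        for Xtr, ytr, Xte, yte in ((X_a, y_a, X_b, y_b),
                                   (X_b, y_b, X_a, y_a)):
            model = SVR(kernel="rbf", **params).fit(Xtr, ytr)
            errors.append(mean_squared_error(yte, model.predict(Xte)))
    return float(np.mean(errors))

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(120, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=120)

score = percentage_cv_score(X, y, {"C": 10.0, "epsilon": 0.05, "gamma": 0.5})
print(round(score, 4))
```

A search algorithm (grid search or an evolutionary algorithm) would call such a scoring function once per candidate hyper-parameter vector, so any reduction in fits per call translates directly into shorter total training time.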
Conclusion
In this paper we have proposed two new validation methods to reduce the SVMr training time (the search for the SVMr hyper-parameters) while maintaining the accuracy and generalization properties of the final machine. These new validation methods can be applied to the standard and multi-parametric versions of the SVMr with any type of hyper-parameter search algorithm, including grid search and meta-heuristic approaches such as evolutionary algorithms. In the experimental section of the paper we have tested the proposed methods in different experiments, with grid search and an evolutionary algorithm as search algorithms and K-fold cross validation as the reference approach. We have shown that the proposed approaches achieve performance similar to the K-fold approach in terms of final SVMr accuracy, but in a much shorter time, both when using grid search and when using the evolutionary algorithm as the SVMr hyper-parameter search technique.