Download English ISI article No. 25639
Persian title of the article (translated)

Optimal training subsets in a support vector regression electric load forecasting model

Article code: 25639
Publication year: 2012
English article: 9-page PDF
Persian translation: available to order
Word count: not calculated
English title
Optimal training subset in a support vector regression electric load forecasting model
Source

Publisher: Elsevier - Science Direct

Journal: Applied Soft Computing, Volume 12, Issue 5, May 2012, Pages 1523–1531

Keywords
Support vector regression - Optimal training subset - Forecasting - Electric load
Article preview

English abstract

This paper presents an optimal training subset for support vector regression (SVR) under deregulated power, which has a distinct advantage over SVR based on the full training set, since it solves the O(N²) memory complexity problem of large samples and prevents over-fitting during unbalanced data regression. To compute the proposed optimal training subset, an approximation convexity optimization framework is constructed by coupling a penalty term for the size of the optimal training subset with the mean absolute percentage error (MAPE) of the full training set prediction. Furthermore, a special method for finding the approximate solution of the optimization goal function is introduced, which enables us to extract maximum information from the full training set and increases the overall prediction accuracy. The applicability and superiority of the presented algorithm are shown by experiments on half-hourly electric load data (48 data points per day) from New South Wales under three different sample sizes. In particular, the benefit of the developed methods for large data sets is demonstrated by significantly shorter CPU running times.
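As a rough illustration of the objective described in the abstract, the sketch below couples the full-training-set MAPE of an SVR fitted on a candidate subset with a penalty on the subset size. The penalty weight `lam`, the SVR hyperparameters and the function names are illustrative assumptions, not values or notation taken from the paper.

```python
# Hedged sketch of a penalized-MAPE selection objective (illustrative only).
import numpy as np
from sklearn.svm import SVR

def mape(y_true, y_pred):
    """Mean absolute percentage error (assumes strictly positive targets,
    as is the case for electric load data)."""
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

def subset_objective(X, y, subset_idx, lam=0.1):
    """Full-set MAPE of an SVR trained only on `subset_idx`, plus a
    penalty proportional to the subset size. `lam` is a hypothetical
    trade-off weight, not a value from the paper."""
    model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
    model.fit(X[subset_idx], y[subset_idx])
    return mape(y, model.predict(X)) + lam * len(subset_idx) / len(y)
```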

English introduction

Load prediction is invaluable in the daily operations of a power utility. It is used for various purposes, such as price and income elasticities, energy transfer scheduling, unit commitment and load dispatch. With the emergence of load management strategies, load prediction has played a broader role in utility operations [1]. Thus, the development of an accurate, fast, simple and robust load prediction algorithm is important to electric utilities and their customers.

With the advances in statistical learning theory, the support vector regression (SVR) model has become very promising and popular due to its attractive features and strong empirical performance in small-sample, nonlinear and high-dimensional data applications [2], [3], [4] and [5]. Quan et al. [6] proposed a weighted least squares SVR local region algorithm for nonlinear time series. Pai and Hong [7] proposed a recurrent SVR model with genetic algorithms to forecast regional electricity load. Using a robust SVR algorithm, Zhan and Cheng [8] reported a harmonic and inter-harmonic analysis of the electric power system. Hybridizing two dissimilar models, the studies in [9], [10] and [11] pointed out that further performance improvements could be made for forecasting in the competitive market.

Based on the VC dimension theory and the structural risk minimization principle, the quality and complexity of the SVR solution do not depend directly on the dimensionality of the input space. The solution is obtained by solving a large-scale quadratic programming problem with linear and box constraints. The memory complexity of this problem, however, is O(N²), where N is the number of training data points. As a result, application models with medium or large training sample sizes are hard to load into memory and cannot be solved by standard SVR. Determining an optimal training subset in medium- or large-sample settings is therefore very important for the generalization performance, computational efficiency, prediction accuracy and data interpretability of SVR prediction [12].

On the other hand, redundant data are not only useless for SVR prediction but can also lead to low computational efficiency and low accuracy. Thus, deciding which redundant information should be discarded from the training set has been a central topic in areas such as statistics, pattern recognition, machine learning, and computer vision. Recently, Moustakidis and Theocharis [13] proposed an efficient filter feature selection method for achieving a satisfactory trade-off between classification accuracy and dimensionality reduction. Using an improved genetic algorithm, Yang et al. [14] and Hamdani et al. [15] presented two feature selection algorithms. Yang and Yang [16] introduced a novel condensing tree for feature selection. Liao [17] studied neural models using the smallest best feature subsets of a bladder cancer data set for classification. Past work on feature selection has emphasized feature extraction and classification; less attention has been given to the critical issue of training data set reduction and time series prediction. Inspired by the fact that only the training data points near the decision boundary (namely the support vectors) have an impact on the final prediction model, we present a training data set reduction algorithm for SVR. For these reasons, an optimal training subset, which represents the maximum information of the full training set, is proposed to supply balanced data with a relatively small training sample size for SVR.
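To make the memory and runtime argument concrete, the following timing sketch (not from the paper) contrasts fitting a standard SVR on a full medium-size sample, whose kernel QP scales as O(N²), with fitting it on a much smaller subset. The random subset here is only a stand-in for the optimal training subset, and all sizes and parameters are arbitrary.

```python
# Illustrative timing comparison: full-set SVR vs. a small training subset.
import time

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
N = 8_000                                        # "medium" sample size, chosen arbitrarily
X = rng.uniform(0.0, 1.0, size=(N, 4))
y = np.sin(2.0 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(N)

t0 = time.perf_counter()
SVR(kernel="rbf").fit(X, y)                      # full set: kernel matrix grows as O(N^2)
print(f"full set ({N} points):  {time.perf_counter() - t0:.1f} s")

idx = rng.choice(N, size=1_000, replace=False)   # stand-in for the optimal training subset
t0 = time.perf_counter()
SVR(kernel="rbf").fit(X[idx], y[idx])            # subset: a much smaller QP problem
print(f"subset (1000 points):   {time.perf_counter() - t0:.1f} s")
```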
Furthermore, an approximation convexity optimization framework for computing the optimal training subset is proposed in our study, and a stopping criterion for the algorithm is established. The K optimal training subset (K-OTS), a new algorithm put forward tentatively, is employed to obtain a naturally sparse optimal training subset. Further studies on convexity will be summarized in our next study. To show the applicability and superiority of the presented algorithm, half-hourly electric load data (48 data points per day) from New South Wales are collected. Before choosing neural networks, statistical methods or other hybrid models, the nature and intended use of the electric load data should be considered carefully. The results of the comparison experiments show that time series forecasting and control systems based on this algorithm have the following advantages: (1) since we establish an approximation convexity optimization framework for computing the optimal training subset, the subset can extract the maximum information of the full training set with a minimum size; (2) faster response with high precision for medium- and large-size training sets; (3) robustness to parameter variation. In Section 2 of this paper we present the new forecasting algorithm and give the main steps of the method. The possible reasons behind the proposed technique are then explained. In Section 3 we introduce the research design, the data description and three performance measures. The numerical results obtained and comparisons are presented and discussed in Section 4. In Section 5 we briefly review this paper and present future research.
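This preview does not reproduce the paper's K-OTS algorithm or its convexity iteration, so the loop below is only a plausible reading of the overall idea under the penalized-MAPE objective sketched after the abstract: grow a subset by adding the points the current model predicts worst on the full training set, and stop when the objective no longer improves. All names, step sizes and tolerances are hypothetical.

```python
# Hedged sketch of an iterative subset-growing scheme (not the paper's K-OTS).
import numpy as np
from sklearn.svm import SVR

def grow_training_subset(X, y, init_size=100, step=50, max_size=2000,
                         tol=1e-3, lam=0.1, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(y), size=init_size, replace=False)
    best = np.inf
    while len(idx) < max_size:
        model = SVR(kernel="rbf").fit(X[idx], y[idx])
        err = np.abs((y - model.predict(X)) / y)   # per-point APE; assumes y > 0 (load data)
        obj = 100.0 * err.mean() + lam * len(idx) / len(y)
        if best - obj < tol:                       # stopping criterion on the objective
            break
        best = obj
        order = np.argsort(err)[::-1]              # worst-predicted points first
        in_subset = np.zeros(len(y), dtype=bool)
        in_subset[idx] = True
        candidates = order[~in_subset[order]]      # keep error order, skip chosen points
        idx = np.concatenate([idx, candidates[:step]])
    return idx
```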

English conclusion

All training pairs are treated uniformly during the learning procedure of SVR; in many real-world cases, however, their influence differs. In particular, over-fitting occurs when the overwhelming redundant training samples of one class input to the training system partially undo the learning effect of the small number of training samples of a different class [32]. As with other tools in soft computing, failure to discard redundant information affects the prediction accuracy, computational efficiency, and learning convergence of SVR. This phenomenon becomes more serious when the training set has a high level of noise. For these reasons, we have proposed a novel optimization framework for computing the optimal training subset (OTS). The framework inherently addresses the O(N²) memory complexity problem suffered by the SVR model. Our contribution in this paper has been twofold, namely the specification of an approximation convexity optimization framework for the OTS on the one hand, and the construction of a convergent computational scheme for solving the OTS on the other. Although we have not provided a complete theoretical justification of our convexity optimization framework in this study, we have built a computational framework that provides empirical evidence of the merit of our proposed solution. As our plots have shown, our initial results point to the effectiveness of the proposed algorithm. To extract an optimal training subset that represents the maximum information of the full training set, a novel K-OTS method and a convexity iteration strategy are applied to explore the optimal training subset, and the convexity property is verified empirically. However, it is fair to say that much remains to be done in the way of model construction and parameter setting.
