Download English ISI Article No. 5855
Article Title (translated from Persian)

Optimization of self-organizing polynomial neural networks

English Title
Optimization of self-organizing polynomial neural networks
Article Code | Publication Year | Pages
5855 | 2013 | 11 pages (PDF)
Source

Publisher: Elsevier - Science Direct

Journal: Expert Systems with Applications, Volume 40, Issue 11, 1 September 2013, Pages 4528–4538

Keywords (translated from Persian)
Polynomial neural networks - Levenberg–Marquardt algorithm - Particle swarm optimization - Time series modeling

Abstract

The main disadvantage of self-organizing polynomial neural networks (SOPNN) automatically structured and trained by the group method of data handling (GMDH) algorithm is the partial optimization of model weights, as the GMDH algorithm optimizes only the weights of the topmost (output) node. In order to estimate to what extent the approximation accuracy of the obtained model can be improved, particle swarm optimization (PSO) has been used to optimize the weights of all node polynomials. Since the PSO is generally computationally expensive and time-consuming, a more efficient Levenberg–Marquardt (LM) algorithm is adapted for the optimization of the SOPNN. After being optimized by the LM algorithm, the SOPNN outperformed the corresponding models based on artificial neural networks (ANN) and support vector machines (SVM). The research is based on the meta-modeling of thermodynamic effects in fluid flow measurements with time constraints. The outstanding characteristics of the optimized SOPNN models are also demonstrated in learning the recurrence relations of multiple superimposed oscillations (MSO).

Introduction

Approximation of complex multidimensional systems by SOPNN, also known as GMDH polynomial neural networks (PNN), was introduced by Ivakhnenko (1971). The SOPNN are constructed by combining low-order polynomials into multi-layered polynomial structures, where the coefficients of the low-order polynomials (generally 2-dimensional 2nd-order polynomials) are obtained by polynomial regression (Chapra & Canale, 1998) with the aim of minimizing the approximation error. GMDH models may achieve reasonable approximation accuracy at low complexity and are simple to implement in digital computers (Maric & Ivek, 2011). The GMDH is resistant to over-fitting since it uses separate data sets for regression and for model selection. When applied to real-time compensation of nonlinear behavior, the self-organizing nature of GMDH may eliminate the complicated structural modeling and parameterization common to conventional modeling strategies (Iwasaki, Takei, & Matsui, 2003). The performance of the SOPNN is generally evaluated by a single-parameter measure (Witten & Eibe, 2005), typically the least square error, which minimizes the model approximation error rather than its complexity. When building models for time-constrained applications, the constraints can be efficiently embedded into the model selection metrics (Maric & Ivek, 2011). It was shown (Maric & Ivek, 2011) that raw SOPNN models (GMDH PNN) are inferior to the multilayer perceptron (MLP) when considering accuracy with respect to complexity. It was also concluded (Maric & Ivek, 2011) that the SVM is not appropriate for building low-complexity models for time-constrained applications. The SOPNN node-polynomial weights, once calculated by regression, remain unchanged during the rest of the training process, resulting in sub-optimal SOPNN models. The accuracy and prediction ability of the models may be improved significantly when they are trained by genetic programming and back-propagation (BP).
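The node polynomials described above (2-variable 2nd-order polynomials whose coefficients are found by least-squares regression) can be sketched as follows. This is an illustrative reconstruction, not code from the paper; the function names are hypothetical:

```python
import numpy as np

def fit_node_polynomial(x1, x2, y):
    """Least-squares fit of a GMDH node polynomial:
    y ~ w0 + w1*x1 + w2*x2 + w3*x1^2 + w4*x2^2 + w5*x1*x2."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def eval_node_polynomial(w, x1, x2):
    """Evaluate the fitted 2-variable 2nd-order node polynomial."""
    return w[0] + w[1]*x1 + w[2]*x2 + w[3]*x1**2 + w[4]*x2**2 + w[5]*x1*x2

# Example: recover a known quadratic from noiseless samples.
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 50)
x2 = rng.uniform(-1, 1, 50)
y = 1.0 + 2.0*x1 - 0.5*x2 + 0.3*x1**2 + 0.7*x1*x2
w = fit_node_polynomial(x1, x2, y)
```

In a full GMDH layer, such polynomials would be fitted for candidate input pairs and the best nodes kept, with selection performed on a separate data set, as the text notes.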
It was shown (Nikolaev & Iba, 2003) that a population-based search technique relying on genetic programming and the BP algorithm makes it possible to identify networks with good training as well as generalization performance. The BP improves the accuracy of the model but is known to often get stuck in local minima. The idea of this paper is to adapt a more robust procedure for the optimization of the SOPNN relation with respect to its weights. The PSO is a nature-inspired algorithm that enables the optimization of model weights by simulating the flight of a bird flock (Eberhart & Kennedy, 1995). Since the PSO is simple to implement, it has been used in our experiments to estimate the approximation abilities of raw SOPNN models. After the PSO significantly improved the approximation accuracy of various SOPNN models, a more complex Levenberg–Marquardt (LM) algorithm (Levenberg, 1944; Marquardt, 1963) was adapted for the optimization of model weights. Although widely used for the optimization of ANN, the use of the LM algorithm for the optimization of SOPNN weights has, to the best of my knowledge, not been reported in the literature. The LM algorithm converges many times faster than the PSO and increases the approximation accuracy of the SOPNN model substantially. This paper describes the adaptation of the PSO and the LM algorithm for the optimization of the SOPNN and demonstrates how the approximation accuracy of the original GMDH model can be significantly improved by optimizing its weights. In the following section, the GMDH, the PSO and the LM algorithm are described, and the PSO and the LM algorithm are adapted for the optimization of SOPNN weights. Section 3 describes the procedure for the estimation of the execution time of SOPNN and MLP in time-constrained applications.
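The PSO weight optimization mentioned above can be illustrated with a minimal sketch. The hyperparameters (swarm size, inertia, acceleration coefficients) are generic illustrative defaults, not the paper's settings, and the toy loss stands in for the SOPNN approximation error over all node-polynomial weights:

```python
import numpy as np

def pso_minimize(loss, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a flat weight vector (a sketch)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, (n_particles, dim))   # particle positions = candidate weights
    vel = np.zeros_like(pos)
    pbest = pos.copy()                             # personal best positions
    pbest_val = np.array([loss(p) for p in pos])
    g = pbest[np.argmin(pbest_val)].copy()         # global best position
    g_val = pbest_val.min()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + cognitive pull + social pull.
        vel = w*vel + c1*r1*(pbest - pos) + c2*r2*(g - pos)
        pos = pos + vel
        vals = np.array([loss(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        if vals.min() < g_val:
            g = pos[np.argmin(vals)].copy()
            g_val = vals.min()
    return g, g_val

# Toy loss: squared distance to a target weight vector (hypothetical stand-in).
target = np.array([0.5, -0.3, 0.8])
best, best_val = pso_minimize(lambda p: float(np.sum((p - target)**2)), dim=3)
```

For the SOPNN case, the loss would evaluate the network over the training set with the candidate weights substituted into all node polynomials.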
A procedure for the compensation of thermodynamic effects in flow rate measurements is summarized in Section 4, and the results of the simulations of the flow rate error compensation procedure by the surrogate models are given in Section 5. Finally, in Section 6, the outstanding performance of the SOPNN is demonstrated on the MSO task, which has been widely studied in the echo state network (ESN) literature.

Conclusion

The paper presents an efficient adaptation of the Levenberg–Marquardt algorithm for the optimization of the SOPNN. It points out that the LM algorithm makes the SOPNN very competitive for the approximation of complex systems and procedures, particularly in real-time applications, since high approximation accuracy is generally achieved with low-complexity models. In computationally intensive real-time applications, complex procedures may be replaced by simplified SOPNN surrogates and thus become feasible in real time. The paper particularly emphasizes the high accuracy/complexity ratio of the optimized model and the simplicity of its implementation in software. The LM algorithm has been tested by modeling the computationally intensive procedure for the correction of the flow rate error. It was shown that the approximation characteristics of the original SOPNN can be improved substantially by optimizing the model weights with the LM algorithm. The SOPNN outperformed the corresponding MLP model when both were optimized by the LM algorithm and had approximately equal computational complexity. Similar results have been obtained by modeling the procedures for the calculation of various thermodynamic properties of natural gas. The SOPNN proves to be extremely efficient in learning polynomial-type recurrence relations from time series. When optimized by the LM algorithm, the SOPNN outperformed the ANN and the SVM in MSO modeling. When modeling an MSO recurrence relation, the optimized SOPNN displays outstanding generalization ability, uncommon to ANN and SVM models, and can accurately predict any subset of superimposed sinusoids from the MSO signal it has been trained on. The model prediction error remains very low even when the amplitudes and phases of the superimposed sine waves are changed. The SOPNN is also able to learn with extreme accuracy the recurrence relations of multiple products of sine waves, modulated MSO, damped MSO, etc.
The ability to learn a recurrence relation and to accurately predict future values from past samples opens the possibility of using the SOPNN in modeling the dynamic behavior of complex systems.
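The LM weight update that the conclusion credits for these improvements can be sketched as a damped Gauss–Newton iteration, w ← w − (JᵀJ + λI)⁻¹Jᵀr. This is the generic textbook formulation with an assumed toy residual, not the paper's specific adaptation to SOPNN node-polynomial weights:

```python
import numpy as np

def lm_fit(residual, jacobian, w0, iters=50, lam=1e-3):
    """Minimal Levenberg–Marquardt loop: damped Gauss–Newton steps,
    with the damping factor lam adapted on step acceptance/rejection."""
    w = np.asarray(w0, dtype=float)
    cost = 0.5 * np.sum(residual(w)**2)
    for _ in range(iters):
        r = residual(w)
        J = jacobian(w)
        step = np.linalg.solve(J.T @ J + lam * np.eye(w.size), J.T @ r)
        w_new = w - step
        cost_new = 0.5 * np.sum(residual(w_new)**2)
        if cost_new < cost:      # accept the step, trust the quadratic model more
            w, cost = w_new, cost_new
            lam *= 0.5
        else:                    # reject the step, increase damping toward gradient descent
            lam *= 2.0
    return w

# Toy problem (hypothetical): fit y = a*exp(b*x) to exact data with a=2, b=-1.
x = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-1.0 * x)
res = lambda w: w[0] * np.exp(w[1] * x) - y
jac = lambda w: np.column_stack([np.exp(w[1] * x), w[0] * x * np.exp(w[1] * x)])
w_fit = lm_fit(res, jac, [1.0, 0.0])
```

For SOPNN optimization, the residual vector would be the model errors over the training samples and the Jacobian would hold the derivatives of the network output with respect to every node-polynomial weight.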