Fuzzy linear regression based on Polynomial Neural Networks
|Paper code||Publication year||English paper||Persian translation||Word count|
|24614||2012||20-page PDF||Available to order||9343 words|
Publisher : Elsevier - Science Direct
Journal : Expert Systems with Applications, Volume 39, Issue 10, August 2012, Pages 8909–8928
In this study, we introduce an estimation approach to determine the parameters of the fuzzy linear regression model. The analytical solution for estimating the values of the parameters has been studied. The issue of negative spreads in fuzzy linear regression makes the problem NP-complete. To deal with this problem, an iterative refinement of the model parameters based on gradient-descent optimization is introduced. In the proposed approach, we use a hierarchical structure composed of dynamically accumulated simple nodes based on Polynomial Neural Networks, whose structure is very flexible. We propose a new methodology for fuzzy linear regression based on the design method of Polynomial Neural Networks, which divide the complicated analytical approach to estimating the parameters of fuzzy linear regression into several simple analytic steps. The fuzzy linear regression is implemented by Polynomial Neural Networks with fuzzy numbers, which are formed by exploiting clustering and Particle Swarm Optimization. It is shown that the design strategy produces a model exhibiting sound performance.
In recent years, the problem of modeling and prediction from observed data has been one of the most commonly encountered research topics in machine learning and data analysis (Guvenir & Uysal, 2000). A simple way to describe a system is regression analysis (Yu & Lee, 2010). In classical regression, both independent and dependent variables are treated as real numbers. However, in many real-world situations, where the complexity of the physical system calls for a more general viewpoint, regression variables are specified in the form of non-numeric (granular) entities such as linguistic variables (Cheng & Lee, 2001). The well-known and commonly encountered classical regression cannot handle such situations (Bardossy, 1990; Bardossy et al., 1990). Fuzzy regression, which can deal with non-numerical entities, especially linguistic variables, was proposed by Imoto et al. (2008), Tanaka et al. (1982), Toyoura et al. (2004), and Watada (2001). The fuzzy linear regression proposed by Tanaka is composed of numeric input variables and linguistic (granular) coefficients treated as fuzzy numbers (in particular, ones described by triangular membership functions). The linguistic coefficients of the regression lead to a linguistic output of the regression model; in other words, the output of a fuzzy linear regression model is also a triangular fuzzy number. In essence, the fuzziness of the output of the regression model emerges because of the lack of a perfect fit of the numeric data to the assumed linear form of the relationship under consideration. In other words, through the introduction of triangular numbers (the parameters of the model), this fuzzy regression reflects the deviations between the data and the linear model. Computationally, the estimation of the fuzzy parameters of the regression is cast as a linear programming problem (Bargiela, Pedrycz, & Nakashima, 2007).
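As a rough illustration of the Tanaka-style model described above, the center and spread of the triangular fuzzy output can be computed from triangular fuzzy coefficients as follows. This is a minimal sketch assuming symmetric triangular numbers represented as (center, spread) pairs; the function and variable names are ours, not the paper's:

```python
# Sketch: output of a Tanaka-style fuzzy linear regression whose
# coefficients A_i = (c_i, s_i) are symmetric triangular fuzzy numbers
# and whose inputs x_i are crisp (numeric).
def fuzzy_linear_output(centers, spreads, x):
    """Return (center, spread) of the triangular fuzzy output
    y = A_0 + A_1*x_1 + ... + A_m*x_m for numeric inputs x (x_0 = 1)."""
    xs = [1.0] + list(x)                                # intercept term x_0 = 1
    center = sum(c * xi for c, xi in zip(centers, xs))  # centers combine linearly
    # spreads combine through the absolute values of the inputs
    spread = sum(s * abs(xi) for s, xi in zip(spreads, xs))
    return center, spread

# Example: y = (2, 0.5) + (1, 0.2) * x evaluated at x = 3
center, spread = fuzzy_linear_output([2.0, 1.0], [0.5, 0.2], [3.0])
```

Note how the output spread grows with |x|: this is exactly the mechanism by which the model's fuzziness absorbs the deviation of the data from the linear fit.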
Diamond developed a simple regression model for triangular fuzzy numbers under the conceptual framework

F(R^m) → F(R)    (1)

where F(R) denotes the family of fuzzy numbers (in our case, triangular ones) defined over the space of real numbers R. For the conceptual framework formed by (1), the various analytical formulae quantifying the values of the parameters of the regression model had to address the issue of negative spreads (Diamond & Koerner, 1997), which complicates the algorithms significantly and makes them difficult to apply to highly dimensional data. From the optimization standpoint, Bargiela et al. (2007) revised the mapping between the independent variables and the dependent variable to be expressed as

F(R) × F(R) × ⋯ × F(R) → F(R)    (2)

In addition, to deal with the issue of negative spreads, Bargiela proposed a re-formulation of the regression problem as a gradient-descent optimization task, which enables a generic generalization of the simple regression model to multiple regression models in a computationally feasible way (Toyoura et al., 2004). The iterative refinement based on the gradient-descent approach to estimating the parameters of fuzzy linear regression is a modification of conventional gradient-descent optimization. The drawback of gradient descent is well known: its performance depends mainly on the shape of the error surface, the starting point of the candidate solution, and the learning coefficient (Seiffert & Michaelis, 2002). In this paper, to overcome the drawbacks of the analytical estimation approach and the iterative refinement approach, we introduce a new estimation technique based on the concept of Polynomial Neural Networks (PNNs).
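A minimal sketch of such an iterative gradient-descent refinement is given below. The exact objective used by Bargiela et al. differs; the update rules here are an assumed, simplified variant for one input variable that clips the spreads at zero so the negative-spread issue cannot arise:

```python
# Sketch (assumed formulation, not the authors' exact update rule):
# gradient-descent refinement of the centers (c0, c1) and spreads (s0, s1)
# of a simple fuzzy regression y ~ (c0 + c1*x, s0 + s1*|x|) against crisp
# targets, with spreads clipped at zero.
def refine(data, lr=0.01, epochs=2000):
    c0 = c1 = s0 = s1 = 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (c0 + c1 * x) - y           # center prediction error
            c0 -= lr * err                    # gradient of err^2 / 2 w.r.t. c0
            c1 -= lr * err * x                # ... and w.r.t. c1
            # drive the predicted spread toward covering the residual |err|
            gap = (s0 + s1 * abs(x)) - abs(err)
            s0 = max(0.0, s0 - lr * gap)      # clip to keep spreads >= 0
            s1 = max(0.0, s1 - lr * gap * abs(x))
    return c0, c1, s0, s1

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]
c0, c1, s0, s1 = refine(data)  # c1 approaches the crisp least-squares slope
```

As the introduction notes, this kind of scheme is sensitive to the starting point and the learning coefficient `lr`; the PNN-based approach proposed in the paper is motivated precisely by that sensitivity.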
When dealing with high-order nonlinear and multivariable models, we require a vast amount of data for estimating all the parameters (Cherkassky et al., 1996; Dickerson & Kosko, 1996). To help alleviate these problems, one of the first approaches along the line of a systematic design of nonlinear relationships between a system's inputs and outputs comes under the name of the Group Method of Data Handling (GMDH). GMDH was developed in the late 1960s by Ivakhnenko (Ivakhnenko, 1971; Ivakhnenko & Madala, 1994; Ivakhnenko & Ivakhnenko, 1995; Ivakhnenko et al., 1994) as a vehicle for identifying nonlinear relations between input and output variables, and GMDH-type algorithms have been used extensively since the mid-1970s for prediction and modeling of complex nonlinear processes. The GMDH algorithm generates an optimal structure of the model through successive generations of Partial Descriptions (PDs) of the data, each regarded as a quadratic regression polynomial of two input variables. While providing a systematic design procedure, GMDH comes with some drawbacks. First, it tends to generate quite complex polynomials even for relatively simple systems (experimental data). Second, owing to its limited generic structure (quadratic two-variable polynomials), GMDH also tends to produce an overly complex network (model) for highly nonlinear systems. Third, if there are fewer than three input variables, the GMDH algorithm does not generate a highly versatile structure. To alleviate the problems associated with GMDH, PNNs were introduced by Oh and Pedrycz (2002) and Oh et al. (2003) as a new category of neural networks. In a nutshell, these networks come with a high level of flexibility: each node (a processing element forming a PD, or PN) can have a different number of input variables and can exploit a different order of the polynomial (linear, quadratic, cubic, etc.).
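A single GMDH partial description, i.e. a quadratic polynomial of two inputs fitted by least squares, can be sketched as follows (illustrative code, not taken from the paper; the normal equations are solved with plain Gaussian elimination to keep the sketch dependency-free):

```python
# Sketch of one GMDH partial description (PD):
# z = a0 + a1*x1 + a2*x2 + a3*x1^2 + a4*x2^2 + a5*x1*x2,
# fitted to data by least squares via the normal equations.
def features(x1, x2):
    return [1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2]

def fit_pd(samples):
    """samples: list of ((x1, x2), y). Returns the 6 coefficients a0..a5."""
    n = 6
    A = [[0.0] * n for _ in range(n)]   # accumulates X^T X
    b = [0.0] * n                       # accumulates X^T y
    for (x1, x2), y in samples:
        f = features(x1, x2)
        for i in range(n):
            b[i] += f[i] * y
            for j in range(n):
                A[i][j] += f[i] * f[j]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for j in range(col, n):
                A[r][j] -= m * A[col][j]
            b[r] -= m * b[col]
    coeffs = [0.0] * n                  # back substitution
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(A[i][j] * coeffs[j] for j in range(i + 1, n))
        coeffs[i] = s / A[i][i]
    return coeffs

# Recover a known polynomial y = 1 + 2*x1 + 3*x1*x2 from a 3x3 grid
samples = [((float(i), float(j)), 1.0 + 2.0 * i + 3.0 * i * j)
           for i in range(3) for j in range(3)]
coeffs = fit_pd(samples)  # approximately [1, 2, 0, 0, 0, 3]
```

A GMDH (or PNN) layer simply fits one such PD for every pair of candidate inputs and keeps the best-performing nodes, which is what makes the structure grow dynamically.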
In comparison to well-known neural networks, whose topologies are commonly selected and kept fixed prior to all detailed (parametric) learning, the PNN architecture is not fixed in advance but becomes fully optimized, both structurally and parametrically. As a consequence, PNNs show superb performance in comparison to previously presented intelligent models. Although the PNN has a flexible architecture whose potential can be fully utilized through a systematic design, it is difficult to obtain a structurally and parametrically optimized network because of the limited design of the polynomial neurons (PNs) located in each layer of the PNN. In other words, when we construct the PNs of each layer in the conventional PNN, parameters such as the number of input variables (nodes), the order of the polynomial, and the input variables available within a PN are fixed (selected) in advance by the designer. Accordingly, the PNN algorithm exhibits some tendency to produce overly complex networks, as well as a repetitive computational load caused by trial and error and/or repeated parameter adjustment by the designer, as in the case of the original GMDH algorithm. In order to generate a structurally and parametrically optimized network, such parameters need to be optimized. We augment the conventional PNNs (which focus on numeric data) to process fuzzy variables. The fuzzy variables are formed on the basis of the available numeric data by using clustering and Particle Swarm Optimization (PSO) (Juang & Wang, 2009; Kaveh & Laknejadi, 2011). Through clustering we capture the distribution of the data; PSO can then find the optimal fuzzy variables, which represent the relationships between the input and output variables. The paper is organized in the following manner. First, in Section 2, we introduce the concept of fuzzy linear regression. The architecture and development of the PNNs for fuzzy linear regression are studied in Section 3. In Section 4, PSO is described.
In Section 5, we report on a comprehensive set of experiments. Finally, concluding remarks are covered in Section 6.
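The PSO mentioned in the roadmap above follows the standard velocity-position update of particle swarms. A generic sketch on a toy quadratic objective is given below; the paper applies PSO to tune fuzzy variables, so the objective, bounds, and constants here are purely illustrative assumptions:

```python
import random

# Generic PSO sketch: standard inertia-weight velocity update with
# cognitive (pbest) and social (gbest) attraction terms.
def pso(objective, dim=2, particles=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rnd = random.Random(seed)
    pos = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]                    # per-particle best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:                 # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:                # ... and global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy usage: minimize the 2-D sphere function, optimum at the origin
best, val = pso(lambda p: sum(x * x for x in p))
```

In the paper's setting, each particle would instead encode candidate fuzzy-variable parameters, and the objective would measure the fit of the resulting fuzzy regression model.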
English Conclusion
We have presented a new estimation approach for fuzzy linear regression. The proposed approach is based on Polynomial Neural Networks, which are hierarchically accumulated networks whose final model is represented by a polynomial. From the experiments, we can conclude that the proposed estimation approach can handle the issue of negative spreads, which makes analytic estimation of the parameters of fuzzy linear regression impossible when the number of input variables is large. The experiments confirm that the fuzzy linear regression based on Polynomial Neural Networks performs better than the fuzzy linear regression optimized by the gradient-descent approach.