Analytic network process for pattern classification problems using genetic algorithms
|Article code||Publication year||English article pages||Persian translation|
|6142||2010||12-page PDF||available on request|
Publisher: Elsevier (ScienceDirect)
Journal: Information Sciences, Volume 180, Issue 13, 1 July 2010, Pages 2528–2539
The analytic network process (ANP) is a useful technique for multi-criteria decision analysis (MCDA) that employs a network representation to describe interrelationships among diverse attributes. Owing to the effectiveness of the ANP in allowing for complex interrelationships between attributes, this paper develops an ANP-based classifier for pattern classification problems with interdependence or independence among attributes. To deal with interdependence, this study employs genetic algorithms (GAs) to automatically determine the elements of the supermatrix that are not easily user-specified, and thereby find the degrees of importance of the respective attributes. Then, using the relative importance of each attribute in the limiting supermatrix, the class label of a pattern is determined from its synthetic evaluation. Experimental results obtained by the proposed ANP-based classifier are comparable to those obtained by other fuzzy and non-fuzzy classification methods.
The popular analytic hierarchy process (AHP) employs a uni-directional hierarchy to describe the relationships between decision levels with independent attributes. The analytic network process (ANP), developed by Saaty, generalizes the AHP to deal with all kinds of dependence and feedback among the clusters or components of a decision problem using the supermatrix approach. The ANP replaces hierarchies with networks and uses a network representation to indicate interrelationships between components or elements. A component consisting of various elements may be an aspect, a criterion, or a set of alternatives. For instance, a manager could employ components such as advertising and quality to estimate the market share of different stores providing instant foods. The advertising component may contain elements such as promotion and creativity. Moreover, advertising and quality could influence each other. The manager could further utilize the degree of importance of each attribute obtained by the ANP to compute a synthetic evaluation of each alternative, previously handled by the traditional weighted average method (WAM). Because the independence assumption among attributes is not always warranted, the ANP applies to many decision problems, and it has gained much attention owing to its effectiveness in allowing for complex interrelationships between attributes. Several studies explore the ANP, including product mix planning of semiconductor fabrication by Chung et al., selecting information system projects by Lee and Kim, project management by Meade and Presley and by Meade and Sarkis, constructing a financial-crisis model by Niemira and Saaty, strategic management for forest management by Wolfslehner et al., reverse logistics operations for end-of-life computers by Ravi et al., evaluating digital video recorder systems by Chang et al.
, a SWOT analysis by Yüksel and Dağdeviren, transportation-mode selection by Tuzkaya and Önüt, and urban industrial land valuation by Aragonés-Beltrán et al. Nevertheless, the ANP has not been applied to pattern classification in the literature. Pattern classification is a multi-attribute decision problem that partitions a pattern space into classes and then assigns an input pattern to one of the classes. A decision-maker can classify an input pattern by employing the widely used WAM to derive its synthetic evaluation, and should consider interrelationships between attributes when determining the respective attribute weights for the WAM. Because different attributes may influence each other, one cannot arbitrarily assume interdependence or independence among attributes for a classification problem. Thus, this paper develops an ANP-based classifier by employing the ANP to deal separately with interdependence and independence among attributes. Since obtaining the relative importance of the respective attributes is the focus, this paper accounts for only one component containing the different attributes. In implementation, this work obtains the relative importance of the respective attributes by the ANP and then employs the synthetic evaluation of a pattern, obtained by the WAM with the degrees of importance of the respective attributes, to determine the corresponding class label. To deal with interdependence, this study determines the elements of the supermatrix by a general-purpose optimization technique, namely GAs, thus further obtaining the degree of importance of each attribute from the limiting supermatrix. Maximizing the number of correctly classified training patterns is incorporated into the fitness function. The rest of this paper is organized as follows. Section 2 briefly introduces the ANP. Section 3 presents the GA-based learning algorithms for the proposed ANP-based classifier.
Section 4 evaluates classification performance of the proposed method using computer simulations of well-known classification problems, including the appendicitis data, the cancer data, the thyroid data, the Wisconsin breast-cancer data, and the Pima Indian diabetes data. We demonstrate that the proposed classifier performs well compared to other fuzzy or non-fuzzy classification methods. This paper concludes with Section 5.
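To make the classification step described above concrete, the following minimal Python sketch performs a WAM synthetic evaluation and thresholds it into a class label for a two-class problem. The weights, the pattern, and the threshold are made-up illustrative values, not taken from the paper; in the proposed classifier the weights would come from the ANP (the limiting supermatrix).

```python
import numpy as np

def synthetic_evaluation(pattern, weights):
    """Weighted average (WAM) of normalized attribute values."""
    return float(np.dot(weights, pattern))

def classify(pattern, weights, threshold=0.5):
    """Assign a class label by thresholding the synthetic value
    (two-class case; the threshold is an illustrative choice)."""
    return 1 if synthetic_evaluation(pattern, weights) >= threshold else 0

# Hypothetical attribute weights (in the paper these derive from the
# limiting supermatrix) and a normalized input pattern in [0, 1].
w = np.array([0.5, 0.3, 0.2])
x = np.array([0.9, 0.4, 0.1])
label = classify(x, w)
```

The synthetic value here is 0.5·0.9 + 0.3·0.4 + 0.2·0.1 = 0.59, so the pattern falls in class 1 under this illustrative threshold.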
Conclusion
Traditional AHP is a very popular method for determining the relative importance of respective attributes by pairwise comparisons. A significant characteristic of the AHP is its independence assumption among attributes. Nevertheless, since the independence assumption among attributes is not always warranted for a decision problem, this study regards the ANP as an appropriate tool for deriving the relative importance of each attribute. Using the supermatrix approach, the ANP can deal with all kinds of dependence and feedback among the components of a decision problem. This paper proposes an ANP-based classifier that can deal with interdependence and independence among attributes for pattern classification problems. The proposed classifier assigns a synthetic performance value to each input pattern. The relative importance for the Type I classifier derives from the limiting supermatrix, obtained by multiplying the supermatrix by itself many times, with the supermatrix determined by the GA-based learning algorithm. The relative importance for the Type II classifier is directly assigned by GAs. The number of correctly classified training patterns is incorporated into the fitness function of the GA-based learning algorithm. Experimental results show that the proposed classifier performs well compared to other fuzzy and non-fuzzy classification methods. The computer simulations reported earlier (i.e., Sections 4.2.2, 4.2.3, 4.2.4 and 4.2.5) use the same parameter specifications across the different data sets when evaluating the classification performance of the proposed classifier on test data. Simulation results demonstrate that the ANP-based classifier with common parameter specifications (i.e., population size, number of generations, substring length, crossover probability, and mutation probability) performs well for the five data sets. This finding implicitly shows that the ANP-based classifier does not require fine parameter tuning; in fact, there are no universally optimal parameter settings for a GA.
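The limiting supermatrix mentioned above can be illustrated with a short sketch: a column-stochastic supermatrix is repeatedly multiplied by itself until the product stops changing, at which point every column holds the same priority vector. The 3×3 matrix below is an invented toy example, not taken from the paper.

```python
import numpy as np

def limiting_supermatrix(W, tol=1e-9, max_iter=10_000):
    """Raise a column-stochastic supermatrix to successive powers
    until the entries converge; the columns of the limit all equal
    the priority (relative-importance) vector of the attributes."""
    M = W.copy()
    for _ in range(max_iter):
        M_next = M @ W
        if np.max(np.abs(M_next - M)) < tol:
            return M_next
        M = M_next
    return M

# Toy column-stochastic supermatrix (each column sums to 1);
# the values are illustrative only.
W = np.array([[0.2, 0.5, 0.3],
              [0.5, 0.2, 0.4],
              [0.3, 0.3, 0.3]])
L = limiting_supermatrix(W)
priorities = L[:, 0]   # every column converges to the same vector
```

Because each column of the limit is the same stationary vector, reading any single column yields the degrees of importance used in the synthetic evaluation.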
The experimental results also show that the proposed classifier is not sensitive to different crossover probabilities, mutation probabilities, and population sizes. As for the termination criteria, the maximum number of generations (i.e., N_con) is one of the simplest. However, it is sometimes not easy to choose the right number of generations; furthermore, if N_con is too large, much time is spent on computation. It therefore seems more appropriate to measure the magnitude of improvement in function evaluation. Accordingly, another termination condition is that the algorithm stops the search either when it reaches the maximum number of generations or when the fitness values of all chromosomes are identical to an accuracy of ε, i.e.,

|f_h - f_l| ⩽ ε,     (12)

where f_h is the highest fitness value found up to the current population and f_l is the lowest fitness value among all chromosomes in the current population; ε is set to 10^-4. Using the same parameter specifications as in the previous subsection, the average classification accuracy rate of the ANP-based classifier over 10 trials is 76.8%, with each trial terminating within 1000 generations. Computing time is thus significantly reduced compared to running all N_con (i.e., 2000) generations. Future work will design an appropriate termination condition for the proposed classifier, evaluated by classification performance, to further improve the classification accuracy rate. Traditional AHP typically employs a nine-point intensity scale to express the comparative importance of two attributes. This study regards a pairwise comparison matrix as qualified when its consistency index (CI) is below 0.1. The pairwise comparison matrix then determines the relative importance indicated in a priority vector.
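The termination test of Eq. (12) can be sketched as a small helper. EPS and N_CON mirror ε and N_con from the text; the function itself is an illustrative reading of the stopping rule, not the authors' code.

```python
EPS = 1e-4        # epsilon from the paper's termination test, Eq. (12)
N_CON = 2000      # maximum number of generations (N_con)

def terminated(fitnesses, best_so_far, generation):
    """Stop when the generation cap is reached, or when the best
    fitness found so far (f_h) and the lowest fitness in the
    current population (f_l) agree to within EPS."""
    f_h = best_so_far
    f_l = min(fitnesses)
    return generation >= N_CON or abs(f_h - f_l) <= EPS
```

For example, a population whose fitness values all lie within 10^-4 of the best value found so far triggers termination even early in the run, while a spread-out population keeps evolving until the generation cap.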
Actually, given a priority vector, a corresponding almost perfectly consistent (i.e., very small CI) pairwise comparison matrix can be generated by constructing the Closest Discrete Pairwise (CDP) matrix. In other words, although the priority vector for the proposed classifier is determined by GAs, a qualified pairwise comparison matrix is implicitly generated. The relationship between the CI of a pairwise comparison matrix and the classification performance of the proposed classifier could be an interesting topic for future study. Regarding Step 1 (i.e., normalization) in the proposed GA-based learning algorithm, the maximum and minimum normalized values are 1 and 0, respectively, for each attribute. Other normalization methods could also be considered, and different normalization methods may affect classification performance to a greater or lesser extent. This paper does not focus on the normalization issue but will explore it further in future work.