Local search in the multi-objective genetic algorithm using a Kohonen map
|Article code||Year||English article length|
|11926||2009||11 pages (PDF)|
Publisher : Elsevier - Science Direct
Journal : Computers & Industrial Engineering, Volume 56, Issue 4, May 2009, Pages 1566–1576
Genetic Algorithms (GAs) are population-based global search methods that can escape from local optima traps and find the globally optimal regions. However, near the optimum set their intensification process is often inaccurate, because the search strategy of GAs is completely probabilistic: with a random search near the optimum set, there is only a small probability of improving the current solution. Another drawback of GAs is genetic drift. The GA search process is a black box; it is not known which region is being searched by the algorithm, and it is possible that the GA searches only a small region of the feasible space. Moreover, GAs usually do not use the information about optimality regions gathered in past iterations. In this paper, a new method called SOM-Based Multi-Objective GA (SBMOGA) is proposed to improve genetic diversity. In SBMOGA, a grid of neurons, trained with the learning rule of a Self-Organizing Map (SOM) and supported by Variable Neighborhood Search (VNS), learns from the genetic algorithm and improves both local and global search. The SOM is a neural network capable of learning that can improve the efficiency of data-processing algorithms, while the VNS algorithm is employed to enhance local search efficiency in Evolutionary Algorithms (EAs). The SOM uses a multi-objective learning rule based on Pareto dominance to train its neurons. The neurons gradually move toward better fitness areas along trajectories in the feasible space, so the knowledge of the optimum front in past generations is saved in the form of trajectories. The final state of the neurons determines a set of new solutions that can be regarded as the probability density distribution of the high-fitness areas in the multi-objective space. This new set of solutions can potentially improve the overall efficiency of the GA.
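The core update described in the abstract, neurons compared against solutions by Pareto dominance and pulled toward elite ones by a SOM-style learning rule, can be sketched as follows. This is a minimal illustration in Python, not the paper's exact formulation: the function names, the minimization convention, and the scalar learning rate are assumptions.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def train_neuron(weights, elite, eta):
    """Classical SOM learning rule w <- w + eta * (x - w):
    move a neuron's weight vector a fraction eta toward an elite solution."""
    return [w + eta * (x - w) for w, x in zip(weights, elite)]
```

Repeated application of `train_neuron` with elite GA solutions as training inputs is what, in the paper's terms, traces a neuron's trajectory toward high-fitness regions.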
In the last section of this paper, the applicability of the proposed algorithm is examined by developing optimal policies for a real-world multi-objective multi-reservoir system, which is a non-linear, non-convex, multi-objective optimization problem.
Evolutionary Algorithms (EAs) are probabilistic search optimization techniques developed from Darwin's principles of natural selection and survival of the fittest individuals in a population. EAs use computational models of evolutionary processes as key elements in the design and implementation of computer-based problem-solving systems (Cordon et al., 2002 and Goldberg, 1989). There are a variety of evolutionary computational models. Four well-defined EAs have served as the basis for most of the activity in the field of evolutionary computation: Genetic Algorithms (GAs) (Holland, 1975 and Michalewicz, 1996), Evolution Strategies (Schwefel, 1975, Schwefel, 1981 and Schwefel et al., 1995), Genetic Programming (GP) (Fogel, 1962 and Koza, 1992) and Evolutionary Programming (EP) (Fogel, 1991). Genetic algorithms have been used in different fields of engineering far more than other forms of EAs. Initial developments in evolutionary optimization models focused on single-objective applications. In the past two decades, several multi-objective EAs, such as the Vector Evaluated Genetic Algorithm (VEGA) (Schaffer, 1984) and the Non-dominated Sorted Genetic Algorithm (NSGA) (Srinivas & Deb, 1994), have been proposed. These early EAs often performed poorly with respect to two key measures: convergence rate and diversity. More recent algorithms such as the Strength Pareto Evolutionary Algorithm (SPEA) (Zitzler & Thiele, 1999) and NSGA-II (Deb, Pratap, Agarwal, & Meyarivan, 2002) perform better; however, they still suffer from similar deficiencies. Deb (2001) and Van Veldhuizen and Lamont (2000) presented comprehensive reviews and classifications of the most important approaches to genetic algorithms for multi-objective optimization. More recently, Konak, Coit, and Smith (2006) presented an overview and tutorial describing GAs developed for problems with multiple objectives.
They concluded that these methods differ from the traditional GA primarily in using specialized fitness functions and in introducing methods to promote solution diversity. Many real-world problems do not satisfy necessary conditions such as continuity, differentiability, and convexity, so they cannot easily be solved using traditional gradient-based optimization techniques. GAs have therefore been considered a practical optimization tool for many problem classes, including those with discontinuous multi-modal objective functions, combinatorial structure (with discrete, continuous, or integer design variables), dynamic behavior, severe nonlinearity, non-differentiability, and non-convex design spaces. Another advantage of MOEAs is the approximation of the Pareto-front set within an acceptable computational time. Traditional multi-objective algorithms produce one solution in each run, whereas MOEAs usually attempt to generate (or closely approximate) the entire Pareto front in a single run and place emphasis on achieving solution diversity so as to avoid local optima (Rangarajan, Ravindran, & Reed, 2004). The advantages of GAs increasingly extend their applications; however, some drawbacks limit their efficiency. The traditional GA intensification process is not sufficiently accurate. GAs usually find areas of good fitness quite easily, but finding the global optimal solution may be time-consuming and inaccurate, because the search strategy of GAs is probabilistic. In a probabilistic search process, when a chromosome is far from the local optima, there is a 50% chance that a random search direction will simultaneously improve all the objectives. However, when a point is close to the Pareto set, the proper descent/ascent cone is extremely narrow and there is only a small probability that a random update improves the objective functions (Brown & Smith, 2005).
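The Pareto-front approximation that MOEAs aim for in a single run can be illustrated with a simple non-dominated filter. This is a hedged sketch (the names and the minimization convention are assumptions), not the selection scheme of any specific algorithm cited above; practical MOEAs use faster sorting than this O(n^2) scan.

```python
def pareto_front(population):
    """Return the non-dominated subset of a list of objective vectors
    (minimization). Quadratic-time filter, adequate for illustration."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in population if not any(dominates(q, p) for q in population)]
```

For example, in a two-objective minimization, (3, 3) is dominated by (2, 2), while (1, 3), (2, 2), and (3, 1) are mutually non-dominated trade-offs.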
Thus, with a random search strategy, GAs generally require a great number of iterations and converge slowly, especially in the neighborhood of the global optimum. With a randomized reproduction strategy in which the crossover points are determined randomly, children are created without regard to the existing information about high-fitness regions; therefore, the fitness of a child can deviate quite widely from the fitness of its parents. Another drawback of GAs is genetic drift. The GA exploration process is a black box, and the diversity information obtained from past generations is only implicitly and partially preserved in the current genome. This carries the risk of regenerating individuals that have already been seen in the search process. Even more problematic is the fact that the search can be negatively affected by genetic drift: as a consequence, large parts of the search space, potentially containing the global optimum, may never be explored. There is thus a need for consistent exploration techniques that do not repeat the same patterns in the mutation process and that can also improve diversity as the number of generations increases. EAs produce vast amounts of data during an optimization run without making sufficient use of them. In each of the numerous generations, a large number of chromosomes is generated and evaluated (Drobics, Bodenhofer, & Winiwarter, 2001). These data could be used to produce valuable insight that enhances EA solution quality. GAs are purely probabilistic search-based optimization models because they do not use the knowledge aggregated about the optimality regions and the areas searched in past iterations. Necessary requirements include, for example, processing incoming data so that it creates useful information that incrementally improves the next generation's population; new chromosomes should not be created by entirely probabilistic processes.
It is possible to extract previously computed knowledge and use it in subsequent generations. It can therefore be concluded that there is room for improvement in the convergence rate and diversity of GAs, and a need for new GAs whose exploitation is knowledge-oriented, to accelerate the intensification process, and which also provide better diversification to avoid genetic drift. In this paper, the background for developing a new GA-based algorithm is provided in the next section. In Section 3, the new algorithm is presented in detail. SBMOGA is developed based on some well-known ideas, such as the SOM learning rule and the VNS shaking process. The new algorithm can provide consistent diversity without repeated evaluations, together with a systematic local and variable neighborhood search. In Section 4, a complex real-world problem, namely the multi-objective multi-reservoir operation management problem, is described and the optimization model formulation is presented. This problem is non-convex and nonlinear. In Section 5, the results of applying the new algorithm to the multi-reservoir operation problem are shown. In the last section, conclusions and future research opportunities are presented.
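Since the VNS shaking process is one of the building blocks named above, a minimal sketch of the idea may help. Everything here is an assumption for illustration (the uniform perturbation scaled by the neighborhood index k, the names, the single-objective loop), not the paper's operators.

```python
import random

def shake(x, k, step=0.1):
    """VNS 'shaking': perturb solution x within the k-th neighborhood.
    Larger k means a larger random jump, helping escape local optima."""
    return [xi + random.uniform(-k * step, k * step) for xi in x]

def vns(x, objective, k_max=3, iters=100):
    """Basic VNS loop: shake in neighborhood k, restart at k = 1 on
    improvement, otherwise widen the neighborhood up to k_max."""
    best, fbest = x, objective(x)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            cand = shake(best, k)
            fcand = objective(cand)
            if fcand < fbest:
                best, fbest = cand, fcand
                k = 1  # improvement found: return to the smallest neighborhood
            else:
                k += 1  # no improvement: try a wider neighborhood
    return best, fbest
```

The systematic widening and resetting of the neighborhood is what distinguishes VNS from plain random restarts, and it is the mechanism SBMOGA borrows to vary the neurons' search radius.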
Conclusion
In this paper, a new method called SOM-Based Multi-Objective GA (SBMOGA) is proposed to improve the effectiveness of a well-known GA-based multi-objective optimization model, namely NSGA-II. In the new algorithm, in each generation, a population of neurons is trained using elite solutions from the GA's current population. The training rule of the neurons is developed using the concepts of the SOM learning rule and the variable neighborhood search algorithm to improve both local and global search. Moving the neuron centers along stochastic shaking trajectories in the feasible search space yields better exploration and improves genetic diversity. The training ratio (or training step size) is a dynamic, monotonically decreasing function of the GA generation number; this provides a variable neighborhood search once a neuron reaches an optimal region. The neurons in the SBMOGA algorithm use the aggregated knowledge of optimum regions from past generations, and the new weight vectors of the neuron centers are defined as a function of the previous weight vectors. After the first few generations, the neurons are gradually attracted to local high-fitness regions, and the search is converted into an exploitation process based on the existing aggregated knowledge about the optimum-set regions. Random fluctuations of the current neuron weight vectors within the high-fitness areas can then produce better exploration. The new method is not an independent algorithm; in this paper, it was coupled with NSGA-II, and the model was applied to a real-world two-reservoir optimization problem. The results show that SBMOGA converges to the optimum front much more quickly than classical NSGA-II and that it can reduce the complexity problems of EAs. The diversity of the non-dominated solutions of SBMOGA was also much better than that of NSGA-II.
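The dynamically decreasing training ratio described in the conclusion can be sketched as a simple schedule. The linear decay and the bounds eta0 and eta_min are assumptions chosen for illustration; the paper only specifies that the ratio decreases monotonically with the generation number.

```python
def training_ratio(g, g_max, eta0=0.9, eta_min=0.05):
    """Monotonically decreasing training step size over generations:
    linear decay from eta0 at generation 0 down to eta_min at g_max.
    The linear form is an assumed schedule, not the paper's."""
    frac = g / float(g_max)
    return eta_min + (eta0 - eta_min) * (1.0 - frac)
```

Early generations use a large step so neurons move quickly toward high-fitness regions; late generations use a small step, which narrows the effective search neighborhood around the front, matching the exploration-to-exploitation transition the paper describes.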