Adaptive directed mutation for real-coded genetic algorithm
|Article code||Publication year||English article length|
|8093||2013||15-page PDF|
Publisher : Elsevier - Science Direct
Journal : Applied Soft Computing, Volume 13, Issue 1, January 2013, Pages 600–614
A novel, simple, and efficient real-coded genetic algorithm (RCGA) based on an adaptive directed mutation (ADM) operator is proposed and then employed to solve complex function optimization problems. The suggested ADM operator enhances the ability of GAs to locate global optima and speeds up convergence by integrating a local directional search strategy with adaptive random search strategies. Using 41 benchmark global optimization test functions, the performance of the new algorithm is compared with five conventional mutation operators and then with six genetic algorithms (GAs) reported in the literature. Results indicate that the proposed ADM-RCGA is fast, accurate, and reliable, and outperforms all the other GAs considered in the present study.
Many real-life applications can be modeled as nonlinear optimization problems whose global optimal solution is sought. A typical nonlinear global optimization problem takes the form

maximize/minimize f(x), x = (x1, x2, …, xN), subject to x ∈ Ω,   (1)

where x is a continuous variable vector with search space Ω ⊆ R^N, and f(x) is a continuous real-valued function of N variables. The domain Ω is defined by the upper and lower limits of each dimension. The problem is to find the global optimal solution x* with its corresponding global optimal function value f(x*).

Two major classes of optimization techniques for solving general nonlinear optimization problems can be found in the literature: gradient-based optimizers and evolutionary algorithm optimizers. All gradient-based optimizers (also called deterministic optimizers) are point-by-point algorithms and are therefore local optimization techniques in nature. They start the search procedure with an initial guess solution; if this guess is not close enough to the global optimal solution, they are likely to become trapped in a local optimum. In practice, finding such a suitable starting solution is the major difficulty when trying to optimize automatically. Gradient-based optimizers are usually impractical beyond about 20 variables, because the number of function evaluations grows with the number of variables, and most of them are designed to solve a particular class of optimization problems with few variables. In contrast, evolutionary algorithm optimizers work with random sets of potential solutions: they are stochastic search algorithms and therefore global optimization methods. Compared with gradient-based optimizers, evolutionary algorithm optimizers generally scale well to higher-dimensional optimization problems.
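To make the problem form (1) concrete, the sketch below sets up a typical bound-constrained benchmark instance in Python (NumPy assumed). The Rastrigin function is a common multimodal test function; the source does not list the paper's 41 benchmarks, so this choice is an assumption for illustration only.

```python
import numpy as np

def rastrigin(x):
    """Rastrigin function, a standard multimodal benchmark of the form (1):
    many local optima, global minimum f(x*) = 0 at x* = (0, ..., 0)."""
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)))

# The search domain Omega is a box given by per-dimension lower/upper bounds.
N = 5
lower = -5.12 * np.ones(N)
upper = 5.12 * np.ones(N)

# A trivial stochastic baseline: uniform random sampling inside Omega.
# A GA improves on this by evolving the sample set between generations.
rng = np.random.default_rng(0)
samples = rng.uniform(lower, upper, size=(1000, N))
best = min(samples, key=rastrigin)
```

Deterministic optimizers would instead start from a single point and follow the local gradient, which on a function like this tends to stop at the nearest of the many local optima.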
Evolutionary algorithms comprise three population-based heuristic methodologies: genetic algorithms (GAs), evolutionary programming, and evolution strategies. GAs are perhaps the most popular evolutionary algorithms. In traditional GA implementations, the decision variables are encoded as binary strings, yielding the binary-coded genetic algorithm (BCGA). The performance of BCGA has been satisfactory on small- and moderate-size problems requiring less precision in the solution, but BCGA entails huge computational time and memory for high-dimensional problems that call for greater precision. To overcome these drawbacks when applying BCGA to multidimensional and high-precision numerical problems, the decision variables can be encoded as real numbers, yielding the real-coded genetic algorithm (RCGA), which has become increasingly popular. The superiority of RCGA over BCGA has been established for continuous optimization problems and medical data mining. The performance of GAs relies on efficient search operators to guide the system toward global optima. One problem afflicting GAs is premature convergence. To mitigate or even avoid being trapped in local optima, the mutation operator provides a mechanism for exploring new solutions and maintains the diversity of the population during the GA search, but it does so at the cost of slowing down the learning process. In the GA literature, relatively little effort has been devoted to designing new mutation operators for RCGAs. The step size and search direction are the major factors that determine the performance of a mutation operator. The present study proposes a novel, simple, and efficient RCGA based on the adaptive directed mutation (ADM) operator and demonstrates its performance on a set of complex function optimization problems. The remainder of this paper is organized as follows: Section 2 gives a brief review of the mutation operator in RCGAs.
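The source does not give the paper's exact ADM formula, so the following is only an illustrative sketch of the general idea of a directed, generation-adaptive mutation: the direction is biased toward the current best individual and the step size decays over generations (in the spirit of non-uniform mutation), shifting the search from exploration to exploitation. The function name and parameters are assumptions, not the published operator.

```python
import numpy as np

rng = np.random.default_rng(1)

def directed_mutation(x, x_best, lower, upper, t, t_max, b=2.0):
    """Illustrative directed mutation (NOT the paper's exact ADM operator).

    x      : parent individual (real-coded vector)
    x_best : best individual found so far (supplies the search direction)
    t      : current generation, t_max : maximum generations
    b      : shape parameter controlling how fast the step size decays
    """
    direction = np.sign(x_best - x)                # local directional component
    step = (upper - lower) * rng.random(x.size)    # random search component
    scale = (1.0 - t / t_max) ** b                 # step size shrinks with t
    child = x + direction * step * scale
    return np.clip(child, lower, upper)            # keep the child inside Omega
```

Early in the run (small t) the operator takes large, mostly random steps toward the best individual, preserving diversity; late in the run the scale factor drives the step size toward zero, so mutation refines solutions locally rather than disrupting them.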
Section 3 provides a detailed description of the proposed methodology. The set of benchmark problems, the compared algorithms, and the experimental results are reported in Section 4. Finally, Section 5 presents a number of conclusions from the present study.
English conclusion
The current study has presented a new mutation operator, ADM, that focuses on simplicity, robustness, and efficiency within the context of RCGAs. To evaluate the performance of the proposed algorithm, we conducted a series of experiments on a set of 41 well-known real-valued benchmark global optimization test functions. When compared with five conventional mutation operators (RM, PLM, NUM, MNUM, and PM), the proposed ADM approach shows a significant improvement in the quality of the global optimum found under the same simulation conditions. Besides the conventional mutation operators, the present study also compared the performance of the proposed ADM-RCGA with that of six leading evolutionary algorithms. Against the state-of-the-art adaptive evolutionary methods HYK-GA and CMA-ES, the proposed ADM-RCGA shows superior performance; it also outperforms all the other GAs considered, including BOA, IEA, LX-PM, and OGA. We can therefore conclude that the proposed ADM-RCGA provides greater accuracy, reliability, and efficiency than all the other GAs considered in the present study. The experimental outcome for the proposed ADM-RCGA is excellent in most cases, but it still performed worse on some functions owing to an increased risk of being trapped in local optima. As future work, we plan to further improve evolutionary efficiency by integrating the design-of-experiments approach with the proposed ADM-RCGA algorithm.