A Rough Penalty Genetic Algorithm for Constrained Optimization
|Article code||Publication year||English paper||Persian translation||Word count|
|8249||2013||19-page PDF||available to order||10,404 words|
Publisher: Elsevier - Science Direct
Journal: Information Sciences, Volume 241, 20 August 2013, Pages 119–137
Many real-world problems can be formulated as constrained optimization problems and solved using evolutionary algorithms with penalty functions. To handle constraints effectively, this study hybridizes a novel genetic algorithm with rough set theory, yielding the rough penalty genetic algorithm (RPGA), which aims to achieve robust solutions to constrained optimization problems. An infeasible solution is subjected to rough penalties according to its constraint violations. The crossover operation in the genetic algorithm incorporates a novel therapeutic approach and a parameter-tuning policy to enhance evolutionary performance. The RPGA is evaluated on eleven benchmark problems and compared with several state-of-the-art algorithms in terms of solution accuracy and robustness. The performance analyses show that this approach is a self-adaptive method for penalty adjustment. Remarkably, the method can address a variety of constrained optimization problems even when the initial population includes infeasible solutions.
Most real-world optimization problems involve constraints. Since Holland first developed the genetic algorithm (GA) in 1975, GAs have been successfully applied to a wide range of complex problems in science, engineering, and industry. The challenge of a constrained problem is how to optimize the objective function value against its constraint violations. Traditional GAs may expend considerable computational effort searching infeasible regions because genetic operations do not always preserve feasibility. Considerable research has therefore focused on constraint-handling techniques for evolutionary algorithms (EAs). Penalty-function methods are traditionally the most popular constraint-handling techniques: each infeasible solution is penalized according to the magnitude of its constraint violations. Many studies manipulate penalty coefficients to balance the objective function against constraint violations. However, the static penalty method requires all penalty factors to be tuned empirically, which is difficult. Although the dynamic penalty (DP) method uses evolutionary time to compute the corresponding penalty factors, appropriate dynamic penalty functions remain difficult to determine. The adaptive penalty (AP) method considers the feasibility ratios of successive generations to determine penalties; its drawback, however, is the need to choose a generational gap and coefficient adjustments. This study attempts to answer the following questions: (1) Which constraints dominate an optimization problem? (2) What kind of fitness metrics can evaluate infeasible solutions in a meaningful way? (3) How can parameters be adjusted appropriately? To this end, this paper proposes a novel penalty-adjustment method that guides genetic evolution toward the global optimum.
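To make the penalty-function idea concrete, the following is a minimal sketch of a penalized fitness evaluation in the style of a dynamic penalty (the paper only names the DP method generically; the particular schedule `(C*t)^alpha * V(x)^beta`, the parameter values, and all function names here are illustrative assumptions, not the paper's formulation):

```python
def constraint_violation(x, inequality_constraints, equality_constraints, eps=1e-4):
    """Total violation V(x): sum of max(0, g_i(x)) for inequalities g_i(x) <= 0,
    plus the amount by which each equality h_j(x) = 0 exceeds tolerance eps."""
    v = sum(max(0.0, g(x)) for g in inequality_constraints)
    v += sum(max(0.0, abs(h(x)) - eps) for h in equality_constraints)
    return v

def dynamic_penalty_fitness(x, objective, ineqs, eqs, generation,
                            C=0.5, alpha=2.0, beta=2.0):
    """Penalized fitness for minimization: f(x) + (C*t)^alpha * V(x)^beta.
    The penalty grows with the generation counter t, so infeasible solutions
    are tolerated early in the search and punished harshly later."""
    v = constraint_violation(x, ineqs, eqs)
    return objective(x) + (C * generation) ** alpha * v ** beta
```

A GA would call `dynamic_penalty_fitness` in place of the raw objective when ranking individuals; feasible solutions (V(x) = 0) are evaluated purely on the objective.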
Because the proposed method is inspired by Pawlak’s rough set theory (RST), it is named the rough penalty (RP) method. It serves as a new constraint-handling technique that applies the information granulation of RST to deal with indiscernible penalties. In this paper, the RP method is further incorporated into a GA, named the RPGA, for solving constrained optimization problems with two goals: (1) adjust penalty coefficients according to constraint violations and (2) identify inefficient constraints by applying RST during evolution. The advantage of the proposed RPGA is that it automatically adjusts the penalty coefficients for each optimization problem. Furthermore, the method requires no extra functional analysis to characterize the solution space. The performance of the RPGA is evaluated on eleven well-known constrained problems. Experimental results show that the proposed RPGA not only finds optimal or close-to-optimal solutions but also obtains robust results for both linear and nonlinear constraint functions. The remainder of this paper is organized as follows. Section 2 briefly describes several constraint-handling techniques. Section 3 introduces the proposed RST-based penalty function. Section 4 describes the proposed RPGA and its genetic operations. In Section 5, we adopt Taguchi’s quality-design method to determine the best parameter setting for the proposed RPGA. Section 6 reports the computational results for the eleven constrained optimization problems and comparisons with several well-known optimization algorithms. Finally, Section 7 summarizes the findings and contributions of this study.
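The paper's RST-based coefficient update is not reproduced here, but the general idea of adjusting per-constraint penalty coefficients from observed violations can be loosely illustrated as follows (this is a simplified heuristic under assumed names and thresholds, not the paper's RST procedure):

```python
def update_penalty_coefficients(coeffs, violation_counts, pop_size,
                                relax=0.9, enforce=1.1, threshold=0.5):
    """Illustrative per-constraint penalty adaptation.

    For each constraint, compute the fraction of the population violating it.
    Constraints violated by more than `threshold` of the population are treated
    as binding and have their coefficient enforced (scaled up); rarely violated
    constraints have their coefficient relaxed (scaled down). The scaling
    factors and threshold are arbitrary assumptions for this sketch."""
    new_coeffs = []
    for c, count in zip(coeffs, violation_counts):
        ratio = count / pop_size
        new_coeffs.append(c * enforce if ratio > threshold else c * relax)
    return new_coeffs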
English Conclusion
To the best of our knowledge, the proposed RPGA is the first RST-based constraint-handling approach; it applies the information granulation of RST to address the indiscernibility relation among penalty coefficients in GAs. The proposed RP method cooperates with a GA to deal with constrained optimization problems. The principle of the RP method is that the RPGA releases penalties on inefficient constraints and enforces penalties on efficient constraints during evolution. Importantly, this approach can find optimal or near-optimal solutions even when the initial population includes infeasible solutions. Because of its self-adaptive penalty-adjustment method, the RPGA needs no prior knowledge about the solution space of a constrained problem. Moreover, the RPGA can solve a range of constrained optimization problems that have both linear and nonlinear constraints. The performance of the proposed algorithm was measured on 11 benchmark functions and compared with 11 state-of-the-art algorithms. Experimental results indicate that the proposed RPGA finds near-optimal solutions on most problems and outperforms four existing penalty-function algorithms. The proposed algorithm is also robust in obtaining consistent results, and it performed as well as, or in some cases better than, four search-bias algorithms and three multi-objective optimization algorithms. Notably, the proposed algorithm found feasible solutions in all runs of all the test functions even though some difficult problems have small feasibility ratios. In conclusion, empirical analyses show that the proposed RP method can address a variety of constrained optimization problems with linear or nonlinear objective functions and both equality and inequality constraints. A performance assessment also demonstrates that the proposed RP method has a remarkable capability to balance the objective function against constraint violations when handling constrained optimization problems.