An adaptive parameter binary-real coded genetic algorithm for constrained optimization problems: performance analysis and estimation of optimal control parameters
|Article code||Publication year||English article||Persian translation||Word count|
|28247||2013||33-page PDF||Available on order||Not calculated|
Publisher: Elsevier - Science Direct
Journal: Information Sciences, Volume 233, 1 June 2013, Pages 54–86
Real-parameter constrained problems are an important class of optimization problems encountered frequently in a variety of real-world applications. Genetic Algorithms (GAs) are an efficient search metaheuristic and a prominent member of the family of Evolutionary Algorithms (EAs), and they have been applied successfully to global optimization problems. However, genetic operators in their standard forms are blind to the presence of constraints. Thus, extending GAs to constrained optimization problems by incorporating suitable constraint-handling techniques is an active direction within GA research. Recently, we proposed a Binary-Real coded Genetic Algorithm (BRGA). BRGA is a new hybrid approach that combines a cooperative Binary-coded GA (BGA) with a Real-coded GA (RGA). It employs an adaptive parameter-based hybrid scheme that distributes the computational power and regulates the interactions between the cooperative versions, which operate in a sequential, time-interleaving manner. In this study, we aim to extend BRGA to constrained problems by introducing a modified dynamic penalty function into the BRGA architecture. We use the CEC’2010 benchmark suite of 18 functions to analyze the quality, time and scalability performance of BRGA. To investigate the effectiveness of the proposed modification, we compare the performance of BRGA under both the original and the modified penalty functions. Moreover, to demonstrate BRGA’s performance, we compare it with that of other EAs from the literature. We also implement a robust parameter tuning procedure that relies on techniques from statistical testing, experimental design and Response Surface Methodology (RSM) to estimate optimal values for the control parameters and thus secure good BRGA performance on specific problems at hand.
Many optimization problems in science and engineering involve constraints. In general, a real-parameter constrained optimization problem can be formulated as follows:

Minimize F(X), where X = (x1, x2, …, xn) and X ∈ S,    (1)

subject to:

g_i(X) ≤ 0, i = 1, …, p,    (2)
H_j(X) = 0, j = p + 1, …, m,

where X is a vector of real-valued candidate-solution variables and S is a continuous design space. Equality constraints are usually transformed into inequalities of the following form:

|H_j(X)| − δ ≤ 0, j = p + 1, …, m,    (3)

where δ is a small tolerance. A solution X is regarded as feasible when the constraints in Eqs. (2) and (3) are satisfied. In this article, the value of the tolerance δ is set to 0.0001, as recommended by Mallipeddi and Suganthan. Constrained optimization problems present a variety of great challenges arising from the various limits on the decision variables, the constraints involved, the interference among constraints and the interrelationship between the constraints and the objective function.

Genetic Algorithms (GAs) are efficient search metaheuristics. Two main concepts of evolution, natural selection and genetic dynamics, inspired the development of GAs. The basic principles of GAs were laid down by Holland and his colleagues in 1975 and were elaborated in detail by Goldberg. Currently, GAs play an increasingly important role in a variety of fields and applications, such as bioinformatics, computational science, engineering, economics, chemistry and manufacturing. However, constrained optimization problems pose a challenge to the performance of GAs, because GAs in their original versions (i.e., those that use traditional crossover and mutation operators) are blind to constraints. Therefore, it is very likely that the candidate solutions generated by GAs during the search would violate certain constraints.
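The feasibility rule in Eqs. (1)–(3) can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code; the constraint-function lists are hypothetical inputs, and DELTA plays the role of the 0.0001 tolerance used to relax the equality constraints.

```python
DELTA = 1e-4  # tolerance for relaxed equality constraints (0.0001 in the article)

def constraint_violation(x, ineq_constraints, eq_constraints):
    """Total constraint violation of x; 0.0 means x is feasible.

    Inequalities g(x) <= 0 contribute max(0, g(x)); equalities H(x) = 0
    are relaxed to |H(x)| - DELTA <= 0 per Eq. (3) and contribute
    max(0, |H(x)| - DELTA).
    """
    v = sum(max(0.0, g(x)) for g in ineq_constraints)
    v += sum(max(0.0, abs(h(x)) - DELTA) for h in eq_constraints)
    return v

def is_feasible(x, ineq_constraints, eq_constraints):
    return constraint_violation(x, ineq_constraints, eq_constraints) == 0.0
```

For example, with the hypothetical constraints g(X) = x1 + x2 − 2 and H(X) = x1 − x2, the point (0.5, 0.5) is feasible while (2, 0.5) is not.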
Researchers have recently developed several constraint-handling techniques to maintain the applicability of GAs to constrained optimization problems and to prevent the genetic search from leaving the feasible region. Penalty functions, repair algorithms, separating the objective and the constraints, and using special representations and operators are good examples of such constraint-handling techniques. Coello Coello provides a comprehensive review of the literature on constraint-handling techniques in EAs. Penalty functions represent one of the earliest and most commonly used approaches within the EA community. Courant first proposed the penalty function in the 1940s, and Carroll and, later, Fiacco and McCormick expanded it. Penalty methods treat constraints by converting the constrained optimization problem into an unconstrained one, augmenting the objective function with the constraints as a penalty term. When a solution is infeasible, its objective value is penalized by a pre-determined amount reflecting the degree of constraint violation. This method retains some genetic information from the infeasible solutions so that they might contribute to improving later solutions.

We have recently proposed a hybrid Binary-Real coded Genetic Algorithm (BRGA) for real-parameter optimization. Fig. 1 shows a schematic diagram that clarifies the BRGA operation. BRGA employs a hybrid scheme that organizes the interactions and divides the computational power between two cooperative GA versions, i.e., the Binary-coded GA part (BGA part) and the Real-coded GA part (RGA part). The evolutionary search is primarily guided by the BGA part, which is used to identify promising regions within the search space. According to schema theory, the use of a low-cardinality alphabet is responsible for BGA’s success in finding good solutions.
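The penalty idea described above can be made concrete with a small sketch. The code below uses a generic dynamic-penalty form (in the style of Joines and Houck), not the authors' modified penalty function; the constants C and alpha are assumed illustrative values.

```python
def penalized_fitness(objective, x, violation, gen, C=0.5, alpha=2.0):
    """Convert the constrained problem into an unconstrained one by adding
    a penalty term that grows with the generation counter `gen`: early in
    the search infeasible solutions are only lightly penalized (so their
    genetic information is retained), while later they are penalized heavily.
    `violation` is the total constraint violation of x (0.0 if feasible)."""
    return objective(x) + (C * gen) ** alpha * violation
```

A feasible solution (violation 0) keeps its raw objective value at every generation, while an infeasible one becomes progressively less attractive as the search proceeds.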
In our proposed hybrid scheme, the BGA part plays a vital role in scanning the search space effectively and partitioning it into smaller, potentially promising parts that the RGA part can later exploit. Whenever the BGA part completes its share of the genetic search, the computational power switches from the BGA part to the RGA part in a process that can best be described as population handover.

[Fig. 1. Typical optimization cycle in BRGA.]

During population handover, the BGA part injects some of its best population members into the RGA part’s solution population. Another sample of injected members is randomly generated from the region bounded by those best members. Population handover is thus an opportunity for the RGA part to focus its search by receiving valuable feedback from the BGA part about the promising regions within the search space. The RGA part, which can directly process the problem variables, can also effectively compensate for the poor ability of BGA to find accurate solutions, a weakness that stems from the limitations of mapping between the real problem space and the binary representation space. Starting from a population dominated by promising members, the RGA part extensively exploits these regions and adapts the population members to identify higher quality solutions. Whenever the RGA part completes its share of the genetic search, the computational power switches back to the BGA part in a reverse population handover process: the RGA part injects a portion of its best population members into the BGA part’s solution population. The handover process is thus an opportunity for the BGA part to correct its behavior in scanning and partitioning the search space by receiving valuable feedback from the RGA part about the optimality of the regions considered promising in the previous cycle. The evolutionary search continues in this manner until it satisfies the termination criteria.
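The population-handover step described above can be sketched roughly as follows. This is a minimal illustration of the idea only, assuming a minimization problem and a list-of-lists population; the function name and injection sizes are hypothetical, not the paper's implementation.

```python
import random

def handover(donor_pop, fitness, n_best, n_random):
    """Build the sample injected during a population handover: the donor's
    n_best members by fitness, plus n_random members drawn uniformly from
    the axis-aligned region that bounds those best members."""
    best = sorted(donor_pop, key=fitness)[:n_best]  # minimization: lowest first
    dims = range(len(best[0]))
    lo = [min(m[d] for m in best) for d in dims]
    hi = [max(m[d] for m in best) for d in dims]
    sampled = [[random.uniform(lo[d], hi[d]) for d in dims]
               for _ in range(n_random)]
    return best + sampled
```

Sampling the extra members from the box spanned by the best members is what lets the receiving part concentrate its search on the regions the donor found promising.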
To regulate the share of each part of the total computational power, the frequency of the handover process, and the amount and type of information exchanged, BRGA employs a hybrid scheme that relies on selection criteria and a group of static, dynamic and adaptive parameters. We have successfully applied BRGA to global optimization problems. In this study, we aim to extend BRGA to real-parameter constrained optimization problems by incorporating a modified dynamic penalty mechanism within the algorithm architecture.

One main critique of the penalty method is the difficulty of deriving good values for the penalty factors. If the penalty value is too high and the optimum lies at the boundary of the feasible region, the algorithm is quickly pushed inside the feasible region and cannot move back towards the boundary with the infeasible region. High penalty factors can discourage the algorithm from exploring a larger part of the infeasible region from the beginning of the search. Moreover, high penalty factors can limit the ability of the algorithm to jump from one feasible region to another when those feasible regions are scattered as disjoint islands within the search space. If the penalty is too low, more of the search budget is spent on searching infeasible regions, as the penalty is negligible with respect to the objective function. To overcome the difficulty of estimating optimal values for the penalty factors and the BRGA parameters, we implement a robust parameter tuning procedure that follows the general framework of “experimental analysis of search heuristics”, as proposed by Bartz-Beielstein.

In the remainder of this article, we review the related literature on the background of BRGA in Section 2. We describe the implementation details and discuss the algorithm operation in Section 3. We analyze the concepts underlying BRGA operation in Section 4. In Section 5, we report and discuss the outcomes of the numerical evaluations.
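To make the role of a tuning procedure concrete: the paper's procedure combines statistical testing, experimental design and RSM, but the same goal can be illustrated with the naive exhaustive sweep below. This is only a stand-in for illustration, and `run_algorithm` is a hypothetical callback that runs the optimizer with a given configuration and returns a quality score to minimize.

```python
import itertools

def tune_parameters(run_algorithm, grid):
    """Evaluate every combination in the parameter grid and return the
    best-scoring configuration (lower score = better). A real tuning
    procedure would replace this brute-force loop with designed
    experiments and a fitted response surface."""
    best_params, best_score = None, float("inf")
    for combo in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), combo))
        score = run_algorithm(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

The cost of such a sweep grows multiplicatively with the number of parameters and levels, which is precisely why the paper resorts to experimental design and RSM instead of exhaustive enumeration.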
We present the details of the implemented parameter tuning procedure and comment on the obtained optimal parameter settings in Section 6. Finally, in Section 7, we conclude the article and highlight the outlook for future research.
Conclusion
BRGA is a recent hybrid approach that combines two cooperative GA versions, i.e., BGA and RGA, and uses an adaptive parameter-based hybrid scheme to divide the computational power and regulate the interaction between the cooperative versions, which operate in a sequential, time-interleaving manner. This study has extended BRGA to real-parameter constrained optimization problems by introducing a modified dynamic constraint-handling technique within the BRGA architecture. We have used the CEC’2010 benchmark suite to analyze the algorithm’s performance. CEC’2010 is a recently proposed benchmark suite with 18 constrained functions that exhibit several challenging problem characteristics, including a small feasibility ratio, rotation, optimum shifting, scalability, different combinations of separable and non-separable functions and different combinations of equality- and inequality-constraint functions. The outcomes of the numerical evaluation confirm that BRGA is a suitable candidate for constrained optimization problems. At 10D, BRGA located feasible solutions for all functions within the benchmark suite and achieved perfect feasibility rates in 15 cases. With the problem dimensions scaled to 30D, BRGA successfully identified feasible solutions in 15 cases and achieved perfect feasibility rates in 14 cases. Moreover, the obtained experimental results demonstrate the effectiveness of the modified penalty function in improving BRGA’s performance on the adopted benchmark functions. The results also show superior performance for BRGA over other EAs in the literature. The efficiency experiments reveal that BRGA is efficient, using less than 50% of its total allocated computational budget to construct the first feasible solution in most cases. The parameter-tuning outcomes also confirm that BRGA uses memory efficiently, due to its small population sizes.
Finally, the implemented parameter-tuning procedure is computationally expensive, but it can effectively identify the optimal parameter settings, thus securing good BRGA performance over a broad spectrum of benchmark functions. Most importantly, it sheds light on BRGA’s behavior and its internal mechanisms, and it has been used as a tool to derive general parameter settings that secured a reasonable performance for BRGA against a group of test cases. The performance of BRGA at the general parameter configurations was, in general, inferior to its performance at the optimal parameter configurations. However, BRGA at the general parameter configurations was still competitive with the other EAs in both quality and time performance. An important task within our research agenda is customizing BRGA to act as a scheduling algorithm within a novel management system to control virtualized resource allocations in cloud platforms. Other possible directions for future work include investigating the effectiveness of the hybrid scheme when the BGA part is implemented using a more advanced BGA version; examining the effect of parameter-tuning procedures on improving BRGA performance under scalability experiments; and reducing the number of hybrid scheme parameters by fixing some of them.