A dynamic neighborhood learning-based particle swarm optimizer for global numerical optimization
Publisher : Elsevier - Science Direct
Journal : Information Sciences, Volume 209, 20 November 2012, Pages 16–36
The concept of particle swarms originated from simulations of the social behavior commonly observed in the animal kingdom and has evolved into a very simple but efficient optimization technique. Since its advent in 1995, the Particle Swarm Optimization (PSO) algorithm has attracted the attention of researchers all over the world, resulting in a huge number of variants of the basic algorithm as well as many parameter selection/control strategies. PSO relies on the learning strategy of the individuals to guide its search direction. Traditionally, each particle utilizes its own historical best experience as well as the global best experience of the whole swarm through a linear summation. The Comprehensive Learning PSO (CLPSO) was proposed as a powerful variant of PSO that enhances the diversity of the population by encouraging each particle to learn from different particles on different dimensions, on the premise that the best particle, despite having the highest fitness, does not necessarily offer a better value in every dimension. This paper presents a variant of single-objective PSO called the Dynamic Neighborhood Learning Particle Swarm Optimizer (DNLPSO), which uses a learning strategy whereby other particles' historical best information is used to update a particle's velocity, as in CLPSO. In contrast to CLPSO, however, the exemplar particle in DNLPSO is selected from a neighborhood. This strategy enables the learner particle to learn from the historical information of its neighborhood, or sometimes from its own. Moreover, the neighborhoods are made dynamic in nature, i.e. they are reformed after certain intervals. This helps preserve the diversity of the swarm and thereby discourages premature convergence. Experiments were conducted on 16 numerical benchmarks in 10, 30 and 50 dimensions, on a set of five constrained benchmarks, and on a practical engineering optimization problem concerning spread-spectrum radar poly-phase code design.
The results demonstrate the very competitive performance of DNLPSO in locating the global optimum on complicated and multimodal fitness landscapes when compared with five other recent variants of PSO.
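The abstract describes the traditional PSO update, in which each particle combines its own historical best and the swarm's global best through a linear summation. The following is a minimal sketch of that canonical update for a single particle; the coefficient values shown are common defaults from the PSO literature, not values specified by this paper.

```python
import random

def pso_velocity_update(v, x, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445):
    """Canonical PSO update for one particle, dimension by dimension.

    v, x, pbest, gbest are equal-length lists of floats (velocity, position,
    personal best, global best). w, c1, c2 are common literature defaults,
    not the coefficients used in the paper.
    """
    new_v = []
    for d in range(len(x)):
        cognitive = c1 * random.random() * (pbest[d] - x[d])  # own best experience
        social = c2 * random.random() * (gbest[d] - x[d])     # swarm's best experience
        new_v.append(w * v[d] + cognitive + social)           # linear summation
    new_x = [x[d] + new_v[d] for d in range(len(x))]          # move the particle
    return new_v, new_x
```

CLPSO and DNLPSO replace the single `gbest` term with a per-dimension exemplar, which is the modification the rest of this page discusses.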
In its simplest form, an unconstrained D-dimensional real-parameter optimization problem can be formulated as a search for the optimal parameter vector $\vec{x}^{*} = [x_1^{*}, x_2^{*}, \ldots, x_D^{*}]$ that minimizes an objective function $f(\vec{x})$, i.e. $f(\vec{x}^{*}) \le f(\vec{x})$ for all $\vec{x} \in \Omega$, where $\Omega$ is a non-empty large finite set serving as the domain of the search. For an unconstrained optimization problem, $\Omega \subseteq \mathbb{R}^{D}$. Since $\max\{f(\vec{x})\} = -\min\{-f(\vec{x})\}$, the restriction to minimization is without loss of generality. In general, the optimization task is complicated by the existence of non-linear objective functions with multiple local minima. A local minimum $\vec{x}^{\dagger}$ may be defined as: $\exists \varepsilon > 0 \; \forall \vec{x} \in \Omega : \lVert \vec{x} - \vec{x}^{\dagger} \rVert < \varepsilon \Rightarrow f(\vec{x}^{\dagger}) \le f(\vec{x})$, where $\lVert \cdot \rVert$ denotes any p-norm distance measure. Real-parameter optimization problems are rife in different fields of engineering and the social and physical sciences, and many of them pose a severe challenge to classical derivative-based techniques. This fact has led researchers to develop various optimization techniques founded on the simulation of natural phenomena. Some of the real-parameter optimizers in wide use these days are the Genetic Algorithm (GA), Simulated Annealing (SA), Tabu Search (TS), Covariance Matrix Adaptation Evolution Strategies (CMA-ES), Teaching–Learning-Based Optimization, Harmony Search, the Immune Algorithm, Differential Evolution (DE), and Particle Swarm Optimization (PSO). PSO marks one of the most popular classes of nature-inspired optimizers and has its roots in artificial life and social psychology. In fact, it emulates the flocking behavior of birds and the schooling of fish. PSO has become very popular as an efficient algorithm for intelligent search and optimization. It does not require any gradient information of the function to be optimized, uses only primitive mathematical operators, and is conceptually very simple.
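To make the formulation above concrete, the following sketch implements a classic multimodal test function of exactly this kind (the Rastrigin function is chosen here as a familiar example; the paper's 16 benchmarks are not listed on this page, so this particular function is an assumption).

```python
import math

def rastrigin(x):
    """Rastrigin function: a highly multimodal benchmark with many local
    minima and a single global minimum f(0, ..., 0) = 0, typically searched
    over the domain Omega = [-5.12, 5.12]^D.
    """
    return 10.0 * len(x) + sum(
        xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) for xi in x
    )
```

The cosine term creates a regular lattice of local minima around the global one, which is precisely the kind of landscape on which gradient-based methods stall and population-based optimizers such as PSO are attractive.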
Since its inception in 1995, PSO has attracted a great deal of attention from researchers all over the globe, resulting in a nearly uncountable number of variants of the basic algorithm, theoretical and empirical investigations of the dynamics of the particles, parameter selection and control strategies, and applications of the algorithm to a wide spectrum of real-world problems from diverse fields of science and engineering. Comprehensive accounts of the foundations, perspectives, and applications of PSO are available in the literature. PSO is, however, not free from premature convergence, especially over multimodal, rugged, and non-separable fitness landscapes. Several PSO variants have been proposed over the past decade to circumvent these problems and make PSO more efficient as a black-box optimizer over the continuous search space for real-parameter optimization problems. Some of the most significant variants that have attracted much interest in the recent past are Comprehensive Learning PSO (CLPSO), Dynamic Multi-Swarm PSO (DMSPSO), the Fully Informed Particle Swarm (FIPS), Unified PSO (UPSO), Adaptive PSO (APSO), scale-free FIPS, and Orthogonal Learning PSO (OLPSO). In addition, there are several hybrid PSO variants that incorporate tested methods of other algorithms into PSO, such as GA-PSO, Evolutionary PSO (EPSO), DE-PSO, and C-PSO. In this paper, we present an improved variant of the CLPSO algorithm, called the Dynamic Neighborhood Learning PSO (DNLPSO), for solving single-objective optimization problems. CLPSO uses a novel learning strategy whereby other particles' historical best information is used to update a particle's velocity. To achieve a better balance between the explorative and exploitative behaviors of CLPSO, we incorporate a few novel strategies for selecting the exemplar particles that contribute to the velocity update of the other learning particles in the swarm.
Through the consistent updating of their velocities over the course of the algorithm, all particles tend to reach better solutions and eventually converge to an optimal solution. The learning strategies incorporated in the proposed algorithm are as follows: (1) The learner particle learns not only from its own experience but also from the experience of other particles in the swarm. However, unlike in CLPSO, the exemplar particles are not chosen randomly from the whole swarm but from a predefined neighborhood, keeping the particles diverse enough to yield good results. (2) The neighborhoods of the learner particles are made dynamic in nature, i.e. the entire swarm is regrouped into a number of neighborhoods after certain intervals. This enhances the explorative nature of the algorithm and thus helps the particles escape local optima in case of premature convergence. The rest of the article is organized as follows: Section 2 provides an overview of the PSO family of algorithms and also discusses the CLPSO algorithm. In Section 3, the proposed DNLPSO is discussed in sufficient detail. Section 4 provides a theoretical analysis of the explorative power of the algorithm and justifies the motivation of the method. The effects of control parameters on the performance of DNLPSO are presented in Section 5. Experimental results on unconstrained and constrained numerical benchmarks as well as on a practical engineering optimization problem are presented and discussed in Section 6. Finally, Section 7 concludes the paper.
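The two strategies listed above can be sketched as follows. This is a minimal illustration only: the random-partition regrouping, the tournament-of-two exemplar selection, and the parameter names are assumptions made here for clarity, not the paper's exact procedure.

```python
import random

def regroup(num_particles, num_neighborhoods):
    """Strategy (2): periodically repartition particle indices into
    neighborhoods (a random equal-split scheme, assumed for illustration).
    Requires num_particles to be divisible by num_neighborhoods.
    """
    idx = list(range(num_particles))
    random.shuffle(idx)
    size = num_particles // num_neighborhoods
    return [idx[g * size:(g + 1) * size] for g in range(num_neighborhoods)]

def pick_exemplar_dim(i, neighborhood, pbest_fitness, pc=0.5):
    """Strategy (1): choose the exemplar for one dimension of particle i.

    With probability pc, hold a tournament between two members' personal
    bests (as in CLPSO, but restricted to the neighborhood instead of the
    whole swarm, for a minimization problem); otherwise the particle learns
    from its own historical best on this dimension.
    """
    if random.random() >= pc:
        return i  # learn from own personal best
    a, b = random.sample(neighborhood, 2)
    return a if pbest_fitness[a] < pbest_fitness[b] else b
```

Calling `regroup` every fixed number of iterations changes which personal bests each particle can draw on, which is the mechanism the text credits with preserving diversity and discouraging premature convergence.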
Conclusion
In CLPSO, since the particles learn through a comprehensive strategy, they have a greater number of exemplars to learn from and a larger potential space to fly through. This learning strategy enables CLPSO to make use of the information in the swarm more effectively and to generate better-quality solutions than other variants of PSO. But because the selection of the exemplar in that algorithm is random, the particle may not always be guided properly, as already mentioned. This being the main drawback of that otherwise brilliant algorithm, we have tried to modify it in this paper so that it can perform better. We have implemented a neighborhood-based learning strategy so that the learner particle can be guided properly. From the experiments and analysis, we observe that this modified learning strategy enables our algorithm to perform more efficiently. Based on the results of the six algorithms on the 16 chosen test problems, we can conclude that DNLPSO significantly improves on CLPSO's performance and gives the best performance except in a few cases. In this regard, one point is worth mentioning: according to the No Free Lunch (NFL) theorem, it is impossible to have a best general-purpose universal optimization strategy, and the only way for one strategy to be superior to the others is to focus on a particular class of problems. The main attraction of our algorithm is its simplicity and ease of implementation.