Perturbed particle swarm algorithm for numerical optimization
Publisher: Elsevier - Science Direct
Journal : Applied Soft Computing, Volume 10, Issue 1, January 2010, Pages 119–124
The canonical particle swarm optimization (PSO) has its own disadvantages, such as a high convergence speed that often implies a rapid loss of diversity during the optimization process, which inevitably leads to undesirable premature convergence. To overcome this disadvantage, a perturbed particle swarm algorithm (pPSA) is presented, based on a new particle updating strategy built upon the concept of a perturbed global best, to deal with the problems of premature convergence and diversity maintenance within the swarm. A linear model and a random model, together with the initial max–min model, are provided to understand and analyze the uncertainty of the perturbed particle updating strategy. pPSA is validated using 12 standard test functions. The preliminary results indicate that pPSA performs much better than PSO in both solution quality and robustness, and is comparable with GCPSO. The experiments confirm that the perturbed particle updating strategy is an encouraging strategy for stochastic heuristic algorithms and that the max–min model is a promising model based on the concept of possibility measure.
The particle swarm optimization (PSO) algorithm is a population-based heuristic global optimization technique introduced by Kennedy and Eberhart in 1995. Its basic idea is based on the simulation of simplified animal social behaviors such as fish schooling, bird flocking, etc. Recently, Poli et al. reviewed the PSO algorithm, current and ongoing research, applications, and open problems. In the PSO algorithm, each individual is called a particle, which has no mass and volume, and the trajectory of each individual in the search space is adjusted by dynamically altering the velocity of each particle, according to its own flying experience and the flying experience of the other particles in the search space. The next iteration takes place after all particles have been moved. Eventually the swarm as a whole, like a flock of birds collectively foraging for food, is likely to move close to an optimum of the fitness function. The PSO algorithm is becoming very popular due to its simplicity of implementation and its ability to quickly converge to a reasonably good solution. Eberhart and Shi used a fuzzy system to adapt the inertia weight ω, significantly improving PSO performance. A fuzzy variable neighborhood particle swarm optimization was introduced to represent the quadratic assignment problem with a fuzzy matrix. A novel fuzzy adaptive optimization strategy was introduced to avoid falling into local optima, based on a double-variable and single-dimensional fuzzy control structure. Hu and Li discard the particle velocity and reduce the basic PSO from a second-order to a first-order difference equation; the evolutionary process is then controlled only by the particle positions. Several well-known algorithms have been hybridized with PSO, and even better results are reported.
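The velocity-and-position update described above can be sketched as follows. This is a minimal NumPy sketch of the canonical PSO; the inertia weight, acceleration coefficients, swarm size, and search bounds are illustrative defaults, not settings taken from the paper:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    """Canonical PSO: minimize f over [lo, hi]^dim (illustrative defaults)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # personal best positions
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()    # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # velocity update: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())
```

On a simple convex function such as the sphere function, these settings drive the whole swarm toward the optimum; it is exactly this fast collapse onto gbest that causes the diversity loss the paper addresses.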
Langdon and Poli use evolutionary computation to automatically find problems that demonstrate the strengths and weaknesses of modern search heuristics and illustrate the benefits and drawbacks of different population sizes, velocity limits, and constriction coefficients. A memetic algorithm with a synchronous particle local search and a fuzzy global best for updating a particle's trajectory has been proposed for multi-objective optimization. The high convergence speed of PSO often results in a rapid loss of diversity during the optimization process, which inevitably leads to undesirable premature convergence. A Guaranteed Convergence PSO (GCPSO) has been discussed, in which a separate velocity update formula is used for the best particle in the swarm. Shelokar et al. proposed an improved PSO hybridized with an ant colony approach, applying PSO for global optimization and the idea of the ant colony approach to update particle positions so as to rapidly attain the feasible solution space. In this paper a perturbed particle swarm algorithm (pPSA) is presented so as to escape from the local optimal trap. The new particle updating strategy is based upon the concept of possibility, to deal with the problem of maintaining diversity within the swarm as well as to promote exploration in the search. In pPSA, the perturbed particle updating strategy treats the global best as "possibly at gbest" instead of a crisp location, which distinguishes it from other fuzzy PSOs. In order to further understand the effects of uncertainty, two new models are proposed and compared with each other together with the initial max–min model. The remainder of this paper is organized as follows. Some background information is provided in Section 2. The details of the perturbed particle updating strategy and pPSA are described in Section 3. The experimental performance, evolutionary behaviors, and two new models for measuring the uncertainty on global numerical optimization are presented in Section 4. Conclusions and future work are given in Section 5.
Conclusion
A perturbed particle updating strategy is presented to overcome the premature convergence of PSO, and the pPSA algorithm is proposed in this paper. The perturbed updating strategy is based on the concept of possibility measure, used to model the lack of information about the true optimality of the gbest. The gbest in pPSA means "possibly at gbest" (p-gbest) instead of a crisp location as in conventional approaches. Numerical experiments indicate that this strategy can effectively avoid local optimality with a non-increasing uncertainty. Two further models are proposed to control the uncertainty and are compared with the max–min model. How to effectively incorporate this strategy into other algorithms, and how to analyze the model controlling the uncertainty, are interesting directions for future work.