Download English ISI article No. 10440
Article title (Persian translation)

A perturbed particle swarm algorithm for numerical optimization

English title
A perturbed particle swarm algorithm for numerical optimization
Article code: 10440
Year of publication: 2010
English article length: 6 pages (PDF)
Source

Publisher: Elsevier - Science Direct

Journal: Applied Soft Computing, Volume 10, Issue 1, January 2010, Pages 119–124

Keywords (Persian translation)
Particle swarm optimization, perturbation, particle updating strategy, numerical optimization
English keywords
Particle swarm optimization, Perturbed PSA, Particle updating strategy, Numerical optimization
Article preview: A perturbed particle swarm algorithm for numerical optimization

English abstract

The canonical particle swarm optimization (PSO) has its own disadvantages: its high convergence speed often implies a rapid loss of diversity during the optimization process, which inevitably leads to undesirable premature convergence. To overcome this disadvantage, a perturbed particle swarm algorithm (pPSA) is presented, based on a new particle updating strategy built upon the concept of a perturbed global best, to deal with premature convergence and to maintain diversity within the swarm. A linear model and a random model, together with the initial max–min model, are provided to understand and analyze the uncertainty of the perturbed particle updating strategy. pPSA is validated using 12 standard test functions. The preliminary results indicate that pPSA performs much better than PSO in both solution quality and robustness, and is comparable with GCPSO. The experiments confirm that the perturbed particle updating strategy is an encouraging strategy for stochastic heuristic algorithms and that the max–min model is a promising model based on the concept of possibility measure.
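
As a rough illustration of the perturbed global best idea summarized in the abstract, the sketch below samples a "p-gbest" from a Gaussian centred on gbest, with a noise magnitude sigma that shrinks over the run. The Gaussian form, the helper names and the simple linear decay are illustrative assumptions only; the paper's own linear, random and max–min models are defined in the full text.

    import numpy as np

    _rng = np.random.default_rng()

    def perturbed_gbest(gbest, sigma):
        # "Possibly at gbest": the best-known position plus zero-mean Gaussian
        # noise of scale sigma (the Gaussian form is an illustrative assumption).
        return gbest + _rng.normal(0.0, sigma, size=gbest.shape)

    def sigma_linear(t, t_max, sigma_max=1.0, sigma_min=0.01):
        # One plausible non-increasing uncertainty schedule (a simple linear decay);
        # not the paper's exact linear, random or max-min models.
        return sigma_max - (sigma_max - sigma_min) * t / t_max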

English introduction

Particle swarm optimization (PSO) is a population-based heuristic global optimization technique introduced by Kennedy and Eberhart [1] in 1995. Its basic idea is based on the simulation of simplified animal social behaviors such as fish schooling, bird flocking, etc. Recently, Poli et al. [2] reviewed the PSO algorithm, current and ongoing research, applications and open problems. In PSO, each individual is called a particle, which has no mass and volume, and the trajectory of each individual in the search space is adjusted by dynamically altering the velocity of each particle according to its own flying experience and the flying experience of the other particles in the search space. The next iteration takes place after all particles have been moved. Eventually the swarm as a whole, like a flock of birds collectively foraging for food, is likely to move close to an optimum of the fitness function. PSO is becoming very popular due to its simplicity of implementation and its ability to quickly converge to a reasonably good solution.

Eberhart and Shi [3] used a fuzzy system to adapt the inertia weight ω, which significantly improves PSO performance. A fuzzy variable neighborhood particle swarm optimization [4] was introduced to represent the quadratic assignment problem with a fuzzy matrix. A novel fuzzy adaptive optimization strategy [5] was introduced to avoid falling into local optima, based on a double-variable, single-dimensional fuzzy control structure. Hu and Li [6] discard the particle velocity and reduce the basic PSO from a second-order to a first-order difference equation; the evolutionary process is then controlled only by the particles' positions. Some well-known algorithms [7], [8] and [9] have been hybridized with PSO, and even better results are reported. Langdon and Poli [10] use evolutionary computation to automatically find problems which demonstrate the strengths and weaknesses of modern search heuristics and illustrate the benefits and drawbacks of different population sizes, velocity limits, and constriction coefficients. A memetic algorithm [11] with a synchronous particle local search and a fuzzy global best for updating particle trajectories has been proposed for multi-objective optimization. The high convergence speed of PSO often results in a rapid loss of diversity during the optimization process, which inevitably leads to undesirable premature convergence. A Guaranteed Convergence PSO (GCPSO) is discussed in [12], where a separate velocity update formula is used for the best particle in the swarm. Shelokar et al. [13] proposed an improved PSO hybridized with an ant colony approach, which applies PSO for global optimization and uses the idea of the ant colony approach to update particle positions so as to rapidly reach the feasible solution space.

In this paper a perturbed particle swarm algorithm (pPSA) is presented so as to escape from local optima. The new particle updating strategy is based upon the concept of possibility [14], both to maintain diversity within the swarm and to promote exploration of the search space. In pPSA the perturbed particle updating strategy treats the global best as "possibly at gbest" rather than as a crisp location, which distinguishes it from other fuzzy PSOs. In order to further understand the effects of uncertainty, two new models are proposed and compared with each other and with the initial max–min model.

The remainder of this paper is organized as follows. Some background information is provided in Section 2. The details of the perturbed particle updating strategy and pPSA are described in Section 3. The experimental performance, evolutionary behaviors, and two new models for measuring the uncertainty in global numerical optimization are presented in Section 4. Conclusions and future work are given in Section 5.
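
To make the particle updating strategy described above concrete, here is a minimal, self-contained Python sketch of the canonical PSO velocity and position update in which the global best is replaced by a Gaussian-perturbed "p-gbest". The parameter values, bounds, the linear noise schedule and the function and variable names are assumptions made for illustration, not the paper's exact settings or models.

    import numpy as np

    def ppsa_sketch(f, dim=30, n_particles=20, iters=1000, w=0.72, c1=1.49, c2=1.49,
                    lo=-100.0, hi=100.0, sigma_max=1.0, sigma_min=0.01):
        # Minimal canonical-PSO loop in which gbest is replaced by a Gaussian-
        # perturbed "p-gbest"; the linear noise decay is an assumed schedule,
        # not the paper's exact max-min / linear / random models.
        rng = np.random.default_rng(0)
        x = rng.uniform(lo, hi, (n_particles, dim))        # particle positions
        v = np.zeros((n_particles, dim))                   # particle velocities
        pbest = x.copy()                                   # personal best positions
        pbest_val = np.apply_along_axis(f, 1, x)           # personal best values
        gbest = pbest[pbest_val.argmin()].copy()           # global best position
        for t in range(iters):
            sigma = sigma_max - (sigma_max - sigma_min) * t / iters
            p_gbest = gbest + rng.normal(0.0, sigma, dim)  # "possibly at gbest"
            r1 = rng.random((n_particles, dim))
            r2 = rng.random((n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (p_gbest - x)
            x = np.clip(x + v, lo, hi)
            vals = np.apply_along_axis(f, 1, x)
            improved = vals < pbest_val
            pbest[improved] = x[improved]
            pbest_val[improved] = vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, pbest_val.min()

    # Example: minimise the sphere function, a standard benchmark.
    best_x, best_val = ppsa_sketch(lambda z: float(np.sum(z * z)))

In the canonical update the social term would use gbest directly; the only change sketched here is sampling p_gbest around gbest with a noise level that decreases as the run progresses.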

English conclusion

In this paper, a perturbed particle updating strategy is presented to overcome the premature convergence of PSO, and the pPSA algorithm is proposed. The perturbed updating strategy is based on the concept of possibility measure, used to model the lack of information about the true optimality of the gbest. The gbest in pPSA means "possibly at gbest" (p-gbest) rather than the crisp location used in conventional approaches. Numerical experiments indicate that this strategy can effectively avoid local optima when the uncertainty is non-increasing. Two models are also proposed to control the uncertainty and are compared with the max–min model. How to effectively incorporate this strategy into other algorithms, and how to analyze the model that controls the uncertainty, remain interesting directions for future work.