Performance analysis of a coarse-grained parallel model of the artificial bee colony algorithm
|Paper code||Year of publication||English paper||Persian translation||Word count|
|28399||2013||22-page PDF||Order||10407 words|
Publisher : Elsevier - Science Direct
Journal : Information Sciences, Volume 253, 20 December 2013, Pages 34–55
Although evolutionary algorithms are notably efficient on large-scale problems, their running times in terms of CPU time are quite long. The multiple processing units offered by recent hardware developments can be employed to overcome this drawback, reducing the running time by sharing the total workload. However, evolutionary algorithms cannot be distributed directly across processing units because of their cooperative working models; these models must be modified so that they can run in distributed environments without deterioration in performance. In this study, a detailed performance analysis of a parallel model of the artificial bee colony algorithm, one of the recently developed swarm-based evolutionary algorithms and a promising numerical optimization tool, is presented. For this purpose, large-scale benchmark problems are solved both by the proposed model and by its original sequential counterpart. The model is also applied to a real-world problem: training neural networks for classification. Comparative results show that the artificial bee colony algorithm is well suited to parallel architectures, since its perturbation operator enables it to produce high-quality solutions with small populations. The proposed model decreases the running time while also improving the performance and convergence rate of the algorithm, and the speedup gained over the sequential counterpart is almost linear.
Evolutionary algorithms (EAs) model natural evolution and are used to solve a wide range of problems. Many versions of EAs based on different models have been proposed in the literature. The artificial bee colony (ABC) algorithm, introduced by Karaboga, is a recent swarm-based evolutionary computation technique that models the foraging behavior of honey bees. The algorithm has been further studied by Karaboga and Basturk and has been found to be a good optimization tool, especially for high-dimensional and multi-modal problems. The ABC algorithm balances exploration and exploitation and has only a few parameters: the population size, the number of cycles, and the control parameter limit, which determines when a solution is abandoned. Several modifications of the original ABC algorithm have been presented to improve its performance on specific problems. Karaboga and Akay compared the ABC algorithm with the genetic algorithm, particle swarm optimization, differential evolution, and evolution strategies, pointing out the differences and similarities between the ABC algorithm and the others. Their study showed that the performance of the ABC algorithm is better than or similar to that of other population-based algorithms, with the advantage of employing fewer control parameters. As with other EAs, the running time of the ABC algorithm on large-scale problems can be reduced by parallel implementations. However, most population-based algorithms, including the ABC algorithm, are sequential in nature; to make them work in parallel and let the parallel units communicate with each other, some modifications must be introduced into the original algorithm without loss of efficiency. Several recent parallel ABC models based on different parallelization strategies are available in the literature.
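The perturbation operator mentioned above is the core of the ABC search: a candidate solution is produced by changing a single randomly chosen parameter of a food source relative to a random neighbor, and it replaces the original only if it improves the objective. The following is a minimal Python sketch of that step; the function and variable names are illustrative, not from the paper.

```python
import random

def abc_perturb(foods, i, fitness):
    """One ABC perturbation step (employed/onlooker bee phase).

    A candidate is produced by changing a single randomly chosen
    parameter j of food source i, using a randomly chosen neighbor k:
        v_j = x_ij + phi * (x_ij - x_kj),   phi ~ U(-1, 1)
    Greedy selection: the candidate replaces x_i only if it improves
    the (minimized) objective. Returns True on improvement, so the
    caller can reset the trial counter compared against `limit`.
    """
    dim = len(foods[i])
    j = random.randrange(dim)                        # one parameter at a time
    k = random.choice([s for s in range(len(foods)) if s != i])
    phi = random.uniform(-1.0, 1.0)
    candidate = foods[i][:]
    candidate[j] = foods[i][j] + phi * (foods[i][j] - foods[k][j])
    if fitness(candidate) < fitness(foods[i]):       # greedy selection
        foods[i] = candidate
        return True
    return False                                     # failed trial counts toward limit
```

Because selection is greedy, repeated calls can never worsen a food source, which is why even small populations retain search pressure.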
The first parallel implementation of the ABC algorithm used a shared-memory architecture. In that study, the whole colony was divided into local populations, each with a local memory, and the global solution was stored in a global shared memory. The approach provided speedup without changing the performance of the original algorithm. Luo et al. proposed a ripple communication strategy and investigated the effect of large population sizes and of different migration sizes. Subotic et al. assigned separate threads to the subpopulations and proposed different types of communication among them. Benitez and Lopes presented two parallel approaches for the ABC algorithm, a master–slave and a hybrid-hierarchical approach; the hybrid-hierarchical approach improved the quality of solutions, while the master–slave approach did not produce satisfactory results. Three parallel models, master–slave, multi-hive with migrations, and hybrid hierarchical, were examined in later studies; the multi-population approaches generated better quality with less computational effort. These studies reported low speedups, since they employed a relatively small number of CPUs; moreover, they used low-dimensional problems and did not analyze benchmark problems requiring high computational effort. Very recently, Basturk and Akay implemented a parallel synchronous ABC algorithm based on a master–slave model, and the results showed that the proposed model is as efficient as the asynchronous version while requiring much less time to solve large problems. To the best of our knowledge, no study on a coarse-grained parallel ABC algorithm that investigates the effects of the number of subpopulations, the migration interval, and different migration topologies is yet available in the literature. One of the aims of this study is to conduct such an analysis.
The other purpose of the study is to apply the proposed parallel algorithm to a real-world problem in addition to large-scale benchmark problems. The real-world problem considered here is the training of an artificial neural network (ANN). An ANN is a computing system that can learn on its own and has been widely used in many real-world applications. Designing ANNs is a difficult and important optimization problem, since their success depends greatly on the training process and the training algorithm used. Many optimization algorithms have been applied to train ANNs, and the ABC algorithm has also been applied successfully to this task. In this study, a coarse-grained parallel ABC algorithm (PABC) is implemented across multiple processing units through the message passing interface (MPI). For this purpose, the whole population is divided into small subpopulations, and each subpopulation is evolved independently on a different core. The evolved subpopulations interact with each other, and information about a subpopulation migrates periodically to its neighbors. Throughout the study, control parameters such as the number of food sources per subpopulation, the migration interval, the migration size, and the migration topology, which governs the amount of information exchanged among the subpopulations, are introduced to control the search behavior of the algorithm. The effect of these control parameters is analyzed to determine the parameter set that yields the best solutions in terms of solution quality and CPU time. The performance of the PABC algorithm is compared to that of its sequential counterpart to quantify the improvement. Results show that the modification of the original model decreases CPU time in addition to improving solution quality.
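The coarse-grained (island) scheme described above can be sketched in a few lines. The paper runs each subpopulation on its own core and exchanges migrants over MPI; the single-process Python sketch below only illustrates the control flow, with a unidirectional ring as an example topology and a trivial local improver standing in for the full ABC cycle. All names and parameter defaults are illustrative assumptions, not values from the paper.

```python
import random

def evolve(subpop, fitness, cycles):
    """Stand-in for independent ABC evolution of one island: a simple
    random-perturbation improver with greedy selection (illustrative only)."""
    for _ in range(cycles):
        i = random.randrange(len(subpop))
        cand = [x + random.uniform(-0.1, 0.1) for x in subpop[i]]
        if fitness(cand) < fitness(subpop[i]):
            subpop[i] = cand

def migrate_ring(subpops, fitness, migration_size=1):
    """Migration on a unidirectional ring: each island sends copies of its
    best `migration_size` solutions to its right neighbor, which replaces
    its worst solutions with them."""
    n = len(subpops)
    emigrants = [sorted(p, key=fitness)[:migration_size] for p in subpops]
    for src in range(n):
        dst = (src + 1) % n
        subpops[dst].sort(key=fitness, reverse=True)   # worst solutions first
        for m, mig in enumerate(emigrants[src]):
            subpops[dst][m] = mig[:]                   # copy in; islands stay independent

def pabc(fitness, dim, n_islands=4, island_size=5,
         migration_interval=10, total_cycles=100):
    """Coarse-grained PABC loop: evolve islands independently for
    `migration_interval` cycles, then migrate; repeat. In the paper each
    island is a separate MPI process rather than a loop iteration."""
    subpops = [[[random.uniform(-5, 5) for _ in range(dim)]
                for _ in range(island_size)] for _ in range(n_islands)]
    for _ in range(0, total_cycles, migration_interval):
        for p in subpops:
            evolve(p, fitness, migration_interval)
        migrate_ring(subpops, fitness)
    return min((s for p in subpops for s in p), key=fitness)
```

The control parameters studied in the paper map directly onto this sketch: `n_islands` and `island_size` set the subpopulation structure, `migration_interval` and `migration_size` set how often and how much information is exchanged, and swapping `migrate_ring` for another routine changes the topology.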
The decrease in CPU time stems from the parallel implementation of the algorithm and the distribution of the processing load across multiple processing units. The improvement in solution quality stems from the fact that the ABC algorithm works better with small populations, thanks to its perturbation operator, which changes one parameter at a time. The rest of the paper is organized as follows. Section 2 describes the original sequential ABC algorithm. The proposed parallel implementation is explained in Section 3. Experimental results are reported in detail in Section 4. Finally, conclusions and future research lines are given in Section 5.
Conclusion
In this study, a parallel ABC algorithm based on a coarse-grained parallel computing model is presented. The performance of the proposed algorithm is compared to that of its sequential counterpart in terms of solution quality and running time. The characteristics of the algorithm are discussed, and its performance is analyzed through computational experiments on benchmark problems of various complexities and characteristics. We also investigate the performance of the proposed algorithm for training three different ANN architectures on a variety of classical classification problems. This is the first study in the literature both to analyze the parameters of the PABC algorithm and to apply it to training ANNs for classification. We find that the performance and convergence of the PABC algorithm are superior to those of the ABC algorithm. Dividing the population into smaller parts produces better results than using large populations; reducing the migration interval improves performance significantly; and the migration topology does not have a marked influence on the results, although it yields good convergence on the problems considered. These observations allowed us to assign proper values to the number of subpopulations, the migration interval, and the migration topology. Implementing a parallel ABC algorithm on a coarse-grained model gives efficient results in addition to reducing running time significantly. We obtained almost linear speedups both for optimizing the benchmark functions and for classifying datasets with ANNs. Future work will further investigate different parallelization models of the ABC algorithm and the impact of programming models (master–slave, fine-grained, and hybrid) on the performance of the algorithm on different real-world problems.