Optimization of stacking ensemble configurations through the Artificial Bee Colony algorithm
|Article code||Publication year||English article length||Persian translation|
|7571||2013||9-page PDF||available on order|
The English article contains approximately 6,987 words.
Publisher : Elsevier - Science Direct
Journal : Swarm and Evolutionary Computation, Available online 25 April 2013
A classifier ensemble combines a finite number of classifiers, of the same or different kinds, trained simultaneously for a common classification task. The ensemble improves the generalization ability of the classifier compared to a single classifier. Stacking is one of the most influential ensemble techniques; it applies a two-level classification structure, namely the base-classifier level and the meta-classifier level. Finding a suitable configuration of base-level classifiers and the meta-level classifier is a tedious, domain-specific task. The Artificial Bee Colony (ABC) algorithm is a relatively new and popular meta-heuristic search algorithm that has proved successful in solving optimization problems. In this work, we propose the construction of two types of stacking using the ABC algorithm: ABC-Stacking1 and ABC-Stacking2. The proposed ABC-based stacking is tested on 10 benchmark datasets. The results show that ABC-Stacking yields promising results and is most useful in selecting the optimal base-classifier configuration and the meta-classifier.
Classifier Ensemble (CE) has been an interesting topic of research during the past decade. The idea behind ensemble learning is to learn a set of classifiers instead of a single classifier and then combine their predictions. The motivation is to reduce both variance and bias, so that the results are less dependent on the peculiarities of a single training set, and so that the combination of multiple classifiers may learn a more expressive model than a single classifier. Since its proposal, there has been explosive growth in the CE domain and many ensemble methodologies have been proposed. Stacked Generalization, also known as Stacking, is one of the most popular ensembles; it employs two levels of construction, the base-classifier level and the meta-classifier level. In stacking, the combiner is trained on unseen data constructed by a cross-validation procedure. It is inferred from the literature that stacking increases the predictive accuracy of the ensemble to a great extent. When stacking the ensemble, two important problems arise: selection of the appropriate base classifiers that will constitute the ensemble, and selection of a suitable meta-classifier to combine them. Classifiers give varied performance depending on the problem domain under consideration; the same classifier may yield good results on one dataset and not on another. A classifier acting as a meta-classifier may also give varied performance, because it deals with meta-instances and not the actual instances. So even after deciding upon the classifiers to be configured into the ensemble, the choice of meta-classifier remains an open question. The selection of base classifiers and a suitable meta-classifier that obtain good performance is therefore very difficult.
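The two-level structure described above can be sketched with scikit-learn's `StackingClassifier` (an illustration only, not the paper's implementation): base classifiers produce cross-validated predictions, and a meta-classifier learns to combine them. The choice of decision tree and naive Bayes as base classifiers and logistic regression as the meta-classifier here is an arbitrary example configuration.

```python
# A minimal stacking sketch: base-classifier level + meta-classifier level,
# with meta-features built by cross-validation (cv=5) on the training set.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                ("nb", GaussianNB())],         # base-classifier level
    final_estimator=LogisticRegression(),      # meta-classifier level
    cv=5,                                      # meta-instances from 5-fold CV
)
stack.fit(X_tr, y_tr)
print(round(stack.score(X_te, y_te), 2))
```

Training the combiner on held-out cross-validation predictions, rather than on the base classifiers' fitted outputs, is what keeps the meta-classifier from simply learning to trust an overfitted base model.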
If the number of candidate base classifiers and meta-classifiers is small, an exhaustive search could be applied to find the optimal configuration. However, exhaustive searches are time consuming, and the optimal configuration obtained for one domain may not be optimal for another. An alternative is meta-heuristic search, which explores a smaller search space and converges faster than exhaustive search. Nature-inspired meta-heuristic searches have been widely used to find optimal solutions. In the recent past, evolutionary and swarm-intelligence algorithms such as the Genetic Algorithm (GA), Ant Colony Optimization (ACO), Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO), Differential Evolution (DE) and niching algorithms have played a major role in solving combinatorial, constrained and unconstrained problems wherever optimization is required. Apart from standalone use, ensemble forms of these algorithms also find application in producing optimal solutions. This paper proposes the construction of stacking using the Artificial Bee Colony (ABC) algorithm. The ABC algorithm is a swarm-intelligent, meta-heuristic search algorithm inspired by the foraging behavior of honey bee swarms. It was introduced by Karaboga and serves as a powerful optimization tool. The ABC algorithm has shown competitive (and sometimes superior) performance compared to GA, PSO and ACO. It is also simple in concept and has few control parameters. In the literature, stacking has been constructed using optimization algorithms such as GA and ACO; GA-Stacking and ACO-Stacking have produced optimal configurations of base classifiers for stacking with promising results. However, to the best of our knowledge, stacking using the ABC algorithm has not been proposed so far, and this proposal has shown excellent results.
Though stacking configuration has been optimized using GA and ACO, the main ideas behind this proposal are: to analyze whether further optimization can be achieved with increased performance of the stacking ensemble, and to analyze whether ABC shows performance competitive with ACO and provides powerful optimization. ABC-Stacking has been proposed and implemented at two levels: ABC-Stacking1 (base-level stacking) and ABC-Stacking2 (meta-level stacking). In base-level stacking, the employed and onlooker bees search for the optimal configuration of base classifiers while the meta-classifier is a fixed learning algorithm. In meta-level stacking, while forming the base-classifier configuration, the bees simultaneously explore the pool of classifiers for the best meta-classifier each time the stacking of the selected classifiers is done. This paper is organized into 8 sections. Section 2 gives a brief note on CE and the stacking ensemble. The concept of the ABC algorithm is explained in Section 3. In Section 4, previous works in the literature related to stacking are discussed. Section 5 gives a detailed description of the proposed ABC-Stacking. The experimental environment and the experiments are discussed in Section 6. Computational results of the proposed method in comparison with existing methods are discussed in Section 7. Section 8 concludes the paper.
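To illustrate how bees can search a classifier-configuration space, here is a heavily simplified binary ABC sketch (not the paper's algorithm): each food source is a 0/1 mask over a pool of classifiers, employed/onlooker improvement is merged into a single bit-flip step, and a scout phase abandons stale sources. The fitness function is a toy stand-in; in the paper's setting it would be the cross-validated accuracy of the stacking built from the selected classifiers.

```python
import random

def abc_search(fitness, dim, n_food=10, limit=5, max_iter=60, seed=0):
    """Simplified binary Artificial Bee Colony. Each food source is a 0/1
    mask (e.g. which classifiers from the pool enter the stacking)."""
    rng = random.Random(seed)
    foods = [[rng.randint(0, 1) for _ in range(dim)] for _ in range(n_food)]
    fits = [fitness(f) for f in foods]
    trials = [0] * n_food
    best_food, best_fit = max(zip(foods, fits), key=lambda p: p[1])
    for _ in range(max_iter):
        for i in range(n_food):              # employed/onlooker phase (merged)
            cand = foods[i][:]
            cand[rng.randrange(dim)] ^= 1    # perturb: flip one selection bit
            cf = fitness(cand)
            if cf > fits[i]:
                foods[i], fits[i], trials[i] = cand, cf, 0
                if cf > best_fit:            # memorize the global best
                    best_food, best_fit = cand[:], cf
            else:
                trials[i] += 1
        for i in range(n_food):              # scout phase: abandon stale sources
            if trials[i] > limit:
                foods[i] = [rng.randint(0, 1) for _ in range(dim)]
                fits[i] = fitness(foods[i])
                trials[i] = 0
    return best_food, best_fit

# Toy fitness standing in for cross-validated stacking accuracy:
# reward masks that select classifiers 0, 2 and 4 from a pool of 6.
target = [1, 0, 1, 0, 1, 0]
mask, score = abc_search(lambda m: sum(a == b for a, b in zip(m, target)), dim=6)
print(mask, score)
```

Memorizing the global best outside the food-source list matters: the scout phase may abandon a source even when it currently holds the best configuration found so far.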
Conclusion
In this paper, we propose an Artificial Bee Colony approach for optimizing stacking ensemble configurations. This approach was compared with single predictive learners, basic ensemble techniques, GA-Stacking and ACO-Stacking. The proposed algorithm has good convergence and strong search capability in the search space. It has produced efficient configurations of base classifiers, selected the best meta-classifier, and shown performance comparable to the best results reported so far.