Download English ISI Paper No. 22401
Persian Translation of the Paper Title

Analysis of the error-reject trade-off in linearly combined multiple classifiers

English Title
Analysis of error-reject trade-off in linearly combined multiple classifiers
Article Code | Publication Year | Number of Pages of the English Paper
22401 | 2004 | 21 pages (PDF)
Source

Publisher: Elsevier - Science Direct

Journal: Pattern Recognition, Volume 37, Issue 6, June 2004, Pages 1245–1265

Persian Translation of Keywords
Multiple classifier systems - classifier fusion - linear combiners - reject option - error-reject trade-off
English Keywords
Multiple classifier systems, Classifier fusion, Linear combiners, Reject option, Error-reject trade-off
Article Preview

Analysis of the error-reject trade-off in linearly combined multiple classifiers

English Abstract

In this paper, a theoretical and experimental analysis of the error-reject trade-off achievable by linearly combining the outputs of an ensemble of classifiers is presented. To this end, the theoretical framework previously developed by Tumer and Ghosh for the analysis of the simple average rule without the reject option has been extended. Analytical results are provided that allow the improvement of the error-reject trade-off achievable by simple averaging of classifier outputs to be evaluated under different assumptions about the distributions of the estimation errors affecting the a posteriori probabilities. The conditions under which the weighted average can provide a better error-reject trade-off than the simple average are then determined. From the theoretical results obtained under the assumption of unbiased and uncorrelated estimation errors, simple guidelines for the design of multiple classifier systems using linear combiners are given. Finally, an experimental evaluation and comparison of the error-reject trade-off of the simple and weighted averages is reported for five real data sets. The results show the practical relevance of the proposed guidelines in the design of linear combiners.
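As a rough illustration of the combining scheme analysed in the abstract, the following Python sketch averages (or weight-averages) the posterior estimates of an ensemble and applies a reject threshold; the array shapes, function names, and threshold sweep are assumptions made for illustration, not code from the paper.

```python
# Illustrative sketch (not from the paper): linearly combining per-classifier
# posterior estimates and applying a reject threshold.
import numpy as np

def combine_posteriors(posteriors, weights=None):
    """Linearly combine per-classifier posterior estimates.

    posteriors: array of shape (n_classifiers, n_samples, n_classes)
    weights:    optional array of shape (n_classifiers,); if None, the
                simple average rule is used, otherwise the weighted average.
    """
    posteriors = np.asarray(posteriors, dtype=float)
    if weights is None:                      # simple average rule
        return posteriors.mean(axis=0)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                          # normalise the weights
    return np.tensordot(w, posteriors, axes=1)

def classify_with_reject(combined, threshold):
    """Assign each sample to the class with maximum combined posterior,
    rejecting it (label -1) when that maximum falls below the threshold."""
    labels = combined.argmax(axis=1)
    labels[combined.max(axis=1) < threshold] = -1
    return labels

def error_reject_point(labels, true_labels):
    """Return (error rate on accepted samples, reject rate) for one threshold;
    sweeping the threshold traces an empirical error-reject trade-off curve."""
    accepted = labels != -1
    reject_rate = 1.0 - accepted.mean()
    if accepted.sum() == 0:
        return 0.0, reject_rate
    error_rate = (labels[accepted] != true_labels[accepted]).mean()
    return error_rate, reject_rate
```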

English Introduction

During the past decade, many research communities, among them the pattern recognition and machine learning communities, have shown a growing interest in so-called Multiple Classifier Systems (MCSs) [1]. It is now widely accepted that a combination of multiple classifiers can provide advantages over the traditional monolithic approach to classifier design. Besides the many experimental works showing the improvement in performance that can be achieved by MCSs in several applications, a few works have also provided theoretical analyses of the simplest combining techniques proposed in the literature. For instance, Tumer and Ghosh [2] and [3] developed a theoretical framework for analysing the performance improvement achievable by the simple average of classifier outputs. A theoretical analysis of the majority voting rule was provided by Lam and Suen [4]. Kittler et al. [5] developed a theoretical framework for the combination of classifiers that use distinct pattern representations. Kuncheva [6] compared the classification error at a given point in the feature space for the majority voting, simple average, and order statistics rules. Kittler and Alkoot [7] compared the sum and majority vote rules theoretically and experimentally.

Despite these important works, a general theoretical framework for classifier combination is currently beyond the state of the art [1]. Consequently, many important topics lack theoretical explanations, and a comparison of various fusion rules is only possible by experiments. Further theoretical analyses aimed at investigating these topics and comparing a limited set of fusion rules, even under strict assumptions, are necessary steps towards a general framework of classifier fusion.

In this paper, we address a topic that has not yet been considered from a theoretical viewpoint, namely the improvement of the error-reject trade-off achievable by classifier combination. Theoretical works such as the ones quoted above have analysed the performance of MCSs only in terms of error probability, without taking the reject option into account. Only a few experimental works have evaluated the performance of MCSs with the reject option. For instance, Battiti and Colla [8] experimentally investigated the error-reject trade-off provided by MCSs using the majority voting rule or linear combiners. Lam and Suen's experiments [9] analysed MCSs using the Bayesian and the weighted majority voting rules. The experimental work by Foggia et al. [10] dealt with the Bayesian rule.

In this work, we focus on linear combiners, one of the simplest and most widely used combining techniques, and analyse theoretically and experimentally the improvement of the error-reject trade-off that can be achieved by classifier combination. The main purpose of this paper is to provide an analytical evaluation of the improvement in the error-reject trade-off achievable by the linear combination of multiple classifiers. To this end, we have extended the analytical framework developed by Tumer and Ghosh [2] and [3]. This framework makes it possible to evaluate the error probability, without the reject option, achievable by simple averaging of the outputs of classifiers that provide estimates of the class posterior probabilities. In previous works [11] and [12], we used this framework to compare the performance of the simple and weighted average rules. In this work, we extend the framework to the evaluation of the expected risk of individual classifiers and of the linear combination of multiple classifiers.
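For reference, the core of the Tumer and Ghosh model can be summarised as follows; this is the standard published result, stated in notation that may differ slightly from the paper's. Each classifier approximates the true class posterior with an additive estimation error, and simple averaging over the ensemble averages these errors:

```latex
% Standard summary of the Tumer-Ghosh model (notation may differ from the paper).
% Classifier m estimates the posterior of class \omega_i with an additive error:
\[
  f_i^{m}(x) = P(\omega_i \mid x) + \varepsilon_i^{m}(x),
  \qquad
  f_i^{\mathrm{ave}}(x) = \frac{1}{N}\sum_{m=1}^{N} f_i^{m}(x)
                        = P(\omega_i \mid x) + \frac{1}{N}\sum_{m=1}^{N} \varepsilon_i^{m}(x).
\]
% If the estimation errors are unbiased and uncorrelated, averaging reduces their
% variance by a factor of N, and the added error over the Bayes error is reduced
% by the same factor:
\[
  E_{\mathrm{add}}^{\mathrm{ave}} = \frac{E_{\mathrm{add}}}{N}.
\]
```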
This extension allows us to assess and compare the performance of individual and linearly combined classifiers when the reject option is used. A preliminary analysis was presented by the authors in Ref. [13]. In this paper, we extend both the theoretical analysis and the experimental investigation. Furthermore, we address the problem of obtaining practical guidelines for the design of linear combiners with the reject option, based on the results of the analysis of our theoretical framework.

The paper is organised as follows. In Section 2, the theoretical background of statistical classification with the reject option is briefly reviewed. In Section 3, the framework by Tumer and Ghosh is summarised, and our extension to classification with the reject option is described. The quantitative analysis of the error-reject trade-off achievable by the simple average and weighted average combining rules is presented in Section 4; this analysis allows the error-reject trade-off of the two rules to be compared. In Section 5, we show that the theoretical framework suggests some practical guidelines for the design of a linear combiner with the reject option. The results of an experimental comparison, guided by the analysis in Sections 4 and 5, are reported in Section 6. Conclusions are drawn in Section 7.
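The background on classification with the reject option reviewed in Section 2 is the standard formulation due to Chow; for reference, and in notation that may differ from the paper's, it can be stated as follows.

```latex
% Standard classification-with-reject background (Chow's rule); notation may
% differ from the paper. With zero cost for correct decisions, unit cost for
% errors, and cost 0 < w_R < 1 for rejections, the expected risk is
\[
  R = P(\text{error}) + w_R \, P(\text{reject}).
\]
% A pattern x is accepted and assigned to the class with the maximum posterior
% probability only if that maximum exceeds a threshold T; otherwise it is rejected:
\[
  \text{assign } x \text{ to } \omega_k,\; k = \arg\max_i P(\omega_i \mid x),
  \quad \text{if } \max_i P(\omega_i \mid x) \ge T; \quad \text{reject otherwise}.
\]
% When the true posteriors are known, the risk is minimised by T = 1 - w_R,
% which yields the optimal error-reject trade-off.
```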

English Conclusion

In this paper we have presented a theoretical framework for the analysis of the error-reject trade-off achievable by linearly combining the outputs of an ensemble of classifiers. We believe our work makes two main contributions to the state of the art of multiple classifier systems. First, we have addressed a topic that had never been considered from a theoretical viewpoint, that is, the improvement of the error-reject trade-off achievable by the linear combination of classifier outputs, and we have proposed an analytical framework that allows this improvement to be analysed and quantified. Secondly, from our framework we have derived some practical guidelines for the design of linear combiners with the reject option.

As regards the first point, we evaluated the reduction of the added risk achievable by the simple average, when the reject option is used, under different hypotheses on the distribution of the errors affecting the estimates of the posterior probabilities provided by the individual classifiers. Our analysis showed that the conclusions drawn by Tumer and Ghosh [3] concerning the reduction of the added error achievable by the simple average can be extended to the reduction of the added risk. Furthermore, we determined the conditions under which the weighted average can provide a better error-reject trade-off than the simple average.

Although the main goal of our framework is to contribute to the understanding of the error-reject trade-off of linear combiners, we think that many of these results can be used in the practical design of linear combiners with the reject option. In this paper, we have described some of these results and assessed them experimentally (Sections 5 and 6). The reported experimental results show that the guidelines obtained from our theoretical framework under the assumption of unbiased and uncorrelated estimation errors can be used to qualitatively compare the error-reject trade-off of the simple and the weighted average even for real applications where this assumption is likely to be violated. In particular, such guidelines can be used to predict qualitatively whether the best error-reject trade-off will be provided by the simple average or by the weighted average. According to our results, this can be assessed by simply comparing the accuracy of the individual classifiers at the zero reject rate. Moreover, the experimental results also proved the effectiveness of the simple parametric technique for weight estimation derived from our framework. We think that these guidelines for linear combiner design are attractive from a practical viewpoint, as they only require estimating the accuracy of the individual classifiers at the zero reject rate. In particular, our method of weight estimation is much simpler to implement than other methods proposed in the literature [19] and [22].

The theoretical framework presented in this paper opens the way to several research directions. As regards linear combiners, the framework can be used to obtain a quantitative comparison between the error-reject trade-off of the simple and the weighted average in terms of the difference between their added risks; this is the subject of our on-going research [11] and [12]. Such a quantitative comparison would be of great interest from both the theoretical and the practical point of view. As pointed out in the Introduction, quantitative comparisons of different fusion rules are mandatory steps towards a general framework for classifier fusion. Furthermore, the framework can also be used as a starting point for the analysis of the error-reject trade-off of other fusion rules, such as order statistics combiners.
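To make the design guideline concrete, the sketch below estimates combination weights from the zero-reject accuracy of each individual classifier on a validation set. The inverse-error weighting used here is only an illustrative assumption in the spirit of the guideline (more accurate classifiers receive larger weights); it is not claimed to reproduce the authors' parametric weight-estimation formula.

```python
# Illustrative sketch only: deriving combination weights from the accuracy of the
# individual classifiers at zero reject rate. The inverse-error weighting is an
# assumed stand-in, not necessarily the paper's exact formula.
import numpy as np

def estimate_weights(posteriors, true_labels, eps=1e-6):
    """posteriors: (n_classifiers, n_samples, n_classes) validation-set outputs."""
    weights = []
    for p in posteriors:
        error = (p.argmax(axis=1) != true_labels).mean()  # error at zero reject rate
        weights.append(1.0 / (error + eps))               # more accurate -> larger weight
    w = np.asarray(weights)
    return w / w.sum()                                    # normalised weights
```

Weights obtained this way can be passed to a weighted-average combiner; comparing the individual zero-reject accuracies, as the paper's guideline suggests, indicates whether the simple or the weighted average is likely to give the better error-reject trade-off.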