Dempster–Shafer Theory in Geographic Information Systems: A Review
|Article code||Publication year||English article||Persian translation||Word count|
|17561||2007||9-page PDF||available on order||not calculated|
Publisher : Elsevier - Science Direct
Journal : Expert Systems with Applications, Volume 32, Issue 1, January 2007, Pages 47–55
Since the information used in a Geographic Information System carries a certain degree of uncertainty, classical mathematical models are in general not suitable for solving geographical problems computationally. Probabilistic or fuzzy-related methods should therefore be considered, in order to model the behaviour of the real problems that have to be solved by or with a Geographic Information System. In this paper, a review of the application of the Dempster–Shafer Theory of Evidence (also called "belief functions") to Geographic Information Systems is given. The review focuses on classification as a way of fusing information in a Geographic Information System. Information fusion for classification represents the first step in the abstraction of information and a means of data mining, and both the advantages and limitations of the Theory of Evidence in comparison to other techniques are analysed.
It is difficult to avoid uncertainty when attempting to model the real world. Uncertainty is inherent to natural phenomena, and it is impossible to create a perfect representation of reality. Classical mathematics deals with ideal worlds in which perfect geometric figures exist and can satisfy extraordinary conditions. The formalisation of uncertainty started in the 1960s with the works of Zadeh (1965) on fuzzy sets and Dempster (1968) on belief functions. Belief functions offer a non-Bayesian method for quantifying subjective evaluations by using probability. In the 1970s, the theory was further developed by Shafer, whose book, A Mathematical Theory of Evidence (Shafer, 1976), remains a classic in belief functions, or the so-called Theory of Evidence (TE). This theory has also been called the Dempster–Shafer Theory of Evidence. In the 1980s, the scientific community working on Artificial Intelligence became involved in applying TE, and parallel formalisations continued with the work of Higashi and Klir (1982), Höhle (1982), and Yager (1983). In the 1990s, Klir (1991) developed it further and gave it the name "General Theory of Information." Today, mathematical formalisations continue to be developed: there are many publications touching upon this subject, and entire congresses are dedicated specifically to uncertainty and its related fields. Increasing computational power has opened new possibilities for solving problems in many areas of science and technology (mathematics, medicine, social science, business, and the like), and every aspect of the real world is now capable of being tackled by these new computational techniques.
The formalisation of uncertainty in order to construct models of the real world is so important that Klir (1995) has even said that we are experiencing a change of attitude within the scientific community, a change that has all the characteristics of a new framework for understanding natural phenomena: a new concept of knowledge in the sense of a new paradigm, as highlighted in the well-known book by Kuhn (1962). Currently, Geographic Information Systems (GIS) are widespread and used in many applications. Their most important use is as decision support systems, but this requires combining the information within the GIS in an optimal manner, so that the decision-maker can extract the most relevant information when making a decision. Moreover, both the data stored in each layer of the GIS and the method used to fuse the information are inherently rife with uncertainty. Section 2 of this paper briefly introduces the GIS terminology required to grasp the thrust of the paper. Section 3 presents the inherent uncertainties, along with the new paradigm of fuzzy methods for modelling natural phenomena; more specifically, uncertainty in geoinformation is examined, along with the problems of implementing expert systems in GIS. Classification in a GIS allows for the reduction of information, and this classification is carried out on layers within a framework of uncertainty. Section 4 is dedicated to TE and is written in a manner similar to Section 2 (i.e., providing a brief introduction and outlining the terminology involved); it finishes with the Dempster rule, which is fundamental for fusing information in a GIS. Section 5 presents several case studies in which information gathered from the layers of a GIS was combined in order to make decisions; most of these case studies used TE for combining information.
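To make the Dempster rule mentioned above concrete, the following Python sketch combines two mass functions defined over subsets of a frame of discernment, as two GIS layers might each assign evidence to land-cover hypotheses. The land-cover classes and the specific mass values are hypothetical, chosen only to illustrate the mechanics; this is a minimal sketch, not a production implementation.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination.

    m1, m2: dicts mapping frozenset (hypothesis subset) -> mass.
    Returns the normalised combined mass function.
    """
    combined = {}
    conflict = 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b & c
        if a:  # non-empty intersection: mass supports a
            combined[a] = combined.get(a, 0.0) + mb * mc
        else:  # empty intersection: conflicting evidence
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("Sources are in total conflict")
    k = 1.0 - conflict  # normalisation factor
    return {a: mass / k for a, mass in combined.items()}

# Hypothetical evidence from two layers over {water, forest, urban}:
m1 = {frozenset({"water"}): 0.6,
      frozenset({"water", "forest"}): 0.3,
      frozenset({"water", "forest", "urban"}): 0.1}
m2 = {frozenset({"water"}): 0.5,
      frozenset({"forest", "urban"}): 0.2,
      frozenset({"water", "forest", "urban"}): 0.3}
fused = dempster_combine(m1, m2)
# Conflict here is 0.12, so {water} ends with 0.68 / 0.88 of the mass.
```

Note that the normalisation by 1 − K redistributes the conflicting mass over the surviving hypotheses; this is the step that makes the rule controversial when sources disagree strongly.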
English Conclusion
Several techniques have been used for performing classification in GIS: Endorsement Theory, neural networks, TE, and Bayesian methods. Classification is important because it represents the first step in abstraction, or the reduction of information, within a GIS. The obvious question remains: is there one technique that is better than the others for geographic applications in particular? It has been shown that for land cover with categorical data, Endorsement Theory is a better option than Bayes or TE (see Comber et al., 2004). Neural networks have been used several times for classification in a GIS. The performance of a neural network necessarily varies with the architecture of the net: the number of hidden layers, the number of hidden nodes within each hidden layer, the function used to normalise the input and output data, the learning parameter, the number of training iterations, the initial weight values, and so forth. The TE classifier also requires fitting some parameters, the most important being the assignation of evidence from the input data. It is therefore very difficult to compare methods for fusing information in a GIS, given the variety of geographic applications; even within the same application, comparison is difficult, due to the many parameters that have to be fine-tuned within the algorithms. MERCURY requires little user intervention and is not as sensitive to parameter specification as neural network inputs. The TE classifier also had significantly faster computing times than the neural network, but, as said before, one cannot rule out that a neural network with the correct number of layers and neurons could outperform TE, even at the cost of more computing time. We agree with Fisher's (2001) opinion that there is no one method that is better than another, but that they are broadly complementary.
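Since the discussion above compares the TE classifier with other techniques, a brief sketch of how a class label is actually read out of a combined mass function may help. Belief and plausibility are standard TE quantities; the frame, the mass values, and the decision rule (maximum belief over singleton hypotheses) are assumptions for illustration only.

```python
def belief(m, a):
    """Bel(A): total mass committed to subsets of A."""
    return sum(mass for b, mass in m.items() if b <= a)

def plausibility(m, a):
    """Pl(A): total mass not contradicting A."""
    return sum(mass for b, mass in m.items() if b & a)

def classify(m, frame):
    """Assign the singleton hypothesis with the highest belief."""
    return max(frame, key=lambda h: belief(m, frozenset({h})))

# Hypothetical combined masses for a single GIS cell:
m = {frozenset({"water"}): 0.55,
     frozenset({"water", "forest"}): 0.25,
     frozenset({"forest"}): 0.10,
     frozenset({"water", "forest", "urban"}): 0.10}
label = classify(m, ["water", "forest", "urban"])  # "water"
```

The gap between belief and plausibility for the chosen class (here Bel = 0.55, Pl = 0.90 for "water") quantifies the remaining uncertainty, which is precisely the information a Bayesian point estimate would collapse.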
Instead, he has said that there is an obligation on the part of the user of any method to compare it with others and to state which one works best for the application at hand. TE is a theory that has been applied to geographic science with some success. The main problem in the applications is the assignation of belief and uncertainty to the input layers; most of the time, intuition is the only criterion for this assignation. Here one must apply the rule that says, "If a system is fed low-quality data, it cannot be expected to produce high-quality results." Garbage in = garbage out. This is so even for systems based on theories with sound mathematical foundations. It could be said that the practice of information fusion for classification and decision-making with a GIS is in its infancy. Even though it has been applied several times in different fields, as has been shown in this paper, there are few studies that have presented comparisons with other methods; where such comparisons have been made, no exhaustive search over all the parameters has usually been carried out. Currently, in the application of TE to GIS, there is a gap between application and theory: usually, applications come before the theory. This is typical of immature research areas, and it is self-evident that a lot of work remains to be done on the fusion of information in a GIS.