Download English ISI article no. 29310
Persian translation of the article title

Sensitivity to evidence in Gaussian Bayesian networks using mutual information

English title
Sensitivity to evidence in Gaussian Bayesian networks using mutual information
Article code: 29310
Publication year: 2014
Length: 12 pages (English PDF)
Source

Publisher: Elsevier - Science Direct

Journal : Information Sciences, Volume 275, 10 August 2014, Pages 115–126

Translated keywords
Entropy, Evidence propagation, Gaussian Bayesian networks, Mutual information, Sensitivity analysis
English keywords
Entropy, Evidence propagation, Gaussian Bayesian network, Mutual information, Sensitivity analysis

English abstract

We introduce a methodology for sensitivity analysis of evidence variables in Gaussian Bayesian networks. Knowledge of the posterior probability distribution of the target variable in a Bayesian network, given a set of evidence, is desirable. However, the evidence set is not always determined in advance; in fact, additional information might be requested to improve the solution by reducing uncertainty. In this study we develop a procedure, based on Shannon entropy and information-theoretic measures, that allows us to prioritize information according to its utility in yielding a better result. Some examples illustrate the concepts and methods introduced.
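The "reducing uncertainty" criterion in the abstract can be made concrete for Gaussian variables, whose Shannon (differential) entropy depends only on the variance. A minimal sketch of that relationship (the function name and numbers are illustrative, not from the paper):

```python
import math

def gaussian_entropy(variance):
    """Differential entropy (in nats) of a univariate Gaussian N(mu, variance):
    H = 0.5 * ln(2 * pi * e * variance)."""
    return 0.5 * math.log(2 * math.pi * math.e * variance)

# Entropy depends only on the variance, so evidence that shrinks the
# posterior variance of the target variable also lowers its entropy.
h_prior = gaussian_entropy(4.0)      # prior variance 4
h_posterior = gaussian_entropy(1.0)  # posterior variance 1 after evidence
assert h_posterior < h_prior
```

Prioritizing evidence then amounts to preferring the observations that yield the largest entropy (i.e., variance) reduction in the target's posterior.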

English introduction

A Bayesian network is a probabilistic graphical model that represents the conditional dependencies among a set of random variables through a directed acyclic graph (DAG). Bayesian networks have become an increasingly popular representation for reasoning under uncertainty and are widely applied in diverse fields such as medical diagnosis, image recognition, and decision-making systems, among many others. Formally, a Bayesian network consists of a qualitative part and a quantitative part. The qualitative part is the DAG, whose nodes represent random variables that may be observable, latent, or a target variable of interest. The quantitative part specifies the conditional probability distribution of each node given its parents; together these allow us to compute the joint probability distribution of the model.

The aim of Bayesian network analysis is usually to obtain the conditional probability distribution of a target variable when a set of observable variables (evidence values) is available. Sometimes the variables defined as evidence are fixed in advance, but other times they vary from model to model. In this context, sensitivity analysis is a method for investigating the relationship between network inputs and the conditional distribution of the target variable, where the inputs can be either the parameters of the conditional probability distributions or the actual values taken by the observed variables.

There is a large body of literature on sensitivity analysis techniques for Bayesian networks, most of it addressing discrete Bayesian networks. For example, Malhas and Al Aghbari [16] introduced a score based on mutual information increases to discover new interesting patterns. Chan and Darwiche [4] presented a distance measure between the original distribution and a new one in which the parameters have been changed. Laskey [14] measured sensitivity by computing the partial derivatives of output probabilities with respect to given parameters. Kostal et al. [13] proposed measures of statistical dispersion based on Shannon and Fisher information. Castillo and Kjaerulff [3] developed a sensitivity analysis for Gaussian Bayesian networks (GBNs) using partial derivatives and symbolic propagation. Gómez-Villegas et al. [7], [8], [9] and [10] used the Kullback–Leibler divergence as a measure of sensitivity in GBNs.

Here we focus on a different aspect of sensitivity analysis. As mentioned above, in many real-life problems the set of evidence variables is not specified in advance; in fact, it is usual practice to collect as much information as possible. However, this information always has an associated cost, so it may be desirable to evaluate which of the available variables are most informative and useful for obtaining the best results. A key assumption in this paper is that a better result is achieved when the conditional probability distribution of the target variable has the lowest uncertainty, that is, the lowest entropy. We therefore use information theory to provide tools for prioritizing the available information so as to reduce the uncertainty of the target variable as far as possible.

The remainder of the article is structured as follows. In Section 2 we briefly review GBNs and show how propagation of observed values can be performed in this setting; we also introduce our working example. Section 3 presents general concepts of entropy, mutual information, and normalized measures. In Section 4 we first propose a procedure for studying sensitivity to evidence in GBNs and then perform a sensitivity analysis on the working example. The second contribution of the paper, presented in Section 5, extends this sensitivity analysis by incorporating normalized measures; results are given for the working example and a supplementary example. Finally, in Section 6 we draw conclusions.
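As background for the evidence propagation reviewed in Section 2: in a GBN, propagating observed values reduces to conditioning a multivariate Gaussian on the observed components. A sketch of that standard computation in NumPy (the function name and toy numbers are illustrative, not taken from the paper's working example):

```python
import numpy as np

def condition_gaussian(mu, sigma, obs_idx, obs_vals):
    """Posterior mean and covariance of the unobserved components of
    N(mu, sigma) given evidence X[obs_idx] = obs_vals, via the standard
    Gaussian conditioning formulas:
        mu_c    = mu_u + S_uo S_oo^{-1} (obs_vals - mu_o)
        sigma_c = S_uu - S_uo S_oo^{-1} S_ou
    """
    u = np.setdiff1d(np.arange(len(mu)), obs_idx)   # unobserved indices
    S_uu = sigma[np.ix_(u, u)]
    S_uo = sigma[np.ix_(u, obs_idx)]
    S_oo = sigma[np.ix_(obs_idx, obs_idx)]
    K = S_uo @ np.linalg.inv(S_oo)                  # regression coefficients
    mu_c = mu[u] + K @ (np.asarray(obs_vals) - mu[obs_idx])
    sigma_c = S_uu - K @ S_uo.T
    return mu_c, sigma_c

# Toy 3-variable network: observing X2 = 1.0 shrinks the posterior
# variances of X0 and X1, i.e. it reduces their uncertainty.
mu = np.zeros(3)
sigma = np.array([[1.0, 0.5, 0.3],
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])
mu_c, sigma_c = condition_gaussian(mu, sigma, np.array([2]), np.array([1.0]))
assert np.all(np.diag(sigma_c) < 1.0)
```

The posterior covariance does not depend on the observed values themselves, only on which variables are observed; this is what makes a variance/entropy-based prioritization of candidate evidence possible before the observations are collected.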

English conclusion

We proposed a new methodology to numerically quantify the contribution of each non-evidential variable to the reduction in uncertainty of a target variable; the conditional distribution of the variable of interest is then used to select the most informative variables.

The first proposed method is based on mutual information measures and yields a prioritization for requesting additional information. We showed that, given a set of evidence variables, any non-evidential variables whose mutual information with the target is close to zero should not be considered as evidence, because they add no information to the analysis. This is an important contribution, because it reduces the cost of modeling and data collection.

The second proposed method, an extension of the first, includes normalized measures of mutual information. Under some restrictions, this procedure not only prioritizes unobserved variables but can also compare the contribution of the same variable to a different target, or to the same target in different Bayesian networks. This method therefore provides more and better tools for analysis, although the restrictions should be taken into account.

We introduced a Bayesian network as an example to show the results of the first procedure in our sensitivity analysis; we then added a second Bayesian network to make comparisons using the second procedure.
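For jointly Gaussian variables, the mutual information behind the first method has a closed form in terms of variances, I(Y; X) = 0.5 ln(var(Y) / var(Y|X)), which makes the prioritization easy to sketch. The following is a rough illustration under that assumption, scoring each candidate pairwise against the target (function names, the tolerance, and the toy covariance are illustrative; the paper's actual procedure may condition on the full evidence set rather than on one variable at a time):

```python
import numpy as np

def gaussian_mi(sigma, target, candidate):
    """I(Y; X) for jointly Gaussian variables, read off the covariance
    matrix: I = 0.5 * ln(var(Y) / var(Y | X)), where
    var(Y | X) = var(Y) - cov(Y, X)**2 / var(X)."""
    var_y = sigma[target, target]
    var_y_given_x = var_y - sigma[target, candidate] ** 2 / sigma[candidate, candidate]
    return 0.5 * np.log(var_y / var_y_given_x)

def rank_candidates(sigma, target, candidates, tol=1e-6):
    """Rank non-evidential variables by mutual information with the target,
    dropping those with MI ~ 0 (they would add no information)."""
    scores = [(c, gaussian_mi(sigma, target, c)) for c in candidates]
    scores.sort(key=lambda cm: cm[1], reverse=True)
    return [(c, mi) for c, mi in scores if mi > tol]

# Toy covariance: X1 is strongly correlated with the target X0, X2 weakly,
# and X3 is independent (MI = 0), so X3 is dropped from the ranking.
sigma = np.array([[1.0, 0.8, 0.3, 0.0],
                  [0.8, 1.0, 0.2, 0.0],
                  [0.3, 0.2, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
ranked = rank_candidates(sigma, target=0, candidates=[1, 2, 3])
assert [c for c, _ in ranked] == [1, 2]
```

Dropping the zero-MI candidate mirrors the conclusion's point that such variables should not be requested as evidence, since collecting them incurs cost without reducing the target's uncertainty.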