The effect of block parameter perturbations in Gaussian Bayesian networks: Sensitivity and robustness
Article code | Publication year | English article length
---|---|---
29198 | 2013 | 20 pages (PDF)

Publisher : Elsevier - Science Direct
Journal : Information Sciences, Volume 222, 10 February 2013, Pages 439–458
English abstract
In this work we study the effects of model inaccuracies on the description of a Gaussian Bayesian network with a set of variables of interest and a set of evidential variables. Using the Kullback–Leibler divergence measure, we compare the output of two different networks after evidence propagation: the original network, and a network with perturbations representing uncertainties in the quantitative parameters. We describe two methods for analyzing the sensitivity and robustness of a Gaussian Bayesian network on this basis. In the sensitivity analysis, different expressions are obtained depending on which set of parameters is considered inaccurate. This fact makes it possible to determine the set of parameters that most strongly disturbs the network output. If all of the divergences are small, we can conclude that the network output is insensitive to the proposed perturbations. The robustness analysis is similar, but considers all potential uncertainties jointly. It thus yields only one divergence, which can be used to confirm the overall sensitivity of the network. Some practical examples of this method are provided, including a complex, real-world problem.
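For reference, the comparison described above rests on the standard closed-form KL divergence between two n-dimensional Gaussian densities. Writing N(μ, Σ) for the original propagated output and N(μ′, Σ′) for the perturbed one (generic symbols, not necessarily the paper's own notation):

```latex
\mathrm{KL}\bigl(N(\mu,\Sigma)\,\big\|\,N(\mu',\Sigma')\bigr)
  = \frac{1}{2}\left[\operatorname{tr}\!\left(\Sigma'^{-1}\Sigma\right)
  + (\mu'-\mu)^{\top}\Sigma'^{-1}(\mu'-\mu)
  - n + \ln\frac{\det\Sigma'}{\det\Sigma}\right]
```

The divergence vanishes only when the two outputs coincide, which is what makes it usable as a sensitivity measure.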
English introduction
A Bayesian network (BN) is a probabilistic graphical model used to study a set of variables with a known dependence structure. Probabilistic graphical models arose in the field of AI as a way to model uncertainty, and numerous techniques have been developed to improve both the models and their learning. Learning an optimal Bayesian network classifier is an NP-hard problem, so algorithms have been developed that improve on the traditional models (see [17]). Schemes for refining rules, and learning algorithms that reduce computational cost while improving accuracy, can be found in [31] and [32]. Computational intelligence techniques such as neural networks, support vector machines and extreme learning machines (see [15]) have been used in many applications. BNs, the models of interest here, have proven to be among the best techniques for studying real-world problems in a number of complex domains, including medical diagnosis and dynamic systems.

A BN has two components, as shown in Definition 1. The qualitative part is a directed acyclic graph (DAG) showing the dependence structure of the variables. The quantitative part consists of the conditional probability distributions assigned to the problem variables given their parents in the DAG (a concrete Gaussian example appears in the sketch below). Building a BN is a complex task because it requires the specification of a large number of parameters subject to cognitive biases [2]. The parameters are generally estimated from statistical data or assessed by human experts in the domain of application. “Experts are often reluctant to assess all the parameters required, feeling that they are unable to give assessments with a high level of accuracy” [6]. As a consequence of incomplete data and partial knowledge of the domain, the assessments obtained are inevitably inaccurate [30]. With inaccurate parameters, the network output after evidence propagation may also be inaccurate, depending on the sensitivity of the model. Sensitivity analyses in BNs are therefore necessary to evaluate the effect of uncertainty in the network and to determine which parameter values yield accurate network outputs.

Sensitivity analysis is a general technique for investigating the robustness of the output of a mathematical model, and it is performed for a variety of purposes. The practicability of conducting such an analysis of a probabilistic network has recently been studied extensively, resulting in a variety of new insights and effective methods; a survey of some of these results is given by van der Gaag et al. [30]. Authors such as Laskey [21], Coupé et al. [6] and [7], Chan and Darwiche [5] and Bednarski et al. [1] have studied sensitivity in discrete BNs, where the parameters are the conditional probabilities of the variables given their parents in the DAG. All of these papers deal with variations in one parameter at a time, holding the others fixed, and with only one variable of interest, the network output being the probability distribution of that variable given the evidence. An analysis that considers the effect of varying a single parameter value while holding the others fixed is called a one-way sensitivity analysis; the analyses discussed next are of this kind.
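Since the networks studied in this paper are Gaussian, it may help to make the quantitative part concrete before reviewing those analyses. The following minimal sketch builds the joint normal distribution implied by the structural equations of a three-node GBN; the node structure, regression coefficients and variances are all invented for illustration:

```python
import numpy as np

# Toy GBN with DAG  X1 -> X3 <- X2  (all values illustrative).
mu = np.array([2.0, 3.0, 0.0])   # marginal means
D = np.diag([1.0, 2.0, 0.5])     # conditional (residual) variances

# B[i, j] = regression coefficient of X_{i+1} in the equation of X_{j+1};
# B is strictly upper triangular in a topological order of the DAG.
B = np.zeros((3, 3))
B[0, 2] = 1.5    # X3 depends on X1
B[1, 2] = -0.8   # X3 depends on X2

# Structural equations  X - mu = B^T (X - mu) + eps,  eps ~ N(0, D),
# give  X - mu = (I - B^T)^{-1} eps,  hence the joint covariance:
A = np.linalg.inv(np.eye(3) - B.T)
Sigma = A @ D @ A.T   # the joint distribution is N(mu, Sigma)
```

Evidence propagation and the divergence computations discussed below all operate on a joint pair (mu, Sigma) obtained in this way.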
Laskey [21] presents a methodology for the analytic computation of sensitivity values in discrete BNs when there is only one variable of interest in the problem. These sensitivity values measure the impact of small changes in a parameter on the probability of interest, computed as partial derivatives of the output probabilities with respect to the inaccurate parameters. Coupé et al. [6] provide an elicitation procedure for assessing parameter values in which sensitivity analyses and refinements of the parameter assessments are performed alternately. Because such analyses are highly time-consuming, in [7] the parameters that cannot affect the output are identified. Moreover, for discrete BNs, where the parameters are conditional probabilities, [5] proposes a distance measure between two probability distributions that quantifies the amount of change incurred when moving from one distribution to the other. The authors contrast the proposed measure with classical measures such as the Kullback–Leibler (KL) divergence [20] and show that the belief change between two states of belief can be unbounded even when their KL divergence tends to zero. They show, however, that the KL divergence can be used to compute the average change in beliefs. Despite this undesirable behavior in discrete BNs, it is shown in [9] that the KL divergence describes the effects of parameter perturbations in Gaussian models well. Finally, Bednarski et al. [1] focus their one-way sensitivity analysis on identifying the set of sensitivities that affect the variable of interest.

When a set of parameters is of interest, the objective is to analyze the effects of simultaneous variations in the values of those parameters; such an analysis is called an n-way sensitivity analysis. Authors such as Kjærulff and van der Gaag [19] and Chan et al. [4] introduce n-way sensitivity analyses to identify multiple parameter changes in discrete BNs. In [4] only one variable of interest is considered, while [19] works with a set of interest variables.

The literature on sensitivity analysis in Gaussian Bayesian networks (GBNs) is not extensive. Authors such as Castillo and Kjærulff [3] and Gómez-Villegas et al. [9] and [10] have studied the problem of uncertainty in parameter assignments in GBNs. In [3] a one-way sensitivity analysis based on [21] is proposed: the impact of small changes in the network parameters, specifically in the mean vector and covariance matrix, is studied by evaluating local aspects of the distribution such as location and dispersion, with only one variable of interest. In [9] a one-way sensitivity analysis is proposed to evaluate the effects of both small and large changes in the network parameters using a global sensitivity measure. As in [5], this is a distance measure between two probability distributions, but in the GBN context the measure used is the KL divergence. Moreover, in [10] the expressions obtained for the sensitivity analysis were evaluated in the limit, considering extreme changes in the network parameters. All of the papers mentioned above deal with variations in one parameter at a time, holding the others fixed; both analyses are therefore one-way sensitivity analyses.

The present paper generalizes the sensitivity analysis of [9] in two ways: first, by developing an n-way version of the analysis to study the effects of perturbations in a set of parameters; and second, by considering a GBN with several variables of interest and several evidential variables.
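To make the quantity being generalized concrete, here is a schematic NumPy sketch of the pipeline the paper analyzes: propagate the same evidence through the original and a perturbed model, then compare the two outputs with the KL divergence. All numbers are invented, and the perturbation (a shift of the evidential mean) is an arbitrary illustrative choice:

```python
import numpy as np

def propagate(mu, Sigma, iy, ie, e):
    """Exact evidence propagation in a joint Gaussian: the conditional
    distribution of the interest variables Y given evidence E = e."""
    K = Sigma[np.ix_(iy, ie)] @ np.linalg.inv(Sigma[np.ix_(ie, ie)])
    mu_c = mu[iy] + K @ (e - mu[ie])
    Sigma_c = Sigma[np.ix_(iy, iy)] - K @ Sigma[np.ix_(iy, ie)].T
    return mu_c, Sigma_c

def kl_gauss(mu0, S0, mu1, S1):
    """Closed-form KL(N(mu0, S0) || N(mu1, S1))."""
    n, S1inv, d = len(mu0), np.linalg.inv(S1), mu1 - mu0
    return 0.5 * (np.trace(S1inv @ S0) + d @ S1inv @ d - n
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Illustrative joint over (Y1, Y2, E1): interest indices iy, evidential ie.
mu = np.array([1.0, 2.0, 0.0])
Sigma = np.array([[2.0, 0.5, 0.8],
                  [0.5, 1.0, 0.3],
                  [0.8, 0.3, 1.5]])
iy, ie = [0, 1], [2]
e = np.array([1.0])

# Perturbed model: uncertainty in the evidential mean (arbitrary shift).
mu_pert = mu.copy()
mu_pert[2] += 0.4

out_orig = propagate(mu, Sigma, iy, ie, e)
out_pert = propagate(mu_pert, Sigma, iy, ie, e)
print(kl_gauss(*out_pert, *out_orig))  # divergence of perturbed vs. original output
```

A small divergence for every perturbation under consideration is what would let one call the output insensitive to those perturbations.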
When analyzing the sensitivity of the network, we normally select a set of parameters to be reviewed. But it can also be necessary to determine the simultaneous impact of all inaccuracies on the whole network; with this aim, we propose a robustness analysis. This paper therefore offers two analysis methods: one for sensitivity and another for robustness.

Other works have focused on uncertainty about the arcs of the DAG that describes the network. Authors such as Renooij [28] for discrete BNs and Gómez-Villegas et al. [11] for GBNs develop analyses of the effect of adding or removing an arc of the DAG, thereby changing the dependence structure of the network. In both cases, the KL divergence is used to compare probability distributions. Thus, in a general framework, we compute the KL divergence of the network output after evidence propagation under two different models, the original and the perturbed, to evaluate the effect of inaccuracies in the assigned parameters. If all of the divergences are small, we can conclude that the network is not sensitive to the proposed perturbations. A similar methodology is developed for the robustness analysis, but it works with a single perturbed model in which all inaccuracies are considered at the same time. Both analyses are applied to a GBN with various types of uncertainty, and a complex practical problem dealing with the structural reliability of a building is also studied.

This paper is organized as follows. In Section 2 we introduce BNs and GBNs. In Section 3 we describe our proposed methodology for evaluating the sensitivity and robustness of a GBN. In Section 4 we present the results of these methods. In Section 5 we introduce a simple GBN and study its sensitivity and robustness under two different cases of uncertainty. In Section 6 we apply the proposed methodology to a complex network. Finally, Section 7 summarizes our conclusions; proofs with details of the calculations behind the proposed results are given in Appendix A.
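The difference between the two analyses can be summarized in a few lines of bookkeeping. In this simplified sketch the perturbed outputs are written down directly rather than obtained by propagating perturbed input parameters through a network, as the paper does, and all perturbation sizes are arbitrary; it only illustrates per-block divergences versus a single joint divergence:

```python
import numpy as np

def kl_gauss(mu0, S0, mu1, S1):
    """Closed-form KL(N(mu0, S0) || N(mu1, S1))."""
    n, S1inv, d = len(mu0), np.linalg.inv(S1), mu1 - mu0
    return 0.5 * (np.trace(S1inv @ S0) + d @ S1inv @ d - n
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Output of the original network after evidence propagation
# (illustrative numbers standing in for the propagated result).
mu = np.array([1.0, 2.0])
S = np.array([[1.0, 0.3], [0.3, 2.0]])

# Sensitivity analysis: one perturbed output per parameter block,
# here a mean-block perturbation and a covariance-block perturbation.
perturbed = {
    "mean block":       (mu + np.array([0.5, 0.0]), S),
    "covariance block": (mu, S + np.diag([0.2, 0.2])),
}
divs = {name: kl_gauss(m, s, mu, S) for name, (m, s) in perturbed.items()}
worst = max(divs, key=divs.get)  # block that most disturbs the output
print(divs, "worst block:", worst)

# Robustness analysis: all perturbations applied jointly, one divergence.
joint = kl_gauss(mu + np.array([0.5, 0.0]), S + np.diag([0.2, 0.2]), mu, S)
print("joint divergence:", joint)
```

The sensitivity analysis thus returns one divergence per perturbed block, identifying the worst offender, while the robustness analysis returns the single divergence of the jointly perturbed model.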
English conclusion
This paper has presented a new method for analyzing the sensitivity and robustness of a GBN when some of the parameters describing the quantitative part of the network are inaccurate or uncertain. The KL divergence is used to compare the overall behaviors of the original network and a perturbed network, thereby evaluating the effect of changes in the network parameters after evidence propagation for the variables of interest. The proposed method is simple to calculate for any GBN and can handle any kind of perturbation or inaccuracy in the network parameters; it is therefore possible to study either large or small perturbations of the uncertain parameters.

The sensitivity analysis considers different sets of parameters, depending on which kinds of variables are perturbed (interest or evidential) and on whether the uncertainties lie in their mean vector or their covariance matrix. Each case yields a different divergence, making it possible to determine which set of variables most strongly disturbs the network output after evidence propagation. When the divergences obtained by the sensitivity analysis are small, the network may be considered robust under the proposed perturbations. To confirm this result, we also develop a robustness analysis method that evaluates all of the perturbations at the same time.

An open question remains: how large must the KL divergence be before the network should be judged sensitive? We consider that an accurate answer cannot be given until the divergence has been used more extensively in practical contexts covering different uncertainty situations.