Answering queries in hybrid Bayesian networks using importance sampling
|Article code||Publication year||English article||Persian translation||Word count|
|29173||2012||11-page PDF||available on order||not calculated|
Publisher : Elsevier - Science Direct (الزویر - ساینس دایرکت)
Journal : Decision Support Systems, Volume 53, Issue 3, June 2012, Pages 580–590
In this paper we propose an algorithm for answering queries in hybrid Bayesian networks where the underlying probability distribution is of class MTE (mixture of truncated exponentials). The algorithm is based on importance sampling simulation. We show how, like existing importance sampling algorithms for discrete networks, it is able to provide answers to multiple queries simultaneously using a single sample. The behaviour of the new algorithm is experimentally tested and compared with previous methods from the literature.
Highlights
► An algorithm for answering queries in hybrid Bayesian networks is proposed.
► The underlying probability distribution is of class MTE.
► The algorithm is based on importance sampling simulation.
► Answers to multiple queries can be computed simultaneously using a single sample.
► The performance of the algorithm is tested and compared with previous methods.
Bayesian networks have become a popular tool for representing uncertainty in decision support systems. A review of recent literature shows the variety of applications in which they have been successfully used. One of the main reasons for using them as the inference engine in a decision support system is that efficient reasoning algorithms can be designed that take advantage of their structure. Most of the methodological development around Bayesian networks has concentrated on the case in which all the variables involved are qualitative or discrete. However, decision support systems usually have to operate in domains described in terms of both discrete and continuous variables simultaneously. In such scenarios, it is always possible to discretise the continuous variables in order to use methods designed for discrete variables, but such a solution in general conveys a loss of information. Continuous and discrete variables can be handled simultaneously, with no need to discretise, in so-called hybrid Bayesian networks. The first advances in this field came with the definition of the Conditional Gaussian (CG) model. The limitations of this approach are the assumption of normality over the continuous variables, and also the fact that dependencies of discrete variables conditional on continuous ones are not allowed. This structural restriction is overcome in augmented Conditional Linear Gaussian (CLG) networks, where discrete nodes are allowed to have continuous parents by representing their conditional distributions as softmax functions. However, this model also relies on the normality assumption.
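The softmax conditional mentioned above can be made concrete with a short sketch. This is hypothetical illustrative code, not taken from the paper: it evaluates P(D = d | z) = exp(w_d·z + b_d) / Σ_k exp(w_k·z + b_k) for a discrete node D with continuous parents z, with weights and biases chosen arbitrarily for the example.

```python
import math

def softmax_cpd(weights, biases, z):
    """Conditional distribution of a discrete node given continuous parents z.

    weights: one weight vector per discrete state; biases: one bias per state.
    Returns P(D = d | z) for every state d.
    """
    scores = [sum(wi * zi for wi, zi in zip(w, z)) + b
              for w, b in zip(weights, biases)]
    m = max(scores)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Binary D with one continuous parent; parameter values are made up.
probs = softmax_cpd(weights=[(2.0,), (-2.0,)], biases=[0.0, 0.0], z=(0.5,))
```

With these parameters the first state gets probability exp(1)/(exp(1) + exp(-1)), i.e. a logistic function of the parent value, which is the binary special case of the softmax.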
Furthermore, exact inference is not possible in augmented CLG networks, and the proposed solution is based on a Gaussian approximation of the product of the Gaussian and softmax functions, which provides exact marginals for the discrete variables but is able to obtain exact values only for the first- and second-order moments of the distribution of the continuous variables. A more general proposal is based on the use of mixtures of truncated exponentials (MTEs), which impose no structural restriction and do not rely on the normality assumption. This model has been successfully applied to decision problems. An important feature of MTEs is that they are compatible with efficient exact inference algorithms such as the Shenoy–Shafer architecture and the variable elimination scheme. As MTEs are able to approximate a wide variety of probability distributions, they can be used as a general framework for carrying out inference in hybrid Bayesian networks, simply by approximating each conditional distribution in the network by an MTE and then using an exact inference algorithm. This approach has been analysed by solving a network involving Logistic and Gaussian distributions using MTEs, variational approximations, discretisation and Markov chain Monte Carlo. A recent approach, similar in essence to MTEs, is based on representing the distribution in a hybrid Bayesian network as a Mixture of Polynomials (MOPs). Both MTEs and MOPs have been generalised in a global framework for representing hybrid Bayesian networks, called Mixtures of Truncated Basis Functions (MoTBFs). However, even though MOPs have some advantages over MTEs, especially the ability to deal with a wider class of deterministic relationships, they still lack an algorithm for learning the models from data, whereas this issue has been solved for MTEs. Hence, MTEs can be used as an exact model and not only as an approximation of other distributions.
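To fix ideas, an MTE density in one dimension is a piecewise function where each piece has the form a_0 + Σ_i a_i·exp(b_i·x) over an interval. The following is a minimal sketch of such a potential; the class names and the example coefficients are hypothetical, chosen only to illustrate the functional form, and are not a normalised density.

```python
import math

class MTEPiece:
    """One piece of an MTE potential: a0 + sum_i a_i * exp(b_i * x) on [low, high)."""

    def __init__(self, low, high, a0, terms):
        self.low, self.high = low, high
        self.a0 = a0          # constant term of the piece
        self.terms = terms    # list of (a_i, b_i) exponential-term coefficients

    def __call__(self, x):
        if not (self.low <= x < self.high):
            return 0.0
        return self.a0 + sum(a * math.exp(b * x) for a, b in self.terms)

# Example: a two-piece MTE potential (coefficients are illustrative only).
pieces = [MTEPiece(0.0, 1.0, 0.1, [(0.5, 1.0)]),
          MTEPiece(1.0, 2.0, 0.1, [(3.7, -1.0)])]

def mte_potential(x):
    """Evaluate the piecewise MTE potential at x (zero outside [0, 2))."""
    return sum(p(x) for p in pieces)
```

Because each piece is a sum of exponentials over a bounded interval, the potential can be integrated in closed form piece by piece, which is what makes MTEs compatible with exact inference schemes such as Shenoy–Shafer and variable elimination.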
In that sense, MTEs behave as a nonparametric model, where no assumption is made about the underlying distribution. Even though Bayesian networks allow efficient inference algorithms to operate over them, it is known that exact probabilistic inference is an NP-hard problem. Furthermore, approximate probabilistic inference is also NP-hard if a given precision is required. For that reason, approximate algorithms that trade off complexity for accuracy have been developed for discrete Bayesian networks. An important class of such approximate algorithms is based on the importance sampling technique, which provides a flexible approach to constructing anytime reasoning algorithms. Inference in hybrid Bayesian networks with MTEs does not escape the above-mentioned complexity. If the model is learnt from a database, it can be too complex when the number of variables is high, and even using approximations, inference may become unfeasible if the model is complex enough. With this motivation, in this paper we propose an approximate algorithm for computing fast and accurate answers to precise queries in hybrid Bayesian networks with MTEs. The algorithm is based on importance sampling, and therefore it is an anytime algorithm in the sense that the accuracy of its results is proportional to the time it is allowed to use for computing the answer. We show how our proposal outperforms the previous state-of-the-art method for approximate inference with MTEs. The rest of the paper is organised as follows. We establish the notation and define some preliminary concepts in Section 2. The problem addressed here is formally posed in Section 3. The core of the methodological contributions is in Section 4, and the details of the algorithm can be found in Section 5. The experimental analysis carried out to test the performance of the algorithm is reported in Section 6. Concluding remarks are given in Section 7.
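The importance sampling idea underlying the proposal can be sketched in a few lines. This is a generic self-normalised importance sampling estimator on a toy one-dimensional target, not the paper's algorithm: we draw from a simple proposal q, weight each draw by f(y)/q(y), and estimate a query probability as a ratio of weighted sums. All distributions and names here are assumptions chosen for illustration.

```python
import math
import random

random.seed(0)

def target_pdf(y):
    """Toy target: truncated exponential on [0, 1], f(y) proportional to exp(-2y)."""
    z = (1.0 - math.exp(-2.0)) / 2.0      # normalising constant
    return math.exp(-2.0 * y) / z if 0.0 <= y <= 1.0 else 0.0

def is_estimate(event, n=100_000):
    """Self-normalised importance sampling estimate of P(event) under target_pdf,
    using Uniform(0, 1) as the proposal, so q(y) = 1."""
    num = den = 0.0
    for _ in range(n):
        y = random.uniform(0.0, 1.0)
        w = target_pdf(y) / 1.0           # importance weight f(y) / q(y)
        num += w * (1.0 if event(y) else 0.0)
        den += w
    return num / den

p = is_estimate(lambda y: y > 0.5)        # close to (e^-1 - e^-2) / (1 - e^-2)
```

The anytime character is visible in `n`: stopping the loop earlier still yields a valid (if noisier) estimate, and accuracy improves the longer the sampler runs.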
English conclusion
We have introduced a method for solving multiple queries in hybrid Bayesian networks with MTEs. The method is based on importance sampling, which makes it an anytime algorithm. The algorithm is able to compute answers to multiple queries using a single sample. We have shown that the variance remains bounded if the same sample is also used to compute the numerator and denominator in each query. The experiments conducted illustrate the behaviour of the proposed algorithm, and they support the idea that the IS algorithm outperforms the two algorithms previously used for carrying out probabilistic reasoning in hybrid Bayesian networks with MTEs. Therefore, the methodology introduced here expands the class of problems that can be handled using hybrid Bayesian networks and, more precisely, makes the MTE model more versatile by increasing the efficiency of probabilistic inference tasks. We expect to continue this line of research by developing methods for answering more complex queries, for instance finding the most probable explanation of an observed fact in terms of a set of target variables, a task known as abductive inference. We also plan to study the application of the proposed algorithm to MOPs. The main difference would lie in Alg. 1, since in the case of MOPs each term may oscillate within an interval, while MTEs are smoother.
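The claim that a single sample can answer multiple queries can be illustrated with a sketch: draw the weighted sample once, then reuse the same pairs (y_i, w_i) for every query, with numerator and denominator computed from the same sample as described above. This is a hypothetical toy example with a one-dimensional truncated-exponential target, not the paper's multi-query algorithm.

```python
import math
import random

random.seed(1)

def target_pdf(y):
    """Toy target: truncated exponential on [0, 1], f(y) proportional to exp(-2y)."""
    z = (1.0 - math.exp(-2.0)) / 2.0
    return math.exp(-2.0 * y) / z if 0.0 <= y <= 1.0 else 0.0

# Draw the weighted sample ONCE, with a Uniform(0, 1) proposal (q(y) = 1).
sample = []
for _ in range(50_000):
    y = random.uniform(0.0, 1.0)
    sample.append((y, target_pdf(y)))     # (value, importance weight f(y)/q(y))
total_w = sum(w for _, w in sample)

def query(event):
    """Answer one query by reusing the shared sample: numerator and
    denominator are computed from the same weighted draws."""
    return sum(w for y, w in sample if event(y)) / total_w

p_low  = query(lambda y: y < 0.25)        # both queries reuse the same sample
p_high = query(lambda y: y > 0.75)
```

Sharing one sample across queries amortises the sampling cost, and using the same draws in numerator and denominator is what keeps the variance of the ratio estimator bounded.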