Learning Bayesian networks for solving real-world problems
Bayesian networks, which provide a compact graphical way to express complex probabilistic relationships among several random variables, are rapidly becoming the tool of choice for dealing with uncertainty in knowledge-based systems. However, approaches based on Bayesian networks have often been dismissed as unfit for many real-world applications, since probabilistic inference is intractable for most problems of realistic size, and algorithms for learning Bayesian networks impose the unrealistic requirement that datasets be complete. In this thesis, I present practical solutions to these two problems and demonstrate their effectiveness on several real-world problems. The solution proposed to the first problem is to learn selective Bayesian networks, i.e., networks that use only a subset of the given attributes to model a domain. The aim is to learn networks that are smaller, and hence computationally simpler to evaluate, but that retain the performance of networks induced using all attributes. I present two methods for inducing selective Bayesian networks from data and evaluate them on several different problems. Both methods are shown to induce selective networks that are not only significantly smaller and computationally simpler to evaluate, but that also perform as well as, or better than, networks using all attributes. To address the second problem, I propose a principled method, based on the EM algorithm, for learning both Bayesian network structure and probabilities from incomplete data, and evaluate its performance on several datasets with different amounts of missing data and different assumptions about the missing-data mechanism. The proposed algorithm is shown to induce Bayesian networks that are very close to the actual underlying model. Finally, I apply both methods to the task of diagnosing acute abdominal pain, a notoriously difficult, high-dimensional domain characterized by a large number of attributes and substantial missing data.
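The abstract does not describe the EM-based learning procedure in detail; the following is a minimal illustrative sketch of the underlying idea for parameter learning only, on an assumed toy network A → B with values for B missing completely at random. The ground-truth parameters, sample sizes, and missing rate are all invented for the example; the thesis's actual algorithm also learns network structure, which this sketch does not attempt.

```python
import random

random.seed(0)

# Assumed ground-truth parameters of a toy network A -> B (for illustration only).
P_A = 0.7
P_B_GIVEN_A = {1: 0.9, 0: 0.2}

def sample(n, missing_rate=0.3):
    """Draw n cases from the toy network, hiding some values of B."""
    data = []
    for _ in range(n):
        a = 1 if random.random() < P_A else 0
        b = 1 if random.random() < P_B_GIVEN_A[a] else 0
        if random.random() < missing_rate:
            b = None  # B unobserved (missing completely at random)
        data.append((a, b))
    return data

def em(data, iters=50):
    """Estimate P(A) and P(B|A) from incomplete data via EM."""
    p_a, p_b = 0.5, {1: 0.5, 0: 0.5}  # arbitrary initial guesses
    for _ in range(iters):
        # E-step: expected sufficient statistics, replacing each missing B
        # with its posterior probability under the current parameters.
        n_a1 = 0.0
        n_b1 = {1: 0.0, 0: 0.0}
        n_tot = {1: 0.0, 0: 0.0}
        for a, b in data:
            n_a1 += a
            n_tot[a] += 1
            n_b1[a] += p_b[a] if b is None else b
        # M-step: re-estimate the parameters from the expected counts.
        p_a = n_a1 / len(data)
        p_b = {a: n_b1[a] / n_tot[a] for a in (0, 1)}
    return p_a, p_b

p_a, p_b = em(sample(5000))
print(round(p_a, 2), round(p_b[1], 2), round(p_b[0], 2))
```

With enough data, the estimates recovered from the incomplete sample land close to the assumed generating parameters, mirroring the abstract's claim that the induced networks are very close to the underlying model.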
Several researchers have argued that the simplest Bayesian network, the naive Bayesian classifier, is optimal for this problem. My experiments on two datasets in this domain show that selective Bayesian networks not only use a small fraction of the attributes, but also significantly outperform other methods, including the naive Bayesian classifier.
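For intuition about how a selective classifier can discard attributes without losing accuracy, here is a small sketch combining a naive Bayesian classifier with greedy forward attribute selection on synthetic data. The wrapper-style selection heuristic, the data generator, and all parameters are assumptions for illustration; they are not the specific induction methods evaluated in the thesis.

```python
import random
from collections import defaultdict

random.seed(1)

def make_data(n):
    """Binary class c with two informative attributes and three noise attributes."""
    rows = []
    for _ in range(n):
        c = random.randint(0, 1)
        x = [
            c if random.random() < 0.85 else 1 - c,  # informative
            c if random.random() < 0.75 else 1 - c,  # informative
            random.randint(0, 1),                    # pure noise
            random.randint(0, 1),
            random.randint(0, 1),
        ]
        rows.append((x, c))
    return rows

def train_nb(rows, attrs):
    """Naive Bayes over only the selected attributes, with Laplace smoothing."""
    class_n = defaultdict(int)
    cond = defaultdict(lambda: defaultdict(int))
    for x, c in rows:
        class_n[c] += 1
        for i in attrs:
            cond[(i, c)][x[i]] += 1
    def predict(x):
        best, best_p = None, -1.0
        for c in class_n:
            p = class_n[c] / len(rows)
            for i in attrs:
                p *= (cond[(i, c)][x[i]] + 1) / (class_n[c] + 2)
            if p > best_p:
                best, best_p = c, p
        return best
    return predict

def accuracy(pred, rows):
    return sum(pred(x) == c for x, c in rows) / len(rows)

train, test = make_data(2000), make_data(1000)

# Greedy forward selection: repeatedly add the attribute that most improves
# held-out accuracy; stop when no remaining attribute helps.
selected = []
best_acc = accuracy(train_nb(train, selected), test)
while True:
    gains = [(accuracy(train_nb(train, selected + [i]), test), i)
             for i in range(5) if i not in selected]
    if not gains:
        break
    acc, i = max(gains)
    if acc <= best_acc:
        break
    selected.append(i)
    best_acc = acc
print(sorted(selected), round(best_acc, 3))
```

On this toy data the procedure keeps only the informative attributes and discards the noise ones, illustrating the selective-network idea: a much smaller model that matches or beats the all-attribute classifier.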
Computer science|Artificial intelligence
Singh, Moninder, "Learning Bayesian networks for solving real-world problems" (1998). Dissertations available from ProQuest. AAI9829991.