Finding Latent Causes in Causal Networks: an Efficient Approach Based on Markov Blankets

Time and sample efficient discovery of Markov blankets and direct causal relations

Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining - KDD '03, 2003

Abstract: Data mining with Bayesian network learning has two important characteristics: first, under certain conditions the learned edges between variables correspond to causal influences, and second, for every variable T in the network a special subset (the Markov blanket) is identifiable ...
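The Markov blanket the abstract refers to, namely a variable's parents, children, and the children's other parents (spouses), can be read directly off a DAG's parent sets. A minimal sketch; the toy network and node names below are hypothetical, not the paper's example:

```python
# Markov blanket of T in a DAG: parents, children, and children's other
# parents (spouses). `parents` maps each node to its set of parents.
def markov_blanket(parents, t):
    children = {v for v, ps in parents.items() if t in ps}
    spouses = {p for c in children for p in parents[c]} - {t}
    return set(parents[t]) | children | spouses

# Hypothetical toy network: A -> T -> C <- B, B -> D
parents = {
    "A": set(), "B": set(),
    "T": {"A"},
    "C": {"T", "B"},   # B is a spouse of T via their common child C
    "D": {"B"},
}
print(sorted(markov_blanket(parents, "T")))  # ['A', 'B', 'C']
```

Conditioning on this set renders T independent of every remaining variable, which is what makes the blanket useful for feature selection.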

Learning causal networks from data

1994


Recursive Causal Structure Learning in the Presence of Latent Variables and Selection Bias

2021

We consider the problem of learning the causal MAG of a system from observational data in the presence of latent variables and selection bias. Constraint-based methods are one of the main approaches for solving this problem, but the existing methods are either computationally impractical when dealing with large graphs or lacking completeness guarantees. We propose a novel computationally efficient recursive constraint-based method that is sound and complete. The key idea of our approach is that at each iteration a specific type of variable is identified and removed. This allows us to learn the structure efficiently and recursively, as this technique reduces both the number of required conditional independence (CI) tests and the size of the conditioning sets. The former substantially reduces the computational complexity, while the latter results in more reliable CI tests. We provide an upper bound on the number of required CI tests in the worst case. To the best of our knowledge, thi...
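Constraint-based methods of the kind described above rest on conditional independence (CI) tests, and the abstract's point that smaller conditioning sets give more reliable tests can be illustrated with the standard Fisher-z test on a first-order partial correlation. A minimal sketch; the correlations and sample size are hypothetical inputs, not the paper's method:

```python
import math

# Fisher-z CI test of X ⟂ Y | Z from pairwise correlations, using the
# first-order partial correlation recursion. n is the sample size.
def partial_corr(r_xy, r_xz, r_yz):
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

def ci_test_stat(r_xy, r_xz, r_yz, n):
    r = partial_corr(r_xy, r_xz, r_yz)
    z = 0.5 * math.log((1 + r) / (1 - r))  # Fisher z-transform
    # test statistic sqrt(n - |S| - 3) * |z|, here |S| = 1
    return math.sqrt(n - 1 - 3) * abs(z)

# If X and Y are correlated only through Z, the partial correlation vanishes:
print(partial_corr(0.35, 0.7, 0.5))  # 0.0
```

The sqrt(n - |S| - 3) factor shows why large conditioning sets S hurt: the effective sample size shrinks, so removing variables recursively, as the paper proposes, keeps the tests well powered.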

Learning Causal Networks from Data: A Survey and a New Algorithm for Recovering Possibilistic Causal Networks

Ai Communications, 1997

Causal concepts play a crucial role in many reasoning tasks. Organised as a model revealing the causal structure of a domain, they can guide inference through relevant knowledge. This is an especially difficult kind of knowledge to acquire, so some methods for automating the induction of causal models from data have been put forth. Here we review those that have a graph representation. Most work has been done on the problem of recovering belief nets from data but some extensions are appearing that claim to exhibit a true causal semantics. We will review the analogies between belief networks and "true" causal networks and to what extent methods for learning belief networks can be used in learning causal representations. Some new results in recovering possibilistic causal networks will also be presented.

Causal discovery on high dimensional data

Applied Intelligence, 2014

Existing causal discovery algorithms are usually neither effective nor efficient enough on high-dimensional data, because high dimensionality reduces discovery accuracy and increases computational complexity. To alleviate these problems, we present a three-phase approach that learns the structure of nonlinear causal models by taking advantage of a feature selection method and two state-of-the-art causal discovery methods. In the first phase, a greedy search based on Max-Relevance and Min-Redundancy is employed to discover the candidate causal set, and a rough skeleton of the causal network is generated accordingly. In the second phase, a constraint-based method is applied to refine the rough skeleton into an accurate one. In the third phase, the direction-learning algorithm IGCI is applied to orient the causalities in the accurate skeleton. The experimental results show that the proposed approach is both effective and scalable, with particularly interesting findings on high-dimensional data.
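The first phase's Max-Relevance Min-Redundancy criterion can be sketched for discrete data as a greedy search over mutual-information scores: at each step, pick the feature with the highest relevance to the target minus its average redundancy with already-selected features. A sketch under the assumption of discrete variables, not the authors' implementation:

```python
import math
from collections import Counter

# Mutual information between two discrete sequences, from empirical counts.
def mutual_info(xs, ys):
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * math.log((c / n) / (px[x] / n * py[y] / n))
               for (x, y), c in pxy.items())

# Greedy mRMR: maximize MI(feature, target) minus mean MI with selected.
# `data` is a list of rows; `target` is a column index.
def mrmr(data, target, k):
    cols = [j for j in range(len(data[0])) if j != target]
    def col(j):
        return [row[j] for row in data]
    tgt = col(target)
    selected = []
    while len(selected) < k and cols:
        def score(j):
            red = (sum(mutual_info(col(j), col(s)) for s in selected)
                   / len(selected)) if selected else 0.0
            return mutual_info(col(j), tgt) - red
        best = max(cols, key=score)
        selected.append(best)
        cols.remove(best)
    return selected

# Hypothetical data: column 0 determines the target (column 2), column 1 is noise.
data = [[0, 0, 0], [0, 1, 0], [1, 0, 1], [1, 1, 1]]
print(mrmr(data, target=2, k=1))  # [0]
```

The redundancy term is what distinguishes mRMR from plain relevance ranking: a second feature that merely duplicates the first is penalized even if it is individually informative.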

A sound and complete algorithm for learning causal models from relational data

The PC algorithm learns maximally oriented causal Bayesian networks. However, there is no equivalent complete algorithm for learning the structure of relational models, a more expressive generalization of Bayesian networks. Recent developments in the theory and representation of relational models support lifted reasoning about conditional independence. This enables a powerful constraint for orienting bivariate dependencies and forms the basis of a new algorithm for learning structure. We present the relational causal discovery (RCD) algorithm that learns causal relational models. We prove that RCD is sound and complete, and we present empirical results that demonstrate effectiveness.

Causal Structure Learning: a Bayesian approach based on random graphs

A random graph is a random object that takes its values in the space of graphs. We take advantage of the expressiveness of graphs to model the uncertainty about the existence of causal relationships within a given set of variables. We adopt a Bayesian point of view in order to capture a causal structure via interaction and learning with a causal environment. We test our method in two different scenarios, and the experiments confirm that our technique can learn a causal structure. The experiments and results for the first test scenario demonstrate the usefulness of our method for learning a causal structure as well as the optimal action; the second experiment shows that our proposal manages to learn the underlying causal structure of several tasks with different sizes and causal structures.

Three algorithms for causal learning

2010

Dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Computer Science, The University of New Mexico, Albuquerque, New Mexico, December 2010. Three Algorithms for Causal Learning, by Roshan Ram Rammohan. B.E., Bangalore University, 1999; M.S., Computer Science, University of New Mexico, 2006; Ph.D., Computer Science, University of New Mexico, 2010. Abstract: The field of causal learning has grown in the past decade, establishing itself as a major focus in artificial intelligence research. Traditionally, approaches to causal learning are split into two areas. One area involves the learning of structures from observational data alone, and the second involves the methodologies of conducting and learning from experiments. In this dissertation, I investigate three different aspects of causal learning, all of which are based on the causal Bayesian network framework. Constraint-based structure search algorithms that learn partiall...

Discovery of causal rules using partial association

Discovering causal relationships in large databases of observational data is challenging. The pioneering work in this area was rooted in the theory of Bayesian network (BN) learning, which, however, is an NP-complete problem. Hence, several constraint-based algorithms have been developed to discover causal relationships in large databases efficiently. These methods usually use the idea of BN learning, directly or indirectly, and focus on causal relationships with single cause variables. In this paper, we propose an approach to mine causal rules in large databases of binary variables. Our method expands the scope of causality discovery to causal relationships with multiple cause variables, and we use partial association tests to exclude non-causal associations, ensuring the high reliability of the discovered causal rules. Furthermore, an efficient algorithm is designed to run these tests in large databases. We assess the method on a set of real-world diagnostic data. The results show that our method can effectively discover interesting causal rules in large databases.
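The partial association idea can be illustrated with the classical Cochran-Mantel-Haenszel statistic, which tests whether a binary cause-effect association persists within strata of the remaining variables. A sketch with hypothetical counts; the paper's exact test procedure is not specified here:

```python
# Cochran-Mantel-Haenszel test of partial association for stratified
# 2x2 tables. Each table is hypothetical counts (a, b, c, d) =
# (exposed & outcome+, exposed & outcome-, unexposed & outcome+,
#  unexposed & outcome-) within one stratum.
def cmh_statistic(tables):
    num, var = 0.0, 0.0
    for a, b, c, d in tables:
        n = a + b + c + d
        num += a - (a + b) * (a + c) / n  # observed minus expected count
        var += ((a + b) * (c + d) * (a + c) * (b + d)) / (n**2 * (n - 1))
    return (abs(num) - 0.5) ** 2 / var    # with continuity correction

# Two hypothetical strata in which the association survives conditioning:
tables = [(10, 5, 4, 10), (8, 4, 3, 9)]
print(cmh_statistic(tables))
```

A value above the chi-square critical threshold (3.84 at the 0.05 level, 1 degree of freedom) indicates the association is not explained away by the stratifying variables, which is the filter the paper uses to drop non-causal associations.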