Improving Causal Inference by Increasing Model Expressiveness
Related papers
2012
Despite their success in transferring the powerful human faculty of causal reasoning to a mathematical and computational form, causal models have not been widely used in the context of core AI applications such as robotics. In this paper, we argue that this discrepancy is due to the static, propositional nature of existing causality formalisms that make them difficult to apply in dynamic real-world situations where the variables of interest are not necessarily known a priori. We define Causal Logic Models (CLMs), a new probabilistic, first-order representation which uses causality as a fundamental building block. Rather than merely converting causal rules to first-order logic as various methods in Statistical Relational Learning have done, we treat the causal rules as basic primitives which cannot be altered without changing the system. We provide sketches of algorithms for causal reasoning using CLMs, preliminary results for causal explanation, and explore the significant differenc...
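To make the "causal rules as basic primitives" framing concrete, the sketch below grounds first-order causal rules for a specific object at query time. The rule syntax, the `RULES` list, and the `instantiate` helper are illustrative assumptions for this note, not CLM syntax from the paper.

```python
# Hedged sketch: first-order causal rules treated as unalterable primitives,
# instantiated (grounded) for concrete objects at query time. The rule syntax
# and names here are illustrative assumptions, not the paper's CLM notation.

RULES = [
    # (cause predicate, effect predicate) over a logical variable ?x
    ("pushes(?x)", "moves(?x)"),
    ("moves(?x)", "collides(?x)"),
]

def instantiate(rule, obj):
    """Ground a first-order causal rule for a concrete object."""
    cause, effect = rule
    return cause.replace("?x", obj), effect.replace("?x", obj)

# Grounding the first-order rules for a specific object:
for rule in RULES:
    print(instantiate(rule, "box1"))
# ('pushes(box1)', 'moves(box1)')
# ('moves(box1)', 'collides(box1)')
```

The point of the sketch is that the rules themselves are never rewritten; only their groundings change with the objects in the situation, which is what lets the representation handle variables not known a priori.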
Sequences of mechanisms for causal reasoning in artificial intelligence
We present a new approach to token-level causal reasoning that we call Sequences Of Mechanisms (SoMs), which models causality as a dynamic sequence of active mechanisms that chain together to propagate causal influence through time. We motivate this approach by using examples from AI and robotics and show why existing approaches are inadequate. We present an algorithm for causal reasoning based on SoMs, which takes as input a knowledge base of first-order mechanisms and a set of observations, and it hypothesizes which mechanisms are active at what time. We show empirically that our algorithm produces plausible causal explanations of simulated observations generated from a causal model. We argue that the SoMs approach is qualitatively closer to the human causal reasoning process, for example, it will only include relevant variables in explanations. We present new insights about causal reasoning that become apparent with this view. One such insight is that observation and manipulation do not commute in causal models, a fact which we show to be a generalization of the Equilibration-Manipulation Commutability of [Dash(2005)].
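The abstract's algorithm takes a knowledge base of mechanisms and a set of observations and hypothesizes which mechanisms were active. A minimal sketch of that chaining idea, under stated assumptions: the `Mechanism` class, the propositional cause/effect encoding, and the breadth-first `explain` search are illustrative simplifications, not the paper's first-order algorithm.

```python
from dataclasses import dataclass

# Hedged sketch of the SoMs idea: mechanisms chain together through time to
# propagate causal influence, and an explanation is the sequence of mechanisms
# linking the observations. Names (Mechanism, explain) are illustrative.

@dataclass(frozen=True)
class Mechanism:
    name: str
    cause: str    # event the mechanism consumes
    effect: str   # event the mechanism produces

def explain(mechanisms, observations):
    """Hypothesize which mechanisms were active, in order, by chaining
    cause -> effect from each observation to the next."""
    sequence = []
    current = observations[0]
    for target in observations[1:]:
        # breadth-first search over mechanism chains from current to target
        frontier = [(current, [])]
        seen = {current}
        while frontier:
            event, chain = frontier.pop(0)
            if event == target:
                sequence.extend(chain)
                break
            for m in mechanisms:
                if m.cause == event and m.effect not in seen:
                    seen.add(m.effect)
                    frontier.append((m.effect, chain + [m.name]))
        current = target
    return sequence

mechs = [
    Mechanism("strike", "match_struck", "flame"),
    Mechanism("ignite", "flame", "paper_burning"),
]
print(explain(mechs, ["match_struck", "paper_burning"]))
# -> ['strike', 'ignite']
```

Note how the explanation mentions only the mechanisms on the chain, which mirrors the abstract's claim that SoMs explanations include only relevant variables.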
Learning Causal Structure from Reasoning
According to the transitive dynamics model, people can construct causal structures by linking together configurations of force. The predictions of the model were tested in two experiments in which participants generated new causal relationships by chaining together two (Experiment 1) or three (Experiment 2) causal relations. The predictions of the transitive dynamics model were compared against those of Goldvarg and Johnson-Laird's model theory. The transitive dynamics model consistently predicted the overall causal relationship drawn by participants for both types of causal chains, and, when compared to the model theory, provided a better fit to the data. The results suggest that certain kinds of causal reasoning may depend on force dynamic representations rather than on purely logical or statistical ones.
The Causal Sampler: A Sampling Approach to Causal Representation, Reasoning, and Learning
Cognitive Science, 2017
Although the causal graphical model framework has achieved success accounting for numerous causal-based judgments, a key property of these models, the Markov condition, is consistently violated (Rehder, 2014; Rehder & Davis, 2016). A new process model—the causal sampler—accounts for these effects in a psychologically plausible manner by assuming that people construct their causal representations using the Metropolis-Hastings sampling algorithm constrained to only a small number of samples (e.g., < 20). Because it assumes that Markov violations are built into people’s causal representations, the causal sampler accounts for the fact that those violations manifest themselves in multiple tasks (both causal reasoning and learning). This prediction was corroborated by a new experiment that directly measured people’s causal representations.
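The key computational ingredient the abstract names is a Metropolis-Hastings run cut off after only a handful of samples. The sketch below runs such a short chain over a three-variable causal chain X -> Y -> Z; the conditional probability tables and helper names are illustrative assumptions, not the paper's model.

```python
import random

# Hedged sketch of the "small-sample" idea in the causal sampler: estimate
# probabilities over a chain X -> Y -> Z with a short Metropolis-Hastings
# run (~20 samples), as the abstract describes. CPTs below are illustrative.

P_X = 0.5
P_Y_GIVEN_X = {True: 0.9, False: 0.1}
P_Z_GIVEN_Y = {True: 0.9, False: 0.1}

def joint(x, y, z):
    """Unnormalized joint probability of a (x, y, z) state."""
    px = P_X if x else 1 - P_X
    py = P_Y_GIVEN_X[x] if y else 1 - P_Y_GIVEN_X[x]
    pz = P_Z_GIVEN_Y[y] if z else 1 - P_Z_GIVEN_Y[y]
    return px * py * pz

def mh_samples(n_samples, seed=0):
    rng = random.Random(seed)
    state = (True, True, True)
    samples = []
    for _ in range(n_samples):
        i = rng.randrange(3)  # propose flipping one variable
        prop = tuple(not v if j == i else v for j, v in enumerate(state))
        accept = min(1.0, joint(*prop) / joint(*state))
        if rng.random() < accept:
            state = prop
        samples.append(state)
    return samples

# With only ~20 samples the estimate of P(Z) is noisy -- the mechanism the
# paper invokes to explain apparent Markov violations across tasks.
samples = mh_samples(20)
p_z = sum(s[2] for s in samples) / len(samples)
print(round(p_z, 2))
```

With so few samples the chain rarely mixes, so repeated short runs give visibly different estimates; that run-to-run variability is the behavioral signature the model trades on.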
Designing effective supports for causal reasoning
Educational Technology Research and …, 2008
Causal reasoning represents one of the most basic and important cognitive processes that underpin all higher-order activities, such as conceptual understanding and problem solving. Hume called causality the "cement of the universe" [Hume (1739/2000)]. Causal reasoning is required for making predictions, drawing implications and inferences, and explaining phenomena. Causal relations are usually more complex than learners understand. In order to be able to understand and apply causal relationships, learners must be able to articulate numerous covariational attributes of causal relationships, including direction, valency, probability, duration, and responsiveness, as well as mechanistic attributes, including process, conjunctions/disjunctions, and necessity/sufficiency. We describe different methods for supporting causal learning, including influence diagrams, simulations, questions, and different causal modeling tools, including expert systems, systems dynamics tools, and causal modeling tools. Extensive research is needed to validate and contrast these methods for supporting causal reasoning.
A statistical semantics for causation
1992
We propose a model-theoretic definition of causation, and show that, contrary to common folklore, genuine causal influences can be distinguished from spurious covariations following standard norms of inductive reasoning. We also establish a sound characterization of the conditions under which such a distinction is possible. Finally, we provide a proof-theoretical procedure for inductive causation and show that, for a large class of data and structures, effective algorithms exist that uncover the direction of causal influences as defined above.
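One independence pattern that makes the abstract's claim tangible is the collider: in X -> Z <- Y, X and Y are marginally independent but become dependent given Z, which orients both edges. The data-generating setup and the `corr` helper below are illustrative assumptions, not the paper's proof-theoretic procedure.

```python
import random

# Hedged sketch of the core idea: causal direction can sometimes be recovered
# from independence patterns alone. In a collider X -> Z <- Y, X and Y are
# marginally independent but become dependent once Z is accounted for.

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / (va * vb) ** 0.5

rng = random.Random(0)
X = [rng.gauss(0, 1) for _ in range(5000)]
Y = [rng.gauss(0, 1) for _ in range(5000)]
Z = [x + y + rng.gauss(0, 0.1) for x, y in zip(X, Y)]  # common effect

# Marginally, X and Y are (near) independent...
print(abs(corr(X, Y)) < 0.1)        # typically True
# ...but accounting for Z (here, residualizing) induces strong dependence,
# the signature that orients X -> Z <- Y rather than any other direction.
resX = [x - z / 2 for x, z in zip(X, Z)]
resY = [y - z / 2 for y, z in zip(Y, Z)]
print(abs(corr(resX, resY)) > 0.5)  # typically True
```

This is the sense in which "genuine causal influences can be distinguished from spurious covariations": the asymmetry is visible in the independence structure of the data itself.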
Integrating causal reasoning at different levels of abstraction
1988
In this paper we describe a problem-solving system which uses a multi-level causal model of its domain. The system functions in the role of a pilot's assistant in the domain of commercial air transport emergencies. The model represents causal relationships among the aircraft subsystems, the effectors (engines, control surfaces), the forces that act on an aircraft in flight (thrust, lift), and the aircraft's flight profile (speed, altitude, etc.). The causal relationships are represented at three levels of abstraction: Boolean, qualitative, and quantitative, and reasoning about causes and effects can take place at each of these levels. Since processing at each level has different characteristics with respect to speed, the type of data required, and the specificity of the results, the problem-solving system can adapt to a wide variety of situations. The system is currently being implemented in the KEE development environment on a Symbolics Lisp machine.
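The three abstraction levels the abstract names can be illustrated on a single causal link, thrust -> speed. The level names follow the abstract; the concrete encodings (function names, units, the simple F = ma model) are illustrative assumptions.

```python
# Hedged illustration of the three abstraction levels described above for a
# single causal link (thrust -> speed). Encodings are illustrative, not the
# system's actual representation.

def boolean_effect(thrust_on: bool) -> bool:
    # Boolean level: thrust present -> speed can be maintained
    return thrust_on

def qualitative_effect(delta_thrust: int) -> int:
    # Qualitative level: only the sign of the change (-1, 0, +1)
    return (delta_thrust > 0) - (delta_thrust < 0)

def quantitative_effect(thrust_n: float, drag_n: float, mass_kg: float) -> float:
    # Quantitative level: acceleration from net force (F = ma)
    return (thrust_n - drag_n) / mass_kg

print(boolean_effect(True))                              # True
print(qualitative_effect(-500))                          # -1
print(quantitative_effect(200000.0, 150000.0, 50000.0))  # 1.0 (m/s^2)
```

The trade-off the abstract describes falls directly out of the signatures: the Boolean level needs the least data and answers fastest, while the quantitative level needs full numeric inputs but gives the most specific result.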