Beyond the Information Given: Causal Models in Learning and Reasoning
Related papers
Mechanisms of predictive and diagnostic causal induction
Journal of Experimental Psychology: Animal Behavior Processes, 2002
In predictive causal inference, people reason from causes to effects, whereas in diagnostic inference, they reason from effects to causes. Independently of the causal structure of the events, the temporal structure of the information provided to a reasoner may vary (e.g., multiple events followed by a single event vs. a single event followed by multiple events). The authors report 5 experiments in which causal structure and temporal information were varied independently. Inferences were influenced by temporal structure but not by causal structure. The results are relevant to the evaluation of 2 current accounts of causal induction, the Rescorla-Wagner...
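As a concrete reference point for the first of the two accounts named at the end of this abstract, the following is a minimal Python sketch of the standard Rescorla-Wagner update rule; the cue names, trial sequence, and learning-rate parameters are illustrative assumptions, not the authors' materials.

# Minimal sketch of the Rescorla-Wagner update rule; parameter values are illustrative.

def rescorla_wagner(trials, alpha=0.3, beta=1.0, lam=1.0):
    """Return associative strengths V after a sequence of trials.

    trials: list of (cues, outcome) pairs, where cues is a set of cue labels
    present on that trial and outcome is 1 (present) or 0 (absent).
    """
    V = {}
    for cues, outcome in trials:
        prediction = sum(V.get(c, 0.0) for c in cues)   # summed strength of present cues
        error = (lam if outcome else 0.0) - prediction  # prediction error
        for c in cues:                                  # only present cues are updated
            V[c] = V.get(c, 0.0) + alpha * beta * error
    return V

# Cue A is always paired with the outcome, cue B never is.
history = [({"A"}, 1), ({"B"}, 0)] * 20
print(rescorla_wagner(history))  # V["A"] approaches 1.0, V["B"] stays at 0.0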
Inferring non-observed correlations from causal scenarios: The role of causal knowledge
Learning and Motivation, 2004
This work aimed to demonstrate, first, that naïve reasoners are able to infer the existence of a relationship between two events that have never been presented together and, second, that such inferences are sensitive to the causal structure of the task. In all experiments, naïve participants judged the strength of the causal link between a cue A and an outcome O in a first phase and between a second cue B and the same outcome O in a second phase. In the final test, participants estimated the degree of correlation between the two cues, A and B. Participants perceived the two cues as significantly more highly correlated when they were effects of a common potential cause (Experiments 1a and 2) than when they were potential causes of a common effect (Experiments 1b and 2). This effect of causal directionality on inferred correlation points to the influence of mental models on human causal detection and learning, as proposed by recent theoretical models.
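The structural intuition behind this result can be made concrete with a small simulation (my own illustration, not the authors' design): two cues that are effects of a common cause covary, whereas two independent causes of a common effect do not, so an inferred correlation should track causal directionality. All probabilities below are assumed for illustration.

# Contrast of the two structures described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Structure 1: common cause C -> A and C -> B.
C = rng.random(n) < 0.5
A1 = np.where(C, rng.random(n) < 0.8, rng.random(n) < 0.1)
B1 = np.where(C, rng.random(n) < 0.8, rng.random(n) < 0.1)

# Structure 2: independent causes A and B of a common effect E (noisy-OR-like).
A2 = rng.random(n) < 0.5
B2 = rng.random(n) < 0.5
E = np.where(A2 | B2, rng.random(n) < 0.8, rng.random(n) < 0.1)

print("common cause:  corr(A, B) =", round(float(np.corrcoef(A1, B1)[0, 1]), 3))  # clearly positive
print("common effect: corr(A, B) =", round(float(np.corrcoef(A2, B2)[0, 1]), 3))  # near zero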
Evaluating Causal Hypotheses: The Curious Case of Correlated Cues
Cognitive Science, 2016
Although the causal graphical model framework has achieved considerable success accounting for causal learning data, application of that formalism to multi-cause situations assumes that people are insensitive to the statistical properties of the causes themselves. The present experiment tests this assumption by first instructing subjects on a causal model consisting of two independent and generative causes and then requesting them to make data likelihood judgments, that is, to estimate the probability of some data given the model. The correlation between the causes in the data was either positive, zero, or negative. The data was judged as most likely in the positive condition and least likely in the negative condition, a finding that obtained even though all other statistical properties of the data (e.g., causal strengths, outcome density) were controlled. These results pose a problem for current models of causal learning. Hypothesis testing occupies a central role in learning theor...
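To see what a data-likelihood judgment asks for in this design, here is one simple benchmark, sketched in Python: the probability of the observed counts of cause configurations under a model in which the two causes occur independently. The trial counts and base rates are my own illustrative assumptions, not the paper's stimuli, and the sketch looks only at the causes, ignoring outcomes.

# Log-probability of cause-configuration counts under independent causes.
from math import lgamma, log

def log_multinomial(counts, p1=0.5, p2=0.5):
    """Log-probability of counts (n11, n10, n01, n00) of the four cause
    configurations, given independent causes with base rates p1 and p2."""
    probs = [p1 * p2, p1 * (1 - p2), (1 - p1) * p2, (1 - p1) * (1 - p2)]
    n = sum(counts)
    log_coeff = lgamma(n + 1) - sum(lgamma(c + 1) for c in counts)
    return log_coeff + sum(c * log(p) for c, p in zip(counts, probs))

# 40 trials; the causes' base rates are identical, only their correlation differs.
tables = {
    "positive": (15, 5, 5, 15),
    "zero":     (10, 10, 10, 10),
    "negative": (5, 15, 15, 5),
}
for name, counts in tables.items():
    print(name, round(log_multinomial(counts), 2))
# Under the independence model the uncorrelated table is the most probable of the
# three, so a systematic preference for positively correlated data is not what
# likelihood alone would predict.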
Covariation in natural causal induction
The covariation component of everyday causal inference has been depicted, in both cognitive and social psychology as well as in philosophy, as heterogeneous and prone to biases. The models and biases discussed in these domains are analyzed with respect to focal sets: contextually determined sets of events over which covariation is computed. Moreover, these models are compared to our probabilistic contrast model, which specifies causes as first and higher order contrasts computed over events in a focal set. Contrary to the previous depiction of covariation computation, the present assessment indicates that a single normative mechanism, the computation of probabilistic contrasts, underlies this essential component of natural causal induction both in everyday and in scientific situations.
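A first-order probabilistic contrast is simply the difference between the probability of the effect given the presence versus the absence of a candidate cause, computed over a focal set; the Python sketch below (with made-up trial data) also shows a higher-order contrast obtained by conditioning the focal set on an alternative cause.

# First-order probabilistic contrast (delta-P) over a focal set of events.

def delta_p(trials, cause, effect):
    """P(effect | cause present) - P(effect | cause absent), computed over the
    trials passed in (i.e., over the focal set)."""
    with_c = [t for t in trials if t[cause]]
    without_c = [t for t in trials if not t[cause]]
    p_e_given_c = sum(t[effect] for t in with_c) / len(with_c)
    p_e_given_not_c = sum(t[effect] for t in without_c) / len(without_c)
    return p_e_given_c - p_e_given_not_c

# Each trial records whether candidate cause C, alternative cause A, and effect E occurred.
trials = [
    {"C": 1, "A": 1, "E": 1},
    {"C": 1, "A": 0, "E": 1},
    {"C": 0, "A": 1, "E": 1},
    {"C": 0, "A": 0, "E": 0},
] * 10

print(delta_p(trials, "C", "E"))                             # contrast over the full focal set: 0.5
print(delta_p([t for t in trials if not t["A"]], "C", "E"))  # higher-order contrast, A held absent: 1.0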
2002
How ought we learn causal relationships? While Popper advocated a hypothetico-deductive logic of causal discovery, inductive accounts are currently in vogue. Many inductive approaches depend on the causal Markov condition as a fundamental assumption. This condition, I maintain, is not universally valid, though it is justifiable as a default assumption, in which case the results of the inductive causal learning procedure must be tested before they can be accepted. This yields a synthesis of the hypothetico-deductive and inductive accounts, which forms the focus of this paper. I discuss the justification of this synthesis and draw an analogy between objective Bayesianism and the account of causal learning presented here.
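As a reminder of what the causal Markov condition licenses, the following small Python simulation (structure and parameters are my own illustration, not the paper's) checks the screening-off relation implied by the chain A -> B -> C: A and C are dependent, but independent given B. If the condition failed, for instance because of an unmeasured common cause, such implied independencies would not be guaranteed, which is why the paper argues the output of inductive learning should be tested.

# Screening-off in a simulated chain A -> B -> C.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
A = rng.random(n) < 0.5
B = np.where(A, rng.random(n) < 0.9, rng.random(n) < 0.2)
C = np.where(B, rng.random(n) < 0.8, rng.random(n) < 0.1)

def dep(x, y):
    """Crude dependence measure: how much P(y) changes with the value of x."""
    return abs(float(y[x].mean()) - float(y[~x].mean()))

print("A, C unconditionally:", round(dep(A, C), 3))         # clearly dependent
print("A, C given B present:", round(dep(A[B], C[B]), 3))   # near zero
print("A, C given B absent: ", round(dep(A[~B], C[~B]), 3)) # near zero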