When Type 2 Processing Misfires: The Indiscriminate Use of Statistical Thinking about Reasoning Problems
Related papers
We examined matching bias in syllogistic reasoning by analysing response times, confidence ratings and individual differences. Roberts’ (2005) ‘negations paradigm’ was used to generate conflict between the surface features of problems and the logical status of conclusions. The experiment replicated matching bias effects in conclusion evaluation (Stupple & Waterhouse, 2009), revealing increased processing times for matching/logic ‘conflict problems’. Results paralleled chronometric evidence from the belief bias paradigm indicating that logic/belief conflict problems take longer to process than non-conflict problems (Stupple, Ball, Evans, & Kamal-Smith, 2011). Individuals’ response times for conflict problems also showed patterns of association with the degree of overall normative responding. Acceptance rates, response times, metacognitive confidence judgements and individual differences all converged in supporting dual-process theory. This is noteworthy because dual-process predictions about heuristic/analytic conflict in syllogistic reasoning generalised from the belief bias paradigm to a situation where matching features of conclusions, rather than beliefs, were set in opposition to logic.
The logic-bias effect: The role of effortful processing in the resolution of belief–logic conflict
Memory & Cognition, 2015
According to the default-interventionist dual-process account of reasoning, belief-based responses to reasoning tasks are generated by default through Type 1 processes, which must be inhibited in order to produce an effortful, Type 2 output based on the validity of an argument. However, recent research has indicated that reasoning on the basis of beliefs may not be as fast and automatic as this account claims. In three experiments, we presented participants with a reasoning task that was to be completed while they performed random number generation (RNG). We used the novel methodology introduced by Handley, Newstead, and Trippas (Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 28–43, 2011), which required participants to make judgments based upon either the validity of a conditional argument or the believability of its conclusion. The results showed that belief-based judgments produced lower rates of accuracy overall and were influenced to a greater extent than validity judgments by the presence of a conflict between belief and logic, for both simple and complex arguments. These findings were replicated in Experiment 3, in which we controlled for switching demands in a blocked design. Across all three experiments, we found a main effect of RNG, implying that both instructional sets require some effortful processing. However, in the blocked design, RNG had its greatest impact on logic judgments, suggesting that distinct executive resources may be required for each type of judgment. We discuss the implications of our findings for the default-interventionist account and offer a parallel competitive model as an alternative interpretation of our findings.
Journal of Applied Social Psychology, 1992
This experiment examined the role that ambiguity and uncertainty play in the use of base-rate and individuating information in probability judgments. Subjects responded to a number of inference problems that varied in terms of base rates and the accuracy of the source of the individuating information. The ambiguity of the decision situation was also manipulated by varying the human or technological nature of the source of the individuating information and the causal relevance of the base-rate information. The results provide substantial support for the predictions derived from the ambiguity conceptualization. Several interactions were uncovered involving attributes of the base-rate and individuating information, suggesting complex judgmental processes similar to anchoring and adjustment. Discussion focused on the role that human versus technological sources of information may play in judgment and decision making, and on the utility of the ambiguity notion for understanding the use of base-rate and individuating information in probabilistic inference problems.
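As a point of reference for the normative standard that base-rate problems of this kind presuppose, Bayes' theorem specifies how a base rate and a piece of individuating evidence from a fallible source should be combined. The worked example below uses illustrative figures chosen purely for exposition, not the materials or values from this experiment.

% Illustrative figures only (not taken from the experiment): base rate P(H) = .15;
% source accuracy P(E | H) = .80; false-positive rate P(E | ¬H) = .20.
\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
           \;=\; \frac{.80 \times .15}{.80 \times .15 + .20 \times .85} \;\approx\; .41
\]

Even with a reasonably accurate source, the low base rate keeps the posterior probability well below the source's stated accuracy; failing to make that downward adjustment is precisely what base-rate neglect amounts to.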
2011
When the validity of a deductive conclusion conflicts with its believability people often respond in a belief-biased manner. This study used response times to test the selective processing model, which views belief-bias effects as arising from the interplay between superficial heuristic processes and more rigorous analytic processes. Participants were split into three response groups according to their propensity to endorse logically normative conclusions. The low-logic, high belief-bias group demonstrated rapid responding, consistent with heuristic processing. The medium-logic, moderate belief-bias group showed slower responding, consistent with enhanced analytic processing, albeit selectively biased by conclusion believability. The high-logic, low belief-bias group's relatively unbiased responses came at the cost of increased processing times, especially with invalid-believable conclusions. These findings support selective processing claims that distinct heuristic and analytic processing systems underpin reasoning, and indicate that certain individuals differentially engage one system more than the other. A minor amendment is proposed to the current selective processing model to capture the full range of observed effects.
The Doubting System 1: Evidence for automatic substitution sensitivity
Acta Psychologica, 2015
A long prevailing view of human reasoning suggests severe limits on our ability to adhere to simple logical or mathematical prescriptions. A key position assumes these failures arise from insufficient monitoring of rapidly produced intuitions. These faulty intuitions are thought to arise from a proposed substitution process, by which reasoners unknowingly interpret more difficult questions as easier ones. Recent work, however, suggests that reasoners are not blind to this substitution process, but in fact detect that their erroneous responses are not warranted. Using the popular bat-and-ball problem, we investigated whether this substitution sensitivity arises out of an automatic System 1 process or whether it depends on the operation of an executive resource-demanding System 2 process. Results showed that accuracy on the bat-and-ball problem clearly declined under cognitive load. However, both reduced response confidence and increased response latencies indicated that biased reasoners remained sensitive to their substitution even under load, consistent with the claim that substitution sensitivity arises from an automatic System 1 process.
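For readers unfamiliar with the item, the bat-and-ball problem in its standard form states that a bat and a ball cost 1.10intotal,thatthebatcosts1.10 in total, that the bat costs 1.10intotal,thatthebatcosts1.00 more than the ball, and asks for the price of the ball. Writing the problem as an equation shows why the intuitive "10 cents" response is the substituted, incorrect answer:

% Let b be the price of the ball in dollars; the bat then costs b + 1.00.
\[
b + (b + 1.00) = 1.10 \;\Rightarrow\; 2b = 0.10 \;\Rightarrow\; b = 0.05
\]

The correct answer is therefore 5 cents; answering 10 cents amounts to solving the easier substituted question in which the bat simply costs $1.00, and it would make the total $1.20 rather than $1.10.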
A popular distinction in cognitive and social psychology has been between intuitive and deliberate judgments. In dual-process theories of reasoning, this juxtaposition has aligned associative, unconscious, effortless, heuristic, and suboptimal processes (assumed to foster intuitive judgments) against rule-based, conscious, effortful, analytic, and rational processes (assumed to characterize deliberate judgments). In contrast, we provide convergent arguments and evidence for a unified theoretical approach to both intuitive and deliberate judgments. Both are rule-based, and in fact the very same rules can underlie both intuitive and deliberate judgments. The important open question is that of rule selection, and we propose a two-step process in which the task itself and the individual's memory constrain the set of applicable rules, whereas the individual's processing potential and the (perceived) ecological rationality of the rule for the task guide the final selection from that set. Deliberate judgments are not generally more accurate than intuitive judgments; in both cases, accuracy depends on the match between rule and environment: the rule's ecological rationality. Heuristics that are less effortful and that ignore part of the available information can be more accurate than cognitive strategies that use more information and computation. The proposed framework adumbrates a unified approach that specifies the critical dimensions on which judgmental situations may vary and the environmental conditions under which rules can be expected to be successful.
An enormous amount of research has been conducted that documents judgment and decision-making biases when dealing with situations involving uncertainty. The results of these experiments are generally taken as evidence that people have weaknesses when they reason about situations requiring the application of probabilistic or statistical concepts. One such paper documented that when asked to explain why events occurred, people rated a conjunction of two explanations as being more likely to have influenced the outcome than the explanations' individual components, a statistical impossibility given that a conjunction of two events cannot be more likely than their individual component events (Leddo, Abelson and Gross, 1984). The present research explores the hypothesis that the nature of the task that people are asked to perform may also contribute to the biases observed in these experiments. Here, high school participants were asked to rate the probability of both individual and conjoint explanations as in Leddo et al. (1984). However, other participants were given the same scenarios and asked to state the probability that individual or conjoint events were either true (inference) or likely to happen (prediction). Results confirmed the hypothesis that conjunction effects (rating two events as more probable than one) were strongest in explanations and weakest in predictions. This suggests that the task a person is asked to perform may contribute to whether or not people show biases in judgment and decision making.
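The "statistical impossibility" referred to above is the conjunction rule of probability theory: a joint event can never be more probable than either of its component events, because the conjunction is contained in each of them. Stated formally:

% Conjunction rule: the joint event A ∩ B is a subset of A and of B,
% so its probability cannot exceed either marginal probability.
\[
P(A \cap B) \;\le\; \min\bigl(P(A),\, P(B)\bigr)
\]

Rating a conjunction of two explanations as more probable than one of those explanations on its own therefore violates this rule, whatever the content of the scenario.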