Adaptive Probability Theory: Human Biases as an Adaptation

Probability biases as Bayesian inference

Judgment and Decision Making, 2006

In this article, I show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated with them. Previous results show that the weighting functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We review those results and argue that Bayesian methods should also be part of the explanation behind other known biases. This means that, although the observed errors are still errors under the laboratory conditions in which they are demonstrated, they can be understood as adaptations for solving real-life problems. Heuristics that allow fast evaluations and mimic Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions.
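The kind of Bayesian adjustment the abstract appeals to can be illustrated with a toy shrinkage model: a stated probability is treated as evidence and pulled toward a prior, producing the familiar inverse-S weighting pattern. The Beta prior and the number of "virtual" observations below are hypothetical illustrative choices, not values from the paper.

```python
# Hypothetical sketch: a stated probability p is treated as the success
# fraction of n virtual Bernoulli trials and combined with a Beta(a, b)
# prior. The posterior mean overweights small p and underweights large p,
# mimicking an inverse-S probability weighting function.

def bayesian_weight(p, n=10, a=1.0, b=1.0):
    """Posterior mean of a Beta-Binomial model: n virtual trials with
    success fraction p, combined with a Beta(a, b) prior."""
    return (n * p + a) / (n + a + b)

for p in (0.01, 0.5, 0.99):
    print(f"p = {p:.2f} -> w(p) = {bayesian_weight(p):.3f}")
```

With a uniform Beta(1, 1) prior, small probabilities are pulled up and large ones pulled down, while 0.5 is left unchanged.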

Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty

Cognition, 1996

Professional probabilists have long argued over what probability means, with, for example, Bayesians arguing that probabilities refer to subjective degrees of confidence and frequentists arguing that probabilities refer to the frequencies of events in the world. Recently, Gigerenzer and his colleagues have argued that these same distinctions are made by untutored subjects, and that, for many domains, the human mind represents probabilistic information as frequencies. We analyze several reasons why, from an ecological and evolutionary perspective, certain classes of problem-solving mechanisms in the human mind should be expected to represent probabilistic information as frequencies. Then, using a problem famous in the "heuristics and biases" literature for eliciting base-rate neglect, we show that correct Bayesian reasoning can be elicited in 76% of subjects (indeed, 92% in the most ecologically valid condition) simply by expressing the problem in frequentist terms. This result adds to the growing body of literature showing that frequentist representations cause various cognitive biases to disappear, including overconfidence, the conjunction fallacy, and base-rate neglect. Taken together, these new findings indicate that the conclusion most common in the literature on judgment under uncertainty, namely that our inductive reasoning mechanisms do not embody a calculus of probability, will have to be reexamined. From an ecological and evolutionary perspective, humans may turn out to be good intuitive statisticians after all.
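The power of frequentist framing is that a base-rate problem reduces to counting expected cases in a concrete sample. The sketch below uses the commonly cited 1-in-1000 prevalence, 5% false-positive version of the medical-diagnosis problem; the numbers are purely illustrative.

```python
# Illustrative sketch of the frequency format: translate a base-rate
# problem into expected counts in a sample of 1000 people. The numbers
# (1/1000 prevalence, 5% false-positive rate, perfect hit rate) follow
# a commonly cited version of the medical-diagnosis problem.

def posterior_from_counts(population, prevalence, hit_rate, false_pos_rate):
    sick = population * prevalence
    true_pos = sick * hit_rate                       # sick people who test positive
    false_pos = (population - sick) * false_pos_rate # healthy people who test positive
    return true_pos / (true_pos + false_pos)

p = posterior_from_counts(1000, 1 / 1000, 1.0, 0.05)
print(f"P(sick | positive test) ~= {p:.3f}")  # about 0.02, not 0.95
```

Framed as counts (1 true positive against roughly 50 false positives), the answer of about 2% becomes nearly transparent, which is the intuition behind the frequentist-representation results.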

On the importance of random error in the study of probability judgment. Part I: New theoretical developments

Journal of Behavioral …, 1997

Earlier work demonstrated that over- and underconfidence can be observed simultaneously in judgment studies, as a function of the method used to analyze the data. Those authors proposed a general model to account for this apparent paradox, which assumes that overt responses represent true judgments perturbed by random error. To illustrate that the model reproduces the pattern of results, they assumed perfectly calibrated true opinions and a particular form of the model (log-odds plus normally distributed error) to simulate data from the full-range paradigm. In this paper we generalize these results by showing that they can be obtained with other instantiations of the same general model (using the binomial error distribution), and that they apply to the half-range paradigm as well. These results illustrate the robustness and generality of the model. They emphasize the need for new methodological approaches to determine whether observed patterns of over- or underconfidence represent real effects or are primarily statistical artifacts.
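The logic of the error model can be reproduced in a few lines: simulate perfectly calibrated true judgments, add Gaussian noise in log-odds, and then analyze the data by conditioning on the stated confidence. The noise level and cutoff below are illustrative assumptions, not parameters from the paper.

```python
import math
import random

random.seed(0)

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Perfectly calibrated true judgments: the true confidence equals the
# probability the answer is correct. Overt responses add Gaussian noise
# in log-odds, one instantiation of the general error model.
trials = []
for _ in range(200_000):
    true_p = random.uniform(0.5, 0.99)                      # half-range paradigm
    correct = random.random() < true_p
    stated = sigmoid(logit(true_p) + random.gauss(0, 1.0))  # noisy overt response
    trials.append((stated, correct))

# Condition on high stated confidence: the hit rate falls short of the
# stated level (apparent overconfidence), purely from random error.
high = [c for s, c in trials if s >= 0.95]
hit_rate = sum(high) / len(high)
print(f"stated >= 0.95, hit rate = {hit_rate:.3f}")
```

Selecting trials by their noisy responses preferentially picks up trials where the noise was large, so the conditional hit rate regresses below the stated level even though the underlying judgments are perfectly calibrated.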

Random variation and systematic biases in probability estimation

Cognitive Psychology, 2020

A number of recent theories have suggested that the various systematic biases and fallacies seen in people's probabilistic reasoning may arise purely as a consequence of random variation in the reasoning process. The underlying argument, in these theories, is that random variation has systematic regressive effects, so producing the observed patterns of bias. These theories typically take this random variation as a given, and assume that the degree of random variation in probabilistic reasoning is sufficiently large to account for observed patterns of fallacy and bias; there has been very little research directly examining the character of random variation in people's probabilistic judgement. We describe 4 experiments investigating the degree, level, and characteristic properties of random variation in people's probability judgement. We show that the degree of variance is easily large enough to account for the occurrence of two central fallacies in probabilistic reasoning (the conjunction fallacy and the disjunction fallacy), and that level of variance is a reliable predictor of the occurrence of these fallacies. We also show that random variance in people's probabilistic judgement follows a particular mathematical model from frequentist probability theory: the binomial proportion distribution. This result supports a model in which people reason about probabilities in a way that follows frequentist probability theory but is subject to random variation or noise.
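The core of the noise account can be sketched directly: if each probability estimate is a binomial proportion over N mentally sampled instances, noisy estimates of a conjunction will sometimes exceed noisy estimates of a conjunct even though the true ordering never reverses. N and the probabilities below are illustrative choices, not values from the paper.

```python
import random

random.seed(1)

# Sketch of the noise account: an estimate of probability p is a binomial
# proportion from N mentally sampled instances. Even though
# p(A and B) <= p(A), noisy estimates of the conjunction sometimes exceed
# noisy estimates of the conjunct, yielding conjunction "fallacies" from
# random variation alone.

def noisy_estimate(p, n=20):
    """Binomial-proportion estimate of p from n samples."""
    return sum(random.random() < p for _ in range(n)) / n

p_a, p_ab = 0.4, 0.3          # the conjunction is truly less probable
fallacies = sum(noisy_estimate(p_ab) > noisy_estimate(p_a)
                for _ in range(10_000))
print(f"conjunction-fallacy rate: {fallacies / 10_000:.2%}")
```

The closer the two true probabilities and the smaller the sample size N, the larger the variance of each estimate and hence the higher the fallacy rate, which is the sense in which variance level predicts fallacy occurrence.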

The perception of probability

Psychological Review, 2014

We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making.
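A drastically simplified sketch of step-like, intermittent updating: the estimate is held fixed and revised only when recent outcomes become too unlikely under it. The window size and likelihood-ratio threshold are arbitrary toy choices, not the paper's change-point encoding model.

```python
import math
import random

random.seed(2)

def log_lik(k, n, p):
    """Log-likelihood of k successes in n Bernoulli trials with parameter p."""
    p = min(max(p, 1e-6), 1 - 1e-6)
    return k * math.log(p) + (n - k) * math.log(1 - p)

def stepwise_estimates(outcomes, window=20, threshold=3.0):
    """Hold the current estimate until a sliding window of outcomes is
    too unlikely under it (a toy likelihood-ratio test), then step."""
    estimate, estimates, recent = 0.5, [], []
    for x in outcomes:
        recent.append(x)
        if len(recent) > window:
            recent.pop(0)
        k, n = sum(recent), len(recent)
        if n == window and log_lik(k, n, k / n) - log_lik(k, n, estimate) > threshold:
            estimate = k / n        # step to a new estimate
            recent = []             # start accumulating evidence afresh
        estimates.append(estimate)
    return estimates

# Hidden Bernoulli parameter steps from 0.2 to 0.8 halfway through.
outcomes = [random.random() < 0.2 for _ in range(300)] + \
           [random.random() < 0.8 for _ in range(300)]
est = stepwise_estimates(outcomes)
print(f"estimate at trial 150: {est[150]:.2f}, at trial 600: {est[-1]:.2f}")
```

The resulting estimate trace is piecewise constant, stepping at irregular intervals, which captures (qualitatively only) the intermittent-updating character of the subjects' behavior described above.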

An associative framework for probability judgement: An application to biases

J Exp Psychol Learn Mem Cogn, 2003

Three experiments show that understanding of biases in probability judgment can be improved by extending the application of the associative-learning framework. In Experiment 1, the authors used M. A. Gluck and G. H. Bower's (1988a) diagnostic-learning task to replicate apparent base-rate neglect and to induce the conjunction fallacy in a later judgment phase as a by-product of the conversion bias. In Experiment 2, the authors found stronger evidence of the conversion bias with the same learning task. In Experiment 3, the authors changed the diagnostic-learning task to induce some conjunction fallacies that were not based on the conversion bias. The authors show that the conjunction fallacies obtained in Experiment 3 can be explained by adding an averaging component to M. A. Gluck and G. H. Bower's model.

A great deal of research in decision making has focused on people's biases in probability judgment. The main reason for this is that deviations from normative theories are more informative than correct estimations for inferring the cognitive processes underlying probability judgment, which, in turn, is a basic component of decision making. Some of these biases are remarkable because they seem to involve the violation of some basic and simple principles of probability theory. Our main purpose is to show that the associative-learning framework can improve our understanding of biases in probability judgment. We focus on three well-known errors: base-rate neglect, the conjunction fallacy, and the conversion of conditional probabilities (hereafter referred to as the conversion bias). First, let us briefly outline these biases.
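The learning rule driving Gluck and Bower's adaptive network is the delta (least-mean-squares) rule. A minimal single-cue sketch, with a made-up contingency rather than the design of the experiments above:

```python
import random

random.seed(4)

# Minimal sketch of the delta (least-mean-squares) rule underlying
# Gluck and Bower's adaptive network. A single cue predicts a binary
# outcome; the associative weight converges toward P(outcome | cue).
# The contingency (outcome on 25% of cue trials) is a made-up example.

def train(p_outcome, trials=5000, lr=0.05):
    w = 0.0
    for _ in range(trials):
        outcome = 1.0 if random.random() < p_outcome else 0.0
        w += lr * (outcome - w)   # delta-rule update toward the outcome
    return w

w = train(0.25)
print(f"learned weight: {w:.2f} (target 0.25)")
```

With a single cue the rule simply tracks the conditional outcome rate; the biases discussed above arise when multiple correlated cues compete for predictive weight and when learned associations are mapped onto explicit probability judgments.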

Individual differences in the perception of probability

PLOS Computational Biology, 2021

In recent studies of humans estimating non-stationary probabilities, estimates appear to be unbiased on average, across the full range of probability values to be estimated. This finding is surprising given that experiments measuring probability estimation in other contexts have often identified conservatism: individuals tend to overestimate low probability events and underestimate high probability events. In other contexts, repulsive biases have also been documented, with individuals producing judgments that tend toward extreme values instead. Using extensive data from a probability estimation task that produces unbiased performance on average, we find substantial biases at the individual level; we document the coexistence of both conservative and repulsive biases in the same experimental context. Individual biases persist despite extensive experience with the task, and are also correlated with other behavioral differences, such as individual variation in response speed and adjustment rates. We conclude that the rich computational demands of our task give rise to a variety of behavioral patterns, and that the apparent unbiasedness of the pooled data is an artifact of the aggregation of heterogeneous biases.
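One way such heterogeneity can wash out in pooled data: give each simulated subject a standard one-parameter probability weighting function (an illustrative choice, not the paper's model) and mix conservative and repulsive parameter values.

```python
# Sketch of heterogeneous biases averaging out. Each simulated subject
# distorts probabilities with a one-parameter weighting function:
# gamma < 1 compresses estimates toward 0.5 ("conservative"),
# gamma > 1 expands them toward the extremes ("repulsive").

def distort(p, gamma):
    return p**gamma / (p**gamma + (1 - p)**gamma)

gammas = [0.5, 0.7, 1.4, 2.0]   # a mix of conservative and repulsive subjects
p = 0.2
individual = [distort(p, g) for g in gammas]
pooled = sum(individual) / len(individual)
print("individual estimates:", [round(e, 3) for e in individual])
print(f"pooled estimate: {pooled:.3f} (true p = {p})")
```

Each individual is visibly biased in one direction or the other, yet the pooled estimate sits close to the true probability, which is the aggregation artifact the abstract describes.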

Probability theory, not the very guide of life

Psychological Review, 2009

Probability theory has long been taken as the self-evident norm against which to evaluate inductive reasoning, and classical demonstrations of violations of this norm include the conjunction error and base-rate neglect. Many of these phenomena require multiplicative probability integration, whereas people seem more inclined to linear additive integration, in part, at least, because of well-known capacity constraints on controlled thought. In this article, the authors show with computer simulations that when based on approximate knowledge of probabilities, as is routinely the case in natural environments, linear additive integration can yield as accurate estimates, and as good average decision returns, as estimates based on probability theory. It is proposed that in natural environments people have little opportunity or incentive to induce the normative rules of probability theory and, given their cognitive constraints, linear additive integration may often offer superior bounded rationality.
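The simulation logic can be sketched as follows, with an illustrative noise level and the least-squares linear approximation to the product standing in for linear additive integration; both choices are assumptions for this sketch, not the paper's exact setup.

```python
import random

random.seed(3)

# Sketch: when component probabilities are known only approximately,
# a linear additive combination can estimate a conjunction about as
# accurately as the normative product.

SIGMA = 0.2       # noise on each component probability estimate
N = 100_000

sq_err_mult, sq_err_add = 0.0, 0.0
for _ in range(N):
    a, b = random.random(), random.random()   # true component probabilities
    a_hat = a + random.gauss(0, SIGMA)        # noisy estimates (left unclamped
    b_hat = b + random.gauss(0, SIGMA)        # for simplicity)
    truth = a * b                             # normative conjunction
    sq_err_mult += (a_hat * b_hat - truth) ** 2
    # linear additive rule: the least-squares linear approximation to a*b
    sq_err_add += (0.5 * a_hat + 0.5 * b_hat - 0.25 - truth) ** 2

rmse_mult = (sq_err_mult / N) ** 0.5
rmse_add = (sq_err_add / N) ** 0.5
print(f"RMSE multiplicative: {rmse_mult:.3f}, linear additive: {rmse_add:.3f}")
```

The multiplicative rule compounds the noise in both components, while the additive rule halves it at the cost of a small approximation bias; at realistic noise levels the two errors end up comparable, which is the bounded-rationality point of the article.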