Scientists Invent New Hypotheses, Do Brains?
Related papers
The Predictive Brain: A Modular View of Brain and Cognitive Function?
Under review
Modularity is arguably one of the most influential theses guiding research on brain and cognitive function since phrenology. This paper considers the following question: is modularity entailed by recent Bayesian models of brain and cognitive function, especially the predictive processing framework? It starts by considering three of the most well-articulated arguments for the view that modularity and predictive processing work well together. It argues that all three kinds of arguments for modularity come up short, albeit for different reasons. The analysis in this paper, although formulated in the context of predictive processing, speaks to broader issues with how to understand the relationship between functional segregation and integration and the reciprocal architecture of the predictive brain. These conclusions have implications for how to study brain and cognitive function. Specifically, when cognitive neuroscience works within an acyclic Markov decision scheme, adopted by most Bayesian models of brain and cognitive function, it may very well be methodologically misguided. This speaks to an increasing tendency within the cognitive neurosciences to emphasise recurrent and reciprocal neuronal processing captured within newly emerging dynamical causal modelling frameworks. The conclusions also suggest that functional integration is an organising principle of brain and cognitive function.
Why Bayesian brains perform poorly on explicit probabilistic reasoning problems
2022
There is a growing body of evidence suggesting that the neural processes underlying perception, learning, and decision-making approximate Bayesian inference. Yet, humans perform poorly when asked to solve explicit probabilistic reasoning problems. In response, some have argued that certain brain processes are Bayesian while others are not; others have argued that reasoning errors can be explained by either inaccurate generative models or limitations of approximation algorithms. In this paper, we offer a complementary perspective by considering how a Bayesian brain would implement conscious reasoning processes more generally. These considerations require making two distinctions, each of which highlights a fundamental reason why Bayesian brains should not be expected to perform well at explicit inference. The first distinction is between inferring probability distributions over hidden states and representing probabilities as hidden states. The former assumes that the brain’s dynamics ...
Psychological Review, 2017
Recent debates in the psychological literature have raised questions about the assumptions that underpin Bayesian models of cognition and what inferences they license about human cognition. In this paper we revisit this topic, arguing that there are 2 qualitatively different ways in which a Bayesian model could be constructed. The most common approach uses a Bayesian model as a normative standard upon which to license a claim about optimality. In the alternative approach, a descriptive Bayesian model need not correspond to any claim that the underlying cognition is optimal or rational, and is used solely as a tool for instantiating a substantive psychological theory. We present 3 case studies in which these 2 perspectives lead to different computational models and license different conclusions about human cognition. We demonstrate how the descriptive Bayesian approach can be used to answer different sorts of questions than the optimal approach, especially when combined with principl...
Unification by Fiat: Arrested Development of Predictive Processing
Cognitive Science, 2020
Predictive processing (PP) has been repeatedly presented as a unificatory account of perception, action, and cognition. In this paper, we argue that this is premature: As a unifying theory, PP fails to deliver general, simple, homogeneous, and systematic explanations. By examining its current trajectory of development, we conclude that PP remains only loosely connected both to its computational framework and to its hypothetical biological underpinnings, which makes its fundamentals unclear. Instead of offering explanations that refer to the same set of principles, we observe systematic equivocations in PP-based models, or outright contradictions with its avowed principles. To make matters worse, PP-based models are seldom empirically validated, and they are frequently offered as mere just-so stories. The large number of PP-based models is thus not evidence of theoretical progress in unifying perception, action, and cognition. On the contrary, we maintain that the gap between theory and its biological and computational bases contributes to the arrested development of PP as a unificatory theory. Thus, we urge the defenders of PP to focus on its critical problems instead of offering mere re-descriptions of known phenomena, and to validate their models against possible alternative explanations that stem from different theoretical assumptions. Otherwise, PP will ultimately fail as a unified theory of cognition.