Predictive Coding and Thought
Related papers
Predictive coding and representationalism
According to the predictive coding theory of cognition (PCT), brains are predictive machines that use perception and action to minimize prediction error, i.e. the discrepancy between bottom-up, externally generated sensory signals and top-down, internally generated sensory predictions. Many consider PCT to have an explanatory scope that is unparalleled in contemporary cognitive science and see in it a framework that could potentially provide us with a unified account of cognition. It is also commonly assumed that PCT is a representational theory of sorts, in the sense that it postulates that our cognitive contact with the world is mediated by internal representations. However, the exact sense in which PCT is representational remains unclear; neither is it clear that it deserves such status, that is, whether it really invokes structures that are truly and nontrivially representational in nature. In the present article, I argue that the representational pretensions of PCT are completely justified. This is because the theory postulates cognitive structures (namely, action-guiding, detachable, structural models that afford representational error detection) that play genuinely representational functions within the cognitive system.
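The core mechanism invoked here can be given a minimal sketch, under assumptions of my own (a one-dimensional signal, an identity generative model, and the function name perceive are illustrative, not part of the paper): perception is cast as iteratively adjusting an internal estimate so as to shrink the gap between the top-down prediction and the bottom-up input.

def perceive(sensory_input, mu_init=0.0, lr=0.1, steps=50):
    """Toy predictive-coding loop: gradient descent on squared prediction error.

    mu is the internal estimate that generates the top-down prediction;
    the generative model here is simply the identity, so the prediction is mu.
    """
    mu = mu_init
    for _ in range(steps):
        prediction = mu                      # top-down, internally generated
        error = sensory_input - prediction   # bottom-up signal minus prediction
        mu += lr * error                     # nudge the estimate to shrink the error
    return mu

# The internal estimate converges on the sensory input, i.e. prediction error is minimized.
print(perceive(sensory_input=2.5))  # approximately 2.5

In this sketch only the perceptual direction of fit is shown; on the theory as summarized above, action would instead change the sensory input to match the prediction.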
Above and Beyond the Concrete: The Diverse Representational Substrates of the Predictive Brain
Behavioral and Brain Sciences
In recent years, scientists have increasingly taken to investigating the predictive nature of cognition. We argue that prediction relies on abstraction, and thus theories of predictive cognition need an explicit theory of abstract representation. We propose such a theory of the abstract representational capacities that allow humans to transcend the “here-and-now”. Consistent with the predictive cognition literature, we suggest that the representational substrates of the mind are built as a hierarchy, ranging from the concrete to the abstract; however, we argue that there are qualitative differences between elements along this hierarchy, generating meaningful, often unacknowledged, diversity. Echoing views from philosophy, we suggest that the representational hierarchy can be parsed into: modality-specific representations, instantiated on perceptual similarity; multimodal representations, primarily instantiated on the discovery of spatiotemporal contiguity; and categorical representations…
This paper examines the relationship between perceiving and imagining on the basis of predictive processing models in neuroscience. Contrary to the received view in philosophy of mind, which holds that perceiving and imagining are essentially distinct, these models depict perceiving and imagining as deeply unified and overlapping. It is argued that there are two mutually exclusive implications of taking perception and imagination to be fundamentally unified. The view defended is what I dub the ecological-enactive view, given that it does not succumb to internalism about the mind-world relation and allows one to keep a version of the received view in play.
Predictive minds can think: addressing generality and surface compositionality of thought
Synthese, 2022
The predictive processing framework (PP) has found wide application in cognitive science and philosophy. It is an attractive candidate for a unified account of the mind in which perception, action, and cognition fit together in a single model. However, PP cannot claim this role if it fails to accommodate an essential part of cognition: conceptual thought. Recently, Williams (Synthese 1–27, 2018) argued that PP struggles to address at least two of thought’s core properties: generality and rich compositionality. In this paper, I show that neither necessarily presents a problem for PP. In particular, I argue that because we do not have access to cognitive processes but only to their conscious manifestations, compositionality may be a manifest property of thought, rather than a feature of the thinking process, and may result from the interplay of thinking and language. Pace Williams, both of these capacities, constituting parts of a complex and multifarious cognitive system, may be fully based on the architectural principles of PP. Under the assumption that language constitutes a subsystem separate from conceptual thought, I sketch out one possible way for PP to accommodate both generality and rich compositionality.
The metaphysics of Predictive Processing - A non-representational account
2022
This dissertation focuses on generative models in the Predictive Processing framework. It is commonly accepted that generative models are structural representations, i.e. physical particulars representing via structural similarity. Here, I argue that this widespread account is wrong: when closely scrutinized, generative models appear to be non-representational control structures realizing an agent’s sensorimotor skills. The dissertation opens (Ch.1) by introducing the Predictive Processing account of perception and action, and presenting some of its connectionist implementations, thereby clarifying the role generative models play in Predictive Processing. Subsequently, I introduce the conceptual framework guiding the research (Ch.2). I briefly elucidate the metaphysics of representations, emphasizing the specific functional role played by representational vehicles within the systems of which they are part. I close the first half of the dissertation (Ch.3) by introducing the claim that generative models are structural representations, and defending it from intuitive but inconclusive objections. I then move to the second half of the dissertation, switching from exposition to criticism. First (Ch.4), I claim that the argument allegedly establishing that generative models are structural representations is flawed beyond repair, for it fails to establish that generative models are structurally similar to their targets. I then consider alternative ways to establish that structural similarity, showing that they all either fail or violate some other condition individuating structural representations. I further argue (Ch.5) that the claim that generative models are structural representations would not be warranted even if the desired structural similarity were established. For, even if generative models were to satisfy the relevant definition of structural representation, it would still be wrong to consider them as representations. This is because, as currently defined, structural representations fail to play the relevant functional role of representations, and thus cannot be rightfully identified as representations in the first place. This conclusion prompts a direct examination of generative models, to determine their nature (Ch.6). I thus analyze the simplest generative model I know of: a neural network functioning as a robotic “brain” and allowing different robotic creatures to swiftly and intelligently interact with their environments. I clarify how these networks allow the robots to acquire and exert the relevant sensorimotor abilities needed to solve the various cognitive tasks the robots are faced with, and then argue that neither the entire architecture nor any of its parts can possibly qualify as representational vehicles. In this way, the structures implementing generative models are revealed to be non-representational structures that instantiate an agent’s relevant sensorimotor skills. I show that my conclusion generalizes beyond the simple example I considered, arguing that adding computational ingredients to the architecture, or considering altogether different implementations of generative models, will in no way force a revision of my verdict. I further consider and allay a number of theoretical worries that it might generate, and then briefly conclude the dissertation.
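Purely to illustrate the kind of system analyzed in Ch.6, here is a minimal sketch under my own assumptions (the layer sizes, random weights, and two-sensor/two-wheel setup are invented for illustration and are not the dissertation's actual network): a small feedforward mapping from sensor readings to motor commands, with no component singled out as a representational vehicle.

import numpy as np

rng = np.random.default_rng(0)

# Toy "robotic brain": two proximity sensors -> three hidden units -> two wheel speeds.
W1 = rng.normal(size=(3, 2))   # sensor-to-hidden weights
W2 = rng.normal(size=(2, 3))   # hidden-to-motor weights

def control_step(sensors):
    """Map raw sensor readings directly to motor commands (wheel speeds in [-1, 1])."""
    hidden = np.tanh(W1 @ sensors)
    motors = np.tanh(W2 @ hidden)
    return motors

print(control_step(np.array([0.8, 0.1])))  # two wheel speeds for this sensor reading

Whether such a mapping amounts to a generative model, let alone a structural representation, is exactly what the dissertation disputes; the sketch only shows how sensorimotor coupling can be realized without any obviously representational part.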
Predictive Processing and Representation: How Less Can Be More (2020)
The ambitious, mathematically elegant unificatory proposal of Predictive Processing (PP) to account for perception and action seems to have taken the world by storm. Though many different varieties of PP may be distinguished, most of them adhere to representationalism in one form or another. In this paper, we inquire into these representational foundations. We argue that PP is best understood in a non-representational way. We argue that the most popular way of construing representational content in PP, despite pretensions to the contrary, proliferates representations unacceptably. Next we show that PP’s explanatory potential can be retained without positing representations. We thus show that PP can’t have and doesn’t need representations to do its explanatory work, and conclude that our efforts are better placed in furthering the programme of non-representational PP.
From representations in predictive processing to degrees of representational features
Minds and Machines
Whilst the topic of representations is one of the key topics in philosophy of mind, it has only occasionally been noted that representations and representational features may be gradual. Apart from vague allusions, little has been said on what representational gradation amounts to and why it could be explanatorily useful. The aim of this paper is to provide a novel take on gradation of representational features within the neuroscientific framework of predictive processing. More specifically, we provide a gradual account of two features of structural representations: structural similarity and decoupling. We argue that structural similarity can be analysed in terms of two dimensions: number of preserved relations and state space granularity. Both dimensions can take on different values and hence render structural similarity gradual. We further argue that decoupling is gradual in two ways. First, we show that different brain areas are involved in decoupled cognitive processes to a grea...
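A schematic sketch of the two dimensions of structural similarity described above, under my own illustrative assumptions (encoding relations as pairs of integer-labelled states, the scoring rule, and the binning function are not the paper's formalism): similarity comes in degrees depending on how many target relations the model preserves and on how coarsely the state space is carved up.

def preserved_relation_fraction(model_relations, target_relations):
    """Fraction of the target's relations that also hold in the model (0.0 to 1.0)."""
    if not target_relations:
        return 1.0
    return len(model_relations & target_relations) / len(target_relations)

def coarse_grain(relations, bin_size):
    """Lower state-space granularity by binning neighbouring states together."""
    return {(a // bin_size, b // bin_size) for a, b in relations}

target = {(1, 2), (2, 3), (3, 5), (5, 8)}   # relations holding in the represented domain
model = {(1, 3), (2, 3), (4, 5)}            # relations realized in the model

print(preserved_relation_fraction(model, target))   # 0.25 at fine granularity
print(preserved_relation_fraction(coarse_grain(model, 2),
                                  coarse_grain(target, 2)))   # 0.5 after coarse-graining

Both knobs, the number of preserved relations and the granularity at which states are individuated, vary by degree, which is what makes structural similarity gradual on this kind of account.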
Predictive Processing and the Representation Wars
Clark has recently suggested that predictive processing advances a theory of neural function with the resources to put an ecumenical end to the “representation wars” of recent cognitive science. In this paper I defend and develop this suggestion. First, I broaden the representation wars to include three foundational challenges to representational cognitive science. Second, I articulate three features of predictive processing's account of internal representation that distinguish it from more orthodox representationalist frameworks. Specifically, I argue that it posits a resemblance-based representational architecture with organism-relative contents that functions in the service of pragmatic success, not veridical representation. Finally, I argue that internal representation so understood is either impervious to the three anti-representationalist challenges I outline or can actively embrace them.
Predictive Processing and Perception: What does Imagining have to do with it?
Consciousness and Cognition, 2022
Predictive processing (PP) accounts of perception are unique not merely in that they postulate a unity between perception and imagination. Rather, they are unique in claiming that perception should be conceptualised in terms of imagination and that the two involve an identity of neural implementation. This paper argues against this postulated unity, on both conceptual and empirical grounds. Conceptually, the manner in which PP theorists link perception and imagination belies an impoverished account of imagery as cloistered from the external world in its intentionality, akin to a virtual reality, as well as endogenously generated. Yet this ignores a whole class of imagery whose intentionality is directed on the actual environment (projected mental imagery), and it also ignores the fact that imagery may be triggered crossmodally in a bottom-up, stimulus-driven way. Empirically, claiming that imagery and perception share neural circuitry ignores relevant clinical results in this area. These results evidence substantial perception/imagery neural dissociations, most notably in the case of aphantasia. Taken together, the arguments here suggest that PP theorists should substantially temper, if not outright abandon, their claim to a perception/imagination unity.