A functional model of sensemaking in a neurocognitive architecture

The Network Architecture of Cortical Processing in Visuo-spatial Reasoning

Scientific Reports, 2012

Reasoning processes have been closely associated with prefrontal cortex (PFC), but specifically emerge from interactions among networks of brain regions. Yet it remains a challenge to integrate these brain-wide interactions in identifying the flow of processing emerging from sensory brain regions to abstract processing regions, particularly within PFC. Functional magnetic resonance imaging data were collected while participants performed a visuo-spatial reasoning task. We found increasing involvement of occipital and parietal regions together with caudal-rostral recruitment of PFC as stimulus dimensions increased. Brain-wide connectivity analysis revealed that interactions between primary visual and parietal regions predominantly influenced activity in frontal lobes. Caudal-to-rostral influences were found within left-PFC. Right-PFC showed evidence of rostral-to-caudal connectivity in addition to relatively independent influences from occipito-parietal cortices. In the context of hierarchical views of PFC organization, our results suggest that a caudal-to-rostral flow of processing may emerge within PFC in reasoning tasks with minimal top-down deductive requirements.

Reasoning is a cognitive process associated with making logical inferences about experienced phenomena in the surrounding world. As an everyday driving example, imagine a traffic light turns yellow with some distance left to the intersection. Integrating this information indicates an upcoming red light, so the traffic rules require bringing the car to a stop. Reasoning involves making connections between sensory information, integrated features (i.e., perception of sensory information) and rules. Prefrontal cortex (PFC) plays a key role in mediating these interacting processes through interplay between multiple prefrontal areas and other, more specialized, cortical regions 1.
Thus, our approach to characterize PFC activity related to reasoning involved a brain-wide causal network analysis that combined functional connectivity characteristics with estimates of the distribution of activity between and within cortical regions. This approach allowed for the detection of the "processing flow" across the brain and, in particular, the direction of influence along the rostral-caudal axis of PFC in the context of reasoning tasks. Recent studies have suggested an "abstractness" hierarchy along the rostral-caudal axis of PFC 2,3. According to this top-down view, rostral areas are associated with more abstract processes such as maintenance of goals and rules, predominantly influencing caudal PFC activity that mediates domain-specific motor representations and sensory feature representations. In reasoning tasks, similar cognitive resources are expected to be engaged when the objective is to process and integrate sensory features (e.g. tracking and identifying changes in a shape) and to verify certain rules (e.g. whether the shape rotates clockwise). However, the flow of processing within PFC regions mediating the sequence of reasoning components has not yet been explored. More specifically, when performance rules must be verified on the basis of stimulus configurations, an overall caudal-to-rostral processing flow may be predicted, wherein domain-specific areas integrate stimulus features and communicate information to rule-processing regions.

Raven's Progressive Matrices (RPM) is a commonly used intelligence test that assesses human reasoning 4-6. In RPM, participants are required to select an answer choice that best completes a progression of one or multiple rules among a series of visual items. Due to the multimodal nature of RPM 7 and the possibility of confounding variability in individuals' approaches to solving reasoning problems, reliable assessment of the contributing cognitive resources becomes extremely challenging.
In response to these concerns, we designed a visuo-spatial reasoning task (VSRT) that required participants to perform a sequence of sensory processing, feature integration, and rule verification steps. For sensory processing, participants were asked to visually inspect the shapes presented within three panels in each trial. In addition, through feature integration, they had to identify change patterns across the panels (e.g., identifying a revolving shape). Finally, in the rule verification step, the identified patterns were compared against a collection of known rules (see Results and Methods for more details). This task enabled

Architecture of Explanatory Inference in the Human Prefrontal Cortex

Frontiers in Psychology, 2011

Cognitive psychologists have therefore increasingly recognized the importance of investigating the psychology of explanation. We suggest that category learning, typicality judgments, reasoning, and conceptual coherence are strongly interconnected, and that our beliefs about the causal powers of objects, events, and agents, and about the rule-like causal relationships among them, are central to the generation and evaluation of the myriad ways in which we interpret, understand and explain ourselves and our environment. Parallel developments in cognitive neuroscience have fostered the study of the neural mechanisms underlying explanation. For instance, the resurgence of cognitive simulation theories has motivated neuroscience models of explanatory inference based on the simulation of modality-specific components of experience (e.g., ...)

An impressive and ambitious new cognitive architecture that integrates cognitive modeling with biological reality

The scientific study of cognition can be broken down into a hierarchy of levels, though what follows is by no means an exhaustive list. Basic empiricism, the testing of hypotheses against observable data, sits near the bottom. Next up the ladder is theory-driven empiricism, in which hypotheses are grounded in a particular theoretical approach, and results can serve as pieces of an overarching puzzle or framework. Another level up is the place for quantitative modeling, wherein theories are stated in highly specified and highly testable terms. Such models can be fit to data, and the successes or failures of such model fitting direct the model toward ever-closer approximations of some aspect of cognition or behavior.
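The model-fitting level described above can be made concrete with a minimal sketch: a one-parameter exponential forgetting model fit to recall data by grid search over its decay rate. The data values and the choice of model are invented for illustration, not drawn from any of the papers collected here.

```python
import math

# Hypothetical recall proportions at four retention intervals.
times = [1, 2, 4, 8]
recall = [0.70, 0.52, 0.30, 0.09]

def sse(a):
    """Sum of squared errors between model p(t) = exp(-a*t) and the data."""
    return sum((math.exp(-a * t) - r) ** 2 for t, r in zip(times, recall))

# Grid search: evaluate candidate decay rates, keep the best-fitting one.
best_a = min((a / 1000 for a in range(1, 1001)), key=sse)
print(round(best_a, 3))
```

The fit's residual error is exactly the kind of success-or-failure signal that, iterated, pushes a quantitative model toward better approximations of the behavior.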

Goldstone, R. L., de Leeuw, J. R., & Landy, D. H. (2015). Fitting Perception in and to Cognition. Cognition, 135, 24-29.

Perceptual modules adapt at evolutionary, lifelong, and moment-to-moment temporal scales to better serve the informational needs of cognizers. Perceptual learning is a powerful way for an individual to become tuned to frequently recurring patterns in its specific local environment that are pertinent to its goals without requiring costly executive control resources to be deployed. Mechanisms like predictive coding, categorical perception, and action-informed vision allow our perceptual systems to interface well with cognition by generating perceptual outputs that are systematically guided by how they will be used. In classic conceptions of perceptual modules, people have access to the modules’ outputs but no ability to adjust their internal workings. However, humans routinely and strategically alter their perceptual systems via training regimes that have predictable and specific outcomes. In fact, employing a combination of strategic and automatic devices for adapting perception is one of the most promising approaches to improving cognition.

Individuation, counting, and statistical inference: The role of frequency and whole-object representations in judgment under uncertainty

Journal of Experimental Psychology: General, 1998

Evolutionary approaches to judgment under uncertainty have led to new data showing that untutored subjects reliably produce judgments that conform to many principles of probability theory when (a) they are asked to compute a frequency instead of the probability of a single event, and (b) the relevant information is expressed as frequencies. But are the frequency-computation systems implicated in these experiments better at operating over some kinds of input than others? Principles of object perception and principles of adaptive design led us to propose the individuation hypothesis: that these systems are designed to produce well-calibrated statistical inferences when they operate over representations of "whole" objects, events, and locations. In a series of experiments on Bayesian reasoning, we show that human performance can be systematically improved or degraded by varying whether a correct solution requires one to compute hit and false-alarm rates over "natural" units, such as whole objects, as opposed to inseparable aspects, views, and other parsings that violate evolved principles of object construal. The ability to make well-calibrated probability judgments depends, at a very basic level, on the ability to count. The ability to count depends on the ability to individuate the world: to see it as composed of discrete entities. Research on how people individuate the world is, therefore, relevant to understanding the statistical inference mechanisms that govern how people make judgments under uncertainty. Computational machinery whose architecture is designed to parse the world and make inferences about it is under intensive study in many branches of psychology: perception, psychophysics, cognitive development, cognitive neuroscience ...
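The frequency-format effect the abstract describes can be illustrated with a worked Bayesian problem (the numbers are hypothetical, in the style of the classic diagnostic-test example). The single-event probability version and the whole-object count version are mathematically identical; the claim is only that people find the count version far easier to reason about.

```python
# Probability format: base rate, hit rate, and false-alarm rate
# expressed as single-event probabilities (hypothetical values).
base_rate, hit_rate, fa_rate = 0.01, 0.80, 0.10
posterior = (base_rate * hit_rate) / (
    base_rate * hit_rate + (1 - base_rate) * fa_rate
)

# Frequency format: the same information as counts of discrete individuals.
# Out of 1000 people, 10 have the condition and 8 of them test positive;
# 99 of the 990 unaffected people also test positive.
sick_pos = 1000 * base_rate * hit_rate          # true positives
healthy_pos = 1000 * (1 - base_rate) * fa_rate  # false positives
posterior_freq = sick_pos / (sick_pos + healthy_pos)

print(round(posterior, 3), round(posterior_freq, 3))  # both ≈ 0.075
```

Note that the count version never mentions a probability: it only requires individuating and counting whole people, which is exactly the operation the individuation hypothesis says the evolved machinery is tuned for.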

Dual processes, probabilities, and cognitive architecture

Mind & Society, 2012

It has been argued that dual process theories are not consistent with Oaksford and Chater's probabilistic approach to human reasoning (Oaksford and Chater in Psychol Rev 101:608-631, 1994, 2007), which has been characterised as a "single-level probabilistic treatment[s]" (Evans 2007). In this paper, it is argued that this characterisation conflates levels of computational explanation. The probabilistic approach is a computational level theory which is consistent with theories of general cognitive architecture that invoke a working memory (WM) system and a long-term memory (LTM) system. That is, it is a single function dual process theory which is consistent with dual process theories like Evans' (2007) that use probability logic (Adams 1998) as an account of analytic processes. This approach contrasts with dual process theories which propose an analytic system that respects standard binary truth functional logic (Heit and Rotello in J Exp Psychol ...)

Probabilistic models of cognition: exploring representations and inductive biases

Trends in Cognitive Sciences, 2010

Cognitive science aims to reverse-engineer the mind, and many of the engineering challenges the mind faces involve induction. The probabilistic approach to modeling cognition begins by identifying ideal solutions to these inductive problems. Mental processes are then modeled using algorithms for approximating these solutions, and neural processes are viewed as mechanisms for implementing these algorithms, with the result being a top-down analysis of cognition starting with the function of cognitive processes. ...
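The top-down strategy sketched above, where an inductive problem is first given an ideal Bayesian solution, can be illustrated with a toy concept-learning example in the spirit of this literature. The hypothesis space and examples below are our own invention for illustration; the "size principle" likelihood (examples sampled uniformly from the true concept) is a standard assumption in such models.

```python
# Induction as Bayesian inference over a hypothesis space:
# which number concept generated the observed examples?
hypotheses = {
    "even": {2, 4, 6, 8, 10, 12, 14, 16},
    "powers_of_2": {2, 4, 8, 16},
    "all_1_16": set(range(1, 17)),
}
prior = {h: 1 / len(hypotheses) for h in hypotheses}

def posterior(data):
    """P(h | data), with examples sampled uniformly from the concept."""
    # Size principle: likelihood (1/|h|)^n if all examples fit, else 0.
    like = {
        h: (1 / len(s)) ** len(data) if set(data) <= s else 0.0
        for h, s in hypotheses.items()
    }
    z = sum(prior[h] * like[h] for h in hypotheses)
    return {h: prior[h] * like[h] / z for h in hypotheses}

post = posterior([2, 8, 16])
# Smaller consistent hypotheses win: "powers_of_2" dominates.
```

This is the computational-level analysis; the psychological and neural questions are then how such posteriors are approximated by tractable algorithms and implemented in neural mechanisms.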

A cognitive model based on representations that are spatial functions

This paper outlines a cognitive model in which internal representations are spatial functions, and in which the associated process model is governed by distance in psychological space. Motivation for the model comes from the role of similarity judgements in human reasoning, and the apparent ability of humans to create task-dependent features about the concepts used in reasoning. Motivation also comes from the promise that neuroimages might be interpretable in terms of the conceptual tasks in which the person was engaged at the time of imaging. The creation of task-dependent features to aid problem solving is demonstrated in a categorisation task.
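A process model governed by distance in psychological space can be sketched minimally as follows. This is our illustration under standard assumptions from the similarity literature (similarity as an exponential function of distance, categorisation by summed similarity to stored exemplars), not the paper's own implementation; all exemplar values are hypothetical.

```python
import math

def distance(x, y):
    """Euclidean distance between two points in psychological space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def similarity(x, y, sensitivity=1.0):
    """Similarity decays exponentially with psychological distance."""
    return math.exp(-sensitivity * distance(x, y))

def categorise(probe, exemplars):
    """Choose the category whose stored exemplars are most similar in aggregate."""
    return max(exemplars, key=lambda cat: sum(similarity(probe, e) for e in exemplars[cat]))

exemplars = {  # hypothetical two-feature exemplars per category
    "A": [(0.1, 0.2), (0.2, 0.1)],
    "B": [(0.9, 0.8), (0.8, 0.9)],
}
print(categorise((0.25, 0.3), exemplars))  # probe lies nearer category A
```

Task-dependent features, in this framing, amount to reweighting or transforming the dimensions of the space so that distances relevant to the current task are stretched and irrelevant ones compressed.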