Milking a spherical cow: Toy models in neuroscience

On the role of theory and modeling in neuroscience

2020

Abstract: In recent years, the field of neuroscience has gone through rapid experimental advances and extensive use of quantitative and computational methods. This accelerating growth has created a need for methodological analysis of the role of theory and the modeling approaches currently used in this field. Toward that end, we start from the general view that the primary role of science is to solve empirical problems, and that it does so by developing theories that can account for phenomena within their domain of application. We propose a commonly-used set of terms - descriptive, mechanistic, and normative - as methodological designations that refer to the kind of problem a theory is intended to solve. Further, we find that models of each kind play distinct roles in defining and bridging the multiple levels of abstraction necessary to account for any neuroscientific phenomenon. We then discuss how models play an important role in connecting theory and experiment, and note the importa...

Vital Brains: The Making and Use of Models in Neuroscience, 7-8 April 2016, Free University Berlin

Using models to understand the human brain is hardly new. The practice has been used widely in the natural and life sciences since the 1800s, and continues today. Yet social scientists have devoted very little study to how the neurosciences develop and use models to better understand what the brain is and how it works, including the complex entanglements between brains and the social. The workshop aims to understand the problems associated with modelling the brain by exploring the history and use of physical models of the brain, the development of digital models and simulations of the brain, and the development and use of animal models in neuroscience. The workshop will tackle four key questions: 1) How are brain models created and used in teaching and research? 2) What are the conceptual benefits and limits of abstracting the brain from its context? 3) What would a 'vital' model of the brain look like? 4) What are the implications of brain modelling for the social?

Mechanistic explanations and animal model simulations in neuroscience

The ‘new mechanists’ assume that discovering neural mechanisms is a major aim of neuroscience, and that this discovery constitutes a process of explaining. They strive to articulate the norms for good explanations. I argue that the normative project of ontic mechanistic explanation is unrealistic; that the epistemological version of mechanistic explanation accounts for the practical epistemological constraints of neuroscience but fails normatively; and hence that we face a dilemma: the view of mechanistic explanation is either methodologically strong but unrealistic, or epistemologically realistic but methodologically weak. I propose that the solution lies in abandoning the idea that the study of mechanisms in neuroscience aims mainly at explaining. Model-building and simulating neural phenomena and mechanisms do not necessarily aim at providing explanations. Additionally, some of the cognitive goals attributed to explanation are actually served by simulation. Thus, the new mechanist approach needs to be revised so as to do better justice to the role of simulation in neuroscience.

The Need for the Emergence of Mathematical Neuroscience: Beyond Computation and Simulation

Frontiers in Computational Neuroscience, 2011

Computational neuroscience, broadly defined, is the mathematical and physical modeling of neural processes at a specific chosen scale, from molecular and cellular to systems, for the purpose of understanding how the brain and related structures represent and process information. The ultimate objective is to provide an understanding of how the organism takes in sensory information, how such information is integrated and used in the brain, and how the output of such processing results in meaningful decisions and behaviors by the organism to allow it to function and thrive in its environment. This endeavor involves the building of computational models that aim to replicate and explain observed or measured data in order to arrive at a deeper understanding of the dynamics of brain function.

Beginning with a set of experimental observations or measurements, a model is postulated that aims to provide a set of rules or relationships that, given the initial experimental observations (or at least part of such a set), would be able to describe and explain some desired aspects or properties of the experimental measurements, such as causal, correlative, or mechanistic relationships between the data and the underlying molecular, cellular, and systems mechanisms that produced it. In general, this process almost always begins with a qualitative "guess" about how the data fit together and what the likely rules are that govern the relationships among them. This is subject to a number of uncontrollable variables, including the amount and quality (e.g., accuracy and precision) of the data; how general or narrow the acquisition conditions were under which the data were collected, which may constrain the generality and applicability of the model; and the degree of understanding and expertise, on the part of the investigator constructing the model, about the neurobiology which the data describe. This qualitative picture of the model is then "translated" into a quantitative mathematical framework, which almost always involves expressing the hypothesized relationships as ordinary or partial differential equations or related objects, such as difference equations, with state variables that evolve in space and/or time.

The model, once constructed, is still nothing more than a guess, and so testing it with the goal of building circumstantial support for it (or against it) is then carried out by numerical simulations of the processes being modeled, often where the answers or outputs are known from experiment and can be compared with the outputs computed by the model. At this point several outcomes are possible, assuming the model is at least partially correct. One possibility is that the model is able to describe the data set used to construct it but cannot make any novel non-trivial predictions or new hypotheses about the system under study. This outcome often provides a modest contribution to the literature that may give some insights into the mechanisms involved if the model, or at least parts of it, can be experimentally tested and validated. A less desirable outcome is where a model contains terms or is constructed in a way such that further experimental testing of the model cannot occur, for example due to limitations in experimental technologies or terms in the mathematics that have no known real-world counterparts. A more productive outcome is when the model results in a novel non-trivial or unexpected experimental hypothesis that can be tested and verified.
This may lead to the design and carrying out of new experiments, and potentially to significant novel experimental findings. In turn, new data sets allow the fine tuning or modification of the model in an iterative way. In all cases, though, the core of the process is the same: one guesses at a model and uses mathematics to justify the guess. The actual validation of the guess is based on numerical simulations, which in an often iterative approach improve the model. Note, however, that in the typical way in which computational neuroscience is practiced, the mathematics involved, while central to the process in an applied sense, is purely descriptive and does not participate in the process of discovery. Given this discussion, we can define computational neuroscience somewhat more provocatively as numerical simulations of postulated models constructed from qualitative hypotheses. In the most limited case this definition extends to numerical simulations of postulated models constructed from unverifiable hypotheses. The computational neuroscience literature is full of beautifully constructed mathematical models that have had minimal impact on mainstream neuroscience or our understanding of brain function because of this.
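To make the "postulate, translate, simulate, compare" loop described above concrete, here is a minimal sketch under assumed conditions: a qualitative guess about membrane charging is written as an ordinary differential equation, integrated numerically, and compared against a stand-in for a measured trace. The model form, parameter values, and "data" are illustrative assumptions, not taken from the article or from any particular study.

```python
# A minimal, hypothetical illustration of the modeling loop: postulate a
# quantitative model (here a leaky-integrator membrane equation, a common first
# guess in cellular neuroscience), simulate it numerically, and compare the
# output with a measured trace. All parameter values are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

tau, E_rest, R = 20.0, -65.0, 10.0   # assumed time constant (ms), resting potential (mV), resistance (MOhm)

def membrane(t, V, I_inj):
    # The postulated "rule": dV/dt = (-(V - E_rest) + R * I_inj) / tau
    return [(-(V[0] - E_rest) + R * I_inj) / tau]

t_eval = np.linspace(0.0, 200.0, 2001)
sim = solve_ivp(membrane, (0.0, 200.0), [E_rest], t_eval=t_eval, args=(1.5,))

# Stand-in for an experimental recording (in practice this comes from data).
measured = -65.0 + 15.0 * (1.0 - np.exp(-t_eval / 20.0)) \
           + np.random.normal(0.0, 0.3, t_eval.size)

# Circumstantial support for (or against) the guess: compare model and data.
rmse = np.sqrt(np.mean((sim.y[0] - measured) ** 2))
print(f"RMSE between simulated and measured voltage: {rmse:.2f} mV")
```

In practice the comparison would drive the iterative refinement the text describes: a poor fit sends the modeler back to the qualitative guess, a good fit invites new, testable predictions.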

Vital Models: The Making and Use of Models in the Brain Sciences

Progress in Brain Research, Elsevier, 2017

In the contemporary neurosciences, particularly with the advent of 'big science' projects like the Human Brain Project in the European Union and the BRAIN Initiative in the United States, brain models are often presented as a powerful way to advance understandings of the brain. For example, the Human Brain Project claims that "Empirical research will enable the formulation of multi-scale theories and predictive neuroinformatics by modeling and simulation to identify organizational principles of spatial and temporal brain architecture" (Amunts et al. 2016). The US BRAIN Initiative seeks to "Shed light on the complex links between brain function and behavior, incorporating new theories and computational models".

Minimal Models and Canonical Neural Computations: The Distinctness of Computational Explanation in Neuroscience

Synthese

In a recent paper, Kaplan (2011) takes up the task of extending Craver’s (2007) mechanistic account of explanation in neuroscience to the new territory of computational neuroscience. He presents the model-to-mechanism mapping (3M) criterion as a condition for a model’s explanatory adequacy. This mechanistic approach is intended to replace earlier accounts which posited a level of computational analysis conceived as distinct and autonomous from underlying mechanistic details. In this paper I discuss work in computational neuroscience that creates difficulties for the mechanist project. Carandini and Heeger (2012) propose that many neural response properties can be understood in terms of canonical neural computations. These are “standard computational modules that apply the same fundamental operations in a variety of contexts.” Importantly, these computations can have numerous biophysical realisations, and so straightforward examination of the mechanisms underlying these computations carries little explanatory weight. Through a comparison between this modelling approach and minimal models in other branches of science, I argue that computational neuroscience frequently employs a distinct explanatory style, namely, efficient coding explanation. Such explanations cannot be assimilated into the mechanistic framework but do bear interesting similarities with evolutionary and optimality explanations elsewhere in biology.
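A concrete sense of what such a canonical computation looks like may help here. One of the operations Carandini and Heeger treat as canonical is divisive normalization, in which each unit's driving input is divided by the summed activity of a normalization pool. The sketch below is a minimal, hypothetical implementation with assumed exponent and semi-saturation values; it is meant only to show that one and the same rule can be reused across contexts.

```python
# Hypothetical sketch of divisive normalization (Carandini & Heeger style):
# each unit's drive is divided by the pooled drive of a normalization pool.
# The exponent n, semi-saturation constant sigma, and the inputs below are
# assumed values chosen only for illustration.
import numpy as np

def divisive_normalization(drive, n=2.0, sigma=1.0):
    """R_j = drive_j**n / (sigma**n + sum_k drive_k**n)."""
    d = np.asarray(drive, dtype=float) ** n
    return d / (sigma ** n + d.sum())

# The same fundamental operation applied in two different (made-up) contexts:
print(divisive_normalization([0.1, 0.5, 3.0]))             # e.g. contrast responses
print(divisive_normalization([2.0, 2.0, 0.2], sigma=0.5))  # e.g. competing stimuli
```

Because the rule is specified at this computational level, it says nothing about which biophysical mechanism implements the division, which is precisely the feature the minimal-model reading of such explanations emphasises.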

Proposal for a nonlinear top-down toy model of the brain

Physica A: Statistical Mechanics and its Applications, 2002

Solutions to Newton's equations for particles in one-body potentials of the form $V_1(x_i) = \sum_p A^{(i)}_{2p} x_i^{2p}$, where $p > 0$ and an integer, can be regarded as generators of infinite sequences of correlated frequencies $\{\omega_m\}$. Simple nonlinear potentials can hence provide efficient ways of generating correlated information. Two-body potentials $V_2(x_i - x_j)$ can provide ways to communicate aspects of that information between the correlated frequency sequences. Temperature and noise can play a role in introducing a time scale across which the frequencies retain their identity. Introduction of explicit time dependence in the energy terms might be appropriate for constructing top-down versions of toy models for the brain, something that is lacking at the present time. The richness of the nonlinear system along with the effects of heat baths, external noise and time dependence allows for the possibility of describing aging effects, processing of information "templates" in the brain and of the development of correlations between such "templates". In short, nonlinearity, interactions, noise effects and introduction of time-dependent energies might allow for the construction of "top-down" models of the brain with the eventual goal of possibly unifying the neurological, molecular biological, biochemical and psychiatric approaches toward studying the brain.
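The generating mechanism in the first sentence can be illustrated numerically. The following is a hypothetical sketch, not code from the paper: a single particle in an even anharmonic potential, keeping only the $A_2$ and $A_4$ terms of the family with made-up coefficients, is integrated with SciPy, and the spectrum of its trajectory shows the fundamental plus the odd harmonics injected by the nonlinearity, i.e. one of the correlated frequency sequences the abstract refers to.

```python
# Hypothetical numerical sketch (not from the paper): integrate Newton's
# equation m*x'' = -dV/dx for one particle in an even anharmonic potential
# V(x) = A2*x**2 + A4*x**4, a truncation of V_1(x) = sum_p A_{2p} x**(2p),
# with assumed coefficients, then inspect the frequencies in the motion.
import numpy as np
from scipy.integrate import solve_ivp

A2, A4, m = 1.0, 0.5, 1.0            # assumed potential coefficients and mass

def rhs(t, y):
    x, v = y
    force = -(2.0 * A2 * x + 4.0 * A4 * x**3)   # F = -dV/dx
    return [v, force / m]

t = np.linspace(0.0, 200.0, 20001)
sol = solve_ivp(rhs, (t[0], t[-1]), y0=[1.0, 0.0], t_eval=t, rtol=1e-9)

# The nonlinearity feeds power into odd harmonics of the fundamental, so a
# single one-body potential already generates a correlated frequency sequence.
spectrum = np.abs(np.fft.rfft(sol.y[0] - sol.y[0].mean()))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
fundamental = freqs[1:][np.argmax(spectrum[1:])]
print(f"fundamental ~ {fundamental:.3f}; odd harmonics appear at 3x, 5x, ...")
```

Coupling several such particles through two-body terms $V_2(x_i - x_j)$, or adding a heat bath and noise, would then correlate the sequences across particles, which is the step the abstract proposes as the communication mechanism of the toy model.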