On the diversity and similarity of mathematical models in science

The Real and the Mathematical in Quantum Modeling: From Principles to Models and from Models to Principles

Frontiers in Physics 5:19, 2017

The history of mathematical modeling outside physics has been dominated by the use of classical mathematical models, C-models, primarily those of a probabilistic or statistical nature. More recently, however, quantum mathematical models, Q-models, based on the mathematical formalism of quantum theory, have become more prominent in psychology, economics, and decision science. The use of Q-models in these fields remains controversial, in part because it is not entirely clear whether Q-models are necessary for dealing with the phenomena in question or whether C-models would still suffice. My aim, however, is not to assess the necessity of Q-models in these fields, but instead to reflect on what the possible applicability of Q-models may tell us about the corresponding phenomena there, vis-à-vis quantum phenomena in physics. In order to do so, I shall first discuss the key reasons for the use of Q-models in physics. In particular, I shall examine the fundamental principles that led to the development of quantum mechanics. Then I shall consider a possible role of similar principles in using Q-models outside physics. Psychology, economics, and decision science borrow already available Q-models from quantum theory, rather than derive them from their own internal principles, while quantum mechanics was derived from such principles, because there was no readily available mathematical model to handle quantum phenomena, although the mathematics ultimately used in quantum mechanics did in fact exist at the time. I shall argue, however, that the principle perspective on mathematical modeling outside physics might help us to understand better the role of Q-models in these fields and possibly to envision new models, conceptually analogous to but mathematically different from those of quantum theory, that may be helpful or even necessary there or in physics itself. I shall, in closing, suggest one possible type of such models, singularized probabilistic models, SP-models, some of which are time-dependent, TDSP-models. The necessity of using such models may change the nature of mathematical modeling in science and, thus, the nature of science, as happened in the case of Q-models, which not only led to a revolutionary transformation of physics but also opened new possibilities for scientific thinking and mathematical modeling beyond physics.
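To make the contrast between C-models and Q-models concrete, here is a minimal numerical sketch (my own illustration, not drawn from the paper; the projectors, angle, and state vector are hypothetical choices) of the kind of Q-model used in decision science: non-commuting projectors combined with the Born rule reproduce question order effects, which no single classical joint distribution over the two answers can capture.

```python
import numpy as np

# Hypothetical two-question survey modeled on a qubit (illustrative only).
# Each yes/no question is a projector onto a one-dimensional subspace.
theta = np.pi / 5                            # angle between the two question subspaces
P_a = np.array([[1.0, 0.0], [0.0, 0.0]])     # projector for "yes" to question A
v = np.array([np.cos(theta), np.sin(theta)])
P_b = np.outer(v, v)                         # projector for "yes" to question B

psi = np.array([np.cos(0.9), np.sin(0.9)])   # hypothetical belief state, ||psi|| = 1

def prob_yes_then_yes(P_first, P_second, state):
    """Born-rule probability of answering 'yes' to both questions, in order."""
    return np.linalg.norm(P_second @ (P_first @ state)) ** 2

p_ab = prob_yes_then_yes(P_a, P_b, psi)      # ask A first, then B
p_ba = prob_yes_then_yes(P_b, P_a, psi)      # ask B first, then A
print(f"P(A yes, then B yes) = {p_ab:.4f}")
print(f"P(B yes, then A yes) = {p_ba:.4f}")  # differs: P_a @ P_b != P_b @ P_a
```

Because the two projectors do not commute, the two orderings yield different probabilities; a C-model would assign a single joint distribution to the two answers and thus predict order invariance.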

Conceptual variables, quantum theory, and statistical inference theory

arXiv (Cornell University), 2020

A different approach towards quantum theory is proposed in this paper. The basis is taken to be conceptual variables: physical variables that may be accessible or inaccessible, i.e., variables to which it may be possible or impossible to assign numerical values. In an epistemic process, the accessible variables are just ideal observations as observed by an actor or by some communicating actors. Group actions are defined on these variables, and using group representation theory this is the basis for developing the Hilbert space formalism here. Operators corresponding to accessible conceptual variables are derived as a result of the formalism, and in the discrete case it is argued that the possible physical values are the eigenvalues of these operators. The Born formula is derived under specific assumptions. The whole discussion here is a supplement to the author's book [1]. The interpretation of quantum states (or eigenvector spaces) implied by this approach is as focused questions to nature together with sharp answers to those questions. Resolutions of the identity are then connected to the questions themselves; these may be complementary in the sense defined by Bohr. This interpretation may be called a general epistemic interpretation of quantum theory. It is similar to Zwirn's recent Convivial Solipsism and to QBism, and, more generally, it can be seen as a concrete implementation of Rovelli's Relational Quantum Mechanics. The focus of the present paper, however, is as much on foundations as on interpretation, and the simple consequences of an epistemic interpretation for some so-called quantum paradoxes are discussed. Connections to statistical inference theory are discussed in a preliminary way, both through an example and through a brief discussion of quantum measurement theory.
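As a minimal sketch of the computation this interpretation centers on (my illustration with standard textbook operators, not code from the paper), a "focused question with a sharp answer" can be represented by a state that is an eigenvector of one operator; the Born formula then gives the answer probabilities for a complementary question:

```python
import numpy as np

# Two complementary yes/no questions on a qubit, represented by the
# non-commuting Pauli operators sigma_z and sigma_x (illustrative choice).
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])

# "Focused question with a sharp answer": the state is an eigenvector of
# sigma_z, i.e., the answer to "what is the value of sigma_z?" is +1.
eigvals_z, eigvecs_z = np.linalg.eigh(sigma_z)
psi = eigvecs_z[:, np.argmax(eigvals_z)]     # eigenvector with eigenvalue +1

# Born formula for the complementary question "what is the value of sigma_x?":
# p(value_i) = |<e_i, psi>|^2 over the eigenvectors e_i of sigma_x.
eigvals_x, eigvecs_x = np.linalg.eigh(sigma_x)
born_probs = np.abs(eigvecs_x.T @ psi) ** 2
for value, p in zip(eigvals_x, born_probs):
    print(f"P(sigma_x = {value:+.0f}) = {p:.2f}")   # 0.50 each: complementary questions
```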

A Unified Scientific Basis for Inference

arXiv (Cornell University), 2012

Every experiment or observational study is made in a context. This context is explicitly taken into account in this book. To do so, a conceptual variable is defined as any variable which can be defined by (a group of) researchers in a given setting. Such variables are classified. Sufficiency and ancillarity are defined conditionally on the context. The conditionality principle, the sufficiency principle and the likelihood principle are generalized, and a tentative rule for when one should not condition on an ancillary is motivated by examples. The theory is illustrated by the case where a nuisance parameter is part of the context, and for this case, model reduction is motivated. Model reduction is discussed in general from the point of view that there exists a mathematical group acting upon the parameter space. It is shown that a natural extension of this discussion also gives a conceptual basis from which essential parts of the formalism of quantum mechanics can be derived. This implies an epistemological basis for quantum theory, a kind of basis that has also been advocated by part of the quantum foundations community in recent years. Born's celebrated formula is shown to follow from a focused version of the likelihood principle together with some reasonable assumptions on rationality connected to experimental evidence. Some statistical consequences of Born's formula are sketched. The questions around Bell's inequality are approached by using the conditionality principle for each observer. The objective aspects of the world are identified with the ideal inference results upon which all observers agree (epistemological objectivity).
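A standard example behind the conditionality discussion can be checked numerically (a sketch of a textbook case, not an example taken from the book): in a normal location model, the configuration statistic, the vector of residuals x_i − x̄, is ancillary, i.e., its distribution does not depend on the unknown location, so inference may be carried out conditionally on it.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_spread(mu, n=5, reps=100_000):
    """Monte Carlo summary of the distribution of the ancillary
    configuration statistic (x_i - xbar) in a N(mu, 1) location model."""
    x = rng.normal(loc=mu, scale=1.0, size=(reps, n))
    residuals = x - x.mean(axis=1, keepdims=True)
    return residuals.std()

# The summary is (up to Monte Carlo error) the same for every location mu,
# which is what makes the residual vector ancillary:
for mu in (-3.0, 0.0, 10.0):
    print(f"mu = {mu:5.1f}: std of residuals = {residual_spread(mu):.4f}")
```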

This paper was given as a talk at the conference Foundations of Probability and Physics, organized by A. Khrennikov

2001

The acquisition and representation of basic experimental information under the probabilistic paradigm is analysed. The multinomial probability distribution is identified as governing all scientific data collection, at least in principle. For this distribution there exist unique random variables whose standard deviation becomes asymptotically invariant of physical conditions. Representing all information by means of such random variables gives the quantum mechanical probability amplitude and a real alternative. For predictions, the linear evolution law (Schrödinger or Dirac equation) turns out to be the only way to extend the invariance property of the standard deviation to the predicted quantities. This indicates that quantum theory originates in the structure of gaining pure, probabilistic information, without any mechanical underpinning.
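The invariance property referred to above can be illustrated numerically (a sketch under my own choice of parameters; the paper's derivation is more general). The raw relative frequency k/n from a binomial experiment has standard deviation √(p(1−p)/n), which depends on the physical condition p, whereas the variance-stabilized variable arcsin(√(k/n)), closely related to the probability amplitude √p, has standard deviation approximately 1/(2√n) for every p.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000          # trials per experiment
reps = 50_000       # Monte Carlo repetitions

# For counts k ~ Binomial(n, p), the transformed variable arcsin(sqrt(k/n))
# has SD ≈ 1/(2 sqrt(n)) regardless of p: its standard deviation is
# asymptotically invariant of the physical condition p.
for p in (0.1, 0.5, 0.9):
    k = rng.binomial(n, p, size=reps)
    stabilized = np.arcsin(np.sqrt(k / n))
    print(f"p = {p}: std = {stabilized.std():.5f} "
          f"(theory: {1 / (2 * np.sqrt(n)):.5f})")
```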

The Statistical Origins of Quantum Mechanics

Physics Research International, 2010

It is shown that Schrödinger's equation may be derived from three postulates. The first is a kind of statistical metamorphosis of classical mechanics, a set of two relations which are obtained from the canonical equations of particle mechanics by replacing all observables by statistical averages. The second is a local conservation law of probability with a probability current which takes the form of a gradient. The third is a principle of maximal disorder as realized by the requirement of minimal Fisher information. The rule for calculating expectation values is obtained from a fourth postulate, the requirement of energy conservation in the mean. The fact that all these basic relations of quantum theory may be derived from premises which are statistical in character is interpreted as a strong argument in favor of the statistical interpretation of quantum mechanics. The structures of quantum theory and classical statistical theories are compared, and some fundamental differences ...
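For orientation, the postulates can be written compactly as follows (my gloss using standard forms of these relations and functionals; the paper's own notation and normalizations may differ):

```latex
% Statistical postulates behind the derivation (sketch, standard notation):
\begin{align*}
  &\text{(1) statistical conditions:} &&
    \frac{\mathrm{d}}{\mathrm{d}t}\,\overline{x} = \frac{\overline{p}}{m},
    \qquad
    \frac{\mathrm{d}}{\mathrm{d}t}\,\overline{p} = \overline{-\,\partial_x V}, \\
  &\text{(2) conservation of probability:} &&
    \partial_t \rho + \partial_x\!\Big(\rho\,\frac{\partial_x S}{m}\Big) = 0
    \quad \text{(current of gradient form)}, \\
  &\text{(3) maximal disorder:} &&
    I[\rho] = \int \frac{(\partial_x \rho)^2}{\rho}\,\mathrm{d}x
    \;\longrightarrow\; \min
    \quad \text{(minimal Fisher information)}.
\end{align*}
% Together these lead to the Schr\"odinger equation for
% \psi = \sqrt{\rho}\, e^{\mathrm{i} S / \hbar}.
```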

Quantum Mechanics from Symmetry and Statistical Modeling

1999

Quantum theory is derived from a set of plausible assumptions related to the following general setting: For a given system there is a set of experiments that can be performed, and for each such experiment an ordinary statistical model is defined. The parameters of the single experiments are functions of a hyperparameter, which defines the state of the system. There is a symmetry group acting on the hyperparameters, and for the induced action on the parameters of the single experiments a simple consistency property is assumed, called permissibility of the parametric function. The other assumptions needed are rather weak. The derivation relies partly on quantum logic, partly on a group representation of the hyperparameter group, where the invariant spaces are shown to be in 1-1 correspondence with the equivalence classes of permissible parametric functions. Planck's constant plays a role only in connection with the generators of unitary group representations.
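The consistency property can be stated compactly (my paraphrase of the definition used in this line of work; the paper's exact formulation may differ in detail): a parametric function λ on the hyperparameter space Φ is permissible under the group G when equality of λ-values is preserved by the group action, so that G induces a well-defined action on the range of λ.

```latex
% Permissibility of a parametric function \lambda on \Phi (paraphrase):
\lambda(\phi_1) = \lambda(\phi_2)
\;\Longrightarrow\;
\lambda(g\phi_1) = \lambda(g\phi_2)
\qquad \text{for all } g \in G,\ \phi_1, \phi_2 \in \Phi .
```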