A General Non-Probabilistic Theory of Inductive Reasoning

Epistemic Objectivity behind Inductive Probability: Beyond Carnap-Popper Controversy on the Problem of Inductive Logic

2019

Science aims neither at holding a monopoly over the truth about the world nor at establishing a dogmatic body of knowledge. The natural light of experience is held by empiricists to be the reliable source of human knowledge. Inductive logic has been a leading tool of empirical experiment in justifying and confirming scientific theories with evidence. Science could not have reached where it is today without inductive logic; it has therefore played an important role in making science what it is. Inductive logic helps science justify its theories not from the personal convictions of scientists but from factual propositions. However, inductive logic has been problematic in the sense that its logic of justification led philosophers of science to demarcation, the distinction of episteme from doxa. At present, some philosophers of science and scientists attempt to explain why science yields reliable knowledge. Some have argued for the structuralism and realism of scientific theories rather than appealing to miracles, and others for their historicity. Both views are explanations of how science works and progresses. This essay recalls the arguments for the structures of scientific theories and for their historicity. First, the essay analyses the controversy between Rudolf Carnap and Karl Popper over how the problem of inductive logic in confirming scientific theories can be solved; in doing so, it draws on empirical probabilities as well as the calculus of limits. Second, the essay merges frequentist and Bayesian approaches to determine how scientific theories are to be confirmed or refuted. Third, a new form of Bayes' Theorem is used to show how mathematical and logical structures respond to some of the important questions that arise from the historical and realistic views of scientific theories.
The essay argues for an epistemic objectivity behind inductive probability, the key issue of the controversy in question, and proves that the truth about the world is symmetric.

Keywords: Science; Induction; Probability; Demarcation; Deduction; Frequentism; Bayesianism.

Cognitive Foundations of Inductive Inference and Probability: An Axiomatic Approach

2000

We suggest an axiomatic approach to the way in which past cases, or observations, are or should be used for making predictions and for learning. In our model, a predictor is asked to rank eventualities based on possible memories. A "memory" consists of repetitions of past cases, and can be identified with a vector, attaching a nonnegative integer (number of occurrences) to each case. Mild consistency requirements on these rankings imply that they have a numerical representation that is linear in the number of case repetitions. That is, there exists a matrix assigning numbers to eventuality-case pairs, such that, for every memory vector, multiplication of the matrix by the vector yields a numerical representation of the ordinal plausibility ranking given that memory. Interpreting this result for the ranking of theories or hypotheses, rather than of specific eventualities, it is shown that one may ascribe to the predictor subjective conditional probabilities of cases given theori...
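The linear representation described in the abstract can be illustrated with a small numerical sketch. The eventualities, cases, and weight matrix below are invented for illustration; only the form (matrix times memory vector yields a plausibility ranking) comes from the abstract:

```python
import numpy as np

# A matrix V assigns a number to each (eventuality, case) pair.
# Multiplying V by a memory vector m (counts of case occurrences)
# yields scores whose order is the plausibility ranking.

eventualities = ["rain", "sun"]
cases = ["wet_morning", "dry_morning", "cloudy_morning"]

# Hypothetical weights: support each case lends to each eventuality.
V = np.array([
    [2.0, -1.0, 0.5],   # rain
    [-1.5, 2.0, 0.0],   # sun
])

m = np.array([3, 1, 2])  # memory: number of occurrences of each case

scores = V @ m           # linear in the number of case repetitions
ranking = [eventualities[i] for i in np.argsort(-scores)]
print(scores, ranking)
```

Note that the representation is ordinal: any matrix whose scores preserve the same ordering for every memory vector represents the same ranking.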

From Bayesian epistemology to inductive logic

Journal of Applied Logic, 2013

Inductive logic admits a variety of semantics (Haenni et al., 2011, Part 1). This paper develops semantics based on the norms of Bayesian epistemology (Williamson, 2010, Chapter 7). §1 introduces the semantics and then, in §2, the paper explores methods for drawing inferences in the resulting logic and compares the methods of this paper with the methods of Barnett and Paris (2008). §3 then evaluates this Bayesian inductive logic in the light of four traditional critiques of inductive logic, arguing (i) that it is language independent in a key sense, (ii) that it admits connections with the Principle of Indifference but these connections do not lead to paradox, (iii) that it can capture the phenomenon of learning from experience, and (iv) that while the logic advocates scepticism with regard to some universal hypotheses, such scepticism is not problematic from the point of view of scientific theorising. §1 Bayesian Epistemology as Semantics for Inductive Logic This section introduces the use of Bayesian epistemology as semantics for inductive logic. The material presented here is based on Williamson (2010), to which the reader is referred for more details. ¶ Bayesian Epistemology: A Primer. At root, Bayesian epistemology concerns the question of how strongly one should believe the various propositions that one can express. The Bayesian theory that answers this question can be developed in a number of ways, but it is usual to base the theory on the betting interpretation of degrees of belief. According to the betting interpretation, one believes proposition θ to degree x iff, were one to offer a betting quotient for θ (a number q such that one would pay qS to receive S in return should θ turn out to be true, where the unknown stake S ∈ R may depend on q), then q = x.
This interpretation of degrees of belief naturally goes hand in hand with the claim that, were one to bet according to one's degrees of belief via the betting interpretation, then one shouldn't expose oneself to avoidable losses. In particular, arguably one's degrees of belief should minimise worst-case expected loss.
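A toy calculation can illustrate why minimising worst-case expected loss pushes the betting quotient toward the chance of θ. The numbers are invented, and stakes are bounded in [-1, 1] for simplicity (the abstract allows any real stake); this is a sketch of the idea, not the paper's own derivation:

```python
# Bettor pays q*S up front and receives S if θ turns out true.
# If θ is true with chance p, the expected loss works out to (q - p) * S,
# so with stakes in [-1, 1] the worst case is |q - p|, minimised at q = p.

def expected_loss(q, p, S):
    # loss is q*S - S when θ is true, q*S when it is false
    return p * (q * S - S) + (1 - p) * (q * S)

def worst_case_loss(q, p):
    # expected loss is linear in S, so extremes of [-1, 1] suffice
    return max(expected_loss(q, p, S) for S in (-1.0, 1.0))

p = 0.3
losses = {q / 10: worst_case_loss(q / 10, p) for q in range(11)}
best_q = min(losses, key=losses.get)
print(best_q)
```

On this grid of candidate quotients, the quotient matching the chance p incurs zero worst-case expected loss, while every other quotient can be exploited by a suitable choice of stake.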

Induction as conditional probability judgment

Memory & Cognition, 2007

Studies of inductive inference are usually framed in terms of projecting an unfamiliar (blank) property from one category to another, as in 1. Wolves have sesamoid bones, therefore bears have sesamoid bones.

Two Views of Belief: Belief as Generalized Probability and Belief as Evidence

Artificial Intelligence, 1992

Belief functions are mathematical objects defined to satisfy three axioms that look somewhat similar to the Kolmogorov axioms defining probability functions. We argue that there are (at least) two useful and quite different ways of understanding belief functions. The first is as a generalized probability function (which technically corresponds to the inner measure induced by a probability function). The second is as a way of representing evidence. Evidence, in turn, can be understood as a mapping from probability functions to probability functions. It makes sense to think of updating a belief if we think of it as a generalized probability. On the other hand, it makes sense to combine two beliefs (using, say, Dempster's rule of combination) only if we think of the belief functions as representing evidence. Many previous papers have pointed out problems with the belief function approach; the claim of this paper is that these problems can be explained as a consequence of confounding these two views of belief functions.
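Dempster's rule of combination, mentioned in the abstract, can be sketched on a two-element frame. The mass functions below are invented for illustration; a mass function here maps frozensets (focal elements) to nonnegative masses summing to one:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: intersect focal elements, multiply masses,
    and renormalise away the mass that lands on the empty set."""
    raw = {}
    conflict = 0.0
    for (A, x), (B, y) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            raw[C] = raw.get(C, 0.0) + x * y
        else:
            conflict += x * y  # conflicting evidence
    return {A: v / (1 - conflict) for A, v in raw.items()}

a, b, ab = frozenset("a"), frozenset("b"), frozenset("ab")
m1 = {a: 0.6, ab: 0.4}   # evidence mostly supporting a
m2 = {b: 0.5, ab: 0.5}   # evidence mostly supporting b
m12 = combine(m1, m2)
print(m12)
```

On the paper's reading, an operation like this makes sense only when belief functions are taken to represent evidence, not when they are read as generalized probabilities.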

Direct Inference and Probabilistic Accounts of Induction

Journal for General Philosophy of Science

Schurz (2019, ch. 4) argues that probabilistic accounts of induction fail. In particular, he criticises probabilistic accounts of induction that appeal to direct inference principles, including subjective Bayesian approaches (e.g., Howson 2000) and objective Bayesian approaches (see, e.g., Williamson 2017). In this paper, I argue that Schurz’ preferred direct inference principle, namely Reichenbach’s Principle of the Narrowest Reference Class, faces formidable problems in a standard probabilistic setting. Furthermore, the main alternative direct inference principle, Lewis’ Principal Principle, is also hard to reconcile with standard probabilism. So, I argue, standard probabilistic approaches cannot appeal to direct inference to explicate the logic of induction. However, I go on to defend a non-standard objective Bayesian account of induction: I argue that this approach can both accommodate direct inference and provide a viable account of the logic of induction. I then defend this ac...

Naive probability: A mental model theory of extensional reasoning

Psychological Review, 1999

This article outlines a theory of naive probability. According to the theory, individuals who are unfamiliar with the probability calculus can infer the probabilities of events in an extensional way: They construct mental models of what is true in the various possibilities. Each model represents an equiprobable alternative unless individuals have beliefs to the contrary, in which case some models will have higher probabilities than others. The probability of an event depends on the proportion of models in which it occurs. The theory predicts several phenomena of reasoning about absolute probabilities, including typical biases. It correctly predicts certain cognitive illusions in inferences about relative probabilities. It accommodates reasoning based on numerical premises, and it explains how naive reasoners can infer posterior probabilities without relying on Bayes's theorem. Finally, it dispels some common misconceptions of probabilistic reasoning.
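The model-proportion idea at the heart of the theory can be sketched in a few lines. The premise and the set of mental models below are invented for illustration; the mechanism (equiprobable models, probability as the proportion of models in which the event holds) comes from the abstract:

```python
from fractions import Fraction

# Each "mental model" is one way the premises could be true.
# Premise (invented): a box holds a red or a green marble,
# and possibly also a blue one.
models = [
    {"red"},
    {"green"},
    {"red", "blue"},
    {"green", "blue"},
]

def naive_probability(event, models):
    # Absent beliefs to the contrary, models are equiprobable,
    # so P(event) is the proportion of models containing it.
    favourable = sum(1 for m in models if event in m)
    return Fraction(favourable, len(models))

print(naive_probability("blue", models))
```

Biases then arise naturally when reasoners fail to flesh out all the models, or when some models should carry unequal weight but are treated as equiprobable.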

Mutual Influence between Different Views of Probability and Statistical Inference

PARADIGMA

In this paper, we analyse the various meanings of probability and its different applications, and we focus especially on the classical, the frequentist, and the subjectivist view. We describe the different problems of how probability can be measured in each of the approaches, and how each of them can be well justified by a mathematical theory. We analyse the foundations of probability, where the scientific analysis of the theory that allows for a frequentist interpretation leads to unsolvable problems. Kolmogorov’s axiomatic theory does not suffice to establish statistical inference without further definitions and principles. Finally, we show how statistical inference essentially determines the meaning of probability, and a shift emerges from purely objectivist views to a complementary conception of probability with frequentist and subjectivist constituents. For didactical purposes, the results of the present analyses explain basic problems of teaching, originating from a biased focus...