Between History and Logic

Advertisement for the philosophy of the computational sciences

The Oxford Handbook of Philosophy of Science, edited by Paul Humphreys, 2016

This chapter deals with those fields that study computing systems. Among these computational sciences are computer science, computational cognitive science, computational neuroscience, and artificial intelligence. The first part of the chapter shows that there are varieties of computation, such as human computation, algorithmic machine computation, and physical computation, and that there are even several versions of the Church-Turing thesis. The conclusion is that different computational sciences are often about different kinds of computation. The second part of the chapter discusses three specific philosophical issues. One is whether computers are natural kinds. Another is the nature of computational theories and explanations. The last section of the chapter relates remarkable results in computational complexity theory to problems of verification and confirmation.

Recent Developments in Computing and Philosophy

Because the label "computing and philosophy" can seem like an ad hoc attempt to tie computing to philosophy, it is important to explain why it is not, what it studies (or does), and how it differs from research in, say, "computing and history" or "computing and biology." The American Association for History and Computing is "dedicated to the reasonable and productive marriage of history and computer technology for teaching, researching and representing history through scholarship and public history" (http://theaahc.org). More pervasively, work in computing and biology enjoys the convenient name of "bioinformatics...the science of using information to understand biology..., a subset of the larger field of computational biology, the application of quantitative analytical techniques in modeling biological systems" (http://oreilly.com/catalog/bioskills/chapter/ch01.html). The recent venture of the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers to publish the Transactions on Computational Biology and Bioinformatics (TCBB) bears witness to the reach of computing and biology and underscores its objective. TCBB intends to report "archival research results related to the algorithmic, mathematical, statistical, and computational methods that are central in bioinformatics and computational biology; the development and testing of effective computer programs in bioinformatics; the development and optimization of biological databases; and important biological results that are obtained from the use of these methods, programs, and databases" (http://tcbb.acm.org).

When Logic Meets Engineering: Introduction to Logical Issues in the History and Philosophy of Computer Science

The birth, growth, stabilization, and subsequent understanding of a new field of practical and theoretical enquiry is always a conceptual process comprising several typologies of events, phenomena, and figures, often spanning a long historical period. This is especially true when the field in question is not uniquely identified with academia, the laboratory, or industry. Computing is a paradigmatic case. So diverse and conflicting are its origins that debates on the nature of computer science have characterized its whole history. From its early beginnings onward, computing has been variously labelled a technology, a science, and a form of mathematics. It has been said that computing is a discipline dealing with the machines that compute (see Newell, Perlis and Simon 1967), with the information processed by such machines, or with the algorithms that direct the behaviour of such processes. Today, when computers are so extensively present in our lives, one would expect theoreticians and practitioners in the field of computing to have found at least some consensus on these questions. The opposite is true, however, and there is still much controversy over the scientific, engineering, and experimental qualifications pertaining to the discipline.1 The aim of the present special issue is to investigate these tensions within computer science by focusing on some of the figures and questions at the core of its relation with logic.

Editorial introduction to the Topical Issue “Computer Modeling in Philosophy”

Open Philosophy

The role played by logic in 20th-century philosophy, it can be argued, will be played by computational modeling in the 21st. This special issue is devoted to discussion and analysis, but primarily to examples, of computer-aided or computer-instantiated modeling. Over the past several decades, social epistemology and philosophy of science have been important areas in the development of computational philosophy.1 Here we focus on current work in a wider spread of sub-disciplines: ethics, social philosophy, philosophy of perception, philosophy of mind, metaphysics, and philosophy of religion.

The first two pieces in the collection concentrate on computational techniques and philosophical methodology quite generally. Istvan Berkeley's "The Curious Case of Connectionism" opens the collection with an examination and analysis of three stages in the history of a major theoretical approach that continues in contemporary computational philosophy. He characterizes a first stage of connectionism as ending abruptly with the critique by Minsky and Papert.2 A second stage had an important impact on philosophy, but Berkeley documents its waning influence through the declining appearance of the terms 'connectionism' and 'connectionist' in the Philosopher's Index. He proposes deep learning as a third stage of connectionism, with new computational technologies promising the possibility of important philosophical application.

The search for formal methods of inquiry and discovery, as opposed to mere justification, can be seen historically as a project in Aristotle, Bacon, Leibniz, and Mill. But in the 20th century, at the hands of Popper, Reichenbach, Rawls, and others, that search was largely abandoned. In "The Evaluation of Discovery: Models, Simulation and Search through 'Big Data'," Joseph Ramsey, Kun Zhang, and Clark Glymour argue that the contemporary development of algorithms for search through big data offers a rebirth for formal methods of discovery.
The authors point out, however, that search algorithms also pose a major problem of validation. What we want is output with both high 'precision,' the probability that the hypotheses returned are true, and high 'recall,' the probability that the true hypotheses are returned. How are we to assess precision and recall for causal relations if, as in many cases, our database is huge, the potential correlations are many, but the empirical base available for direct assessment is vanishingly small? Here recourse is often made to modeling or simulation: assessing a search method using not actual data but data simulated from known patterns, substructures, or 'motifs' within a domain. Ramsey, Zhang, and Glymour illustrate the approach with two cases, one from neuroscience and another from astrophysics. They emphasize the inherent risk in the simulated-data strategy, particularly in cases in which sample selection is not automated and not guaranteed to be representative. In specific contexts, with appropriate safeguards, careful application of search algorithms
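The precision/recall assessment described above can be made concrete with a small sketch. The following toy example (not from the article; all edge names and the "recovered" output are illustrative) shows how a causal-search method's precision and recall would be scored against a known ground-truth structure in the simulated-data strategy:

```python
# Illustrative sketch: scoring a search method against a known
# ground truth, as in the simulated-data strategy discussed above.
# The ground truth and the "recovered" edges are invented for the example.

def precision_recall(found, true):
    """found, true: sets of hypothesized / actual causal edges (pairs)."""
    true_positives = found & true
    precision = len(true_positives) / len(found) if found else 0.0
    recall = len(true_positives) / len(true) if true else 0.0
    return precision, recall

# Ground-truth causal edges in a simulated domain (a known 'motif').
truth = {("X", "Y"), ("Y", "Z"), ("W", "Z")}

# Edges a hypothetical search algorithm recovered from simulated data:
# two correct, one false positive, one true edge missed.
recovered = {("X", "Y"), ("Y", "Z"), ("X", "Z")}

p, r = precision_recall(recovered, truth)
print(f"precision = {p:.2f}, recall = {r:.2f}")  # precision = 0.67, recall = 0.67
```

Because the generating structure is known, both quantities are computable exactly; the risk the authors note is that performance on such simulated motifs may not transfer to unrepresentative real samples.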