On the Correspondence Between Proofs and λ-Terms
Related papers
Mechanical procedure for proof construction via closed terms in typed λ calculus
Journal of Automated Reasoning, 1988
This paper presents an algorithm for constructing natural deduction proofs in propositional intuitionistic and classical logic, based on the analogy relating intuitionistic propositional formulas and natural deduction proofs to types and terms of simple type theory, respectively. Proofs are constructed as closed terms of the simply typed λ-calculus. The soundness and completeness of this method are proved.
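As a hedged illustration (not taken from the paper itself), the propositions-as-types analogy the abstract relies on can be sketched in Python, reading each closed lambda term as a proof of the implicational formula given by its type:

```python
# Under the propositions-as-types analogy, a closed term inhabiting a
# simple type is a natural deduction proof of the corresponding
# implicational formula (illustrative sketch, not the paper's algorithm).

# lambda x. x  proves  A -> A
identity = lambda x: x

# lambda x. lambda y. x  proves  A -> (B -> A)        (axiom K)
k = lambda x: lambda y: x

# lambda f. lambda g. lambda x. (f x)(g x)
# proves  (A -> (B -> C)) -> ((A -> B) -> (A -> C))   (axiom S)
s = lambda f: lambda g: lambda x: f(x)(g(x))
```

Constructing such a closed term for a given type, or reporting that none exists, is exactly the proof-search problem the paper's algorithm addresses.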
On Proofs and Types in Second Order Logic
2015
In my dissertation I address some questions concerning the proof theory of second order logic and its constructive counterpart, System F (Girard 1971). These investigations follow two distinct (though historically related) viewpoints in proof theory, which are compared throughout the text: on the one side, the proof theoretic semantics tradition inaugurated by Dummett and Prawitz (Prawitz 1971, Dummett 1991), focusing on the analysis of the inferential content of proofs; on the other side, the interactionist tradition arising from Kleene's realizability (Kleene 1945) and the Tait/Girard reducibility technique (Tait 1967, Girard 1971), which interprets proofs as untyped programs and focuses, rather, on the behavioral content of proofs, i.e. the way in which they interact through the cut-elimination algorithm. A distinction is made between the issues of justifying and understanding ("explaining why" and "explaining how", as in Girard 2000) impredicative reasoning, i.e. between non elementary results like the Hauptsatz and the combinatorial analysis of proofs, seen as programs, i.e. recursive objects. As for justification, an epistemological analysis of the circularity involved in the second order Hauptsatz is developed; it is shown that the usual normalization arguments for second order logic do not run into the vicious circularity claimed by Poincaré and Russell, but involve a different, epistemic, form of circularity. Still, this weaker circularity makes justification, in a sense, pointless; in particular, some examples of inconsistent higher order theories admitting epistemically circular normalization arguments are discussed. As for the explanation issue, a constructive and combinatorial (i.e. independent from normalization) analysis of higher order quantification is developed along two directions, with some related technical results.
The first direction arises from the parametric and dinatural interpretations of polymorphism (Reynolds 1983, Girard-Scott-Scedrov 1992), which provide a clear mathematical meaning to Carnap's defense of impredicative quantification (Carnap 1983). In particular, the violation of the parametric condition leads to paradoxes which are often ignored in the philosophical literature (with the exception of Longo-Fruchart 1997). The analysis of the combinatorial content of these interpretations leads to a 1-completeness theorem (every normal closed λ-term in the universal closure of a simple type is typable in simple type theory), which connects the interactionist and the inferential conceptions of proof. The second direction follows the analysis of the typing conditions of the λ-terms associated with intuitionistic second order proofs. To the "vicious circles" in the proofs there correspond recursive (i.e. circular) specifications for the types of the λ-terms. The geometrical structure of these vicious circles is investigated (following Lechenadec 1989, Malecki 1990, Giannini - Ronchi Della Rocca 1991), leading to a combinatorial characterization of typability in some inconsistent extension of System F: since, as Girard's paradox shows, a typable term need not be normalizing, one is indeed naturally led to consider non-normalizing theories. Such investigations go in the direction both of a mathematical understanding of the structure generated by the vicious circles of impredicative theories and of the development of a proof-theoretic analysis of potentially incorrect or uncertain proofs.
The λ Calculus and the Unity of Structural Proof Theory
Theory of Computing Systems / Mathematical Systems Theory, 2009
In the context of intuitionistic implicational logic, we achieve a perfect correspondence (technically an isomorphism) between sequent calculus and natural deduction, based on perfect correspondences between left-introduction and elimination, cut and substitution, and cut-elimination and normalisation. This requires an enlarged system of natural deduction that refines von Plato’s calculus. It is a calculus with modus ponens and primitive substitution; it is also a “coercion calculus”, in the sense of Cervesato and Pfenning. Both sequent calculus and natural deduction are presented as typing systems for appropriate extensions of the λ-calculus. The whole difference between the two calculi is reduced to the associativity of applicative terms (sequent calculus = right associative, natural deduction = left associative), and in fact the achieved isomorphism may be described as the mere inversion of that associativity. The novel natural deduction system is a “multiary” calculus, because “applicative terms” may exhibit a list of several arguments. But the combination of “multiarity” and left-associativity seems simply wrong, leading necessarily to non-local reduction rules (reason: normalisation, like cut-elimination, acts at the head of applicative terms, but natural deduction focuses at the tail of such terms). A solution is to extend natural deduction even further to a calculus that unifies sequent calculus and natural deduction, based on the unification of cut and substitution. In the unified calculus, a sequent term behaves like in the sequent calculus, whereas the reduction steps of a natural deduction term are interleaved with explicit steps for bringing heads to focus. A variant of the calculus has the symmetric role of improving sequent calculus in dealing with tail-active permutative conversions.
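The associativity contrast at the heart of this abstract can be pictured with a toy term representation (hypothetical encoding, assuming applicative terms are given as a head plus an argument list):

```python
# A toy illustration of the abstract's point: natural deduction
# left-associates applicative terms, sequent calculus right-associates
# them, and the isomorphism is just the inversion of that associativity.

def nd_term(head, args):
    """Natural-deduction reading: (((h a1) a2) a3), nested on the left."""
    t = head
    for a in args:
        t = ("app", t, a)
    return t

def sc_term(head, args):
    """Sequent-calculus reading: h cut against the right-nested spine
    (a1 :: (a2 :: (a3 :: nil)))."""
    spine = "nil"
    for a in reversed(args):
        spine = ("cons", a, spine)
    return ("cut", head, spine)

def nd_to_sc(t):
    """The isomorphism, as the abstract describes it: peel the
    left-nested applications off and rebuild the right-nested spine."""
    args = []
    while isinstance(t, tuple) and t[0] == "app":
        args.append(t[2])
        t = t[1]
    return sc_term(t, list(reversed(args)))
```

The sketch only captures the surface syntax; the paper's result is that the full reduction theories (cut-elimination and normalisation) correspond under this inversion as well.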
Partial inductive definitions as type-systems for λ-terms
BIT, 1992
To every partial inductive definition D, a natural deduction calculus ND(D) is associated. Not every such system will have the normalization property; specifically, there are definitions D' for which ND(D') permits non-normalizable deductions. A lambda calculus is formulated where the terms are used as objects realizing deductions in ND(D), and is shown to have the Church-Rosser property. Since ND(D) permits non-normalizable deductions, there will be typed terms which are non-normalizable. It will, for example, be possible to obtain a typed fixed-point operator.
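The typed fixed-point operator mentioned at the end of the abstract has the reduction behaviour fix f → f (fix f). As a hedged sketch (written in untyped Python, which will not reject the circular type), its call-by-value variant looks like this:

```python
# A call-by-value fixed-point combinator (the Z combinator). In a type
# system admitting circular specifications, such an operator becomes
# typable at (A -> A) -> A, and deductions using it need not normalize;
# here we only illustrate the reduction behaviour  fix f -> f (fix f).
def fix(f):
    return (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A recursive function obtained without any def-level recursion:
fact = fix(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```

Applied to a non-terminating functional, the same operator yields exactly the kind of non-normalizable term the abstract says ND(D) can type.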
We present a new Curry-Howard correspondence for classical first-order natural deduction. We add to the lambda calculus an operator which represents, from the viewpoint of programming, a mechanism for raising and catching multiple exceptions, and, from the viewpoint of logic, the excluded middle over arbitrary prenex formulas. This machinery allows us to extend the idea of learning, originally developed in Arithmetic, to pure logic. We prove that our typed calculus is strongly normalizing and show that proof terms for simply existential statements reduce to a list of individual terms forming a Herbrand disjunction. A by-product of our approach is a natural-deduction proof and a computational interpretation of Herbrand's Theorem.
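The "learning" idea the abstract alludes to can be loosely pictured with ordinary exceptions (a deliberately naive sketch with hypothetical names, not the paper's typed control operator): a classical guess at a witness is used until it is refuted, and the refutation, raised and caught as an exception, triggers a better guess.

```python
# A naive picture of learning-by-exceptions (illustration only; the
# paper's operator is a typed control operator, not Python's try/except).
class Refuted(Exception):
    """Raised when a counterexample to the current guess is found."""

def learn_witness(p, candidates):
    """Guess a witness for "exists n. p(n)"; each refutation is caught
    and the search resumes with the next candidate."""
    for n in candidates:
        try:
            if not p(n):
                raise Refuted(n)
            return n
        except Refuted:
            continue
    raise ValueError("no witness among the candidates")
```

In the paper, the analogue of the candidate list produced along the way is the Herbrand disjunction that a proof term of a simply existential statement reduces to.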
J-Calc: A typed lambda calculus for Intuitionistic Justification Logic
Special Issue Workshop on Intuitionistic Modal Logic and Applications 2013
In this paper we offer a system J-Calc that can be regarded as a typed λ-calculus for the {→, ⊥} fragment of Intuitionistic Justification Logic. We offer different interpretations of J-Calc: in particular, as a two-phase proof system in which we proof-check the validity of deductions of a theory T based on deductions from a stronger theory T, and, computationally, as a type system for separate compilation. We establish some first metatheoretic results.
Short Proofs of Normalization for the simply-typed λ-calculus, permutative conversions and Gödel's T
Archive for Mathematical Logic, 2002
Inductive characterizations of the set of terms, the subset of strongly normalizing terms, and normal forms are studied in order to reprove weak and strong normalization for the simply-typed λ-calculus and for an extension by sum types with permutative conversions. The analogous treatment of a new system with generalized applications, inspired by the generalized elimination rules in natural deduction advocated by von Plato, shows the flexibility of the approach, which does not use the strong computability/candidate style à la Tait and Girard. It is also shown that the extension of the system with permutative conversions by η-rules is still strongly normalizing, and likewise for an extension of the system of generalized applications by a rule of "immediate simplification". By introducing an infinitely branching inductive rule, the method even extends to Gödel's T.
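A minimal normalization-by-evaluation sketch (hypothetical encoding: terms as tuples with named variables) shows what "reducing a term to normal form" means here; the paper's contribution is an inductive proof that, for simply-typed terms, this process always terminates:

```python
# Minimal normalization-by-evaluation for lambda terms (sketch).
# Terms: ('var', x), ('lam', x, body), ('app', f, a).
# For simply-typed input this always terminates -- that is the
# normalization theorem the paper reproves by inductive methods.
def normalize(term):
    counter = [0]

    def fresh():
        # generate a fresh variable name for reification
        counter[0] += 1
        return f"x{counter[0]}"

    def ev(t, env):
        # evaluate a term to a semantic value (closure or neutral term)
        if t[0] == 'var':
            return env.get(t[1], ('nvar', t[1]))
        if t[0] == 'lam':
            return ('fun', lambda v: ev(t[2], {**env, t[1]: v}))
        f, a = ev(t[1], env), ev(t[2], env)
        return f[1](a) if f[0] == 'fun' else ('napp', f, a)

    def reify(v):
        # read a semantic value back as a normal-form term
        if v[0] == 'fun':
            x = fresh()
            return ('lam', x, reify(v[1](('nvar', x))))
        if v[0] == 'napp':
            return ('app', reify(v[1]), reify(v[2]))
        return ('var', v[1])

    return reify(ev(term, {}))
```

For example, normalizing the typable combinator application S K K yields the identity λx.x, up to the choice of fresh names.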