The weak lambda calculus as a reasonable machine

An invariant cost model for the lambda calculus

Lecture Notes in Computer Science, 2006

We define a new cost model for the call-by-value lambda-calculus satisfying the invariance thesis. That is, under the proposed cost model, Turing machines and the call-by-value lambda-calculus can simulate each other within a polynomial time overhead. The model only relies on combinatorial properties of usual beta-reduction, without any reference to a specific machine or evaluator. In particular, the cost of a single beta reduction is proportional to the difference between the size of the redex and the size of the reduct. In this way, the total cost of normalizing a lambda term takes into account the size of all intermediate results (as well as the number of steps to normal form).
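The following Haskell sketch illustrates one plausible reading of such a cost model (an illustration under assumptions, not the paper's formal definitions): closed de Bruijn terms are reduced call-by-value, and each beta step is charged max(1, size growth), so large intermediate results are paid for.

    -- A minimal sketch, assuming de Bruijn indices, closed programs, and
    -- a per-step charge of max(1, size of reduct - size of redex).
    data Term = Var Int | Lam Term | App Term Term deriving Show

    size :: Term -> Int
    size (Var _)   = 1
    size (Lam b)   = 1 + size b
    size (App f a) = 1 + size f + size a

    -- Substitution specialized to closed arguments, which suffices for
    -- call-by-value evaluation of closed programs (no shifting needed).
    subst :: Int -> Term -> Term -> Term
    subst j s (Var k) | k == j    = s
                      | k > j     = Var (k - 1)
                      | otherwise = Var k
    subst j s (Lam b)   = Lam (subst (j + 1) s b)
    subst j s (App f a) = App (subst j s f) (subst j s a)

    isVal :: Term -> Bool
    isVal (Lam _) = True
    isVal _       = False

    -- One call-by-value step: reduce the function, then the argument,
    -- then fire the beta-redex.
    step :: Term -> Maybe Term
    step (App (Lam b) a) | isVal a = Just (subst 0 a b)
    step (App f a)
      | not (isVal f) = (\f' -> App f' a) <$> step f
      | otherwise     = App f <$> step a
    step _ = Nothing

    -- Total cost of normalization: each step costs at least 1 plus any
    -- increase in term size, so the sizes of all intermediate results count.
    evalCost :: Term -> (Term, Int)
    evalCost t = case step t of
      Nothing -> (t, 0)
      Just t' -> let (v, c) = evalCost t' in (v, max 1 (size t' - size t) + c)

For instance, evalCost (App (Lam (Var 0)) (Lam (Var 0))) returns the identity at cost 1, while a step that duplicates a large argument is charged for the duplication.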

An Efficient Interpreter for the Lambda-Calculus

Journal of Computer and System Sciences, 1981

A call-by-need reduction algorithm for the λ-calculus is presented. Call-by-need is as efficient as call-by-value and is equivalent to call-by-name in languages without side effects. The algorithm, which is the core of a running system, is presented by illustrating successive transformations of a straightforward implementation of the classical definition of reduction in the λ-calculus.
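In modern terms, the sharing at the heart of call-by-need can be rendered with mutable, memoized thunks; the Haskell sketch below is a minimal such interpreter for closed de Bruijn terms (an illustration under assumptions, with IORefs standing in for whatever update mechanism the paper's system uses), forcing each argument at most once.

    import Data.IORef

    data Term  = Var Int | Lam Term | App Term Term
    data Val   = Clo Term Env          -- weak evaluation: closures are values
    type Env   = [IORef Thunk]
    data Thunk = Delayed Term Env      -- not yet demanded
               | Forced Val            -- demanded once, cached forever

    eval :: Env -> Term -> IO Val
    eval env (Var i) = do
      let ref = env !! i
      th <- readIORef ref
      case th of
        Forced v    -> return v                     -- reuse the cached value
        Delayed t e -> do v <- eval e t
                          writeIORef ref (Forced v) -- update: evaluate once
                          return v
    eval env (Lam b)   = return (Clo b env)
    eval env (App f a) = do
      Clo b fenv <- eval env f
      arg <- newIORef (Delayed a env)  -- delay the argument, don't copy work
      eval (arg : fenv) b

Because every Delayed thunk is overwritten with Forced after its first use, an argument is never reduced twice, which is the sense in which call-by-need matches call-by-value's efficiency while retaining call-by-name's semantics.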

The cost of usage in the λ-calculus

A new "inductive" approach to standardization for the λ-calculus has been recently introduced by Xi, allowing him to establish a double-exponential upper bound |M | 2 |σ| for the length of the standard reduction relative to an arbitrary reduction σ originated in M. In this paper we refine Xi's analysis, obtaining much better bounds, especially for computations producing small normal forms. For instance, for terms reducing to a boolean, we are able to prove that the length of the standard reduction is at most a mere factorial of the length of the shortest reduction sequence. The methodological innovation of our approach is that instead of counting the cost for producing something, as is customary, we count the cost of consuming things. The key observation is that the part of a λ-term that is needed to produce the normal form (or an arbitrary rigid prefix) may rapidly augment along a computation, but can only decrease very slowly (actually, linearly).

A Linear-non-Linear Model for a Computational Call-by-Value Lambda Calculus (Extended Abstract)

Lecture Notes in Computer Science, 2008

We give a categorical semantics for a call-by-value linear lambda calculus. Such a lambda calculus was used by Selinger and Valiron as the backbone of a functional programming language for quantum computation. One feature of this lambda calculus is its linear type system, which includes a duplicability operator "!" as in linear logic. Another main feature is its call-by-value reduction strategy, together with a side-effect to model probabilistic measurements. The "!" operator gives rise to a comonad, as in the linear logic models of Seely, Bierman, and Benton. The side-effects give rise to a monad, as in Moggi's computational lambda calculus. It is this combination of a monad and a comonad that makes the present paper interesting. We show that our categorical semantics is sound and complete.
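The monad/comonad combination can be previewed in plain Haskell; the sketch below is only a toy (all names, and the finite-distribution Prob monad, are assumptions rather than the paper's semantics), modeling "!" as a duplicability wrapper with comonadic operations and probabilistic measurement as a monadic effect.

    -- A toy preview: "!" as a comonad, measurement as a monad.
    newtype Bang a = Bang a                  -- duplicable ("!") values

    extract :: Bang a -> a
    extract (Bang x) = x

    duplicate :: Bang a -> Bang (Bang a)     -- comonadic structure of "!"
    duplicate (Bang x) = Bang (Bang x)

    newtype Prob a = Prob { runProb :: [(a, Double)] }  -- finite distributions

    instance Functor Prob where
      fmap f (Prob xs) = Prob [ (f x, p) | (x, p) <- xs ]
    instance Applicative Prob where
      pure x = Prob [(x, 1)]
      Prob fs <*> Prob xs = Prob [ (f x, p * q) | (f, p) <- fs, (x, q) <- xs ]
    instance Monad Prob where
      Prob xs >>= k = Prob [ (y, p * q) | (x, p) <- xs, (y, q) <- runProb (k x) ]

    measure :: Prob Bool                     -- a probabilistic side-effect
    measure = Prob [(True, 0.5), (False, 0.5)]

    -- Only a duplicable computation may be demanded twice.
    useTwice :: Bang (Prob Bool) -> Prob (Bool, Bool)
    useTwice bm = do
      x <- extract bm
      y <- extract bm
      return (x, y)

Running runProb (useTwice (Bang measure)) yields the four outcomes with probability 0.25 each: the comonad governs when a value may be copied, while the monad sequences the effects of each copy.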

On constructor rewrite systems and the lambda-calculus

Automata, Languages and Programming, 2009

We prove that orthogonal constructor term rewrite systems and lambda-calculus with weak (i.e., no reduction is allowed under the scope of a lambda-abstraction) call-by-value reduction can simulate each other with a linear overhead. In particular, weak call-by-value beta-reduction can be simulated by an orthogonal constructor term rewrite system in the same number of reduction steps. Conversely, each reduction in a term rewrite system can be simulated by a constant number of beta-reduction steps. This is relevant to implicit computational complexity, because the number of beta steps to normal form is polynomially related to the actual cost (that is, as performed on a Turing machine) of normalization, under weak call-by-value reduction. Orthogonal constructor term rewrite systems and lambda-calculus are thus both polynomially related to Turing machines, taking as notion of cost their natural parameters.
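The constant-overhead direction can be tasted with Scott-encoded constructors; in the Haskell sketch below (an illustration under assumptions, not the paper's actual translation), the orthogonal rules add(0, y) → y and add(S x, y) → S(add(x, y)) compile to a single case dispatch, so each rewrite step costs only a constant number of beta steps.

    {-# LANGUAGE RankNTypes #-}

    -- Scott-encoded naturals: a numeral is its own case-dispatcher.
    newtype Nat = Nat { caseNat :: forall r. r -> (Nat -> r) -> r }

    zero :: Nat
    zero = Nat (\z _ -> z)

    suc :: Nat -> Nat
    suc n = Nat (\_ s -> s n)

    -- add(0, y) -> y  and  add(S x, y) -> S(add(x, y)) become one
    -- constant-size case analysis per rewrite step.
    add :: Nat -> Nat -> Nat
    add m n = caseNat m n (\x -> suc (add x n))

    toInt :: Nat -> Int
    toInt n = caseNat n 0 (\x -> 1 + toInt x)

    main :: IO ()
    main = print (toInt (add (suc (suc zero)) (suc zero)))  -- prints 3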

Normalization by Evaluation for Typed Weak lambda-Reduction

2018

Weak reduction relations in the λ-calculus are characterized by the rejection of the so-called ξ-rule, which allows arbitrary reductions under abstractions. A notable instance of weak reduction can be found in the literature under the name restricted reduction or weak λ-reduction. In this work, we attack the problem of algorithmically computing normal forms for the λ-calculus with weak λ-reduction. We do so by first contrasting it with other weak systems, arguing that their notion of reduction is not strong enough to compute weak λ-normal forms. We observe that some aspects of weak λ-reduction prevent us from normalizing the calculus directly, and thus devise a new, better-behaved weak calculus, reducing the original normalization problem to that of the new calculus. We finally define type systems for both calculi and apply Normalization by Evaluation to the new calculus, obtaining a normalization proof for weak λ-reduction as a corollary. We formalize all our results in Agda, a proof assistant based on intensional Martin-Löf Type Theory.
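For orientation, the standard normalization-by-evaluation recipe for full (strong) beta-normal forms fits in a few lines of Haskell; the paper's contribution is the harder adaptation of this picture to weak λ-reduction, which this background sketch (all names assumed) does not attempt.

    -- Classic NbE for untyped de Bruijn terms: evaluate into a semantic
    -- domain, then quote back to syntax.
    data Tm = Var Int | Lam Tm | App Tm Tm deriving Show  -- de Bruijn indices

    data Val = VLam (Val -> Val)   -- functions interpreted as functions
             | VVar Int            -- free variable, named by de Bruijn level
             | VApp Val Val        -- stuck (neutral) application

    eval :: [Val] -> Tm -> Val
    eval env (Var i)   = env !! i
    eval env (Lam b)   = VLam (\v -> eval (v : env) b)
    eval env (App f a) = apply (eval env f) (eval env a)

    apply :: Val -> Val -> Val
    apply (VLam f) v = f v
    apply n        v = VApp n v    -- neutrals stay stuck

    -- Read back: invent a fresh level to go under each binder.
    quote :: Int -> Val -> Tm
    quote lvl (VLam f)   = Lam (quote (lvl + 1) (f (VVar lvl)))
    quote lvl (VVar k)   = Var (lvl - k - 1)   -- convert level to index
    quote lvl (VApp f a) = App (quote lvl f) (quote lvl a)

    nf :: Tm -> Tm
    nf t = quote 0 (eval [] t)

Note that quote freely goes under binders, which is precisely what weak λ-reduction restricts; hence the need for the better-behaved auxiliary calculus described in the abstract.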

Transformations and reduction strategies for typed lambda expressions

ACM Transactions on Programming Languages and Systems, 1984

A scheme is described that allows languages supporting higher-order functions to be efficiently implemented using a standard run-time stack. A machine for evaluating typed lambda expressions is first constructed. In essence, the machine simply extends the standard SECD machine to allow partial application of abstractions representing multiadic functions. Some transformations of these typed lambda expressions into a so-called simple form are then described. Evaluation of simple-form expressions on the extended SECD machine produces stacklike environment structures rather than the treelike structures produced by expressions of arbitrary form. This allows implementation of the machine using a standard run-time stack. The SECD machine is then further modified so that closures are applied "in situ" rather than returned as values. The order of reduction is also changed so that the evaluation of function-valued expressions is deferred until they can be applied to sufficient arguments to allow reduction to nonfunctional values. It is shown that this function-deferring machine can be implemented using a standard run-time stack and thus can evaluate arbitrary lambda expressions without prior transformation to simple form. Finally, application of the above schemes to standard programming languages, such as ALGOL, Pascal, Ada, and LISP, is considered.
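A bare-bones version of the classical SECD machine that the paper extends can be written as a small transition function; in the Haskell sketch below (representation and names assumed for illustration), a state carries the value stack S, environment E, control string C, and dump D.

    data Term  = Var Int | Lam Term | App Term Term  -- de Bruijn indices
    data Val   = Clo Term Env                        -- closures as values
    type Env   = [Val]
    data Dir   = T Term | Ap                         -- control directives
    type State = ([Val], Env, [Dir], [([Val], Env, [Dir])])

    step :: State -> Maybe State
    step (s, e, T (Var i)   : c, d) = Just (e !! i : s, e, c, d)
    step (s, e, T (Lam b)   : c, d) = Just (Clo b e : s, e, c, d)
    step (s, e, T (App f a) : c, d) = Just (s, e, T f : T a : Ap : c, d)
    step (a : Clo b e' : s, e, Ap : c, d)            -- apply: push a dump frame
                                    = Just ([], a : e', [T b], (s, e, c) : d)
    step ([v], _, [], (s, e, c) : d) = Just (v : s, e, c, d)  -- return
    step _                           = Nothing       -- final (or stuck) state

    run :: Term -> Val
    run t = loop ([], [], [T t], [])
      where loop st = case step st of
              Just st' -> loop st'
              Nothing  -> let (v : _, _, _, _) = st in v

Note how each application builds a fresh environment (a : e') that may outlive its dump frame; it is exactly this treelike environment structure that the paper's simple-form transformation and in-situ closure application tame into a run-time stack.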

Efficient self-interpretation in lambda calculus

Journal of Functional Programming, 1992

We start by giving a compact representation schema for λ-terms and show how this leads to an exceedingly small and elegant self-interpreter. We then define the notion of a self-reducer, and show how this too can be written as a small λ-term. Both the self-interpreter and the self-reducer are proved correct. We finally give a constructive proof for the second fixed point theorem for the representation schema. All the constructions have been implemented on a computer, and experiments verify their correctness. Timings show that the self-interpreter and self-reducer are quite efficient, being about 35 and 50 times slower than direct execution using a call-by-need reduction strategy.
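For reference, the representation usually associated with this paper is the following higher-order schema, under which a remarkably small self-interpreter E exists; the details below follow the folklore presentation and may differ from the paper's exact definitions:

    ⌜x⌝     = λa.λb.λc. a x
    ⌜M N⌝   = λa.λb.λc. b ⌜M⌝ ⌜N⌝
    ⌜λx.M⌝  = λa.λb.λc. c (λx. ⌜M⌝)

    E = Y (λe.λm. m (λx. x) (λm.λn. (e m) (e n)) (λg.λx. e (g x)))

where Y is a fixed-point combinator; one checks by induction that E ⌜M⌝ =β M, the variable case coming for free because binders in the representation reuse the binders of M itself.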