Incremental analysis of logic programs
Related papers
Optimized algorithms for incremental analysis of logic programs
Static Analysis, 1996
Global analysis of logic programs can be performed effectively by the use of one of several existing efficient algorithms. However, the traditional global analysis scheme, in which all the program code is known in advance and no previous analysis information is available, is unsatisfactory in many situations. Incremental analysis of logic programs has been shown to be feasible and much more efficient in certain contexts than traditional (non-incremental) global analysis. However, incremental analysis poses additional requirements on the fixpoint algorithm used. In this work we identify these requirements, present an important class of strategies meeting the requirements, present sufficient a priori conditions for such strategies, and propose, implement, and evaluate experimentally a novel algorithm for incremental analysis based on these ideas. The experimental results show that the proposed algorithm performs very efficiently in the incremental case while being comparable to (and, in some cases, considerably better than) other state-of-the-art analysis algorithms even for the non-incremental case. We argue that our discussions, results, and experiments also shed light on some of the many tradeoffs involved in the design of algorithms for logic program analysis.
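As a rough illustration of the kind of worklist-driven fixpoint computation such algorithms are built on, the following sketch keeps previously computed results in a table so that a later run can start from them instead of from scratch. All names and the data layout are assumptions made for the example; this is not the paper's algorithm.

```python
# Minimal sketch of a worklist-based fixpoint that can be restarted
# incrementally: previously computed answers are kept in `table`, and only
# nodes whose inputs change are re-queued. Illustrative only.
from collections import deque

def fixpoint(nodes, deps, transfer, join, bottom, table=None):
    """nodes: analysis nodes (e.g. predicate/call-pattern pairs)
    deps[n]: nodes whose results n depends on
    transfer(n, table): recompute n's abstract value from its inputs
    join/bottom: lattice operations; table: results from a previous run"""
    table = dict(table) if table else {}
    for n in nodes:
        table.setdefault(n, bottom)
    # reverse dependencies: nodes to revisit when n's result changes
    rdeps = {}
    for n in nodes:
        for d in deps.get(n, ()):
            rdeps.setdefault(d, set()).add(n)
    worklist = deque(nodes)
    while worklist:
        n = worklist.popleft()
        new = join(table[n], transfer(n, table))
        if new != table[n]:
            table[n] = new
            worklist.extend(rdeps.get(n, ()))
    return table
```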
Incremental analysis of constraint logic programs
ACM Transactions on Programming Languages and Systems, 2000
Global analyzers traditionally read and analyze the entire program at once, in a nonincremental way. However, there are many situations which are not well suited to this simple model and which instead require reanalysis of certain parts of a program which has already been analyzed. In these cases, it appears inefficient to perform the analysis of the program again from scratch, as needs to be done with current systems. We describe how the fixed-point algorithms used in current generic analysis engines for (constraint) logic programming languages can be extended to support incremental analysis. The possible changes to a program are classified into three types: addition, deletion, and arbitrary change. For each one of these, we provide one or more algorithms for identifying the parts of the analysis that must be recomputed and for performing the actual recomputation. The potential benefits and drawbacks of these algorithms are discussed. Finally, we present some experimental results o...
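To make the recomputation step concrete, here is a small sketch, under assumed data structures rather than the paper's algorithms, of how the parts of a previous analysis affected by a deletion or arbitrary change could be identified and reset before the fixpoint is rerun; a pure addition usually only needs the changed predicates re-queued.

```python
# Sketch of the invalidation step for incremental reanalysis after an edit.
# Names and data layout are assumptions, not the paper's algorithms.
def affected(changed, rdeps):
    """Transitive closure of `changed` under reverse dependencies."""
    seen, stack = set(changed), list(changed)
    while stack:
        n = stack.pop()
        for m in rdeps.get(n, ()):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return seen

def reanalyze_after_deletion(changed, table, rdeps, bottom):
    """Reset invalidated entries to bottom; return seeds for the fixpoint."""
    invalid = affected(changed, rdeps)
    for n in invalid:
        table[n] = bottom
    return invalid  # re-queue these in the worklist fixpoint
```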
Incremental analysis of real programming languages
Proceedings of the ACM SIGPLAN 1997 conference on Programming language design and implementation - PLDI '97, 1997
A major research goal for compilers and environments is the automatic derivation of tools from formal specifications. However, the formal model of the language is often inadequate; in particular, LR(k) grammars are unable to describe the natural syntax of many languages, such as C++ and Fortran, which are inherently non-deterministic. Designers of batch compilers work around such limitations by combining generated components with ad hoc techniques (for instance, performing partial type and scope analysis in tandem with parsing). Unfortunately, the complexity of incremental systems precludes the use of batch solutions. The inability to generate incremental tools for important languages inhibits the widespread use of language-rich interactive environments. We address this problem by extending the language model itself, introducing a program representation based on parse dags that is suitable for both batch and incremental analysis. Ambiguities unresolved by one stage are retained in this representation until further stages can complete the analysis, even if the resolution depends on further actions by the user. Representing ambiguity explicitly increases the number and variety of languages that can be analyzed incrementally using existing methods. To create this representation, we have developed an efficient incremental parser for general context-free grammars. Our algorithm combines Tomita's generalized LR parser with reuse of entire subtrees via state-matching. Disambiguation can occur statically, during or after parsing, or during semantic analysis (using existing incremental techniques); program errors that preclude disambiguation retain multiple interpretations indefinitely. Our representation and analyses gain efficiency by exploiting the local nature of ambiguities: for the SPEC95 C programs, the explicit representation of ambiguity requires only 0.5% additional space and less than 1% additional time during reconstruction.
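The "retain ambiguity explicitly" idea can be pictured with a small sketch of a parse-dag node that keeps every surviving reading until a later phase resolves it; this is only an illustration of the representation, not the authors' data structure or parsing algorithm.

```python
# Illustrative sketch: an ambiguity node keeps alternative subtrees for an
# ambiguous region of the input until a later phase filters them.
from dataclasses import dataclass, field

@dataclass
class Node:
    symbol: str
    children: tuple = ()

@dataclass
class AmbiguityNode:
    alternatives: list = field(default_factory=list)  # list of Node

    def resolve(self, keep):
        """Filter alternatives with a later-phase predicate (e.g. type info);
        collapse to a plain Node only when exactly one reading survives."""
        self.alternatives = [a for a in self.alternatives if keep(a)]
        return self.alternatives[0] if len(self.alternatives) == 1 else self
```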
Abstract Interpretation Using Advanced Logic Programming Techniques
2003
The software reliability problem becomes more and more important as the size of software systems increases at an unprecedented rate. Several formal methods have been developed in order to contribute to the solution of this problem. One of these methods is Static Program Analysis, which aims to evaluate the dynamic behavior of programs statically. To accomplish this, some kind of approximation of the program has to be made. The purpose of abstract interpretation is to formalize this idea of approximation. The main idea of abstract interpretation is that in order to reason about a complex system, some information must be lost; therefore the observation of executions must be either partial or at a high level of abstraction. The thesis describes the method of abstract interpretation, introduces the notion of fixpoint semantics in order to make programs abstractly evaluatable, and illustrates the technique of static program analysis by abstract interpretation. In the second part of the thesis an environment for constructing efficient and simple abstract interpreters is presented, based on the logic programming language XSB, which supports fixpoint calculations through tabling. Finally, this environment is extended by adding constraint solving mechanisms to enhance the efficiency of abstract evaluation. To illustrate the use of these environments for implementing abstract evaluation of programs, several abstract interpreters are presented and various example runs are discussed.
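The role tabling plays in the fixpoint computation can be approximated with a short sketch: answers are stored per abstract call pattern and iterated until stable. This is a naive chaotic iteration written for illustration, not XSB's tabling mechanism.

```python
# Sketch of table-driven fixpoint evaluation: each abstract call pattern gets
# one table entry, and entries are re-evaluated until none changes.
def tabled_fixpoint(evaluate, join, bottom, roots):
    """evaluate(key, lookup) -> abstract answer for `key`; recursive calls go
    through `lookup`, which also registers newly encountered call patterns."""
    table = {}

    def lookup(key):
        return table.setdefault(key, bottom)

    for r in roots:
        lookup(r)
    changed = True
    while changed:                 # iterate until no table entry changes
        changed = False
        for key in list(table):    # snapshot: new keys are handled next round
            new = join(table[key], evaluate(key, lookup))
            if new != table[key]:
                table[key] = new
                changed = True
    return table
```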
Path Dependent Analysis of Logic Programs
Higher-Order and Symbolic Computation (formerly LISP and Symbolic Computation), 2003
This paper presents an abstract semantics that uses information about execution paths to improve precision of data flow analyses of logic programs. The abstract semantics is illustrated by abstracting execution paths using call strings of fixed length and the last transfer of control. Abstract domains that have been developed for logic program analyses can be used with the new abstract semantics without modification.
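A sketch of the call-string part of the construction, with a fixed length k, can make the context mechanism concrete; the length k = 2 and the key layout are assumptions made for the example, not the paper's definitions.

```python
# Fixed-length call strings as analysis contexts: the context of a call is
# the last k call sites on the path to it, so results are kept per context.
K = 2  # call-string length, chosen only for the example

def extend(call_string, call_site, k=K):
    """Append a call site and keep the k most recent entries."""
    return (call_string + (call_site,))[-k:]

# Analysis results are then keyed by program point *and* call string, e.g.
# table[("p/2", ("c1", "c3"))] = abstract_value
cs = extend(("c1",), "c3")   # -> ("c1", "c3")
cs = extend(cs, "c7")        # -> ("c3", "c7"): the oldest call site is dropped
```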
A suite of analysis tools based on a general purpose abstract interpreter
Lecture Notes in Computer Science, 1994
This paper reports on one aspect of an ongoing project that is developing and experimenting with new compiling technology. The overall system is called the Kernel Based Compiler System, reflecting the fact that the representation of a program used in much of the processing is that of kernel terms, essentially the Lambda Calculus augmented with constants. While the compiling technology is applicable to a wide spectrum of programming languages as well as a wide spectrum of target machine architectures, we expect High Performance FORTRAN (HPF) and its extensions to be programming languages of particular concern because of their anticipated importance to the High Performance Computing community. The compiler being developed is "unbundled" in the sense that it consists of several components, C1, ..., CN, and compilation consists of applying component C1 to program text and applying Cj, for j = 2, ..., N, to the result computed by component Cj-1. One of the issues that we are using the compiler system to study is the suitability of various new linguistic constructs for particular application areas and techniques for the efficient realization of these constructs on a variety of High Performance Computers. Thus, we want it to be as straightforward as possible to extend each Cj to deal with the new constructs. Ideally, we want each component to have a firm mathematical basis so that, for example, we can prove appropriate correctness results. No compiler for a non-trivial language or target is ever really simple in one sense because it is inevitably a large program. However, we have strived to make each component, Cj, simple in the sense that it offers general purpose mechanisms that can be specialized to particular tasks, so that we can separate the concerns regarding the mechanisms and those regarding the particular specializations in order to achieve simplicity. This paper is concerned with a suite of analysis components of the system that annotate a program with static estimates of certain aspects of the behavior.
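The pipeline structure described here, with C1 applied to the program text and each later Cj applied to the previous component's result, is plain function composition; the following sketch, with invented component names, only illustrates that shape.

```python
# Sketch of the "unbundled" compiler shape: components are functions chained
# over the intermediate representation. Component names are hypothetical.
from functools import reduce

def compile_with(components, program_text):
    """Apply C1 to the source text, then each Cj (j >= 2) to Cj-1's result."""
    return reduce(lambda rep, component: component(rep), components, program_text)

# e.g. pipeline = [parse_to_kernel_terms, analyze, annotate, generate_code]
#      compile_with(pipeline, source)
```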
Science of Computer Programming, 2005
The technique of Abstract Interpretation has allowed the development of very sophisticated global program analyses which are at the same time provably correct and practical. We present in a tutorial fashion a novel program development framework which uses abstract interpretation as a fundamental tool. The framework uses modular, incremental abstract interpretation to obtain information about the program. This information is used to validate programs, to detect bugs with respect to partial specifications written using assertions (in the program itself and/or in system libraries), to generate and simplify run-time tests, and to perform high-level program transformations such as multiple abstract specialization, parallelization, and resource usage control, all in a provably correct way. In the case of validation and debugging, the assertions can refer to a variety of program points such as procedure entry, procedure exit, points within procedures, or global computations. The system can reason with much richer information than, for example, traditional types. This includes data structure shape (including pointer sharing), bounds on data structure sizes, and other operational variable instantiation properties, as well as procedure-level properties such as determinacy, termination, non-failure, and bounds on resource consumption (time or space cost). CiaoPP, the preprocessor of the Ciao multi-paradigm programming system, which implements the described functionality, will be used to illustrate the fundamental ideas.
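The assertion-checking flow can be summarized in a small sketch: an assertion is discharged when the inferred information entails it, flagged as an error when incompatible with it, and otherwise compiled into a run-time test. The function names and the three-way status are assumptions for illustration, not CiaoPP's interface.

```python
# Sketch of using inferred abstract information to discharge assertions.
from enum import Enum

class Status(Enum):
    CHECKED = "checked"   # proved statically
    FALSE = "false"       # disproved statically (a bug)
    RUNTIME = "check"     # left as a run-time test

def check_assertion(inferred, asserted, entails, incompatible):
    """entails/incompatible are domain-specific comparisons (assumed here)."""
    if entails(inferred, asserted):
        return Status.CHECKED
    if incompatible(inferred, asserted):
        return Status.FALSE
    return Status.RUNTIME
```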
Improving program analyses, by structure untupling
The Journal of Logic Programming, 2000
It is well known that adding structural information to an analysis domain can increase the precision of the analysis with respect to the original domain. This paper presents a program transformation based on untupling and specialisation which can be applied to upgrade (logic) program analysis by providing additional structural information. It can be applied to (almost) any type of analysis and in conjunction with (almost) any analysis framework for logic programs. The approach is an attractive alternative to the more complex Pat(R) construction, which automatically enhances an abstract domain R
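A rough analogue of untupling in a functional notation (not the paper's logic-program transformation) shows why the analysis gains structure: the components of a tuple argument become separate parameters that a per-argument domain can track individually.

```python
# Illustrative analogue of untupling plus specialisation.

# Before: the analysis sees only "p takes one argument".
def p(pair):
    x, y = pair
    return x + len(y)

# After untupling: the analysis sees two arguments with distinct properties.
def p_untupled(x, y):
    return x + len(y)

def p_wrapper(pair):          # original interface preserved via a wrapper
    x, y = pair
    return p_untupled(x, y)
```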
ACM Transactions on Programming Languages and Systems, 1999
We report on a detailed study of the application and effectiveness of program analysis based on abstract interpretation to automatic program parallelization. We study the case of parallelizing logic programs using the notion of strict independence. We first propose and prove correct a methodology for the application in the parallelization task of the information inferred by abstract interpretation, using a parametric domain. The methodology is generic in the sense of allowing the use of different analysis domains. A number of well-known approximation domains are then studied and the transformation into the parametric domain defined. The transformation directly illustrates the relevance and applicability of each abstract domain for the application. Both local and global analyzers are then built using these domains and embedded in a complete parallelizing compiler. Then, the performance of the domains in this context is assessed through a number of experiments. A comparatively wide range of aspects is studied, from the resources needed by the analyzers in terms of time and memory to the actual benefits obtained from the information inferred. Such benefits are evaluated both in terms of the characteristics of the parallelized code and of the actual speedups obtained from it. The results show that data flow analysis plays an important role in achieving efficient parallelizations, and that the cost of such analysis can be reasonable even for quite sophisticated abstract domains. Furthermore, the results also offer significant insight into the characteristics of the domains, the demands of the application, and the trade-offs involved.
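The strict-independence condition that the inferred groundness and sharing information feeds into can be sketched as follows; the representation of sharing as sets of variable groups is an assumption made for the example.

```python
# Simplified illustration of the a priori strict-independence test: two goals
# may run in parallel if they share no (non-ground) variables and no sharing
# group connects a variable of one goal with a variable of the other.
def strictly_independent(vars_g1, vars_g2, ground_vars, sharing):
    """vars_g1/vars_g2: variables of each goal; ground_vars: variables the
    analysis proved ground; sharing: iterable of sets of variables that may
    be bound to terms with a common run-time variable."""
    v1 = set(vars_g1) - set(ground_vars)
    v2 = set(vars_g2) - set(ground_vars)
    if v1 & v2:
        return False                      # a variable occurs in both goals
    return all(not (s & v1 and s & v2) for s in sharing)
```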
Electronic Notes in Theoretical Computer Science, 2005
We present an abstract interpretation framework for a subset of Java (without concurrency). The framework uses a structural abstract domain whose concretization function is parameterized on a relation between abstract and concrete locations. When structurally incompatible objects may be referred to by the same variable at a given program point, structural information is discarded and replaced by approximate information about the objects (our presentation concentrates on type information). Plain structural information allows precise intra-procedural analysis but is quickly lost when returning from a method call. To overcome this limitation, relational structural information is introduced, which enables a precise inter-procedural analysis without resorting to inlining. The paper contains an overview of the work. We describe parts of the standard and abstract semantics; then, we briefly explain the fixpoint algorithms used by our implementation; lastly, we provide experimental results for small programs.
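A small sketch of the "discard structure, keep a type bound" behaviour: joining two abstract object descriptions keeps field-wise information only when the shapes agree. The classes and the type_lub parameter are assumptions for illustration, not the paper's domain.

```python
# Illustrative join for a structural abstract domain with a type-only fallback.
from dataclasses import dataclass
from typing import Dict, Union

@dataclass
class TypeAbs:               # only an upper bound on the object's type is known
    typ: str

@dataclass
class StructAbs:             # type plus per-field abstract descriptions
    typ: str
    fields: Dict[str, "Abs"]

Abs = Union[TypeAbs, StructAbs]

def join(a: Abs, b: Abs, type_lub) -> Abs:
    """Field-wise join when structurally compatible, type-only otherwise."""
    if isinstance(a, StructAbs) and isinstance(b, StructAbs) and a.typ == b.typ:
        common = {f: join(a.fields[f], b.fields[f], type_lub)
                  for f in a.fields if f in b.fields}
        return StructAbs(a.typ, common)
    return TypeAbs(type_lub(a.typ, b.typ))   # structure discarded
```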