Towards automatic specialization of Java programs
Related papers
Automatic program specialization for Java
ACM Transactions on Programming Languages and Systems, 2003
The object-oriented style of programming facilitates program adaptation and enhances program genericness, but at the expense of efficiency. We demonstrate experimentally that state-of-the-art Java compilers fail to compensate for the use of object-oriented abstractions in the implementation of generic programs, and that program specialization can eliminate a significant portion of these overheads. We present an automatic program specializer for Java, illustrate its use through detailed case studies, and demonstrate experimentally that it can significantly reduce program execution time. Although automatic program specialization could be seen as being subsumed by existing optimizing compiler technology, we show that specialization and compiler optimization are in fact complementary.
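As a hedged illustration of the kind of abstraction overhead the paper targets (this example is not taken from the paper), compare a generic, interface-typed fold over boxed values with the specialized residual one would expect for a fixed operation and element type:

```java
import java.util.List;

public class AbstractionOverhead {
    interface BinOp { int apply(int a, int b); }

    // Generic program: the operation is a dynamically dispatched
    // object, and the values are boxed in a List.
    static int fold(List<Integer> xs, BinOp op, int init) {
        int acc = init;
        for (Integer x : xs) acc = op.apply(acc, x);
        return acc;
    }

    // Specialized residual for op = addition over an int array:
    // no interface call, no boxing.
    static int sumInts(int[] xs) {
        int acc = 0;
        for (int x : xs) acc += x;
        return acc;
    }

    public static void main(String[] args) {
        int generic = fold(List.of(1, 2, 3), (a, b) -> a + b, 0);
        int specialized = sumInts(new int[]{1, 2, 3});
        System.out.println(generic + " " + specialized); // 6 6
    }
}
```

The claim tested experimentally in the paper is that JIT compilers do not reliably perform this reduction on their own, so a specializer that produces the second form ahead of time remains complementary.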
Declarative specialization for object-oriented-program specialization
Proceedings of the 2004 ACM SIGPLAN symposium on Partial evaluation and semantics-based program manipulation - PEPM '04, 2004
The use of partial evaluation for specializing programs written in imperative languages such as C and Java is hampered by the difficulty of controlling the specialization process. We have developed a simple, declarative language for controlling the specialization of Java programs, and interfaced this language with the JSpec partial evaluator for Java. This language, named Pesto, allows declarative specialization of programs written in an object-oriented style of programming. The Pesto compiler automatically generates the context information needed for specializing Java programs, and automatically generates guards that enable the specialized code in the right context.
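The abstract does not show Pesto syntax, so the following plain-Java sketch only illustrates the guard idea it describes: a wrapper tests whether the current arguments match the specialization context and, if so, runs the specialized variant. All names here are hypothetical.

```java
public class GuardedSpecialization {
    // Generic code: works for any operand vectors.
    static int genericDot(int[] a, int[] b) {
        int sum = 0;
        for (int i = 0; i < a.length; i++) sum += a[i] * b[i];
        return sum;
    }

    // Variant specialized for the context "b == {1, 0, 1}":
    // multiplications by known constants have been folded away.
    static int dotSpecialized(int[] a) {
        return a[0] + a[2];
    }

    // Guard: enable the specialized code only in the right context,
    // falling back to the generic code otherwise.
    static int dot(int[] a, int[] b) {
        if (b.length == 3 && b[0] == 1 && b[1] == 0 && b[2] == 1) {
            return dotSpecialized(a);
        }
        return genericDot(a, b);
    }

    public static void main(String[] args) {
        int[] a = {2, 3, 4};
        System.out.println(dot(a, new int[]{1, 0, 1})); // specialized path
        System.out.println(dot(a, new int[]{1, 1, 1})); // generic path
    }
}
```

Pesto's contribution, per the abstract, is that both the context description and such guards are derived automatically from a declarative specification rather than written by hand as above.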
Run-time Bytecode Specialization: A Portable Approach to Generating Optimized Specialized Code
This paper proposes a run-time bytecode specialization (BCS) technique that analyzes programs and generates specialized programs at run-time in an intermediate language. By using an intermediate language for code generation, the system can optimize the specialized programs after specialization. As the intermediate language, the system uses Java virtual machine language (JVML), which allows the system to easily achieve practical portability and to use sophisticated just-in-time compilers as its back-end. The binding-time analysis algorithm, which is based on a type system, covers a non-object-oriented subset of JVML. A specializer, which generates programs on a per-instruction basis, can perform method inlining at run-time. The performance measurement showed that a non-trivial application program specialized at run-time by BCS runs approximately 3-4 times faster than the unspecialized one. Despite the large overhead of JIT compilation of specialized code, we observed ...
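BCS emits specialized JVML at run time; as a portable stand-in (not the paper's implementation), this sketch "generates code" per instruction by composing closures, so that dispatch over a statically known instruction list happens once, at specialization time, rather than on every execution:

```java
import java.util.List;
import java.util.function.IntUnaryOperator;

public class RuntimeSpecializer {
    enum Op { INC, DOUBLE, NEG }

    // Interpreter: dispatches on every instruction, every execution.
    static int interpret(List<Op> prog, int x) {
        for (Op op : prog) {
            switch (op) {
                case INC:    x = x + 1; break;
                case DOUBLE: x = x * 2; break;
                case NEG:    x = -x;    break;
            }
        }
        return x;
    }

    // Run-time specializer: walks the (static) instruction list once
    // and emits a composed operation with all dispatch removed.
    static IntUnaryOperator specialize(List<Op> prog) {
        IntUnaryOperator code = x -> x;
        for (Op op : prog) {
            IntUnaryOperator prev = code;
            switch (op) {
                case INC:    code = x -> prev.applyAsInt(x) + 1; break;
                case DOUBLE: code = x -> prev.applyAsInt(x) * 2; break;
                case NEG:    code = x -> -prev.applyAsInt(x);    break;
            }
        }
        return code;
    }

    public static void main(String[] args) {
        List<Op> prog = List.of(Op.INC, Op.DOUBLE);
        System.out.println(interpret(prog, 3) + " "
                + specialize(prog).applyAsInt(3));
    }
}
```

The actual system generates JVML bytecode instead of closures, which is what lets the JVM's JIT compiler optimize the residual program as ordinary code.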
A Lightweight Approach to Program Specialization
2000
Within the imperative programming paradigm, program slicing has been widely used as a basis to solve many software engineering problems, like debugging, testing, differencing, specialization, and merging. In this work, we present a lightweight approach to program specialization of lazy functional logic programs which is based on dynamic slicing. The kind of specialization performed by our approach ...
An Effective Automated Approach to Specialization of Code
2007
Application performance is heavily dependent on compiler optimizations. Modern compilers rely largely on the information made available to them at compilation time. In this regard, specializing code according to input values is an effective way to communicate necessary information to the compiler. However, static specialization suffers from possible code explosion, and dynamic specialization requires runtime compilation activities that may degrade the overall performance of the application. This article proposes an automated approach to specializing code that addresses both problems: the increase in code size and the overhead of runtime activities. We first obtain optimized code through specialization performed at static compile time and then generate a template that can work for a large set of values through runtime specialization. Our experiments show significant improvement for different SPEC benchmarks on Itanium-II (IA-64) and Pentium-IV processors using the icc and gcc compilers.
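The paper works at the level of natively compiled SPEC code, so the following Java sketch is only an analogy for the template idea: the code structure is fixed ahead of time, and a runtime "hole" is filled in with the actual input value, avoiding both one static variant per value and full runtime compilation. All names are hypothetical.

```java
import java.util.function.IntUnaryOperator;

public class TemplateSpecialization {
    // "Template": the structure is fixed at compile time, and the
    // constant c is the runtime hole. A few hot values get dedicated
    // statically prepared variants; everything else shares the
    // generic template instance.
    static IntUnaryOperator makeScaler(int c) {
        if (c == 1) return x -> x;        // specialized: identity
        if (c == 2) return x -> x << 1;   // specialized: strength-reduced
        return x -> x * c;                // generic template, hole = c
    }

    public static void main(String[] args) {
        IntUnaryOperator byTwo = makeScaler(2);
        System.out.println(byTwo.applyAsInt(21)); // 42
    }
}
```

The trade-off mirrors the abstract: only a bounded number of variants exist (no code explosion), and filling the hole costs far less than generating code from scratch at run time.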
Towards Unifying Inheritance and Automatic Program Specialization
DAIMI Report Series, 2002
Inheritance allows a class to be specialized and its attributes refined, but implementation specialization can only take place by overriding with manually implemented methods. Automatic program specialization can generate a specialized, efficient implementation. However, specialization of programs and specialization of classes (inheritance) are considered different abstractions. We present a new programming language, Lapis, that unifies inheritance and program specialization at the conceptual, syntactic, and semantic levels.

This paper presents the initial development of Lapis, which uses inheritance with covariant specialization to control the automatic application of program specialization to class members. Lapis integrates object-oriented concepts, block structure, and techniques from automatic program specialization to provide both a language where object-oriented designs can be efficiently implemented and a simple yet powerful partial evaluator for an object-orient...
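A hypothetical Java sketch of the gap Lapis aims to close: inheritance lets us declare the specialized case, but the efficient body must be written by hand, where a partial evaluator could instead derive it from the superclass body and the constraint on the refined attribute. This example is mine, not from the report.

```java
public class InheritanceSpecialization {
    static class Power {
        final int n;
        Power(int n) { this.n = n; }
        // Generic implementation: loops over the exponent.
        int raise(int base) {
            int r = 1;
            for (int i = 0; i < n; i++) r *= base;
            return r;
        }
    }

    // With plain inheritance, specializing the implementation means
    // overriding by hand; in Lapis, the specialized body below would
    // be generated automatically from Power.raise and n == 3.
    static class Cube extends Power {
        Cube() { super(3); }
        @Override int raise(int base) { return base * base * base; }
    }

    public static void main(String[] args) {
        System.out.println(new Power(3).raise(2) + " " + new Cube().raise(2));
    }
}
```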
Selective Specialization for Object-Oriented Languages
Sigplan Notices, 1995
Dynamic dispatching is a major source of run-time overhead in object-oriented languages, due both to the direct cost of method lookup and to the indirect effect of preventing other optimizations. To reduce this overhead, optimizing compilers for object-oriented languages analyze the classes of objects stored in program variables, with the goal of bounding the possible classes of message receivers enough so that the compiler can uniquely determine the target of a message send at compile time and replace the message send with a direct procedure call. Specialization is one important technique for improving the precision of this static class information: by compiling multiple versions of a method, each applicable to a subset of the possible argument classes of the method, more precise static information about the classes of the method's arguments is obtained. Previous specialization strategies have not been selective about where this technique is applied, and therefore tended to significantly increase compile time and code space usage, particularly for large applications. In this paper, we present a more general framework for specialization in object-oriented languages and describe a goal-directed specialization algorithm that makes selective decisions to apply specialization to those cases where it provides the highest benefit. Our results show that our algorithm improves the performance of a group of sizeable programs by 65% to 275% while increasing compiled code space requirements by only 4% to 10%. Moreover, when compared to the previous state-of-the-art specialization scheme, our algorithm improves performance by 11% to 67% while simultaneously reducing code space requirements by 65% to 73%.
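A hedged Java rendering of the technique described above (the paper itself targets Cecil-style languages, and these names are illustrative): a separate version of a method is compiled for a subset of argument classes, so that a dynamically dispatched send becomes a direct, inlinable call.

```java
public class SelectiveSpecialization {
    interface Shape { double area(); }

    static class Circle implements Shape {
        final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
        double areaDirect() { return Math.PI * r * r; } // non-virtual target
    }

    // Generic version: area() is a dynamically dispatched send,
    // because the receiver class of s is unknown.
    static double scaledArea(Shape s, double k) {
        return k * s.area();
    }

    // Version specialized for the argument-class subset {Circle}:
    // the receiver class is statically known, so the send becomes a
    // direct procedure call (and a candidate for inlining). Selective
    // specialization compiles such versions only where profitable.
    static double scaledArea$Circle(Circle s, double k) {
        return k * s.areaDirect();
    }

    public static void main(String[] args) {
        Circle c = new Circle(1.0);
        System.out.println(scaledArea(c, 2.0) == scaledArea$Circle(c, 2.0));
    }
}
```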
Applicability of Method Specialization Techniques to Java
2010
Method specialization is an optimization used to eliminate virtual call sites and open up opportunities for other compiler optimizations. Existing method specialization techniques do not explicitly handle dynamic class-loading, nor are they suitable for a dynamic compilation environment. This thesis examines previous method specialization techniques and illustrates the transformations with a running example. These techniques are also reviewed to determine the applicability of each for use in a dynamic compilation environment that supports dynamic class-loading (such as Java). Additionally, a new method specialization framework is given that is designed for a dynamic compilation environment and handles dynamic class-loading. Aspects that need to be examined when making method specialization decisions for a dynamic compiler are listed and analyzed. Finally, numbers regarding opportunities for method specialization in the SPECjvm98 and SPECjbb2000 benchmark suites are listed and investigated.
Program specialization via algorithmic unfold/fold transformations
ACM Computing Surveys, 1998
We briefly analyze the relationship between partial evaluation and unfold/fold program transformation. These two techniques have some common objectives, but they have been developed according to different methodologies. As a promising direction to take for the future, we propose to embed partial evaluation in the richer framework of the unfold/fold program transformation technique. We also propose the use of algorithmic strategies, that is, mechanically generated sequences of transformation rules, for obtaining high quality specialized programs in a fully automatic way. We finally indicate some features of the program specialization system that may be designed according to our proposal.
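The survey works in a declarative setting, so the following Java example is only a loose analogy for the unfold/fold style of transformation: two independent definitions are unfolded into their loops, which are then folded into a single traversal, yielding a more efficient but equivalent program.

```java
public class UnfoldFold {
    // Original program: computes the average via two separate
    // traversals, one per unfolded definition (sum and length).
    static double averageTwoPass(int[] xs) {
        int sum = 0;
        for (int x : xs) sum += x;     // unfolding of sum(xs)
        int len = 0;
        for (int x : xs) len++;        // unfolding of length(xs)
        return (double) sum / len;
    }

    // After folding: the two unfolded loops are fused into one
    // traversal that computes both quantities together.
    static double averageOnePass(int[] xs) {
        int sum = 0, len = 0;
        for (int x : xs) { sum += x; len++; }
        return (double) sum / len;
    }

    public static void main(String[] args) {
        int[] xs = {1, 2, 3, 4};
        System.out.println(averageTwoPass(xs) + " " + averageOnePass(xs));
    }
}
```

An algorithmic strategy in the authors' sense would apply such rule sequences mechanically, with equivalence guaranteed by the transformation rules rather than checked case by case.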
An extension mechanism for the Java language
1999
This thesis presents the design and implementation of an extensible dialect of the Java language, named OpenJava. Although the Java language is well polished and dedicated to cover a wide range of application domains, it still lacks some mechanisms necessary for typical kinds of applications. Our OpenJava enables programmers to extend the Java language and implement such mechanisms on demand. It is an advanced macro processor based on the technique called compile-time reflection.