A Framework for Sparse Matrix Code Synthesis from High-level Specifications
Related papers
Compiling imperfectly-nested sparse matrix codes with dependences
2000
We present compiler technology for generating sparse matrix code from (i) dense matrix code and (ii) a description of the indexing structure of the sparse matrices. This technology embeds statement instances into a Cartesian product of statement iteration and data spaces, and produces efficient sparse code by identifying common enumerations for multiple references to sparse matrices. This approach works for imperfectly-nested codes with dependences, and produces sparse code competitive with hand-written library code.
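A minimal sketch of the kind of rewrite this line of work targets, assuming compressed sparse row (CSR) storage for the matrix; none of this code is taken from the paper. The first loop nest is the dense code a programmer writes; the second is the sparse loop a compiler could emit once told that A is sparse and stored in CSR form.

    /* Illustrative sketch: dense y = A*x and the corresponding CSR loop.
     * The 3x3 matrix and its CSR arrays are invented for the example. */
    #include <stdio.h>

    #define N 3

    int main(void) {
        /* Dense reference: y[i] += A[i][j] * x[j] */
        double A[N][N] = {{2, 0, 0}, {0, 0, 3}, {0, 4, 5}};
        double x[N] = {1, 1, 1};
        double y_dense[N] = {0};
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                y_dense[i] += A[i][j] * x[j];

        /* Sparse version: only the stored nonzeros are visited.
         * rowptr/colind/val encode the same matrix in CSR form. */
        int rowptr[N + 1] = {0, 1, 2, 4};
        int colind[4]     = {0, 2, 1, 2};
        double val[4]     = {2, 3, 4, 5};
        double y_sparse[N] = {0};
        for (int i = 0; i < N; i++)
            for (int k = rowptr[i]; k < rowptr[i + 1]; k++)
                y_sparse[i] += val[k] * x[colind[k]];

        for (int i = 0; i < N; i++)
            printf("row %d: dense %.1f  sparse %.1f\n", i, y_dense[i], y_sparse[i]);
        return 0;
    }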
On automatic data structure selection and code generation for sparse computations
Lecture Notes in Computer Science, 1994
Traditionally, restructuring compilers were only able to apply program transformations in order to exploit certain characteristics of the target architecture. Adaptation of data structures was limited to, e.g., linearization or transposition of arrays. However, as more complex data structures are required to exploit characteristics of the data operated on, current compiler support appears to be inadequate. In this paper we present the implementation issues of a restructuring compiler that automatically converts programs operating on dense matrices into sparse code, i.e. after a suitable data structure has been selected for every dense matrix that is in fact sparse, the original code is adapted to operate on these data structures. This simplifies the task of the programmer and, in general, enables the compiler to apply more optimizations.
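The selection step described here happens at compile time, driven by how the program accesses each matrix. The fragment below is only a stand-in that makes the idea of "pick a storage scheme per matrix, then rewrite the code against it" concrete, using an invented runtime density heuristic rather than the paper's analysis.

    /* Illustrative stand-in, not the paper's compile-time analysis. */
    #include <stdio.h>

    enum storage { DENSE, CSR };

    /* Hypothetical helper: choose a format from the nonzero density. */
    enum storage select_format(const double *a, int n, int m) {
        int nnz = 0;
        for (int k = 0; k < n * m; k++)
            if (a[k] != 0.0) nnz++;
        /* Made-up threshold: compress only clearly sparse matrices. */
        return (nnz * 4 < n * m) ? CSR : DENSE;
    }

    int main(void) {
        double a[4][4] = {{1, 0, 0, 0}, {0, 0, 0, 2},
                          {0, 0, 0, 0}, {0, 3, 0, 0}};
        enum storage s = select_format(&a[0][0], 4, 4);
        printf("selected: %s\n", s == CSR ? "CSR" : "dense");
        return 0;
    }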
The SPARAMAT Approach to Automatic Comprehension of Sparse Matrix Computations
Automatic program comprehension is particularly useful when applied to sparse matrix codes, since it allows abstracting from, e.g., the specific sparse matrix storage formats used in the code. In this paper we describe SPARAMAT, a system for speculative automatic program comprehension suitable for sparse matrix codes, and its implementation.
Compilation of sparse array programming models
Proceedings of the ACM on Programming Languages, 2021
This paper shows how to compile sparse array programming languages. A sparse array programming language is an array programming language that supports element-wise application, reduction, and broadcasting of arbitrary functions over dense and sparse arrays with any fill value. Such a language has great expressive power and can express sparse and dense linear and tensor algebra, functions over images, exclusion and inclusion filters, and even graph algorithms. Our compiler strategy generalizes prior work in the literature on sparse tensor algebra compilation to support any function applied to sparse arrays, instead of only addition and multiplication. To achieve this, we generalize the notion of sparse iteration spaces beyond intersections and unions. These iteration spaces are automatically derived by considering how algebraic properties annotated onto functions interact with the fill values of the arrays. We then show how to compile these iteration spaces to efficient code. When co...
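A minimal sketch of the co-iteration such a compiler generates, assuming two sparse vectors in sorted coordinate form and a fill value of 0; the data and the choice of fmax are invented. Because fmax of a stored value and the fill value is generally not the fill value, the loop must cover the union of the two coordinate sets, whereas element-wise multiplication with fill value 0 could be restricted to their intersection; that distinction is the kind of algebraic reasoning the abstract describes.

    /* Illustrative union co-iteration of two sparse vectors. */
    #include <limits.h>
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        int    a_crd[] = {1, 4, 7};  double a_val[] = {2.0, -1.0, 5.0};
        int    b_crd[] = {0, 4, 9};  double b_val[] = {3.0,  4.0, 1.0};
        int na = 3, nb = 3;
        double fill = 0.0;   /* both vectors are 0 off their stored coordinates */

        int ia = 0, ib = 0;
        while (ia < na || ib < nb) {
            int ca = (ia < na) ? a_crd[ia] : INT_MAX;
            int cb = (ib < nb) ? b_crd[ib] : INT_MAX;
            int c  = (ca < cb) ? ca : cb;           /* next coordinate in the union */
            double va = (ca == c) ? a_val[ia++] : fill;
            double vb = (cb == c) ? b_val[ib++] : fill;
            printf("index %d: fmax(%.1f, %.1f) = %.1f\n", c, va, vb, fmax(va, vb));
        }
        return 0;
    }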
Aspect-Oriented Programming of Sparse Matrix Code
1997
The expressiveness conferred by high-level and object-oriented languages is often impaired by concerns that cross-cut a program's basic functionality. Execution time, data representation, and numerical stability are three such concerns that are of great interest to numerical analysts. Using aspect-oriented programming we have created AML, a system for sparse matrix computation that deals with these concerns separately and explicitly while preserving the expressiveness of the original functional language. The resulting code maintains the efficiency of highly tuned low-level code, yet is ten times shorter.
An approach for code generation in the Sparse Polyhedral Framework
Parallel Computing, 2016
Applications that manipulate sparse data structures contain memory reference patterns that are unknown at compile time due to indirect accesses such as A[B[i]]. To exploit parallelism and improve locality in such applications, prior work has developed a number of run-time reordering transformations (RTRTs). This paper presents the Sparse Polyhedral Framework (SPF) for specifying RTRTs and compositions thereof and algorithms for automatically generating efficient inspector and executor code to implement such transformations. Experimental results indicate that the performance of automatically generated inspectors and executors competes with the performance of handwritten ones in some cases.
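A minimal sketch of the inspector/executor pattern, not SPF-generated code: the access A[B[i]] is only known at run time, so an inspector examines B and builds a new iteration order, and the executor then runs the loop in that order. The example loop carries no cross-iteration dependences, so reordering it is legal; all array contents are invented.

    /* Illustrative inspector/executor for an indirect access A[B[i]]. */
    #include <stdio.h>
    #include <stdlib.h>

    static const int *key;                     /* index array seen by comparator */
    static int cmp(const void *p, const void *q) {
        int i = *(const int *)p, j = *(const int *)q;
        return key[i] - key[j];
    }

    int main(void) {
        enum { N = 6, M = 4 };
        double A[M] = {10, 20, 30, 40};
        int    B[N] = {3, 0, 2, 0, 3, 1};      /* indirection, known only at run time */
        double y[N];

        /* Inspector: build a permutation sigma that visits A in address order. */
        int sigma[N];
        for (int i = 0; i < N; i++) sigma[i] = i;
        key = B;
        qsort(sigma, N, sizeof sigma[0], cmp);

        /* Executor: run the (dependence-free) loop in the inspected order. */
        for (int k = 0; k < N; k++) {
            int i = sigma[k];
            y[i] = 2.0 * A[B[i]];
        }

        for (int i = 0; i < N; i++) printf("y[%d] = %.1f\n", i, y[i]);
        return 0;
    }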
Sparse computation data dependence simplification for efficient compiler-generated inspectors
Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation
This paper presents a combined compile-time and runtime loop-carried dependence analysis of sparse matrix codes and evaluates its performance in the context of wavefront parallelism. Sparse computations incorporate indirect memory accesses such as x[col[j]] whose memory locations cannot be determined until runtime. The key contributions of this paper are two compile-time techniques for significantly reducing the overhead of runtime dependence testing: (1) identifying new equality constraints that result in more efficient runtime inspectors, and (2) identifying subset relations between dependence constraints such that one dependence test subsumes another one that is therefore eliminated. New equality constraints discovery is enabled by taking advantage of domain-specific knowledge about index arrays, such as col[j]. These simplifications lead to automatically-generated inspectors that make it practical to parallelize such computations. We analyze our simplification methods for a collection
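A minimal sketch of the kind of inspector such dependence analysis keeps cheap, not code from the paper: for a CSR lower-triangular solve, the loop-carried dependence through x[colind[k]] cannot be resolved at compile time, so a runtime inspector walks the index arrays and assigns each row a wavefront level; rows in the same level are independent and can be solved in parallel. The 5-row pattern is invented.

    /* Illustrative wavefront-level inspector for a sparse triangular solve. */
    #include <stdio.h>

    int main(void) {
        enum { N = 5 };
        /* Strictly-lower-triangular pattern of L (diagonal handled separately):
         * row 1 depends on row 0; row 3 on rows 1 and 2; row 4 on row 3. */
        int rowptr[N + 1] = {0, 0, 1, 1, 3, 4};
        int colind[4]     = {0, 1, 2, 3};

        int level[N];
        int maxlevel = 0;
        for (int i = 0; i < N; i++) {
            int lvl = 0;
            for (int k = rowptr[i]; k < rowptr[i + 1]; k++) {
                int j = colind[k];              /* row i reads x[j], with j < i */
                if (level[j] + 1 > lvl) lvl = level[j] + 1;
            }
            level[i] = lvl;
            if (lvl > maxlevel) maxlevel = lvl;
        }

        for (int l = 0; l <= maxlevel; l++) {
            printf("wavefront %d:", l);
            for (int i = 0; i < N; i++)
                if (level[i] == l) printf(" row %d", i);
            printf("\n");
        }
        return 0;
    }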
Enabling Code Generation within the Sparse Polyhedral Framework
2010
Loop transformation frameworks based on the polyhedral model use libraries such as Polylib, ISL, and Omega to represent and manipulate polyhedra and use tools like CLooG to generate loops that scan the modified polyhedra. Most of these libraries are restricted to iteration space sets and memory/array access functions with affine constraints that preclude the specification of run-time reordering transformations (i.e., inspector/executor strategies) within the existing code generation tools. Automatic generation of inspector and executor code is important for the parallelization and data locality improvements in irregular computations such as those that manipulate sparse data structures. We enable the specification of run-time reordering transformations at compile time in the Sparse Polyhedral Framework (SPF) by representing indirect memory references and run-time generated data and iteration reorderings using uninterpreted function symbols. This paper presents techniques for manipula...
Object-Oriented Techniques for Sparse Matrix Computations in Fortran 2003
ACM Transactions on Mathematical Software, 2012
The efficiency of a sparse linear algebra operation heavily relies on the ability of the sparse matrix storage format to exploit the computing power of the underlying hardware. Since no format is universally better than the others across all possible kinds of operations and computers, sparse linear algebra software packages should provide facilities to easily implement and integrate new storage formats within a sparse linear algebra application without the need to modify it; they should also allow the storage format to be changed dynamically at run time depending on the specific operations to be performed. Aiming at these important features, we present an Object-Oriented design model for a sparse linear algebra package which relies on Design Patterns. We show that an implementation of our model can be efficiently achieved through some of the unique features of the Fortran 2003 language. Experimental results show that the proposed software infrastructure improves the modularity and ease of use of the code at no performance loss.
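The paper's design is expressed in Fortran 2003 with Design Patterns; the C fragment below is only a stand-in for the core idea, hiding the storage format behind a small interface so that client code is unchanged when the format is swapped at run time. All names are made up and only a CSR backend is filled in.

    /* Illustrative format-polymorphic matrix-vector product in C. */
    #include <stdio.h>

    typedef struct sparse_mat sparse_mat;
    struct sparse_mat {
        void (*spmv)(const sparse_mat *m, const double *x, double *y);
        void *data;                      /* format-specific payload */
    };

    typedef struct { int n; const int *rowptr, *colind; const double *val; } csr_data;

    static void csr_spmv(const sparse_mat *m, const double *x, double *y) {
        const csr_data *c = m->data;
        for (int i = 0; i < c->n; i++) {
            y[i] = 0.0;
            for (int k = c->rowptr[i]; k < c->rowptr[i + 1]; k++)
                y[i] += c->val[k] * x[c->colind[k]];
        }
    }

    int main(void) {
        int rowptr[4] = {0, 1, 2, 4}, colind[4] = {0, 2, 1, 2};
        double val[4] = {2, 3, 4, 5}, x[3] = {1, 1, 1}, y[3];
        csr_data payload = {3, rowptr, colind, val};
        sparse_mat A = {csr_spmv, &payload};

        A.spmv(&A, x, y);                /* client code never mentions CSR */
        printf("y = %.1f %.1f %.1f\n", y[0], y[1], y[2]);
        return 0;
    }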
Level 3 Basic Linear Algebra Subprograms for sparse matrices: a user level interface
ACM Transactions on Mathematical Software
This paper proposes a set of Level 3 Basic Linear Algebra Subprograms and associated kernels for sparse matrices. A major goal is to design and develop a common framework to enable efficient and portable implementations of iterative algorithms for sparse matrices on high-performance computers. We have designed the routines to shield the developer of mathematical software from most of the complexities of the various data structures used for sparse matrices. We have kept the interface and suite of codes as simple as possible while at the same time including sufficient functionality to cover most of the requirements of iterative solvers, and sufficient flexibility to cover most sparse matrix data structures. An important aspect of our framework is that it can be easily extended to incorporate new kernels if the need arises. We discuss the design, implementation and use of subprograms for the multiplication of a full matrix by a sparse one and for the solution of sparse triangu...
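A minimal sketch of the kind of Level 3 operation such an interface covers, not the proposed API itself: C = A*B with A sparse in CSR form and B, C dense. A real sparse BLAS implementation would reach this kernel through the library's data-structure-shielding interface rather than raw arrays; the matrix contents here are invented.

    /* Illustrative CSR-times-dense multiply (a Level 3 sparse kernel). */
    #include <stdio.h>

    int main(void) {
        enum { N = 3, K = 2 };                  /* A is N x N, B and C are N x K */
        int rowptr[N + 1] = {0, 1, 2, 4};
        int colind[4]     = {0, 2, 1, 2};
        double val[4]     = {2, 3, 4, 5};
        double B[N][K]    = {{1, 2}, {3, 4}, {5, 6}};
        double C[N][K]    = {{0}};

        for (int i = 0; i < N; i++)
            for (int k = rowptr[i]; k < rowptr[i + 1]; k++)
                for (int j = 0; j < K; j++)
                    C[i][j] += val[k] * B[colind[k]][j];

        for (int i = 0; i < N; i++)
            printf("%.1f %.1f\n", C[i][0], C[i][1]);
        return 0;
    }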