DryadLINQ: A system for general-purpose distributed data-parallel computing using a high-level language
Related papers
SCOPE: easy and efficient parallel processing of massive data sets
Proceedings of The Vldb Endowment, 2008
Companies providing cloud-scale services have an increasing need to store and analyze massive data sets such as search logs and click streams. For cost and performance reasons, processing is typically done on large clusters of shared-nothing commodity machines. It is imperative to develop a programming model that hides the complexity of the underlying system but provides flexibility by allowing users to extend functionality to meet a variety of requirements.
MadLINQ: large-scale distributed matrix computation for the cloud
2012
The computation core of many data-intensive applications can be best expressed as matrix computations. The MadLINQ project addresses the following two important research problems: the need for a highly scalable, efficient and fault-tolerant matrix computation system that is also easy to program, and the seamless integration of such specialized execution engines in a general purpose data-parallel computing system.
Synchronizing Execution of Big Data in Distributed and Parallelized Environments
2014
In the modern information era, the amount of data has exploded, and current trends indicate that this growth will continue exponentially. This prevalent, enormous amount of data, referred to as big data, has given rise to the problem of finding the "needle in a haystack" (i.e., extracting meaningful information from big data). A large body of researchers and practitioners is focusing on big data analytics to address this problem. One of the major issues in this regard is the computational requirement of big data analytics. In recent years, the proliferation of loosely-coupled distributed computing infrastructures (e.g., modern public, private, and hybrid clouds, high-performance computing clusters, and grids) has made high computing capability available for large-scale computation. This has allowed the execution of big data analytics to gather pace across organizations and enterprises. However, even with this high computing capability, it remains a major challenge to efficiently extract valuable information from such vast data. Hence, we require far greater scalability of performance to support the execution of big data analytics. A key question in this regard is how to maximally leverage the computing capabilities of the aforementioned loosely-coupled distributed infrastructures to ensure fast and accurate execution of big data analytics. To that end, this chapter focuses on synchronous parallelization of big data analytics over a distributed system environment to optimize performance.
Performance Analytical Model of Parallel Programs with Dryad: Dataflow Graph Runtime
In order to meet the big data challenges of today's society, several parallel execution models for distributed memory architectures have been proposed: MapReduce, iterative MapReduce, graph processing, and dataflow graph processing. Dryad is a distributed data-parallel execution engine that models programs as dataflow graphs. In this paper, we evaluate the runtime and communication overhead of Dryad in a realistic setting.
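To make the dataflow-graph model concrete, here is a minimal, single-process sketch of a program expressed as vertices connected by channels and executed in topological order. The `Vertex`/`Graph` classes and the read/filter/count pipeline are purely illustrative and are not Dryad's actual API.

```python
# Minimal sketch of the dataflow-graph idea behind Dryad (illustrative only;
# names and structure are not Dryad's actual API).
from collections import defaultdict, deque

class Vertex:
    def __init__(self, name, fn):
        self.name = name   # vertex label
        self.fn = fn       # user code run at this vertex: list of records -> list of records

class Graph:
    def __init__(self):
        self.vertices = {}
        self.edges = defaultdict(list)   # channels: upstream -> downstream

    def add_vertex(self, v):
        self.vertices[v.name] = v

    def add_edge(self, src, dst):
        self.edges[src].append(dst)

    def run(self, inputs):
        """Execute vertices in topological order, passing records along channels."""
        indeg = {n: 0 for n in self.vertices}
        for src, dsts in self.edges.items():
            for d in dsts:
                indeg[d] += 1
        pending = defaultdict(list)               # records waiting at each vertex
        for name, recs in inputs.items():
            pending[name].extend(recs)
        ready = deque(n for n, d in indeg.items() if d == 0)
        outputs = {}
        while ready:
            name = ready.popleft()
            out = self.vertices[name].fn(pending[name])
            outputs[name] = out
            for d in self.edges[name]:
                pending[d].extend(out)            # ship records over the channel
                indeg[d] -= 1
                if indeg[d] == 0:
                    ready.append(d)
        return outputs

# Example: read -> filter -> count, expressed as a tiny dataflow graph.
g = Graph()
g.add_vertex(Vertex("read",   lambda recs: recs))
g.add_vertex(Vertex("filter", lambda recs: [r for r in recs if r % 2 == 0]))
g.add_vertex(Vertex("count",  lambda recs: [len(recs)]))
g.add_edge("read", "filter")
g.add_edge("filter", "count")
print(g.run({"read": list(range(10))})["count"])   # [5]
```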
A unified mapreduce programming interface for multi-core and distributed architectures
2015
In order to improve the performance, simplicity, and scalability of large dataset processing, Google proposed the MapReduce parallel pattern. This pattern has been implemented in several ways and at different architectural levels, achieving significant results for high-performance computing. However, developing optimized code with those solutions requires specialized knowledge of each framework's interface and programming language. Recently, DSL-POPP was proposed as a framework with a high-level language for pattern-oriented parallel programming, aimed at abstracting the complexities of parallel and distributed code. Inspired by DSL-POPP, this work proposes a unified MapReduce programming interface with rules for code transformation into optimized solutions for shared-memory multi-core and distributed architectures. The evaluation demonstrates that the proposed interface avoids performance losses while also achieving a code and development cost reduction fr...
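As a point of reference for the pattern these unified interfaces abstract, below is a minimal, sequential word-count sketch of MapReduce in plain Python. It only illustrates the map, shuffle-by-key, and reduce phases; it does not reflect DSL-POPP's or the proposed interface's actual syntax.

```python
# Illustrative, single-process sketch of the MapReduce pattern
# (not DSL-POPP's or the paper's actual API).
from collections import defaultdict

def map_fn(document):
    # Emit (word, 1) pairs for one input document.
    for word in document.split():
        yield word.lower(), 1

def reduce_fn(word, counts):
    # Aggregate all counts for one key.
    return word, sum(counts)

def mapreduce(documents, map_fn, reduce_fn):
    # Map phase: apply map_fn to every input split.
    intermediate = defaultdict(list)
    for doc in documents:
        for key, value in map_fn(doc):
            intermediate[key].append(value)   # shuffle/group by key
    # Reduce phase: apply reduce_fn per key.
    return dict(reduce_fn(k, vs) for k, vs in intermediate.items())

docs = ["the quick brown fox", "the lazy dog", "the fox"]
print(mapreduce(docs, map_fn, reduce_fn))
# {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```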
A Unified MapReduce Domain-Specific Language for Distributed and Shared Memory Architectures
Twenty-Seventh International Conference on Software Engineering and Knowledge Engineering, 2015
MapReduce is a suitable and efficient parallel programming pattern for big data analysis. In recent years, many frameworks/languages have implemented this pattern to achieve high performance in data mining applications, particularly for distributed memory architectures (e.g., clusters). Nevertheless, the processor industry now offers powerful processing on single machines (e.g., multi-core), so these applications may exploit parallelism at another architectural level. The target problems of this paper are code reuse and programming effort reduction, since current solutions do not provide a single interface to deal with these two architectural levels. Therefore, we propose a unified domain-specific language, in conjunction with transformation rules for code generation, for Hadoop and Phoenix++. We selected these frameworks as state-of-the-art MapReduce implementations for distributed and shared memory architectures, respectively. Our solution achieves a programming effort reduction of between 41.84% and 95.43% without significant performance losses (below the threshold of 3%) compared to Hadoop and Phoenix++.
Data Parallelism for Large-scale Distributed Computing
Large-scale computing systems are attractive for networked applications because they provide scalable infrastructures. When launching distributed data-intensive computing applications on such infrastructures, communication cost, for example to transfer data files to compute nodes, can be a critical challenge due to point-to-point bandwidth scarcity. One way to improve communication performance is to employ parallelism in data retrieval. In this paper, we consider data parallelism for large-scale, data-intensive computing. Our approach is to utilize multiple replica servers in parallel for data retrieval. To improve performance and fault tolerance, we present a new parallel data retrieval algorithm based on replicated retrieval of slowdown blocks. We then explore a broad set of resource selection techniques to identify computation nodes that have good download performance to data servers for given jobs. Our experimental results, using trace data collected from PlanetLab, show the benefits of our approach in large-scale, failure-prone environments.
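The idea of re-requesting slow blocks from additional replicas can be illustrated with a small, self-contained simulation. The server names, latencies, and timeout policy below are invented for illustration and are not the paper's algorithm verbatim.

```python
# Toy sketch of parallel retrieval from replicated servers, with slow blocks
# re-requested from another replica (a simplification of the idea; the server
# names, delays, and timeout policy here are made up).
import concurrent.futures as cf
import random, time

REPLICAS = ["replica-a", "replica-b", "replica-c"]
SLOW_TIMEOUT = 0.05   # seconds before a block is treated as a "slowdown" block

def fetch_block(server, block_id):
    # Stand-in for a network read; latency varies per server and request.
    time.sleep(random.uniform(0.0, 0.1))
    return f"<block {block_id} from {server}>"

def retrieve(num_blocks):
    blocks = {}
    with cf.ThreadPoolExecutor(max_workers=8) as pool:
        # First attempt: spread blocks round-robin across replicas.
        futures = {
            pool.submit(fetch_block, REPLICAS[b % len(REPLICAS)], b): b
            for b in range(num_blocks)
        }
        done, pending = cf.wait(futures, timeout=SLOW_TIMEOUT)
        for f in done:
            blocks[futures[f]] = f.result()
        # Slowdown blocks: issue a duplicate request to another replica
        # and take whichever copy finishes first.
        for f in pending:
            b = futures[f]
            backup = pool.submit(fetch_block, random.choice(REPLICAS), b)
            winner = next(iter(cf.wait([f, backup], return_when=cf.FIRST_COMPLETED)[0]))
            blocks[b] = winner.result()
    return [blocks[b] for b in range(num_blocks)]

print(retrieve(6))
```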
Twister: A runtime for iterative MapReduce
2010
The MapReduce programming model has simplified the implementation of many data-parallel applications. The simplicity of the programming model and the quality of services provided by many implementations of MapReduce have attracted a lot of enthusiasm among distributed computing communities. From years of experience in applying MapReduce to various scientific applications, we have identified a set of extensions to the programming model and improvements to its architecture that will expand the applicability of MapReduce to more classes of applications. In this paper, we present the programming model and the architecture of Twister, an enhanced MapReduce runtime that supports iterative MapReduce computations efficiently. We also show performance comparisons of Twister with other similar runtimes, such as Hadoop and DryadLINQ, for large-scale data-parallel applications.
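The iterative pattern Twister targets, where a static data set is scanned repeatedly and each iteration's reduce output feeds the next map phase, can be sketched in plain Python as a 1-D k-means loop. This is only an illustration of the pattern, not Twister's API.

```python
# Minimal sketch of the iterative MapReduce pattern (plain Python, not
# Twister's API): static data is reused across iterations, and a combine
# step feeds reduce output into the next map phase until convergence.
def map_fn(point, centroids):
    # Assign the point to its nearest centroid.
    nearest = min(range(len(centroids)), key=lambda i: abs(point - centroids[i]))
    return nearest, point

def reduce_fn(assigned_points):
    # New centroid = mean of the points assigned to it.
    return sum(assigned_points) / len(assigned_points)

def iterative_mapreduce(points, centroids, max_iters=50, tol=1e-6):
    for _ in range(max_iters):
        groups = {i: [] for i in range(len(centroids))}
        for p in points:                        # map over the (static) data set
            cluster, value = map_fn(p, centroids)
            groups[cluster].append(value)
        new = [reduce_fn(groups[i]) if groups[i] else centroids[i]
               for i in range(len(centroids))]  # reduce per key
        if max(abs(a - b) for a, b in zip(new, centroids)) < tol:
            return new                          # combine step: convergence test
        centroids = new                         # broadcast updated centroids
    return centroids

print(iterative_mapreduce([1.0, 1.2, 0.8, 9.0, 9.5, 10.1], centroids=[0.0, 5.0]))
# converges to roughly [1.0, 9.53]
```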
Efficient Data-parallel Computing on Small Heterogeneous Clusters
2012
Cluster-based data-parallel frameworks such as MapReduce, Hadoop, and Dryad are increasingly popular for a large class of compute-intensive tasks. Such systems are designed for large-scale clusters and employ several techniques to decrease the run time of jobs in the presence of failures, slow machines, and other effects. In this paper, we apply Dryad to smaller-scale, “ad-hoc” clusters such as those formed by aggregating the servers and workstations in a small office. We first show that, while Dryad’s greedy scheduling algorithm performs well at scale, it performs significantly worse in a small (5-10 machine) cluster environment where nodes have widely differing performance characteristics. We further show that in such cases, performance models of dataflow operators can be constructed which predict the runtimes of vertex processes with sufficient accuracy to allow a more intelligent planner to achieve significant performance gains for a variety of jobs, and we show how to efficiently...
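A toy comparison illustrates why runtime prediction helps on a small heterogeneous cluster. The node speeds, task sizes, and both schedulers below are hypothetical stand-ins, not the paper's planner, but they show how speed-oblivious greedy placement can double the makespan relative to a model-aware one.

```python
# Toy illustration of scheduling on a small heterogeneous cluster
# (hypothetical speeds and task sizes; not the paper's planner).
# Greedy hands each task to whichever node is free soonest, ignoring speed;
# the model-aware planner places each task where it will finish earliest.
import heapq

NODE_SPEED = {"fast-server": 4.0, "desktop": 2.0, "old-laptop": 1.0}  # work units/sec
TASKS = [10.0, 1.0, 1.0, 1.0]                                         # work units

def makespan(assignment):
    finish = {n: 0.0 for n in NODE_SPEED}
    for node, work in assignment:
        finish[node] += work / NODE_SPEED[node]
    return max(finish.values())

def greedy(tasks):
    # Speed-oblivious greedy: next task goes to the node that frees up first
    # (ties broken arbitrarily; here the big task lands on a mid-speed node).
    heap = [(0.0, n) for n in NODE_SPEED]
    heapq.heapify(heap)
    assignment = []
    for work in tasks:
        free_at, node = heapq.heappop(heap)
        assignment.append((node, work))
        heapq.heappush(heap, (free_at + work / NODE_SPEED[node], node))
    return assignment

def model_aware(tasks):
    # Planner with a runtime model: place each task where it finishes earliest.
    finish = {n: 0.0 for n in NODE_SPEED}
    assignment = []
    for work in sorted(tasks, reverse=True):          # longest tasks first
        node = min(NODE_SPEED, key=lambda n: finish[n] + work / NODE_SPEED[n])
        finish[node] += work / NODE_SPEED[node]
        assignment.append((node, work))
    return assignment

print("greedy makespan:     ", makespan(greedy(TASKS)))       # 5.0 with this tie-breaking
print("model-aware makespan:", makespan(model_aware(TASKS)))  # 2.5
```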