Performance comparison of Java-based parallel programming models

Parallel programming models: a survey

Parallel programming and the design of efficient parallel programs constitute a development area of growing importance. Parallel programming models are commonly used to integrate parallel software concepts into sequential code. These models present an abstraction of the hardware capabilities to the programmer; in effect, a model is a bridge between the application to be parallelized and the machine organization. Up to now, a variety of programming models have been developed, each with its own approach. This paper enumerates various parallel programming models existing in the literature. The purpose is to perform a comparative evaluation of the most widely used ones, namely MapReduce, Cilk, Cilk++, OpenMP and MPI, against a set of extracted features.

A comparative study of Java and C performance in two large-scale parallel applications

Concurrency and Computation: Practice and Experience, 2009

In the 1990s the Message Passing Interface Forum defined MPI bindings for Fortran, C, and C++. With the success of MPI, these relatively conservative languages have continued to dominate in the parallel computing community. There are compelling arguments in favour of more modern languages like Java, including portability, better runtime error checking, modularity, and multi-threading. But these arguments have not converted many HPC programmers, perhaps due to the scarcity of full-scale scientific Java codes and the lack of evidence for performance competitive with C or Fortran. This paper tries to redress this situation by porting two scientific applications to Java. Both applications are parallelized using our thread-safe Java messaging system, MPJ Express. The first application is the Gadget-2 code, a massively parallel structure formation code for cosmological simulations. The second application uses the finite-difference time-domain (FDTD) method for simulations in the area of computational electromagnetics. We evaluate and compare the performance of the Java and C versions of these two scientific applications, and demonstrate that the Java codes can achieve performance comparable with legacy applications written in conventional HPC languages.
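For context, a minimal point-to-point program in the mpiJava 1.2 style API that MPJ Express implements might look like the sketch below. The class name, buffer contents, and tag value are illustrative, not taken from the paper's codes:

```java
import mpi.*;

// Minimal two-process exchange in the mpiJava 1.2 style API
// implemented by MPJ Express. Launched with MPJ Express's
// runner, e.g. `mpjrun.sh -np 2 HelloMPJ`.
public class HelloMPJ {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();

        int[] buf = new int[1];
        int tag = 99; // illustrative tag value

        if (rank == 0) {
            buf[0] = 42;
            MPI.COMM_WORLD.Send(buf, 0, 1, MPI.INT, 1, tag);
        } else if (rank == 1) {
            MPI.COMM_WORLD.Recv(buf, 0, 1, MPI.INT, 0, tag);
            System.out.println("Rank 1 received " + buf[0]);
        }

        MPI.Finalize();
    }
}
```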

Survey on Parallel Programming Model

2008

Microprocessor design has been shifting to multi-core architectures. It is therefore expected that parallelism will play a significant role in future generations of applications. Throughout the years, a myriad of parallel programming models have been proposed. In choosing a parallel programming model, not only the performance aspect is important, but also the qualitative aspect of how well parallelism is abstracted for developers. A model with a good abstraction of parallelism leads to higher application-development productivity. In this paper, we propose seven criteria to qualitatively evaluate parallel programming models. Our focus is on how parallelism is abstracted and presented to application developers. As a case study, we use these criteria to investigate six well-known parallel programming models in the HPC community.

Approaching developments on parallel programming models through Java

Multicore platforms allow developers to optimize applications by intelligently partitioning different workloads onto different processor cores. Currently, application programs are optimized to use multiple processor resources, resulting in faster application performance. Our earlier research work focused on native threading for Java over Windows threads, Pthreads, and Intel TBB, for which we developed NativeThreads, NativePthread, and Java Native Intel TBB on the Windows 32-bit platform. This article aims to identify future directions for native threading for Java over Windows threads, Pthreads, and Intel TBB through JNI on Windows 64-bit and other platforms. Furthermore, it articulates additional openings to pursue forthcoming developments on parallel programming models through Java.
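To illustrate the JNI bridging this line of work builds on, the Java side of such a native threading layer could be declared as in the following sketch. The library name "nativethreads" and the method signature are hypothetical placeholders, not the authors' actual API:

```java
// Hypothetical sketch of the JNI pattern for exposing native
// threading (Windows threads, Pthreads, or Intel TBB) to Java.
// The library name and method are illustrative placeholders.
public class NativeWorker {
    static {
        // Loads nativethreads.dll on Windows or libnativethreads.so elsewhere.
        System.loadLibrary("nativethreads");
    }

    // Implemented in C/C++ on the native side; there it would spawn
    // a platform thread (CreateThread, pthread_create, or a TBB task)
    // to run the given workload.
    public native void runNative(int workloadId);

    public static void main(String[] args) {
        new NativeWorker().runNative(0);
    }
}
```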

Concurrent approach to Data Parallel Model using Java

Parallel programming models exist as an abstraction of hardware and memory architectures. There are several parallel programming models in common use: the shared memory model, the thread model, the message passing model, the data parallel model, the hybrid model, Flynn's models, the embarrassingly parallel computations model, and the pipelined computations model. These models are not specific to a particular type of machine or memory architecture. This paper presents a model program for a concurrent approach to the data parallel model using Java.
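As a concrete illustration of the data parallel model in Java, one common pattern is to partition an array across a fixed pool of threads so that each thread applies the same operation to its own slice. A minimal sketch, with an illustrative workload and automatic thread count, follows:

```java
import java.util.concurrent.*;

// Data parallel sketch: the same operation (squaring) is applied
// concurrently to disjoint slices of one array.
public class DataParallelSquare {
    public static void main(String[] args) throws InterruptedException {
        int[] data = new int[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        int threads = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        int chunk = (data.length + threads - 1) / threads;

        for (int t = 0; t < threads; t++) {
            final int lo = t * chunk;
            final int hi = Math.min(lo + chunk, data.length);
            pool.execute(() -> {
                // Each task owns the slice [lo, hi), so no locking is needed.
                for (int i = lo; i < hi; i++) data[i] = data[i] * data[i];
            });
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("data[10] = " + data[10]); // prints 100
    }
}
```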

A Comparison Of Shared Memory Parallel Programming Models

2000

The dominant parallel programming models for shared memory computers, Pthreads and OpenMP, are both thread-centric in that they are based on explicit management of tasks and enforce data dependencies and output ordering through task management. By comparison, the Cray XMT programming model is data-centric: the primary concern of the programmer is managing data dependencies, allowing threads to progress as those dependencies are satisfied.
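Although the paper's subjects are Pthreads, OpenMP, and the Cray XMT, the thread-centric style it describes can be sketched in Java terms, where the programmer enforces a dependency explicitly by managing and joining tasks (the stage names are illustrative):

```java
// Thread-centric style: ordering between the two stages is enforced
// through explicit task management (start/join), mirroring the
// Pthreads/OpenMP approach described above.
public class ThreadCentric {
    public static void main(String[] args) throws InterruptedException {
        Thread producer = new Thread(() -> System.out.println("stage 1: produce"));
        Thread consumer = new Thread(() -> System.out.println("stage 2: consume"));

        producer.start();
        producer.join();   // the join, not the data, encodes the dependency
        consumer.start();
        consumer.join();
    }
}
```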

Performance Evaluation of Message Passing vs. Multithreading Parallel Programming Paradigms on Multi-Core Systems

International Journal of New Computer Architectures and their Applications, 2014

Present and future multi-core system architectures attract researchers to utilize them as an adequate and inexpensive solution for achieving high performance computation for many problems. The multi-core architecture enables us to implement shared memory and/or message passing parallel processing paradigms. Therefore, we need appropriate standard libraries in order to utilize the resources of this architecture efficiently and effectively. In this work, we evaluate the performance of message passing using two versions of the well-known message passing interface (MPI) library: MPICH1 vs. MPICH2. Furthermore, we compared the performance of shared memory multithreading using OpenMP with that of MPI. The results show that MPICH2 performs better than MPICH1, and indicate that multithreading performs better than message passing.
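The shared memory side of this comparison is measured with OpenMP in C, but the multithreading paradigm it represents has a direct Java analogue: all worker threads operate on one shared data structure and accumulate into one shared result. A minimal sketch, with an illustrative array size, is:

```java
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.IntStream;

// Java analogue of the shared-memory multithreading paradigm:
// fork-join worker threads read a shared array and accumulate
// into a single contention-friendly shared counter.
public class SharedMemorySum {
    public static void main(String[] args) {
        long[] data = new long[10_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        LongAdder sum = new LongAdder();
        IntStream.range(0, data.length)
                 .parallel()                  // runs on the common fork-join pool
                 .forEach(i -> sum.add(data[i]));

        System.out.println("sum = " + sum.sum());
    }
}
```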

Performance Analysis of Parallel Programming Tools

Numerous parallel programming tools have been developed so far for supporting parallel programs. This paper presents a performance analysis of a wide range of parallel programming simulation tools, and also compares their features. PVM and MPI are the most widely used standards for parallel and distributed computing. MPI performs better than PVM on high performance massively parallel processing (MPP) computer systems, providing highly optimized and efficient implementations. In an MPP, all of the processing elements are connected together to form one very large computer. This is in contrast to distributed computing, where massive numbers of separate computers, connected through a network, are used to solve a single large problem. PVM is most suitable in heterogeneous networks for gaining optimal performance. One may favor other tools depending on the need. With the help of our performance comparison, one can choose which tool would be better for a particular application.

A Benchmark Suite for Evaluating Parallel Programming Models

PARS: Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware, 2011

The transition to multi-core processors forces software developers to explicitly exploit thread-level parallelism to increase performance. The associated programmability problem has led to the introduction of a plethora of parallel programming models that aim at simplifying software development by raising the abstraction level. Since industry has not settled on a single model, however, multiple significantly different approaches exist. This work presents a benchmark suite which can be used to classify and compare such parallel programming models and, therefore, aids in selecting the appropriate programming model for a given task. After a detailed explanation of the suite's design, preliminary results for two programming models, Pthreads and OmpSs/SMPSs, are presented and analyzed, leading to an outline of further extensions of the suite.