APPLICATIONS ON HIGH PERFORMANCE CLUSTER COMPUTERS

High Performance Computing Clusters

International Journal of Computer Applications, 2014

A computer cluster is a group of interconnected computers that work together as a single system. The interconnections between computers in a cluster are typically made through local area networks. Computationally demanding problems are solved using high performance computing (HPC), which brings together supercomputers and computing clusters. HPC combines systems administration and parallel programming, drawing on computer architecture, system software, programming languages, algorithms and computational techniques. This paper describes the mechanism required for the creation of a single 96-node cluster.

Cluster Computing: High-Performance, High-Availability, and High-Throughput Processing on a Network of Computers

Handbook of Nature-Inspired and Innovative Computing, 2006

The emergence of cluster platforms was driven by a number of academic projects, such as Beowulf [2], Berkeley NOW [3], and HPVM [4], that proved the advantages of clusters over traditional platforms. These advantages include low entry costs for supercomputing-level performance, the ability to track technology, incrementally upgradeable systems, open-source development platforms, and vendor independence. Today, clusters are widely used for research and development of science, engineering, commerce and industry applications that demand high performance computation. In addition, clusters offer strengths such as high availability and scalability that motivate their wide use in non-supercomputing applications as well, for example as web and database servers.

PC cluster as a platform for parallel applications

The complexity and size of the current generation of supercomputers have led to the emergence of cluster computing, which is characterized by its scalability, flexibility of configuration and upgrade, high availability, and savings in cost and time. This paper explains the importance of cluster computing along with its advantages and disadvantages. It also presents the types of schedulers and the steps of building a cluster. The work then evaluates the cluster with two case studies: matrix multiplication as a simple case and Sobel edge detection as a computationally heavy one.
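The paper does not reproduce its case-study code; as an illustration only, a matrix-multiplication workload of the kind described is commonly parallelised over cluster nodes with MPI by distributing row blocks. The matrix size, initial values, and the assumption that N divides evenly by the process count below are illustrative, not taken from the paper.

    /* Illustrative row-block parallel matrix multiplication with MPI.
     * Assumes N is divisible by the number of processes; not the paper's code. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 512                               /* assumed matrix dimension */

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int rows = N / size;                    /* rows handled by each process */
        double *A = NULL, *C = NULL;
        double *B      = malloc(N * N * sizeof(double));
        double *localA = malloc(rows * N * sizeof(double));
        double *localC = malloc(rows * N * sizeof(double));

        if (rank == 0) {                        /* root initialises the full matrices */
            A = malloc(N * N * sizeof(double));
            C = malloc(N * N * sizeof(double));
            for (int i = 0; i < N * N; i++) { A[i] = 1.0; B[i] = 2.0; }
        }

        /* Distribute row blocks of A and broadcast all of B */
        MPI_Scatter(A, rows * N, MPI_DOUBLE, localA, rows * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        MPI_Bcast(B, N * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        /* Each node multiplies its row block independently */
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < N; j++) {
                double sum = 0.0;
                for (int k = 0; k < N; k++)
                    sum += localA[i * N + k] * B[k * N + j];
                localC[i * N + j] = sum;
            }

        /* Gather the result rows back on the root */
        MPI_Gather(localC, rows * N, MPI_DOUBLE, C, rows * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        if (rank == 0) printf("C[0][0] = %f\n", C[0]);
        MPI_Finalize();
        return 0;
    }

Launched with something like mpirun -np 4 ./matmul across the cluster nodes, the wall-clock time of the parallel run can then be compared against a single-node run to estimate speedup, which is the usual way such a case study evaluates a cluster.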

Low cost cluster architectures for parallel and distributed processing

2000

Cluster-based architectures have stood out in recent years as an alternative for building versatile, low-cost parallel machines. This versatility permits their use both as a teaching tool and as a research environment in the field of parallel and distributed processing. This paper describes some of the options available on the market today for constructing cluster-based parallel machines and proposes different configurations based on cost and application area.

High performance computing at Intel: the OSCAR software solution stack for cluster computing

2001

This is an exciting time in high performance computing (HPC). Radical change has become the norm as clusters of commercial off-the-shelf (COTS) computers have come to dominate HPC. The hardware trends are clear. Microprocessor technology has continued to follow Moore's law. These high performance processors on two- to four-processor SMP boards make ideal nodes for building supercomputer-class clusters. On the networking front, commercially available networks are delivering impressive performance numbers. With InfiniBand products expected in the next year or two, we'll have a quantum leap in network performance with a significant drop in price. In short, the hardware for HPC is in good shape and is steadily getting better. What about the software?

Four Decades of Cluster Computing

Parallel Computing: Technology Trends

During the latter half of the 1970s, high performance computers (HPC) were constructed using specially designed and manufactured hardware. The preferred architectures were vector or array processors, as these allowed for high speed processing of a large class of scientific/engineering applications. Due to the high cost of developing and constructing such HPC systems, the number of available installations was limited. Researchers often had to apply for compute time on such systems and wait for weeks before being allowed access. Cheaper and more accessible HPC systems were thus in great demand. The concept of constructing high performance parallel computers with distributed Multiple Instruction, Multiple Data (MIMD) architectures using standard off-the-shelf hardware promised affordable supercomputers. Considerable scepticism existed at the time about whether MIMD systems could offer significant increases in processing speed. The reasons for this were...

A New Architecture for Efficient Parallel Computing in Workstation Clusters: Conceptions and Experiences

The acceptance of parallel computing in workstation clusters has increased in the past years. One important reason for this is the cost-efficiency of workstation clusters as an alternative to specialized distributed-memory parallel computer systems. A potential bottleneck for distributed-memory architectures is the interconnection network between the processing elements. This is the main disadvantage of clusters, and it arises from the local area network (LAN) connecting the workstations: a LAN does not reach the low latency, high bandwidth, and capacity of the specialized interconnection networks used in distributed-memory parallel computer architectures.
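The latency and bandwidth gap described here is typically quantified with a ping-pong microbenchmark between two nodes. The sketch below is an illustration of that measurement technique, not code from the paper; the message size and repetition count are assumptions, and it expects to be run with at least two MPI processes.

    /* Ping-pong microbenchmark between ranks 0 and 1 to estimate
     * point-to-point round-trip latency and bandwidth over the interconnect. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        const int reps  = 1000;                 /* assumed repetition count */
        const int bytes = 1 << 20;              /* assumed 1 MiB message size */
        char *buf = malloc(bytes);
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < reps; i++) {
            if (rank == 0) {                    /* send, then wait for the echo */
                MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {             /* echo the message back */
                MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0) {
            double rtt = (t1 - t0) / reps;      /* average round-trip time */
            printf("avg RTT: %.6f s, bandwidth: %.2f MB/s\n",
                   rtt, 2.0 * bytes / rtt / 1e6);
        }
        MPI_Finalize();
        free(buf);
        return 0;
    }

Running the same benchmark over the LAN and over a specialized interconnect makes the difference the authors point to directly visible in the reported round-trip times.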

Performance Evaluation of Cluster Computing.

International Journal of Engineering Sciences & Research Technology, 2013

Cluster Computing addresses the latest results in the fields that support High Performance Distributed Computing (HPDC). In HPDC environments, parallel and/or distributed computing techniques are applied to the solution of computationally intensive applications across networks of computers. A computing cluster is a type of parallel or distributed computer system consisting of a collection of interconnected stand-alone computers working together as a single integrated computing resource. The key components of a cluster include multiple stand-alone computers (PCs, workstations, or SMPs), operating systems, high-performance interconnects, middleware, parallel programming environments, and applications. The paper assumes that the reader is familiar with standard commodity hardware and software components such as stand-alone computers, operating systems such as Linux and Windows, and standard communication software such as TCP/IP. Many applications can benefit from parallelisation, and employing clusters of computers provides a way to use commodity components, minimising cost and maximising the longevity of the individual parts.
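As a concrete illustration of how a collection of stand-alone machines appears as a single integrated computing resource to a parallel program, a minimal MPI sketch can report which physical node each process runs on. This is a generic example, not code from the paper; the program name and launch command below are assumptions.

    /* Each MPI process reports its rank and the host it runs on,
     * showing the cluster's separate machines acting as one parallel program. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size, len;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total processes in the job */
        MPI_Get_processor_name(host, &len);    /* name of the node running it */

        printf("process %d of %d running on %s\n", rank, size, host);

        MPI_Finalize();
        return 0;
    }

Launched with something like mpirun -np 8 --hostfile nodes ./hello, the output shows the processes spread across the LAN-connected machines while behaving as one program.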

High Performance Computing Environments Without the Fuss: The Bootable Cluster CD

19th IEEE International Parallel and Distributed Processing Symposium

This paper confronts the issue of bringing high performance computing (HPC) education to those who do not have access to a dedicated clustering environment, in an easy, fully functional, and inexpensive manner, through the use of the "Bootable Cluster CD" (BCCD). For example, many primarily undergraduate institutions (PUIs) do not have the facilities, time, or money to purchase hardware, maintain user accounts, configure software components, and keep ahead of the latest security advisories for a dedicated clustering environment. The BCCD project's primary goal is to support an instantaneous, drop-in distributed computing environment. A consequence of providing such an environment is the ability to promote the education of high performance computing issues at the undergraduate level by temporarily turning an ordinary lab of networked workstations into a non-invasive, fully functional clustering classroom. The BCCD itself is a self-contained clustering environment in a bootable CD format. Using the BCCD, students, educators and researchers are able to gain insight into the configuration, utilization, troubleshooting, debugging, and administration issues uniquely associated with parallel computing in a live, easy-to-use "drop-in" clustering environment. As the name implies, the BCCD provides a full, cohesive clustering environment running GNU/Linux when booted from the CD-ROM drives of networked workstations.