The study of distributed computing algorithms by multithreaded applications

THEORY OF DISTRIBUTED COMPUTING AND PARALLEL PROCESSING WITH ITS APPLICATIONS, ADVANTAGES AND DISADVANTAGES

Distributed computing is a field of computer science that studies distributed systems. A distributed system consists of multiple autonomous computers that communicate through a computer network. The computers interact with each other in order to achieve a common goal. A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs. A distributed system has several characteristic properties:
• There are several autonomous computational entities, each of which has its own local memory.
• The entities communicate with each other by message passing.
• The system has to tolerate failures in individual computers.
• The structure of the system is not known in advance; the system may consist of different kinds of computers and network links, and it may change during the execution of a distributed program.
• Each computer has only a limited, incomplete view of the system and may know only one part of the input.
Parallel processing is the simultaneous use of more than one CPU or processor core to execute a program or multiple computational threads. In practice, it is often difficult to divide a program in such a way that separate CPUs or cores can execute different portions without interfering with each other. Most computers have just one CPU, but some models have several, and multi-core processor chips are becoming the norm. There are even computers with thousands of CPUs. With single-CPU, single-core computers, it is possible to perform parallel processing by connecting the computers in a network; however, this type of parallel processing requires very sophisticated software called distributed processing software. The two approaches differ mainly in how processors exchange information:
• In parallel computing, all processors may have access to a shared memory to exchange information between processors.
• In distributed computing, each processor has its own private memory (distributed memory); information is exchanged by passing messages between the processors.
• Distributed computing is less bandwidth intensive, so its tasks must be less interdependent.
• Parallel processing is faster and has higher bandwidth between nodes, but is harder to scale.
KEYWORDS: Distributed Computing, Parallel Computing, CPU
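The contrast between the two memory models can be sketched in Python. The function names and the use of a queue to stand in for a network link are illustrative assumptions, not drawn from any of the papers below: a shared-memory summation where threads update one lock-guarded total, versus a message-passing version where workers share nothing and send partial results as messages.

```python
import threading
import queue

# Shared-memory model (parallel computing): workers update a common
# total; a lock guards the shared state against interference.
def shared_memory_sum(numbers, n_workers=2):
    total = [0]
    lock = threading.Lock()

    def worker(chunk):
        partial = sum(chunk)
        with lock:
            total[0] += partial

    chunks = [numbers[i::n_workers] for i in range(n_workers)]
    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]

# Message-passing model (distributed computing): workers share no
# state and send partial results over a queue, the way separate
# machines would exchange messages over a network.
def message_passing_sum(numbers, n_workers=2):
    results = queue.Queue()

    def worker(chunk):
        results.put(sum(chunk))

    chunks = [numbers[i::n_workers] for i in range(n_workers)]
    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results.get() for _ in range(n_workers))
```

Both functions compute the same sum; the difference is that the first would need its lock and shared list replaced by network messages before it could run across machines, while the second already has the shape of a distributed program.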

DISTRIBUTED COMPUTING IN DATA PROCESSING

The paper describes the basic operations associated with the use of the distributed computing toolbox and its application to processing extensive and complex mathematical problems, using a computer network and a set of computers for parallel processing of separate components of the whole algorithm.

IJERT-Concurrent Programming and Parallel distributed O.S

International Journal of Engineering Research and Technology (IJERT), 2014

https://www.ijert.org/concurrent-programming-and-parallel-distributed-o.s
https://www.ijert.org/research/concurrent-programming-and-parallel-distributed-o.s-IJERTV1IS6077.pdf
This paper covers two topics: concurrent programming and the parallel distributed O.S. In a concurrent program, several streams of operations may execute concurrently, and each stream of operations executes as it would in a sequential program. A distributed operating system is the logical aggregation of operating system software over a collection of independent, networked, communicating, and physically separate computational nodes.
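A minimal illustration of the concurrent-programming idea described above: two streams of operations, a producer and a consumer, each written as straight-line sequential code, execute concurrently and hand items off through a queue. The producer/consumer pairing and the sentinel convention are assumed for this sketch, not taken from the paper.

```python
import threading
import queue

def produce(q, items):
    # One sequential stream: emit each item, then a sentinel
    # to signal that no more items are coming.
    for item in items:
        q.put(item)
    q.put(None)

def consume(q, out):
    # A second sequential stream: square each received item
    # until the sentinel arrives.
    while True:
        item = q.get()
        if item is None:
            break
        out.append(item * item)

q = queue.Queue()
results = []
producer = threading.Thread(target=produce, args=(q, range(5)))
consumer = threading.Thread(target=consume, args=(q, results))
producer.start()
consumer.start()
producer.join()
consumer.join()
```

Each thread reads like an ordinary sequential program, exactly as the abstract describes; the concurrency lives entirely in the scheduling and the queue between them.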

Implementation of a parallel algorithm on a distributed network

Pres. at the 31st AIAA …, 1993

The objective of this research is to investigate the potential of using a network of concurrently operating workstations for solving large compute-intensive problems typical of Computational Fluid Dynamics. Such problems have a communication structure based primarily on ...

Introduction to distributed algorithms

2004

This manuscript aims at offering an introductory description of distributed programming abstractions and of the algorithms that are used to implement them in different distributed environments. The reader is provided with an insight on important problems in distributed computing, knowledge about the main algorithmic techniques that can be used to solve these problems, and examples of how to apply these techniques when building distributed applications.
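One classic problem in this area is leader election. The sketch below is a simplified, synchronous simulation of election on a unidirectional ring, in the spirit of ring algorithms such as Chang and Roberts: each node forwards the largest identifier it has seen, and the node whose own identifier completes a full loop declares itself leader. The function name, the synchronous-rounds model, and the uniqueness assumption on identifiers are choices made for this illustration.

```python
def ring_leader_election(ids):
    """Simulate leader election on a unidirectional ring of nodes
    with unique ids. Each round, every node receives the id sent by
    its counter-clockwise neighbour and forwards the maximum of that
    id and its own. Only the globally largest id survives a full
    trip around the ring, so the node that receives its own id back
    knows it is the leader."""
    n = len(ids)
    messages = list(ids)  # messages[i] = id node i sends clockwise
    leader = None
    for _ in range(n):  # n rounds suffice for one full loop
        incoming = [messages[(i - 1) % n] for i in range(n)]
        for i in range(n):
            if incoming[i] == ids[i]:
                # This node's id made it all the way around:
                # it holds the maximum id and becomes leader.
                leader = ids[i]
            messages[i] = max(incoming[i], ids[i])
    return leader
```

Real distributed implementations replace the shared `messages` list with actual network sends and must also handle asynchrony and node failures, which is precisely where the algorithmic techniques surveyed in such texts come in.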

A Survey of Basic Issues of Parallel Execution on a Distributed System

1995

This report examines the basic issues involved in implementing parallel execution in a distributed computational environment. The study was carried out by considering our claim that a compiler should be directly involved in detecting which processes of a program can run in parallel on a distributed system, and that a distributed operating system, in particular its global scheduling, should provide a ...

Distributed pC++ Basic Ideas for an Object Parallel Language

Scientific Programming, 1993

pC++ is an object-parallel extension to the C++ programming language. This paper describes the current language definition and illustrates the programming style. Examples of parallel linear algebra operations are presented, and a fast Poisson solver is described in complete detail.
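In the object-parallel style, an operation such as a matrix-vector product distributes element-wise work across the elements of a collection. A rough Python analogue (not pC++ itself; the function names are assumed) splits the rows of the product across a thread pool:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_matvec(matrix, vector, n_workers=2):
    """Compute a matrix-vector product by distributing rows across
    workers, loosely mirroring how an object-parallel language maps
    an operation over the elements of a distributed collection."""
    def row_dot(row):
        # Each worker handles independent rows, so no locking
        # is needed between them.
        return sum(a * b for a, b in zip(row, vector))

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(row_dot, matrix))
```

Because every row's dot product is independent, this kind of linear algebra kernel parallelizes with no communication beyond distributing the rows and gathering the results.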