On Issues of Designing an Optimal Processing in Distributed Operating System

TECHNOLOGICAL REVIEW ON ISSUES OF DESIGNING AN OPTIMAL PROCESSING IN DISTRIBUTED OPERATING SYSTEM

This comprehensive review surveys the overall design criteria of distributed systems and analyzes the issues involved in allocating work to processors for good performance. The survey covers the transparency, reliability, authenticity, flexibility, security, and adaptability expected of a well-designed distributed system. The design issues are studied through interprocess communication mechanisms, using both general and comparative studies, with the aim of reducing server crashes and redundancy while preserving data integrity and performance.

An Optimized Algorithm for Enhancement of Performance of Distributed Computing System

International Journal of Computer Applications, 2013

Distributed Computing System (DCS) presents a platform consisting of multiple computing nodes, connected in some fashion, to which the various modules of a task can be assigned. A node is any device connected to a computer network; nodes can be computers or other networked devices. A task should be assigned to a processor whose capabilities are most appropriate for executing that task. In a DCS, tasks are allocated to different processors in such a way that overall time and cost are minimized and reliability is maximized. For a large set of tasks being allocated in a DCS, several allocation methods are possible, and these allocations can have a significant impact on quality of service in terms of time, cost, or reliability. Execution time is the time in which a single instruction is executed; execution cost is the amount of resource value consumed. Reliability in a DCS depends heavily on its network, and network failures have an adverse impact on system performance. In a DCS the whole workload is divided into small, independent units, called tasks, which are allocated onto the available processors. In this paper a simple algorithm for task allocation with optimum time, optimum cost, or optimum reliability is presented for the case where the number of tasks exceeds the number of processors.
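The abstract does not give the algorithm itself, but the idea of allocating more tasks than processors while minimizing time can be sketched with a simple greedy heuristic (a hedged illustration, not the paper's actual method; the cost matrix and function names are assumptions):

```python
# Greedy task allocation sketch: each task goes to the processor that would
# finish it earliest given the load accumulated so far.

def allocate_tasks(exec_time, n_processors):
    """exec_time[t][p] = assumed execution time of task t on processor p."""
    load = [0.0] * n_processors        # accumulated work per processor
    assignment = []
    for times in exec_time:
        # choose the processor whose load after adding this task is least
        p = min(range(n_processors), key=lambda i: load[i] + times[i])
        load[p] += times[p]
        assignment.append(p)
    return assignment, load

# 5 tasks on 2 processors: more tasks than processors, as in the paper
times = [[4, 6], [3, 2], [5, 5], [2, 8], [7, 3]]
assignment, load = allocate_tasks(times, 2)
```

The same skeleton could optimize cost or reliability by swapping the quantity accumulated in `load`, which mirrors the paper's framing of time, cost, and reliability as alternative objectives.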

Research issues in distributed operating systems

1986

As distributed computing becomes more widespread, both in high-energy physics and in other applications, centralized operating systems will gradually give way to distributed ones. In this paper we discuss some current research on five issues that are central to the design of distributed operating systems: communications primitives, naming and protection, resource management, fault tolerance, and system services. For each of these issues, some principles, examples, and other considerations will be given.

A Comprehensive Model for the Design of Distributed Computer Systems

IEEE Transactions on Software Engineering, 1987

The availability of micro-, mini-, and supercomputers has complicated the laws governing the economies of scale in computers. A recent study by Ein-Dor [7] concludes that it is most effective to accomplish any task on the least powerful type of computer capable of performing it. This change in cost/performance, and the promise of increased reliability, modularity, and better response time, has resulted in an increased tendency to decentralize and distribute computing power. But some economic factors, such as the communication expenses incurred and the increased storage required with distributed systems, work against the tendency to decentralize. It is clear that in many instances the optimal solution will be an integration of computers of varying power. The problem of finding this optimal integration is complex. The designer of such a system may have conflicting objectives, including low investment and operating cost, quick response to user queries, and high availability of data. Choosing the proper alternatives without computational aid may be difficult if not impossible. This paper addresses the distributed computer system design problem of selecting a proper class of processor for each location and allocating data files/databases. The initial design is based on the type and volume of transactions and the number of files expected in the system. A goal programming approach is presented to help the designer arrive at a good design in this multiobjective environment. The problem is formulated as a nonlinear goal programming problem, and a heuristic based on a modified pattern search approach is used to arrive at a good solution. Index Terms: Distributed data management, distributed system, file allocation, file availability, goal programming, processing cost, software path length.
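The goal-programming idea of balancing conflicting objectives can be sketched in miniature. This is a hedged toy version only: the processor classes, goal values, and weights below are invented for illustration, and exhaustive search stands in for the paper's modified pattern-search heuristic:

```python
from itertools import product

# Toy one-sided goal programming: pick a processor class per site so that
# weighted deviations above the cost and response-time goals are minimized.
classes = {"micro": (10, 9.0), "mini": (40, 4.0), "super": (200, 1.0)}  # (cost, response)
sites = 3
cost_goal, resp_goal = 120, 9.0   # assumed targets for total cost and response
w_cost, w_resp = 1.0, 10.0        # assumed relative importance of each goal

def deviation(choice):
    total_cost = sum(classes[c][0] for c in choice)
    total_resp = sum(classes[c][1] for c in choice)
    # penalize only over-goal deviations, as in one-sided goal programming
    return (w_cost * max(0, total_cost - cost_goal)
            + w_resp * max(0, total_resp - resp_goal))

# exhaustive search over all class assignments (the paper instead uses a
# modified pattern-search heuristic for the nonlinear formulation)
best = min(product(classes, repeat=sites), key=deviation)
```

With these made-up numbers the search trades a small response-time overshoot against staying within the cost goal, which is exactly the kind of compromise the multiobjective formulation is meant to expose.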

THEORY OF DISTRIBUTED COMPUTING AND PARALLEL PROCESSING WITH ITS APPLICATIONS, ADVANTAGES AND DISADVANTAGES

Distributed computing is a field of computer science that studies distributed systems. A distributed system consists of multiple autonomous computers that communicate through a computer network and interact with each other in order to achieve a common goal. A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs. In such a system:
• There are several autonomous computational entities, each with its own local memory.
• The entities communicate with each other by message passing.
• The system has to tolerate failures in individual computers.
• The structure of the system is not known in advance; the system may consist of different kinds of computers and network links, and it may change during the execution of a distributed program.
• Each computer has only a limited, incomplete view of the system and may know only one part of the input.
Parallel processing is the simultaneous use of more than one CPU or processor core to execute a program or multiple computational threads. In practice, it is often difficult to divide a program in such a way that separate CPUs or cores can execute different portions without interfering with each other. Most computers have just one CPU, but some models have several, and multi-core processor chips are becoming the norm; there are even computers with thousands of CPUs. With single-CPU, single-core computers, it is possible to perform parallel processing by connecting the computers in a network, but this requires very sophisticated software called distributed processing software. The two models differ as follows:
• In parallel computing, all processors may have access to a shared memory to exchange information between processors.
• In distributed computing, each processor has its own private memory (distributed memory), and information is exchanged by passing messages between the processors.
• Distributed computing has to be less bandwidth-intensive; therefore, the tasks need to be less interdependent.
• Parallel processing is faster and has higher bandwidth between nodes, but is harder to scale.
KEYWORDS: Distributed Computing, Parallel Computing, CPU

SINGLE-THREADING BASED DISTRIBUTED-MULTIPROCESSOR-MACHINES AFFECTING BY DISTRIBUTED-PARALLEL-COMPUTING TECHNOLOGY

Journal of University of Duhok, 2023

The objective of this study is to propose a methodology for developing a distributed memory system with multiple computers and multicore processors. This system can be implemented on distributed-shared memory systems, utilizing the principles of client/server architecture. The presented system consists of two primary components: monitoring and managing programs executed on distributed multi-core architectures with 2, 4, and 8 CPUs in order to accomplish a specific task. In the context of problem-solving, the network has the capacity to support multiple servers along with one client. During the implementation phase, it is imperative to consider three distinct scenarios that encompass the majority of design alternatives. The proposed system has the capability to compute the Total-Task-Time (TTT) on the client side, as well as the timings of all relevant servers, including Started, Elapsed, CPU, Kernel, User, Waiting, and Finish. When designing User Programs (UPs), the following creation scenario is carefully considered: the term "single-process-multi-thread" (SPMT) refers to a computing paradigm where a single process is executed by multiple threads. The results unequivocally indicate that an augmentation in processing capacity corresponds to a proportional enhancement in the speed at which problems are solved. This pertains specifically to the quantity of servers and the number of processors allocated to each server. Consequently, the duration required to finish the assignment increased by a factor of 9.156, contingent upon three distinct scenarios involving SPMT UPs. The C# programming language is utilized for the coding process in the implementation of this system.
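The SPMT paradigm and client-side TTT measurement can be sketched as follows. This is a hedged stand-in (the paper's system is in C# with real servers; here plain threads and a toy workload are assumed, and the names `run_spmt` and `work` are invented):

```python
import threading
import time

def work(results, i):
    # stand-in for a server-side subtask executed by one thread
    results[i] = sum(range(100_000))

def run_spmt(n_threads):
    """Single process, multiple threads; TTT timed on the 'client' side."""
    results = [0] * n_threads
    start = time.perf_counter()                       # start of Total-Task-Time
    threads = [threading.Thread(target=work, args=(results, i))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                                      # wait for all subtasks
    ttt = time.perf_counter() - start                 # Total-Task-Time (TTT)
    return ttt, results

ttt, results = run_spmt(4)
```

The per-server timings the paper reports (Started, Elapsed, CPU, Kernel, User, Waiting, Finish) would be taken the same way, with each server recording its own timestamps around its portion of the work.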

Performance Evaluation in Distributed System using

Distributed computing system (DCS) is a collection of heterogeneous and geographically dispersed computing nodes that work cooperatively to complete tasks. Because of the dynamic nature of a DCS, nodes may fail randomly, so performance is an important factor to consider, and resource management plays an important role in achieving improved performance. In this paper dynamic load balancing is used to achieve better performance results, even in the case of node failure, using regenerative theory.
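The mechanics of dynamic load balancing under node failure can be sketched briefly. This is a hedged illustration of the general idea only (the regenerative-theory analysis is not reproduced, and the dispatcher below, including the function name `balance` and the failure model, is an assumption):

```python
import random

def balance(tasks, nodes, fail_prob=0.0, rng=None):
    """Assign tasks to the least-loaded live node; a failed node's tasks
    rejoin the queue and are redistributed among the survivors."""
    rng = rng or random.Random(0)
    load = {n: [] for n in nodes}
    pending = list(tasks)
    while pending:
        task = pending.pop(0)
        # dispatch to the currently least-loaded live node
        target = min(load, key=lambda n: len(load[n]))
        load[target].append(task)
        # a node may fail at random; its tasks go back to the queue
        if rng.random() < fail_prob and len(load) > 1:
            failed = rng.choice(list(load))
            pending.extend(load.pop(failed))
    return load

assignment = balance(range(8), ["n1", "n2", "n3"])
```

With `fail_prob=0` this degenerates to plain least-loaded dispatching; raising it shows the rebalancing behavior that the paper analyzes for its effect on overall performance.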

Improved Strategy for Distributed Processing and Network Application Development

The complexity of software development abstraction and new developments in multi-core computers have shifted the burden of distributed software performance from network and chip designers to software architects and developers. We need to look at software development strategies that integrate parallelization of code, concurrency factors, multithreading, distributed resource allocation, and distributed processing. In this paper, a new software development strategy that integrates these factors is further evaluated experimentally with respect to parallelism. The strategy is multidimensional and aligns distributed conceptualization along a single path. This development strategy mandates that application developers reason about usability, simplicity, resource distribution, parallelization of code where necessary, processing time and cost factors, as well as security and concurrency issues, in a balanced path from the originating point of the network application to its retirement.

Distributed Computing System Architectures: Hardware

Distributed Operating Systems, 1987

Distributed Computing System (DCS) architectures have taken various forms through a considerably short development stage. In a DCS the underlying hardware characteristics are to be transparent to the application-level processing. This transparency is regarded to be a function of the operating system. The standardization efforts in communication protocols and the layered approach in the analysis of DCSs have contributed to the advances in distributed control, and thus to distributed operating systems. Fortunately, most distributed architectures are of one of several kinds: computer communication networks, local area networks, and various multicomputer systems (often merged with the first two). Tightly coupled multiprocessor systems, associative processors, data flow machines, and similar parallel architectures are strictly excluded from our discussion of DCSs. In this presentation the emphasis is given to computer networks, local area networks, and the channel-sharing mechanisms, such as point-to-point and multi-point connections, which have been the basis of the communication medium in many DCSs.

Practical approach to distributed systems' design

Proc. 5th Seminar on Computer Networks, 1998

The paper, based on the authors' experience from several distributed systems integration projects, briefly summarizes a practical designer's view on methodological requirements and overall system organization, including clues as to the organization of the application layer, the use of the operating system, and preferred communication protocols.