A New Large-Scale Distributed System
Related papers
The performance of weak-consistency replication protocols
1992
Weak-consistency replication protocols can be used to build wide-area services that are scalable, fault-tolerant, and useful for mobile computer systems. We have developed the timestamped anti-entropy protocol, which provides reliable eventual delivery with a variety of message orderings. Pairs of replicas periodically exchange update messages; in this way updates eventually propagate to all replicas. In this paper we present a detailed analysis of the fault tolerance and the consistency provided by this protocol. The protocol is extremely robust in the face of site and network failure, and it scales well to large numbers of replicas.

We are investigating an architecture for building distributed services that emphasizes scalability and fault tolerance. This allows applications to respond gracefully to changes in demand and to site and network failure. It also provides a single mechanism to support wide-area services and mobile computing systems. It uses weak-consistency replication techniques to build a flexible distributed service. We use data replication to meet availability demands and enable scalability. The replication is dynamic in that new servers can be added or removed to accommodate changes in demand. The system is asynchronous, and servers are as independent as possible; it never requires synchronous cooperation of large numbers of sites. This improves its ability to handle both communication and site failures.

Eventually or weakly consistent replication protocols do not perform synchronous updates. Instead, updates are first delivered to one site, then propagated asynchronously to others. The value a server returns to a client read request depends on whether that server has observed the update yet. Eventually, every server will observe the update. Several existing information systems, such as Usenet [1] and the Xerox Grapevine system [2], use similar techniques. Delayed propagation means that clients do not wait for updates to reach distant sites, and the fault tolerance of the replicated data cannot be compromised by clients that misbehave. It also allows updates to be sent using bulk transfer protocols, which provide the best efficiency on high-bandwidth, high-latency networks. These transfers can occur at off-peak times. Replicas can be disconnected from the network for a period of time and will be updated once they are reconnected. On the other hand, clients must be able to tolerate some inconsistency, and the application may need to provide a mechanism to reconcile conflicting updates.

Large numbers of replicas allow replicas to be placed near clients and spread query load over more sites. This decreases both the communication latency for client requests and the amount of long-distance traffic that must be carried on backbone network links. Mobile computing systems can maintain a local replica, ensuring that users can access information even when disconnected from the network.

These protocols can be compared with consistent replication protocols, such as voting protocols. Consistent protocols cannot practically handle hundreds or thousands of replicas, while weak-consistency protocols can. Consistent protocols require the synchronous participation of a large number of replicas, which can be impossible when a client resides on a portable system or when the network is partitioned. It is also difficult to share processing load across many replicas. The communication traffic and associated latency are often unacceptably large for a service with replicas scattered over several continents.
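The pairwise exchange at the heart of anti-entropy is easy to see in a toy form. The following Python sketch shows only the general idea of merging timestamped update logs, not the paper's timestamped anti-entropy protocol; the class, field, and function names are hypothetical.

import random
import time

class Replica:
    """Minimal sketch of pairwise anti-entropy: each replica keeps a log of
    updates keyed by (timestamp, origin_id); a session merges two logs."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.log = {}          # (timestamp, origin_id) -> update payload

    def local_update(self, payload):
        # An update is first applied locally, then spreads lazily.
        key = (time.time(), self.replica_id)
        self.log[key] = payload

    def anti_entropy_session(self, peer):
        # Exchange whatever the other side is missing; after the session
        # both replicas hold the union of the two logs (eventual delivery).
        for key, payload in peer.log.items():
            self.log.setdefault(key, payload)
        for key, payload in self.log.items():
            peer.log.setdefault(key, payload)

# Periodically pairing random replicas propagates every update to all sites.
replicas = [Replica(i) for i in range(5)]
replicas[0].local_update("new mail item")
for _ in range(10):
    a, b = random.sample(replicas, 2)
    a.anti_entropy_session(b)
print(all(len(r.log) == 1 for r in replicas))   # True with high probability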
On the availability of replicated contents in the Web
2003
This study considers the problem of locating proxies in the Web in order to maximize object availability. A read-one/write-all protocol is used, and a placement based on the Dynamic Programming (DP) technique is presented. The study then derives the properties of the resulting replicated system and studies its availability as a function of the read/write ratio for uniform client requests.
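To see why availability under read-one/write-all depends on the read/write ratio, consider a simple model in which each of n replicas is independently available with probability p and a fraction r of requests are reads: a read needs any one replica, a write needs all of them. This is only an illustrative calculation with made-up numbers, not the study's model or results.

def row_a_availability(n, p, r):
    """Expected request availability under read-one/write-all, assuming
    n independent replicas each up with probability p and a fraction r
    of requests being reads (illustrative model only)."""
    read_avail = 1 - (1 - p) ** n     # at least one replica reachable
    write_avail = p ** n              # every replica must be reachable
    return r * read_avail + (1 - r) * write_avail

# More replicas help reads but hurt writes, so the best n depends on r.
print(row_a_availability(n=3, p=0.95, r=0.9))   # ~0.986
print(row_a_availability(n=8, p=0.95, r=0.9))   # ~0.966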
2003
There are many examples showing that the growth of a system is limited if that system relies on important central instances for supply or control. In a growing system, the central instances turn into bottlenecks, making the system inefficient. Examples of this phenomenon can be found in computer science, in history, and even in the behavior of dinosaurs. Distributed systems have great potential to bypass such bottlenecks, but due to "friction" issues such as incomplete knowledge, we may not be able to exploit their potential. In this thesis we show that some of these issues can be handled quite well in weak and realistic models. More specifically, we present and discuss three examples:
APRE: A replication method for unstructured P2P networks
2006
We present APRE, a replication method for structureless Peer-to-Peer overlays. The goal of our method is to achieve real-time replication of even the most sparsely located content relative to demand. APRE adaptively expands or contracts the replica set of an object in order to improve the sharing process and keep the load on individual providers low. To achieve that, it utilizes search knowledge to identify possible replication targets inside query-intensive areas of the overlay. We present detailed simulation results in which APRE exhibits both efficiency and robustness across varying numbers of requesters and request rates. The scheme proves particularly useful in the event of flash crowds, managing to adapt quickly to sudden surges in load.
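A load-watermark rule is one simple way to picture the expand/contract behavior described above. The Python sketch below is a guess at the general shape of such a rule, not the actual APRE algorithm; the watermark values and all names are hypothetical.

HIGH_WATERMARK = 50   # requests per interval above which we replicate
LOW_WATERMARK = 5     # requests per interval below which we drop a copy

def adapt_replicas(provider_load, neighbor_query_counts, holds_replica=True):
    """Sketch of adaptive expansion/contraction of a replica set.

    provider_load:         requests served by this provider in the last interval
    neighbor_query_counts: dict neighbor_id -> queries forwarded through it,
                           used as a proxy for query-intensive overlay areas
    Returns the action this provider would take ('expand', 'contract', 'keep').
    """
    if provider_load > HIGH_WATERMARK and neighbor_query_counts:
        # Push a new replica toward the neighbor that forwards the most queries.
        target = max(neighbor_query_counts, key=neighbor_query_counts.get)
        return ("expand", target)
    if provider_load < LOW_WATERMARK and holds_replica:
        # Demand has dropped; shrink the replica set to save storage and updates.
        return ("contract", None)
    return ("keep", None)

print(adapt_replicas(80, {"n1": 12, "n2": 40}))   # ('expand', 'n2')
print(adapt_replicas(2, {"n1": 1}))               # ('contract', None)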
Replication algorithms for the World-Wide Web
Journal of Systems Architecture, 2004
This paper addresses the two fundamental issues in replication, namely deciding on the number and placement of the replicas and the distribution of requests among replicas. We first introduce a static centralized algorithm for replicating objects that can keep a balanced load on servers. To better accommodate the dynamic nature of Internet traffic and the rapid changes in WWW access patterns, we also propose a dynamic distributed algorithm in which each server relies on collected information to decide where to replicate and migrate objects in order to achieve good performance and fault-tolerance levels.
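A per-server decision of the kind such a dynamic distributed algorithm implies might look roughly like the following. This is an illustrative rule under assumed thresholds, not the algorithm from the paper; replicate_ratio, migrate_ratio, and the server names are invented.

def place_object(local_requests, remote_requests, replicate_ratio=0.5,
                 migrate_ratio=2.0):
    """Decide whether to keep, replicate, or migrate an object, based on
    request counts collected at the hosting server (illustrative only).

    remote_requests: dict server_id -> number of requests seen from that server
    """
    if not remote_requests:
        return ("keep", None)
    hottest, hottest_count = max(remote_requests.items(), key=lambda kv: kv[1])
    if hottest_count > migrate_ratio * max(local_requests, 1):
        # Almost all demand comes from one remote site: move the object there.
        return ("migrate", hottest)
    if sum(remote_requests.values()) > replicate_ratio * max(local_requests, 1):
        # Substantial remote demand: place an extra copy at the hottest site.
        return ("replicate", hottest)
    return ("keep", None)

print(place_object(local_requests=100, remote_requests={"s2": 30, "s3": 40}))
# ('replicate', 's3')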
Accessing Replicated Data in a Large-Scale Distributed System
Replicating a data object improves the availability of the data, and can improve access latency by locating copies of the object near to their use. When accessing replicated objects across an in-ternetwork, the time to access different replicas is non-uniform. Further, the probability that a particular replica is inaccessible is much higher in an internetwork than in a local-area network (LAN) because of partitions and the many intermediate hosts and networks that can fail. We report three replica-accessing algorithms which can be tuned to minimize either access latency or the number of messages sent. These algorithms assume only an unreliable datagram mechanism for communicating with replicas. Our work extends previous investigations into the performance of replication algorithms by assuming unreliable communication. We have investigated the performance of these algorithms by measuring the communication behavior of the Internet, and by building discrete-event simulations based on our measurements. We find that almost all message failures are either transient or due to long-term host failure, so that retry-ing messages a few times adds only a small amount to the overall message traffic while improving both access latency as long as the probability of message failure is small. Moreover, the algorithms which retry messages on failure provide significantly improved availability over those which do not.
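The latency-versus-messages trade-off in such retry-based access can be pictured with a small loop like the one below. It is a sketch under assumed parameters (p_loss, retries_per_replica, and the replica names are invented), not one of the paper's three algorithms.

import random

def send_datagram(replica, p_loss=0.1):
    """Stand-in for an unreliable datagram exchange; returns a reply or None."""
    return f"value@{replica}" if random.random() > p_loss else None

def read_replicated(replicas_by_latency, retries_per_replica=2):
    """Try the nearest replica first; retry a few times before moving on.

    More retries lower the chance of skipping a live replica (better latency
    and availability) at the cost of extra messages when a site is truly down.
    """
    messages_sent = 0
    for replica in replicas_by_latency:
        for _ in range(retries_per_replica):
            messages_sent += 1
            reply = send_datagram(replica)
            if reply is not None:
                return reply, messages_sent
    return None, messages_sent           # all replicas appear unreachable

print(read_replicated(["replica-near", "replica-mid", "replica-far"]))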
A Consistency Protocol Multi-Layer for Replicas Management in Large Scale Systems
Large-scale systems such as computational Grids are distributed computing infrastructures that can provide globally available network resources. The evolution of information processing systems in Data Grids is characterized by a strong decentralization of data across several sites, with the objective of ensuring the availability and reliability of the data in order to provide fault tolerance and scalability, which is only possible through replication techniques. Unfortunately, the use of these techniques has a high cost, because consistency must be maintained between the distributed copies. Nevertheless, accepting certain imperfections can improve the performance of the system by increasing concurrency. In this paper, we propose a multi-layer protocol combining the pessimistic and optimistic approaches, conceived for data consistency maintenance in large-scale systems. Our approach is based on a hierarchical representation model with t...
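One way to picture a hierarchy that mixes pessimistic and optimistic replication is sketched below: a small core of replicas is updated synchronously, while leaf replicas are refreshed lazily and may briefly lag. This is only a guess at the general shape of such a design, not the protocol proposed in the paper, and every class and function name is invented.

class CoreReplica:
    """Top layer: pessimistic, updated synchronously as one unit."""
    def __init__(self):
        self.data = {}

class LeafReplica:
    """Lower layer: optimistic, updated lazily and allowed to lag."""
    def __init__(self):
        self.data = {}

def write(core_replicas, leaf_replicas, key, value):
    # Pessimistic step: all core replicas apply the write before it returns.
    for core in core_replicas:
        core.data[key] = value
    # Optimistic step: leaves are refreshed in the background (here, queued).
    return [(leaf, key, value) for leaf in leaf_replicas]   # pending pushes

def propagate(pending):
    # Background propagation; until it runs, leaves may return stale values.
    for leaf, key, value in pending:
        leaf.data[key] = value

cores, leaves = [CoreReplica(), CoreReplica()], [LeafReplica()]
pending = write(cores, leaves, "x", 1)
print(leaves[0].data.get("x"))   # None: leaf still stale
propagate(pending)
print(leaves[0].data.get("x"))   # 1: eventually consistent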
Data replication in a distributed system: A performance study
Database and Expert Systems …, 1996
In this paper we investigate the performance issues of data replication in a loosely coupled distributed database system, where a set of database servers are connected via a network. A database replication scheme, Replication with Divergence, which allows some degree of divergence between the primary and the secondary copies of the same data object, is compared with two other schemes that, respectively, disallow replication and keep all replicated copies consistent at all times. The impact of some tunable factors, such as cache size and the update propagation probability, on the performance of Replication with Divergence is also investigated. These results shed light on performance issues that were not addressed in previous studies of replication in distributed database systems.
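An update propagation probability such as the one the abstract mentions can be illustrated with a toy primary/secondary scheme: each write always reaches the primary and reaches each secondary only with some probability, so copies may diverge. This is only an illustrative sketch with an invented class and default value, not the paper's Replication with Divergence scheme.

import random

class DivergentReplication:
    """Primary copy plus secondaries that are refreshed probabilistically,
    so some divergence between copies is allowed (illustrative sketch)."""

    def __init__(self, num_secondaries, propagation_prob=0.3):
        self.primary = {}
        self.secondaries = [{} for _ in range(num_secondaries)]
        self.propagation_prob = propagation_prob

    def write(self, key, value):
        self.primary[key] = value
        for copy in self.secondaries:
            # Tunable factor: a higher probability means fresher secondaries
            # but more propagation traffic after every update.
            if random.random() < self.propagation_prob:
                copy[key] = value

    def read(self, key, from_secondary=0):
        # A secondary read may return a stale (or missing) value.
        return self.secondaries[from_secondary].get(key)

db = DivergentReplication(num_secondaries=2)
db.write("x", 42)
print(db.read("x"))   # 42 or None, depending on whether the update propagated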