Overlay Network Research Papers - Academia.edu
Abstract: Summary form only given. In a broadcasting problem, a message is sent from a source to all the other nodes in the network. Blind flooding is a classical mechanism for broadcasting, where each node retransmits a received message to all its neighbors. We ...
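The blind-flooding mechanism described in the abstract can be sketched as a toy simulation (the topology, node names, and counters below are illustrative, not from the paper):

```python
from collections import deque

def blind_flood(graph, source):
    """Simulate blind flooding: every node forwards the message to all
    of its neighbors the first time it receives it.

    graph: dict mapping node -> list of neighbors (adjacency list).
    Returns (reached, transmissions): the set of nodes that got the
    message and the total number of link transmissions.
    """
    reached = {source}
    transmissions = 0
    queue = deque([source])
    while queue:
        node = queue.popleft()
        # The node blindly retransmits to every neighbor, even those
        # that have already seen the message -- this redundancy is what
        # makes blind flooding simple but wasteful.
        for neighbor in graph[node]:
            transmissions += 1
            if neighbor not in reached:
                reached.add(neighbor)
                queue.append(neighbor)
    return reached, transmissions

# A small example topology: a triangle A-B-C plus a pendant node D.
topology = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
nodes, sent = blind_flood(topology, "A")
```

Note how the message crosses every edge in both directions (8 transmissions here) even though 3 would suffice to reach all nodes; this overhead is what flooding-reduction schemes target.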
Service-Oriented Computing and its best-known implementation technology, Web Services (WS), are becoming an important enabler of networked business models. Discovery mechanisms are a critical factor in the overall utility of Web Services. So far, discovery mechanisms based on the UDDI standard rely on many centralized, area-specific directories, which poses problems such as performance bottlenecks and limited fault tolerance. In this context, decentralized approaches based on Peer-to-Peer overlay networks have been proposed by many researchers as a solution. In this paper, we propose a new structured P2P overlay network infrastructure designed for Web Services discovery. We present theoretical analysis backed up by experimental results, showing that the proposed solution outperforms popular decentralized infrastructures for Web Service discovery: Chord (and some of its successors), BATON (and its successor), and Skip Graphs.
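The core idea behind such structured overlays is that every peer can independently compute which node is responsible for a given service key, so no central UDDI directory is needed. A minimal consistent-hashing sketch (the peer names and service key are hypothetical, and real systems like Chord add routing on top of this mapping):

```python
import hashlib
from bisect import bisect_right

def h(key):
    """Hash a string onto a 32-bit identifier ring."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (2 ** 32)

class Ring:
    """Minimal consistent-hashing ring: a service description is indexed
    at the first peer clockwise of the service key's hashed position."""
    def __init__(self, peers):
        self.points = sorted((h(p), p) for p in peers)

    def lookup(self, service_key):
        ids = [pt for pt, _ in self.points]
        i = bisect_right(ids, h(service_key)) % len(self.points)
        return self.points[i][1]

ring = Ring(["peer-1", "peer-2", "peer-3", "peer-4"])
# Every node computes the same responsible peer for a given service name,
# regardless of the order in which peers were listed.
owner = ring.lookup("weather-forecast-ws")
assert owner == Ring(["peer-4", "peer-3", "peer-2", "peer-1"]).lookup("weather-forecast-ws")
```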
Despite a plethora of research in the area, none of the mechanisms proposed so far for Denial-of-Service (DoS) mitigation has been widely deployed. We argue in this paper that these deployment difficulties are primarily due to economic inefficiency, rather than to technical shortcomings of the proposed DoS-resilient technologies. We identify economic phenomena, negative externality---the benefit derived from adopting a technology depends on the action of others---and economic incentive misalignment---the party who suffers from an economic loss is different from the party who is in the best position to prevent that loss---as the main stumbling blocks of adoption. Our main contribution is a novel DoS mitigation architecture, Burrows, with an economic incentive realignment property. Burrows is obtained by re-factoring existing key DoS mitigation technologies, and can increase the "social welfare," i.e., economic benefit, of the entire Internet community---both infrastructure ...
The Internet was designed for network services without any provision for secure communication. The exponential growth of the Internet and its users has created an era of global competition and rivalry. A denial-of-service attack mounted by multiple nodes is capable of disrupting the services of rival servers. The attack can have multiple motives, e.g., extortion or beating rivals. Peer ...
Clouds have evolved as the next-generation platform that facilitates the creation of wide-area, on-demand renting of computing or storage services for hosting application services that experience highly variable workloads and require high availability and performance. Interconnecting Cloud computing system components (servers, virtual machines (VMs), application services) through a peer-to-peer routing and information dissemination structure is essential to avoid the provisioning efficiency bottlenecks and single points of failure that are predominantly associated with traditional centralized or hierarchical approaches. These limitations can be overcome by connecting Cloud system components using a structured peer-to-peer network model (such as distributed hash tables (DHTs)). DHTs offer deterministic information/query routing and discovery with close-to-logarithmic bounds on network message complexity. By maintaining a small routing state of O(log n) per VM, a DHT structure can guarantee deterministic lookups in a completely decentralized and distributed manner. This chapter presents: (i) a layered peer-to-peer Cloud provisioning architecture; (ii) a summary of the current state of the art in Cloud provisioning, with particular emphasis on service discovery and load balancing; (iii) a classification of existing peer-to-peer network management models, with a focus on extending DHTs for indexing and managing complex provisioning information; and (iv) the design and implementation of a novel, extensible software fabric (Cloud peer) that combines public/private clouds, overlay networking, and structured peer-to-peer indexing techniques to support scalable and self-managing service discovery and load balancing in Cloud computing environments.
Finally, an experimental evaluation is presented that demonstrates the feasibility of building next-generation Cloud provisioning systems based on peer-to-peer network management and information dissemination models. The experimental testbed has been deployed on a public cloud computing platform, Amazon EC2, demonstrating the effectiveness of the proposed peer-to-peer Cloud provisioning software fabric.
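The O(log n) routing state and deterministic lookups mentioned above can be illustrated with a Chord-style finger table on a static toy ring (the ring size, node identifiers, and key below are illustrative, not from the chapter):

```python
M = 6  # identifier bits: a 2^M = 64 position ring (toy-sized)

def between(x, a, b):
    """True if x lies in the open ring interval (a, b) modulo 2^M."""
    return (a < x < b) if a < b else (x > a or x < b)

class Node:
    def __init__(self, ident):
        self.id = ident
        self.fingers = []  # fingers[k]: first node at or after id + 2^k

def build_ring(ids):
    """Build finger tables for a static ring; each node keeps only
    M = O(log n) routing entries."""
    ids = sorted(ids)
    nodes = {i: Node(i) for i in ids}
    for n in nodes.values():
        for k in range(M):
            start = (n.id + 2 ** k) % 2 ** M
            succ = min((i for i in ids if i >= start), default=ids[0])
            n.fingers.append(nodes[succ])
    return nodes

def find_successor(node, key, hops=0):
    """Greedy finger-table routing: returns (owner_node, forwarding_hops)."""
    succ = node.fingers[0]  # immediate successor on the ring
    if key == succ.id or between(key, node.id, succ.id):
        return succ, hops
    for f in reversed(node.fingers):  # try the closest preceding finger first
        if between(f.id, node.id, key):
            return find_successor(f, key, hops + 1)
    return succ, hops + 1  # fallback when no closer finger is known

# A 10-node ring; key 54 is owned by node 56, reached in a few hops.
nodes = build_ring([1, 8, 14, 21, 32, 38, 42, 48, 51, 56])
owner, hops = find_successor(nodes[8], 54)
```

Each forwarding step roughly halves the remaining identifier distance, which is where the logarithmic message-complexity bound comes from.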
In recent years, the Internet has seen unprecedented growth, which, in turn, has led to increased demand for real-time and multimedia applications with high Quality-of-Service (QoS) requirements. This evolution has created difficult challenges for Internet Service Providers (ISPs): providing good QoS for their clients, as well as offering differentiated service subscriptions for those clients who are willing to pay more for value-added services. Furthermore, a tremendous development of several types of overlay ...
Abstract—The paper reports on recent developments and challenges in multimedia distribution over IP. These are subjects of research within the research project "Routing in Overlay Networks (ROVER)", recently granted by the EuroNGI Network of Excellence (NoE). Participants in the project are Blekinge Institute of Technology (BTH) in Karlskrona, Sweden; the University of Bradford in the UK; the University of Catalonia in Barcelona, Spain; and the University of Pisa in Italy. The foundation of multimedia distribution is provided by several components, ...
- by Panos Kalnis and +1
- Privacy, Anonymity, Mobile Systems, Experimental Study
The performance of a distributed system is affected by the various functions of its components. The interaction between components such as network nodes, computer systems, and system programs is examined, with special interest accorded to its effect on system reliability. At affordable time and space costs, the analytic hierarchy process (AHP) is used to determine how the reliability of a distributed system may be controlled by appropriately assigning weights to its components. Illustrative case studies that display the system structure, the assignment of weights, and the AHP handling are presented.
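The AHP weight-assignment step can be sketched as follows. This is a generic AHP computation using the row geometric-mean approximation of the principal eigenvector, not the paper's specific procedure; the pairwise judgment values are hypothetical:

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights from a reciprocal pairwise
    comparison matrix using the row geometric-mean method (a common
    stand-in for the principal eigenvector)."""
    n = len(pairwise)
    geo = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(geo)
    return [g / total for g in geo]

# Hypothetical pairwise judgments for three component classes of a
# distributed system (network nodes vs. computer systems vs. system
# programs): entry [i][j] says how much more component class i matters
# than class j for overall reliability, on Saaty's 1-9 scale.
A = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
w = ahp_weights(A)  # normalized weights, summing to 1
```

The resulting vector ranks the component classes by their contribution to reliability; a full AHP analysis would also check the consistency ratio of the judgment matrix before trusting the weights.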
- by Stefano Salsano and +1
- Distributed Computing, Quality of Service, ATM networks, Scaling up