Overlay Network Research Papers - Academia.edu
2025, 2018 IEEE International Conference on Communications Workshops (ICC Workshops)
One major benefit of named-data networking (NDN) is its potential to control network load by leveraging in-network caching and request aggregation. Both the network operator and consumers benefit from these features, as operating costs are reduced and quality-of-experience is increased. However, request aggregation, combined with NDN's loop prevention mechanisms, allows clients employing multicast forwarding to cause, intentionally or unintentionally, denial of service (DoS) against other clients' interests. In this paper, we discuss this problem and propose three increasingly efficient solutions to address it; our arguments are backed by simulation and numerical analyses.
2025, Proceedings of the 3rd international conference on Quality of service in heterogeneous wired/wireless networks - QShine '06
creates new opportunities to save costs by converging data and telephone services. The primary question of our study is whether emerging metropolitan networks can meet the QoS requirements necessary to connect GSM and UMTS base stations. These requirements on delay, jitter, and loss are significantly more stringent than those for VoIP and have to be met in the presence of bursty cross traffic. Therefore, we have probed ETH's campus network, which spans the metropolitan area of Zurich, with traffic comparable to encapsulated E1 traffic from base stations and have measured the perceived QoS. Our findings show that the campus network generally meets the QoS requirements. However, the perceived QoS degrades with increasing network utilization. To further investigate the impact of the configuration of Ethernet switches on perceived QoS, we have conducted a simulation study. The results show that the perceived QoS is better when switch buffers are limited to small sizes of 1 MB compared to setups where the number of frames in the buffer is limited. From these results we infer that metropolitan Gigabit Ethernets are well suited for connecting GSM and UMTS base stations.
2025, ACM SIGCOMM Computer Communication Review
2025
Path probing is essential to maintaining an efficient overlay network topology. However, the cost of complete probing can be as high as O(n²), which is prohibitive in large-scale overlay networks. Recently we proposed a method that trades probing overhead for inference accuracy in sparse networks such as the Internet. The method uses physical path information to infer path quality for all of the n × (n − 1) overlay paths, while actually probing only a subset of the paths. In this paper we propose and evaluate a distributed approach to implementing this method. We describe a minimum diameter, link-stress bounded overlay spanning tree, which is used to collect and disseminate path quality information. All nodes in the tree collaborate to infer the quality of all paths. Simulation results show this approach can achieve a high level of inference accuracy while reducing probing overhead and balancing link stress on the spanning tree.
2025
This paper describes Service Clouds, a distributed infrastructure designed to facilitate rapid prototyping and deployment of services that enhance communication performance, robustness, and security. The infrastructure combines adaptive middleware functionality with an overlay network substrate in order to support dynamic instantiation and reconfiguration of services. The Service Clouds architecture includes a collection of low-level facilities that can be either invoked directly by applications or used to compose more complex services. After describing the Service Clouds architecture, we present results of two experimental case studies conducted on the PlanetLab Internet testbed, the first to improve throughput of bulk data transfer, and the second to enhance the robustness of multimedia streaming.
2025
In pervasive computing environments, conditions are highly variable and resources are limited. In order to meet the needs of applications, systems must adapt dynamically to changing situations. Since adaptation at one system layer may be insufficient, cross-layer, or vertical, approaches to adaptation may be needed. Moreover, adaptation in distributed systems often requires horizontal cooperation among hosts. This cooperation is not restricted to the source and destination(s) of a data stream, but might also include intermediate hosts in an overlay network or mobile ad hoc network. We refer to this combined capability as universal adaptation. We contend that the model defining interaction between adaptive middleware and the operating system is critical to realizing universal adaptation. We explore this hypothesis by evaluating the Kernel-Middleware eXchange (KMX), a specific model for cross-layer, cross-system adaptation. We present the KMX architecture and discuss its potential role in supporting universal adaptation in pervasive computing environments. We then describe a prototype implementation of KMX and show results of an experimental case study in which KMX is used to improve the quality of video streaming to mobile nodes in a hybrid wired-wireless network.
2025
Use of multiple paths between node pairs can enable an overlay network to bypass Internet link failures. Selecting high quality primary and backup paths is challenging, however. To maximize communication reliability, an overlay multipath routing protocol must account for both the failure probability of a single path and link sharing among multiple paths. We propose a practical solution that exploits physical topology information and end-to-end path quality measurement results to select high quality path pairs. Simulation results show the proposed approach is effective in achieving higher multipath reliability in overlay networks at reasonable communication cost.
2025
Path probing is essential to maintaining an efficient overlay network topology. However, the cost of a full-scale probing is as high as O(n²), which is prohibitive in large-scale overlay networks. Several methods have been proposed to reduce probing overhead, although at a cost in terms of probing completeness. In this paper, an orthogonal solution is proposed that trades probing overhead for estimation accuracy in sparse networks such as the Internet. The proposed solution uses network-level path composition information (for example, as provided by a topology server) to infer path quality without full-scale probing. The inference metrics include latency, loss rate and available bandwidth. This approach is used to design several probing algorithms, which are evaluated through analysis and simulation. The results show that the proposed method can significantly reduce probing overhead while providing bounded quality estimations for all n × (n − 1) overlay paths. The solution is well suited to medium-scale overlay networks in the Internet. In other environments, it can be combined with extant probing algorithms to further improve performance.
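As a rough illustration of the path-composition idea above (not the authors' algorithm), the Python sketch below treats each overlay path's latency as the sum of the latencies of the physical links it crosses, probes only a subset of paths, and solves the resulting linear system to estimate the remaining paths. The topology, link names, and latencies are hypothetical.

```python
# Rough sketch, not the authors' algorithm: estimate overlay path latencies from a
# probed subset by exploiting path composition over shared physical links.
# Link names, latencies, and the overlay paths are hypothetical illustration data.
import numpy as np

physical_links = ["l1", "l2", "l3", "l4"]
true_latency = {"l1": 10.0, "l2": 5.0, "l3": 8.0, "l4": 12.0}   # ms, assumed ground truth

# Overlay paths expressed as the physical links they traverse (e.g. from a topology server).
overlay_paths = {
    "A->B": ["l1", "l2"],
    "A->C": ["l1", "l3"],
    "B->C": ["l2", "l3"],
    "B->D": ["l2", "l4"],
    "A->D": ["l1", "l4"],
}

def incidence_row(path_name):
    return [1.0 if link in overlay_paths[path_name] else 0.0 for link in physical_links]

# Probe only a subset of paths whose link sets give independent equations.
probed = ["A->B", "A->C", "B->D", "A->D"]
A = np.array([incidence_row(p) for p in probed])
b = np.array([sum(true_latency[l] for l in overlay_paths[p]) for p in probed])  # measured latencies

# Least-squares estimate of per-link latencies from the probed paths only.
link_estimate, *_ = np.linalg.lstsq(A, b, rcond=None)

# Every overlay path, including the never-probed B->C, can now be estimated.
for name in overlay_paths:
    est = float(np.dot(incidence_row(name), link_estimate))
    print(f"{name}: estimated latency {est:.1f} ms")
```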
2025, 24th International Conference on Distributed Computing Systems, 2004. Proceedings.
Path probing is essential to maintaining an efficient overlay network topology. However, the cost of complete probing can be as high as O(n²), which is prohibitive in large-scale overlay networks. Recently we proposed a method that trades probing overhead for inference accuracy in sparse networks such as the Internet. The method uses physical path information to infer path quality for all of the n × (n − 1) overlay paths, while actually probing only a subset of the paths. In this paper we propose and evaluate a distributed approach to implementing this method. We describe a minimum diameter, link-stress bounded overlay spanning tree, which is used to collect and disseminate path quality information. All nodes in the tree collaborate to infer the quality of all paths. Simulation results show this approach can achieve a high level of inference accuracy while reducing probing overhead and balancing link stress on the spanning tree.
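The sketch below is a loose illustration of the dissemination structure described above, not the paper's construction: it grows a shallow spanning tree over the overlay members with a per-node fan-out cap, used here as a crude stand-in for the link-stress bound. Member names and the cap are assumptions.

```python
# Loose sketch, not the paper's construction: grow a shallow spanning tree over the
# overlay members with a per-node fan-out cap (a crude stand-in for the link-stress
# bound). Member names and the cap are assumptions.
from collections import deque

members = ["n0", "n1", "n2", "n3", "n4", "n5", "n6", "n7"]
MAX_FANOUT = 3                      # assumed bound on children per node

def build_tree(root, nodes, max_fanout):
    """Breadth-first attachment keeps the tree shallow, i.e. its diameter small."""
    children = {n: [] for n in nodes}
    parent = {root: None}
    attach_points = deque([root])   # nodes that can still accept children
    for node in nodes:
        if node == root:
            continue
        while len(children[attach_points[0]]) >= max_fanout:
            attach_points.popleft()
        parent[node] = attach_points[0]
        children[attach_points[0]].append(node)
        attach_points.append(node)
    return parent, children

parent, children = build_tree("n0", members, MAX_FANOUT)
# Path-quality reports would be aggregated toward the root and results pushed back
# down along these edges.
print(children)
```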
2025, Proceedings of the 2nd workshop on Middleware for pervasive and ad-hoc computing -
In pervasive computing environments, conditions are highly variable and resources are limited. In order to meet the needs of applications, systems must adapt dynamically to changing situations. Since adaptation at one system layer may be insufficient, cross-layer, or vertical, approaches to adaptation may be needed. Moreover, adaptation in distributed systems often requires horizontal cooperation among hosts. This cooperation is not restricted to the source and destination(s) of a data stream, but might also include intermediate hosts in an overlay network or mobile ad hoc network. We refer to this combined capability as universal adaptation. We contend that the model defining interaction between adaptive middleware and the operating system is critical to realizing universal adaptation. We explore this hypothesis by evaluating the Kernel-Middleware eXchange (KMX), a specific model for cross-layer, cross-system adaptation. We present the KMX architecture and discuss its potential role in supporting universal adaptation in pervasive computing environments. We then describe a prototype implementation of KMX and show results of an experimental case study in which KMX is used to improve the quality of video streaming to mobile nodes in a hybrid wired-wireless network.
2025, 25th IEEE International Conference on Distributed Computing Systems Workshops
Use of multiple paths between node pairs can enable an overlay network to bypass Internet link failures. Selecting high quality primary and backup paths is challenging, however. To maximize communication reliability, an overlay multipath routing protocol must account for both the failure probability of a single path and link sharing among multiple paths. We propose a practical solution that exploits physical topology information and end-to-end path quality measurement results to select high quality path pairs. Simulation results show the proposed approach is effective in achieving higher multipath reliability in overlay networks at reasonable communication cost.
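A minimal worked example of why link sharing matters when picking a backup path (illustrative numbers, not the authors' protocol): with independent physical-link failures, a backup that shares a flaky link with the primary buys far less reliability than a disjoint one.

```python
# Minimal sketch of the idea above (not the authors' protocol): estimate how likely
# it is that BOTH a primary and a backup overlay path fail, accounting for physical
# links they share. Link names and failure probabilities are assumed.
def p_up(path, p_fail):
    """Probability every physical link on the path stays up (links independent)."""
    prob = 1.0
    for link in set(path):
        prob *= 1.0 - p_fail[link]
    return prob

def pair_reliability(primary, backup, p_fail):
    """P(at least one of the two paths survives)."""
    both_up = p_up(set(primary) | set(backup), p_fail)
    return p_up(primary, p_fail) + p_up(backup, p_fail) - both_up

p_fail = {"l1": 0.02, "l2": 0.01, "l3": 0.05, "l4": 0.01, "l5": 0.02}
primary = ["l1", "l3"]                 # primary path traverses l1 and l3
backup_shared = ["l3", "l4"]           # shares the flaky link l3 with the primary
backup_disjoint = ["l2", "l5"]         # physically disjoint backup

print(f"shared backup:   {pair_reliability(primary, backup_shared, p_fail):.4f}")
print(f"disjoint backup: {pair_reliability(primary, backup_disjoint, p_fail):.4f}")
```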
2025, 11th IEEE International Conference on Network Protocols, 2003. Proceedings.
Path probing is essential to maintaining an efficient overlay network topology. However, the cost of a full-scale probing is as high as O(n²), which is prohibitive in large-scale overlay networks. Several methods have been proposed to reduce probing overhead, although at a cost in terms of probing completeness. In this paper, an orthogonal solution is proposed that trades probing overhead for estimation accuracy in sparse networks such as the Internet. The proposed solution uses network-level path composition information (for example, as provided by a topology server) to infer path quality without full-scale probing. The inference metrics include latency, loss rate and available bandwidth. This approach is used to design several probing algorithms, which are evaluated through analysis and simulation. The results show that the proposed method can significantly reduce probing overhead while providing bounded quality estimations for all n × (n − 1) overlay paths. The solution is well suited to medium-scale overlay networks in the Internet. In other environments, it can be combined with extant probing algorithms to further improve performance.
2025, Lecture Notes in Computer Science
We recently introduced Service Clouds, a distributed infrastructure designed to facilitate rapid prototyping and deployment of autonomic communication services. In this paper, we propose a model that extends Service Clouds to the wireless edge of the Internet. This model, called Mobile Service Clouds, enables dynamic instantiation, composition, configuration, and reconfiguration of services on an overlay network to support mobile computing. We have implemented a prototype of this model and applied it to the problem of dynamically instantiating and migrating proxy services for mobile hosts. We conducted a case study involving data streaming across a combination of PlanetLab nodes, local proxies, and wireless hosts. Results are presented demonstrating the effectiveness of the prototype in establishing new proxies and migrating their functionality in response to node failures.
2025
This paper addresses the problem of mapping software services onto an overlay network, specifically, the probing to locate suitable nodes on which to instantiate or configure data processing operators. We propose a distributed algorithm, called Dynamis, that can improve existing probing algorithms. Experimental results on the PlanetLab testbed show that Dynamis can dramatically reduce probing overhead while producing high-quality services.
2025, IEEE Transactions on Network and Service Management
This paper describes Service Clouds, a distributed infrastructure designed to facilitate rapid prototyping and deployment of adaptive communication services. The infrastructure combines adaptive middleware functionality with an overlay network substrate in order to support dynamic instantiation and reconfiguration of services. The Service Clouds architecture includes a collection of low-level facilities that can be invoked directly by applications or used to compose more complex services. After describing the Service Clouds architecture, we present results of experimental case studies conducted on the PlanetLab Internet testbed alone and a mobile computing testbed.
2025, Computer Communications
Path probing is essential to maintaining an efficient overlay network topology. However, the cost of a full-scale probing is as high as O(n²), which is prohibitive in large-scale overlay networks. Several methods have been proposed to reduce probing overhead, although at a cost in terms of probing completeness. In this paper, an orthogonal solution is proposed that trades probing overhead for estimation accuracy in sparse networks such as the Internet. The proposed solution uses network-level path composition information (for example, as provided by a topology server) to infer path quality without full-scale probing. The inference metrics include latency, loss rate and available bandwidth. This approach is used to design several probing algorithms, which are evaluated through extensive simulation. The results show that the proposed method can reduce probing overhead significantly while providing bounded quality estimations for all of the n × (n − 1) overlay paths. The solution is well suited to medium-scale overlay networks in the Internet. In other environments, it can be combined with extant probing algorithms to further improve performance.
2025, arXiv (Cornell University)
Metrics play an increasingly fundamental role in the design, development, deployment and operation of telecommunication systems. Despite their importance, studies of metrics are usually limited to a narrow area or a well-defined objective. Our study aims to more broadly survey the metrics that are commonly used for analyzing, developing and managing telecommunication networks, in order to facilitate understanding of the current metrics landscape. Metrics are simple abstractions of systems, and they directly influence how the systems are perceived by different stakeholders. However, defining and using metrics for telecommunication systems with ever increasing complexity is a complicated matter which has so far not been systematically and comprehensively considered in the literature. The common metrics sources are identified, and how metrics are used and selected is discussed. The most commonly used metrics for telecommunication systems are categorized and presented as energy and power metrics, quality-of-service metrics, quality-of-experience metrics, security metrics, and reliability and resilience metrics. Finally, research directions and recommendations on how metrics can evolve, and be defined and used more effectively, are outlined.
2025
Named Data Networking (NDN), a data-centric, cache-enabled architecture and one of the candidates for the future Internet, has the potential to overcome many of the current Internet's difficulties (e.g., security, mobility, multicasting). Thanks to caching in intermediate equipment, NDN has gained attention as a prominent method of Internet content sharing. Managing NDN caches and reducing cache redundancy are the main goals of this paper. Our main contribution is toward caching optimization in comparison with the betweenness-based probabilistic in-network caching strategy. We therefore propose a flexible probabilistic caching strategy that combines the impact of a long-term centrality-based metric with a Linear Weighted Moving Average (LWMA) of short-term parameters, such as a node's incoming pending requests and unique outgoing hit requests, on caching management. Moreover, a simple Randomized-SVD approach is applied to fuse the averaged short-term and long-term metrics, and the output of this data-fusion algorithm is used to allocate a proper probability to the caching strategy. Evaluation results show an increase in the hit ratio of NDN routers' content stores for the proposed method. In addition, the producer's hit ratio and the Interest-Data Round Trip Time are decreased compared to the betweenness scheme.
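The sketch below illustrates the general flavor of such a probabilistic caching decision; it is not the paper's scheme, and it replaces the Randomized-SVD fusion with a plain weighted sum. The centrality score, the short-term samples, and the weights are all assumed.

```python
# Minimal sketch, not the paper's scheme: cache an arriving Data packet with a
# probability that blends a long-term node centrality score with a Linear Weighted
# Moving Average (LWMA) of short-term signals. The paper fuses these with a
# Randomized-SVD step; here a plain weighted sum stands in for it.
import random

def lwma(samples):
    """Linear Weighted Moving Average: newer samples get larger weights."""
    weights = range(1, len(samples) + 1)
    return sum(w * s for w, s in zip(weights, samples)) / sum(weights)

def cache_probability(centrality, pending_reqs, hit_reqs, alpha=0.5):
    short_term = 0.5 * lwma(pending_reqs) + 0.5 * lwma(hit_reqs)   # both assumed in [0, 1]
    return alpha * centrality + (1 - alpha) * short_term

# Hypothetical router state: normalized centrality and recent per-interval samples.
p = cache_probability(centrality=0.7,
                      pending_reqs=[0.2, 0.3, 0.5, 0.6],
                      hit_reqs=[0.1, 0.2, 0.2, 0.4])
should_cache = random.random() < p
print(f"cache probability {p:.2f}, cache this Data: {should_cache}")
```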
2025, International Journal of Digital Innovation and Discoveries
Big data is developing quickly, combining industrial techniques and scholarly research to address the difficulties of organising and comprehending large-scale datasets. This dissertation investigates how the AMBER software's Molecular Dynamics simulations can benefit from the increased processing power provided by High-Performance Mobile Cloud Computing (HPMCC). It demonstrates a way to divide computational demands more effectively through parallel processing with the Message Passing Interface (MPI), by establishing connections between laptops and virtual machines over a mobile cloud infrastructure. This approach provides a scalable and affordable replacement for conventional supercomputers and speeds up processing. Furthermore, this research addresses security issues with Single Sign-On (SSO) technologies, showcasing cloud computing's capacity to manage massive data demands, especially in scientific research.
2025, 2008 IEEE 68th Vehicular Technology Conference
In this paper, we introduce a routing solution called "Landmark Overlays for Urban Vehicular Routing Environments" (LOUVRE), an approach that efficiently builds a landmark overlay network on top of an urban topology. We define urban junctions as overlay nodes and create an overlay link if and only if the traffic density of the underlying network guarantees multi-hop vehicular routing between the two overlay nodes. LOUVRE contains a distributed traffic density estimation scheme which is used to evaluate the existence of an overlay link. Then, efficient routing is performed on the overlay network, guaranteeing correct delivery of each packet. We evaluate LOUVRE against the benchmark routing protocols GPSR and GPCR and show that LOUVRE achieves higher packet delivery and a lower hop count.
2025, IEEE Vehicular Technology Magazine
We introduce a routing solution called landmark overlays for urban vehicular routing environments (LOUVRE), an approach that efficiently builds a landmark overlay network on top of an urban topology. We define urban junctions as overlay nodes and create an overlay link if and only if the traffic density of the underlying network guarantees multihop vehicular routing between the two overlay nodes. LOUVRE contains a distributed traffic density estimation scheme that is used to evaluate the existence of an overlay link. Then, efficient routing is performed on the overlay network, guaranteeing correct delivery of each packet. We evaluate LOUVRE against the benchmark routing protocols of greedy perimeter stateless routing (GPSR) and greedy perimeter coordinator routing (GPCR) and show that LOUVRE achieves higher packet delivery and a lower hop count. The ever-growing spread of vehicles and roadside traffic monitors, the advancement of navigation systems, and the low cost of wireless network devices provide incentives for car manufacturers to equip vehicles with real-time traffic reports, promising peer-to-peer (P2P) applications, and externally driven services. However, for these applications and services to materialize, there is a need for standards of ubiquitous high-speed communications and homogeneous communication interfaces among different automotive manufacturers. For this purpose, Intelligent Transportation Systems (ITS) have proposed the wireless access in vehicular environments (WAVE) standards that define an architecture that collectively enables vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) wireless communications. In order to enable multihop wireless vehicular communications, vehicles are equipped with WAVE devices and interconnected with one another to form a vehicular ad hoc network (VANET), which is a particularly challenging class of mobile ad hoc networks (MANETs). VANETs are distributed, self-organizing networks built from moving vehicles, and are thus characterized by very high speed, a strong nonuniform distribution of vehicles, and a
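As a hedged sketch of LOUVRE's overlay-construction step (junction names, densities, and the threshold are invented for illustration), the code below keeps a road segment as an overlay link only when its estimated vehicle density clears a connectivity threshold, then routes over the resulting overlay for fewest hops.

```python
# Minimal sketch of LOUVRE's overlay-construction idea (not the authors' code):
# junctions become overlay nodes, and a road segment becomes an overlay link only
# if its estimated vehicle density is high enough to sustain multi-hop forwarding.
# Junction names, densities, and the threshold are illustrative assumptions.
import heapq

DENSITY_THRESHOLD = 8          # vehicles per segment needed for connectivity (assumed)

# (junction_a, junction_b, estimated_density)
road_segments = [
    ("J1", "J2", 12), ("J2", "J3", 3),  ("J1", "J4", 9),
    ("J4", "J3", 10), ("J3", "J5", 11), ("J2", "J5", 2),
]

# Keep only segments dense enough to support multi-hop vehicular routing.
overlay = {}
for a, b, density in road_segments:
    if density >= DENSITY_THRESHOLD:
        overlay.setdefault(a, []).append(b)
        overlay.setdefault(b, []).append(a)

def fewest_hops(src, dst):
    """Dijkstra with unit weights, i.e. fewest overlay hops from src to dst."""
    dist, heap = {src: 0}, [(0, src, [src])]
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        for nxt in overlay.get(node, []):
            if nxt not in dist or d + 1 < dist[nxt]:
                dist[nxt] = d + 1
                heapq.heappush(heap, (d + 1, nxt, path + [nxt]))
    return None

print(fewest_hops("J1", "J5"))   # e.g. ['J1', 'J4', 'J3', 'J5']; J2->J5 is too sparse
```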
2025, 2008 Australasian Telecommunication Networks and Applications Conference
Multipath overlay routing technologies are seen as alternative solutions for VoIP because they inherit path diversity from peer-to-peer overlay networks. We discuss and compare the performance of two relay path selection approaches proposed for VoIP overlay systems through extensive simulations. We propose a new method for relay path computation that takes into account both path disjointness and other network quality factors (such as packet delay or loss). We further apply our method in different overlay network scenarios by varying the supernode distribution. We find a considerable improvement in path performance when relaying traffic through highly connected ASs using the new method.
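A minimal sketch of how such a relay score might combine disjointness and delay (hypothetical AS paths, delays, and weights; not the paper's exact formula):

```python
# Minimal sketch of the relay-selection idea above (not the paper's exact method):
# rank candidate relay paths by combining disjointness from the default path with a
# delay penalty. AS paths, delays, and weights are hypothetical.
default_path = {"AS1", "AS2", "AS3"}

# candidate relay -> (AS-level path it induces, measured one-way delay in ms)
candidates = {
    "relay_A": ({"AS1", "AS7", "AS3"}, 80.0),
    "relay_B": ({"AS1", "AS2", "AS9", "AS3"}, 60.0),
    "relay_C": ({"AS5", "AS6", "AS3"}, 120.0),
}

def score(as_path, delay, w_disjoint=1.0, w_delay=0.005):
    shared = len(as_path & default_path) / len(default_path)
    disjointness = 1.0 - shared              # 1.0 = fully disjoint from the default path
    return w_disjoint * disjointness - w_delay * delay

best = max(candidates, key=lambda r: score(*candidates[r]))
for r, (p, d) in candidates.items():
    print(f"{r}: score {score(p, d):+.3f}")
print("selected relay:", best)
```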
2025
the Session Initiation Protocol (SIP). This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited. Copyright Notice: Copyright (C) The Internet Society (2006). The Session Initiation Protocol (SIP) is a flexible yet simple tool for establishing interactive communications sessions across the Internet. Part of this flexibility is the ease with which it can be extended. In order to facilitate effective and interoperable extensions to SIP, some guidelines need to be followed when developing SIP extensions. This document outlines a set of such guidelines.
2025
The Secure Internet Indirection Infrastructure (Secure-i3) is a proposal for a flexible and secure overlay network that, if universally deployed, would effectively block a number of denial-of-service problems in the Internet. The Host Identity Protocol (HIP), on the other hand, is a proposal for deploying opportunistic, IPsec-based end-to-end security, allowing any hosts to communicate in a secure way through the Internet. In this paper, we explore various possibilities for combining ideas from Secure-i3 and HIP, thereby producing an architecture that is more efficient and secure than Secure-i3 and more flexible and denial-of-service resistant than HIP.
2025, The transactions of the Institute of Electrical Engineers of Japan.C
2025
»The fact that we live in a time when clouds can be calculated in all their randomness thanks to Mandelbrot's fractals and then appear on a screen as calculated, unfilmed images distinguishes the present from any previous time.« (Kittler 2002:37)
2025, Proceedings IEEE 24th Annual Joint Conference of the IEEE Computer and Communications Societies.
We propose a new scheme for content distribution of large files that is based on network coding. With network coding, each node of the distribution network is able to generate and transmit encoded blocks of information. The randomization introduced by the coding process eases the scheduling of block propagation and thus makes the distribution more efficient. This is particularly important in large unstructured overlay networks, where nodes need to make block forwarding decisions based on local information only. We compare network coding to other schemes that transmit unencoded information (i.e., blocks of the original file) and also to schemes in which only the source is allowed to generate and transmit encoded packets. We study the performance of network coding in heterogeneous networks with dynamic node arrival and departure patterns, clustered topologies, and when incentive mechanisms to discourage free-riding are in place. We demonstrate through simulations of scenarios of practical interest that the expected file download time improves by more than 20-30% with network coding compared to coding at the server only, and by more than 2-3 times compared to sending unencoded information. Moreover, we show that network coding improves the robustness of the system and is able to smoothly handle extreme situations where the server and nodes leave the system.
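To make the mechanism concrete, the sketch below implements random linear network coding in miniature, over a small prime field rather than the GF(2^8) arithmetic typically used in practice; blocks, field size, and packet counts are illustrative.

```python
# Conceptual sketch of random linear network coding over a small prime field
# (real systems typically use GF(2^8)); block contents and sizes are illustrative.
import random

P = 257            # prime field for the sketch
K = 3              # number of original blocks
BLOCK_LEN = 4      # symbols per block

original = [[random.randrange(P) for _ in range(BLOCK_LEN)] for _ in range(K)]

def encode(blocks):
    """One coded packet: random coefficients plus the corresponding combination."""
    coeffs = [random.randrange(P) for _ in range(len(blocks))]
    payload = [sum(c * blk[i] for c, blk in zip(coeffs, blocks)) % P
               for i in range(BLOCK_LEN)]
    return coeffs, payload

def try_decode(packets):
    """Gaussian elimination mod P on rows [coeffs | payload]; None if rank < K."""
    rows = [c[:] + pl[:] for c, pl in packets]
    for col in range(K):
        pivot = next((r for r in range(col, len(rows)) if rows[r][col]), None)
        if pivot is None:
            return None                        # not enough independent packets yet
        rows[col], rows[pivot] = rows[pivot], rows[col]
        inv = pow(rows[col][col], -1, P)
        rows[col] = [(x * inv) % P for x in rows[col]]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [row[K:] for row in rows[:K]]

# A receiver keeps pulling random combinations until it can decode the whole file.
received, decoded = [], None
while decoded is None:
    received.append(encode(original))
    decoded = try_decode(received)
assert decoded == original
print(f"decoded {K} blocks from {len(received)} coded packets")
```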
2025, IEEE Access
The main goals of fifth generation (5G) systems are to significantly increase the network capacity and to support new 5G service requirements. Ultra network densification with small cells is among the key pillars for 5G evolution. The inter-small-cell 5G backhaul network involves massive data traffic. Hence, it is important to have a centralized, efficient multi-hop routing protocol for backhaul networks to manage and speed up the routing decisions among small cells, while considering the 5G service requirements. This paper proposes a parallel multi-hop routing protocol to speed up routing decisions in 5G backhaul networks. To this end, we study the efficiency of utilizing the parallel platforms of cloud computing and high-performance computing (HPC) to manage and speed up the parallel routing protocol for different communication network sizes and set recommendations for utilizing cloud resources to adopt the parallel protocol. Our numerical results indicate that the HPC parallel implementation outperforms the cloud computing implementation, in terms of routing decision speed-up and scalability to large network sizes. In particular, for a large network size with 2048 nodes, our HPC implementation achieves a routing speed-up of 37x. However, the best routing speed-up achieved using our cloud computing implementation is 15.5x, and is recorded using one virtual machine (VM) for a network size of 1024 nodes. In summary, there is a trade-off between the better performance of HPC and the flexible resources of cloud computing. Thus, choosing the best-fit platform for 5G routing protocols depends on the deployment scenarios at the 5G core or edge network.
2025
Peer-to-peer (P2P) networks are commonly used for tasks such as file sharing or file distribution, and for building distributed applications in large-scale networks. Their performance is generally evaluated using simulation software such as NS-2, P2PSim, OpenNet, etc. Hence, the absence of a validation of the simulation model is a critical issue. In this paper, we propose a new analytical model, derived from (8), to evaluate the performance of the HPM protocol (hierarchical Peer-to-Peer model). Performance is expressed principally in terms of the total download time of requested resources in the P2P network, and we then analyze the impact of various parameters associated with the heterogeneity of nodes.
2025, 2011 IEEE International Workshop Technical Committee on Communications Quality and Reliability (CQR)
2025, Lecture Notes in Computer Science
The paper studies the problem of allocating bandwidth resources of a Service Overlay Network, to optimize revenue. Clients bid for network capacity in periodically held auctions, under the condition that resources allocated in an auction are reserved for the entire duration of the connection, not subject to future contention. This makes the optimal allocation coupled over time, which we formulate as a Markov Decision Process (MDP). Studying first the single resource case, we develop a receding horizon approximation to the optimal MDP policy, using current revenue and the expected revenue in the next step to make bandwidth assignments. A second approximation is then found, suitable for generalization to the network case, where bids for different routes compete for shared resources. In that case we develop a distributed implementation of the auction, and demonstrate its performance through simulations.
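A one-step receding-horizon auction can be sketched as follows (an illustration of the idea, not the paper's MDP policy): capacity sold now is reserved for the connection's lifetime, so a bid is only accepted if its per-unit price beats the expected per-unit price of the next auction. All numbers are hypothetical.

```python
# Minimal sketch of a one-step receding-horizon auction for a single overlay link
# (an illustration of the idea, not the paper's MDP policy). All numbers are
# hypothetical.
CAPACITY = 100                       # Mb/s still unreserved on the link
EXPECTED_NEXT_PRICE = 1.2            # expected per-unit clearing price next round

# (bidder, requested bandwidth in Mb/s, offered price per unit)
bids = [("b1", 40, 2.0), ("b2", 30, 1.5), ("b3", 50, 1.0), ("b4", 20, 1.3)]

def run_auction(bids, capacity, reserve):
    accepted, revenue = [], 0.0
    for bidder, demand, price in sorted(bids, key=lambda b: -b[2]):
        # Reject bids worth less than what the capacity is expected to earn later.
        if price < reserve or demand > capacity:
            continue
        accepted.append(bidder)
        capacity -= demand
        revenue += demand * price
    return accepted, revenue, capacity

accepted, revenue, left = run_auction(bids, CAPACITY, EXPECTED_NEXT_PRICE)
print(f"accepted {accepted}, revenue {revenue:.1f}, capacity carried over {left} Mb/s")
```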
2025
A Content Delivery Network (CDN) refers to a collection of servers strategically positioned across multiple geographical locations, with the primary purpose of caching content close to end users. A CDN facilitates the expeditious transmission of essential components for rendering digital content on the Internet, encompassing HTML pages, JavaScript files, style sheets, images, and videos. The utilization of CDN services is witnessing an upsurge in popularity, presently accounting for a significant proportion of web traffic, including traffic from major platforms like Facebook, Netflix, and Amazon. Due to the importance of this topic, this paper discusses the concept of a CDN and the important topics related to it: its relation to the network, its types and divisions, and its importance in the world of information technology. The research analyzes the most important studies presented by researchers from 2021 to 2023.
2025, HAL (Le Centre pour la Communication Scientifique Directe)
Decentralized social networks have attracted the attention of a large number of researchers with their promises of scalability, privacy, and ease of adoption. Yet, current implementations require users to install specific software to handle the protocols they rely on. The WebRTC framework holds the promise of removing this requirement by making it possible to run peer-to-peer applications directly within web browsers without the need of any external software or plugins. In this demo, we present WebGC, a WebRTC-based library that supports gossip-based communication between web browsers and enables them to operate with Node-JS applications. Due to their inherent scalability, gossip-based protocols constitute a key component of a large number of decentralized applications including social networks. We therefore hope that WebGC can represent a useful tool for developers and researchers.
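The sketch below shows the kind of gossip-based partial-view exchange such libraries build on; it is a generic illustration, not WebGC's API, and the peer names, view size, and pairing are assumptions.

```python
# Generic sketch of gossip-style peer sampling (not WebGC's actual API): each peer
# keeps a small partial view of the network and periodically swaps a slice of it
# with a randomly chosen partner. Names, view size, and pairing are assumptions.
import random

VIEW_SIZE = 4

class Peer:
    def __init__(self, name, bootstrap):
        self.name = name
        self.view = list(bootstrap)[:VIEW_SIZE]

    def gossip_with(self, other):
        """Push-pull exchange: both sides merge a random sample of the other's view."""
        sent = random.sample(self.view, min(2, len(self.view)))
        recv = random.sample(other.view, min(2, len(other.view)))
        for name in recv:
            if name != self.name and name not in self.view:
                self.view.append(name)
        for name in sent:
            if name != other.name and name not in other.view:
                other.view.append(name)
        self.view = self.view[-VIEW_SIZE:]      # keep only the freshest entries
        other.view = other.view[-VIEW_SIZE:]

names = [f"p{i}" for i in range(8)]
peers = {n: Peer(n, [m for m in names if m != n][:2]) for n in names}  # tiny bootstrap views

for _ in range(20):                             # a few random gossip rounds
    a, b = random.sample(names, 2)
    peers[a].gossip_with(peers[b])
print({n: peers[n].view for n in names})
```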
2025, Proceedings 20th IEEE International Parallel & Distributed Processing Symposium
An execution environment consisting of virtual machines (VMs) interconnected with a virtual overlay network can use the naturally occurring traffic of an existing, unmodified application running in the VMs to measure the underlying physical network. Based on these characterizations, and characterizations of the application's own communication topology, the execution environment can optimize the execution of the application using application-independent means such as VM migration and overlay topology changes. In this paper we demonstrate the feasibility of such free automatic network measurement by fusing the Wren passive monitoring and analysis system with Virtuoso's virtual networking system. We explain how Wren has been extended to support online analysis, and we explain how Virtuoso's adaptation algorithms have been enhanced to use Wren's physical network level information to choose VM-to-host mappings, overlay topology, and forwarding rules.
2025, Conference on Local Computer Networks
Overlay multicast streaming is built out of loosely coupled end-hosts (peers) that contribute resources to stream media to other peers. Peers, however, can be malicious. They may intentionally wish to disrupt the multicast service or cause confusion among other peers. We propose two new schemes to detect malicious peers in overlay multicast streaming. These schemes compute a level of trust for each peer in the network. Peers with a trust value below a threshold are considered to be malicious. Results from our simulations indicate that the proposed schemes can detect malicious peers with medium to high accuracy, depending on cheating patterns and malicious peer percentages.
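A minimal sketch of the thresholding idea (illustrative ratings and threshold, not the authors' trust formulas):

```python
# Minimal sketch of the thresholding idea above (not the authors' exact schemes):
# each peer's trust is an average of ratings reported by the peers it served, and
# peers whose trust drops below a threshold are flagged as malicious. Ratings and
# the threshold are illustrative.
TRUST_THRESHOLD = 0.5

# peer -> ratings in [0, 1] reported by downstream peers it forwarded media to
reports = {
    "peer_a": [0.9, 1.0, 0.8, 0.9],
    "peer_b": [0.2, 0.1, 0.4, 0.3],   # consistently poor: likely disrupting the stream
    "peer_c": [0.7, 0.9, 0.6],
}

def trust(ratings):
    return sum(ratings) / len(ratings)

malicious = [p for p, r in reports.items() if trust(r) < TRUST_THRESHOLD]
for p, r in reports.items():
    print(f"{p}: trust {trust(r):.2f}")
print("flagged as malicious:", malicious)
```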
2025, Proceedings. 2006 31st IEEE Conference on Local Computer Networks
Overlay multicast streaming is built out of loosely coupled end-hosts (peers) that contribute resources to stream media to other peers. Peers, however, can be malicious. They may intentionally wish to disrupt the multicast service or cause confusion among other peers. We propose two new schemes to detect malicious peers in overlay multicast streaming. These schemes compute a level of trust for each peer in the network. Peers with a trust value below a threshold are considered to be malicious. Results from our simulations indicate that the proposed schemes can detect malicious peers with medium to high accuracy, depending on cheating patterns and malicious peer percentages.
2025, Second International Conference on Wireless and Mobile Communications, ICWMC 2006
All the proposed IP mobility protocols assume that the mobile nodes always have a mobility-aware IP stack. On the other hand, efficient micro-mobility solutions entail specific topologies and mobility-aware routers, requiring major changes in existing infrastructures. Major advantages are foreseen if mobility can be supported using the existing legacy infrastructure, on both client and network sides, allowing a smooth upgrade process. This paper describes such a solution, proposing an efficient terminal-independent mobility architecture (eTIMIP, enhanced TIMIP) which is compliant with the macro-mobility standard and which uses an overlay network to provide transparent micro-mobility support in all existing networks, using an enhanced version of the previously proposed TIMIP protocol. Simulation results have revealed the efficiency, transparency and reliability of the proposed architecture through comparison with other proposals.
2025, Information Technology Journal
2025, 24th International Conference on Distributed Computing Systems, 2004. Proceedings.
Increasing application requirements have placed heavy emphasis on building overlay networks to efficiently deliver data to multiple receivers. A key performance challenge is simultaneously achieving adaptivity to changing network conditions and scalability to large numbers of users. In addition, most current algorithms focus on a single performance metric, such as delay or bandwidth, particular to individual application requirements. In this paper, we introduce a two-fold approach for creating robust, high-performance overlays called Adaptive Multi-Metric Overlays (AMMO). First, AMMO uses an adaptive, highly parallel, and metric-independent protocol, TreeMaint, to build and maintain overlay trees. Second, AMMO provides a mechanism for comparing overlay edges along specified application performance goals to guide TreeMaint transformations. We have used AMMO to implement and evaluate a single-metric (bandwidth-optimized) tree similar to Overcast and a two-metric (delay-constrained, cost-optimized) overlay.
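The sketch below illustrates the metric-independence point with hypothetical names rather than the AMMO code: the same parent-switch step is driven by a pluggable comparator, one maximizing bandwidth and one minimizing cost under a delay bound.

```python
# Minimal sketch of the metric-independent idea above (hypothetical names, not the
# AMMO implementation): tree maintenance asks a pluggable comparator whether a
# candidate parent edge beats the current one, so the same transformation code can
# optimize bandwidth, delay, cost, or a combination.
def better_bandwidth(current, candidate):
    return candidate["bandwidth"] > current["bandwidth"]

def better_cost_delay_bounded(current, candidate, delay_bound=100):
    """Prefer cheaper edges among those meeting the delay constraint."""
    if candidate["delay"] > delay_bound:
        return False
    if current["delay"] > delay_bound:
        return True
    return candidate["cost"] < current["cost"]

def maybe_switch_parent(node, candidate_edge, is_better):
    if is_better(node["parent_edge"], candidate_edge):
        node["parent_edge"] = candidate_edge     # the tree transformation step
        return True
    return False

node = {"parent_edge": {"bandwidth": 5, "delay": 80, "cost": 10}}
candidate = {"bandwidth": 8, "delay": 120, "cost": 4}

print(maybe_switch_parent(dict(node), candidate, better_bandwidth))          # True: more bandwidth
print(maybe_switch_parent(dict(node), candidate, better_cost_delay_bounded)) # False: violates delay bound
```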
2025
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt. The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.
2025
This document is an Internet-Draft and is in full conformance with all provisions of Section 10 of RFC2026. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as “work in progress.” The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt To view the list of Internet-Draft Shadow Directories, see http://www.ietf.org/shadow.html.
2025, Journal of Statistical Mechanics: Theory and Experiment
We investigate several variants of a network creation model: a group of agents builds up a network between them while trying to keep the costs of this network small. The cost function consists of two addends, namely (i) a constant amount for each edge an agent buys and (ii) the minimum number of hops it takes sending messages to other agents. Despite the simplicity of this model, various complex network structures emerge depending on the weight between the two addends of the cost function and on the selfish or unselfish behaviour of the agents.
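A worked example of the cost function just described (illustrative graph and edge price): an agent's cost is the edge price times the number of edges it buys, plus the sum of its hop distances to all other agents.

```python
# Worked example of the cost function described above (illustrative graph and edge
# price, not data from the paper): an agent pays ALPHA per edge it buys plus the
# sum of hop distances to every other agent.
from collections import deque

ALPHA = 2.0                                  # price of one edge (assumed)

# undirected adjacency; edges_bought records which agent paid for which edge
edges_bought = {"a": [("a", "b"), ("a", "c")], "b": [("b", "d")], "c": [], "d": []}
adjacency = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b"]}

def hop_distances(src):
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def cost(agent):
    d = hop_distances(agent)
    return ALPHA * len(edges_bought[agent]) + sum(d[v] for v in adjacency if v != agent)

for agent in adjacency:
    print(f"cost({agent}) = {cost(agent):.1f}")
```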
2025, IEEE Security & Privacy Magazine
Anonymous communication systems hide conversations against unwanted observation. Deploying an anonymous communications infrastructure presents surprises unlike those found in other types of systems. For example, given that users shouldn't need to trust each other or any part of the system, no single authority or organization should be able to observe complete traffic information for anyone's communication. This makes commercialization difficult and requires a rethinking of incentives for both users and infrastructure participants, in no small part because a user's security depends directly on the infrastructure's size and the number of other system users. To address these and related issues, we designed Tor (the onion routing), a widely used low-latency, general-purpose anonymous communication infrastructure: an overlay network for anonymizing TCP streams over the real-world Internet. [1] Tor requires no special privileges or kernel modifications, needs little synchronization or coordination between nodes, and provides a reasonable trade-off between anonymity, usability, and efficiency. Since deployment in October 2003, the public Tor network has grown to about a thousand volunteer-operated nodes worldwide, with traffic averaging more than 110 Mbytes per second from hundreds of thousands of concurrent users, ranging from ordinary citizens concerned about their privacy, to law enforcement and government intelligence agencies looking to operate on the Internet without being noticed, to corporations that don't want to reveal information to their competitors. This article discusses how to use Tor, who uses it, how it works, why we designed it the way we did, and why that design makes it usable and stable.
2025, 2012 IEEE 32nd International Conference on Distributed Computing Systems
Thalamita crenata is one of the most common swimming crabs of the mangrove creeks of the East African coast. In Mida Creek, Kenya, this species inhabits the extreme seaward fringe of the mangrove swamp and the intertidal platform in front of the mangal, sheltering in small pools during low tide. Gut content analysis reveals that T. crenata is a generalistic predator, its diet being mainly composed of bivalves and slow-moving crustaceans. Both the stomach fullness and the relative presence of animal prey in the contents were significantly higher in crabs collected at sunset than in those caught at dawn. Stomach fullness seems to depend also on the tidal rhythm; in fact, it is higher during spring tide periods. Females had stomachs slightly fuller than those of males, while there was no difference in diet between juveniles and older specimens. Thalamita crenata forages more actively during daytime, thus differing from the majority of swimming crabs. Both the great abundance of this species and its diet, based on a wide range of slow-moving or sessile species, testify to the importance of the role played by this predator in the mangrove ecosystem of Mida Creek.
2025, Proceedings IEEE INFOCOM 2006. 25TH IEEE International Conference on Computer Communications