Network Protocols Research Papers - Academia.edu
2025, Signals and Communication Technology
2025, International Journal of Software Engineering & Applications
Mining software repositories (MSR) research has contributed significantly to software engineering. However, integrating MSR results across repositories is a recent concern that is attracting growing attention from researchers. Notable research in this direction concerns the approximation between MSR and the semantic web, especially linked data approaches, which make it possible to integrate repositories and mined results. We believe that current research does not fully address the practical integration of MSR results in software engineering, because it overlooks that these results need to be integrated into tools as assistance to activity performers, as a form of decision-making support. Based on this observation, this research describes an approach, named Sambasore, concerned with inter-repository integration of MSR results and with decision-making support processes, based on tool-assistance modelling. To show its feasibility we describe the main concepts, some related works, and a proof-of-concept experiment applied to a software process modelling tool named Spider PM.
2025
In both hardware-only and software-only directory protocols the performance is often limited by memory access stall times. To increase the performance, several latency tolerating and reducing techniques have been proposed and shown effective for hardware-only directory protocols. For software-only directory protocols, the efficiency of a technique depends not only on how effective it is as seen by the local processor, but also on how it impacts the software handler execution overhead in the node where a memory block is allocated. Based on architectural simulations and case studies of three techniques, we find that prefetching can degrade the performance of software-only directory protocols due to useless prefetches. A relaxed memory consistency model hides all write latency for software-only directory protocols, but the software handler overhead is virtually unaffected and now constitutes a larger portion of the execution time. Overall, latency tolerating techniques for software-only directory protocols must be chosen with more care than for hardware-only directory protocols.
2025, Future Generation Computer Systems
Invalidation-based cache coherence protocols have been extensively studied in the context of large-scale shared-memory multiprocessors. Under a relaxed memory consistency model, most of the write latency can be hidden whereas cache misses still incur a severe performance problem. By contrast, update-based protocols have a potential to reduce both write and read penalties under relaxed memory consistency models because coherence misses can be completely eliminated. The purpose of this paper is to compare update- and invalidation-based protocols for their ability to reduce or hide memory access latencies and for their ease of implementation under relaxed memory consistency models. Based on a detailed simulation study, we find that write-update protocols augmented with simple competitive mechanisms (we call such protocols competitive-update protocols) can hide all the write latency and cut the read penalty by as much as 46% at the cost of some increase in the memory traffic. However, as compared to write-invalidate, update-based protocols require more aggressive memory consistency models and more local buffering in the second-level cache to be effective. In addition, their increased number of global writes may cause increased synchronization overhead in applications with high contention for critical sections.
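The competitive mechanism described above can be illustrated with a small per-line counter: a minimal sketch, assuming a toy model in which each cached copy counts consecutive remote updates that arrive without an intervening local access and drops itself past a threshold. The class name and threshold value are illustrative, not taken from the paper.

```python
class CompetitiveUpdateLine:
    """Toy model of one cache line under a competitive-update policy.

    Each remote write updates the local copy and bumps a counter; a local
    read resets it.  Once the counter exceeds the threshold, the copy
    self-invalidates, so further remote writes stop generating update
    traffic to this cache (the switch from update to invalidate behaviour).
    """

    def __init__(self, threshold=4):
        self.threshold = threshold       # illustrative value only
        self.valid = True
        self.unused_updates = 0
        self.value = None

    def local_read(self):
        if not self.valid:
            self.valid = True            # model a coherence miss + refill
            self.unused_updates = 0
            return "MISS"
        self.unused_updates = 0
        return "HIT"

    def remote_write(self, new_value):
        if not self.valid:
            return "IGNORED"             # no update traffic once invalidated
        self.unused_updates += 1
        if self.unused_updates > self.threshold:
            self.valid = False           # competitive switch: drop the copy
            return "SELF-INVALIDATE"
        self.value = new_value
        return "UPDATED"
```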
2025, Analytical Assessment of Auxiliary Cache Utility Data Structures for Vector Databases and LLM Retrieval Pipelines
This paper presents an analytical framework for evaluating auxiliary cache utility data structures in vector databases and local large language model (LLM) retrieval pipelines. While frequency-based caching policies such as TinyLFU are well-established in mature systems, the incremental value of a dedicated auxiliary cache layer that sits between application logic and existing caching mechanisms remains an open question. We develop a formal mathematical model for analyzing cache utility and present a structured assessment methodology examining architectural trade-offs. Our analysis encompasses system complexity, maintenance burden, and cache coherence challenges against potential performance benefits. We systematically compare architectural approaches across different deployment scenarios and query workloads, identifying conditions where auxiliary caches might provide meaningful benefits beyond existing mechanisms. Rather than making empirical claims that would require extensive benchmarking, we contribute a decision framework that system architects can apply to their specific contexts. We conclude by identifying key experimental metrics and validation approaches that would be necessary to quantitatively evaluate auxiliary cache implementations in production environments.
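As a rough illustration of the style of decision framework the abstract describes, the sketch below weighs the expected latency saved by an auxiliary cache against its maintenance and coherence cost. The hit-rate, latency, and cost figures are placeholder assumptions, not the paper's formal model or results.

```python
def auxiliary_cache_net_benefit(hit_rate, hit_latency_ms, miss_latency_ms,
                                maintenance_ms_per_query, coherence_ms_per_query):
    """Expected per-query benefit (ms) of adding an auxiliary cache layer.

    Positive values suggest the extra layer pays for itself under the
    assumed workload; negative values suggest it only adds complexity.
    """
    expected_latency = (hit_rate * hit_latency_ms
                        + (1.0 - hit_rate) * miss_latency_ms)
    saved = miss_latency_ms - expected_latency
    overhead = maintenance_ms_per_query + coherence_ms_per_query
    return saved - overhead

# Hypothetical numbers for a local LLM retrieval pipeline (assumptions only):
print(auxiliary_cache_net_benefit(hit_rate=0.3, hit_latency_ms=0.5,
                                  miss_latency_ms=12.0,
                                  maintenance_ms_per_query=0.4,
                                  coherence_ms_per_query=0.6))
```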
2025
This report describes proposed changes in the security model and authentication scheme for the Network Time Protocol Version 4, which is an enhanced version of the current Version 3. The changes are intended to replace the need to securely distribute cryptographic keys in advance, while protecting against replay and man-in-the-middle attacks. As in other schemes described in the literature, the proposed scheme is based on the use of a public-key cryptosystem to verify a server secret and from this to generate session keys for each client separately. A particularly important consequence of this design in the case of NTP is that the mechanisms for time synchronization and cryptographic signature verification must be decoupled to preserve good timekeeping quality. The schemes to do this form the main body of this report, which also includes an extensive analysis of the vulnerabilities to various kinds of hardware and software failures, as well as hostile attack.
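The idea of deriving per-client session keys from a single verified server secret can be sketched, in a generic form, with an HMAC-based derivation. This is only an illustration of the principle, not the actual NTPv4 Autokey construction; the key sizes, field choices, and function names are assumptions.

```python
import hashlib
import hmac
import os

def derive_session_key(server_secret: bytes, client_addr: str, key_id: int) -> bytes:
    """Derive a per-client session key from one server secret.

    The server keeps a single secret; each client gets an independent key
    bound to its address and a key identifier, so keys never need to be
    distributed in advance.  (Generic HKDF-like sketch, not the real
    NTPv4 scheme.)
    """
    info = f"{client_addr}|{key_id}".encode()
    return hmac.new(server_secret, info, hashlib.sha256).digest()

server_secret = os.urandom(32)          # would be verified via the public-key step (not shown)
k1 = derive_session_key(server_secret, "192.0.2.10", 1)
k2 = derive_session_key(server_secret, "192.0.2.11", 1)
assert k1 != k2                         # clients obtain distinct session keys
```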
2025, International Journal of Information Technology Convergence and Services
The W3C's Semantic Web envisions a common framework that allows data to be shared and reused across applications and enterprises. The semantic web and its related technologies are the main directions of future web development, in which machine-processable information supports user tasks. Ontologies play a vital role in the Semantic Web. Research on ontology engineering has pointed out that an effective ontology application development methodology with integrated tool support is mandatory for its success. There are potential benefits to ontology engineering in making the toolset of Model Driven Architecture applicable to ontology modeling. Since software and ontology engineering are two complementary branches, extending the well-proven methodologies and UML-based modeling approaches used in software engineering to ontology engineering can bridge the gap between the two. This paper proposes a hybrid methodology for ontology development derived from existing, mature software engineering practice. The philosophical and engineering aspects of the newly derived methodology are described, and an attempt is made to apply the proposed methodology with the Protégé editor. The full-fledged implementation of a domain ontology and its validation is a direction for future research.
2025
IPv6 is the internet protocol that will, over the next few years, replace IPv4, since the available IPv4 address space is nearly exhausted as the number of gadgets and other IP-based technologies keeps growing. Compared with IPv4, IPv6 is far better in terms of its very large address capacity, security, QoS, and mobility. The evolution from IPv4 to IPv6 will therefore happen gradually, with IPv4 initially in the majority over IPv6, while in later development IPv6 will become dominant. In this final project, the implementation of multiplay services is tested across networks running different IP versions between client and server, using tunneling and dual-stack methods. Each tested service scenario is expected to produce output that meets the standard. From the test results, for all tested service scenarios the resulting QoS values were in accordance wi...
2025
This paper describes the Intra-Domain Mobility Management Protocol (IDMP) for third- and fourth-generation (3/4G) wireless cellular networks, aimed at reducing the latency of intra-domain location updates and the mobility signaling traffic. We first present enhancements to basic IDMP that provide fast intra-domain handoffs by using a duration-limited, proactive packet 'multicasting' scheme. We quantify the expected buffering requirements of our proposed multicasting scheme for typical 3/4G network characteristics and compare it with alternative IP-based fast handoff solutions. We also present a paging scheme under IDMP that replicates the current cellular paging structure. Our paging mechanism supports generic paging strategies and can significantly reduce the mobility-related IP signaling load.
2025, arXiv (Cornell University)
We propose in this paper a simulation implementation of Self-Organizing Network (SON) optimization for mobility load balancing (MLB) in LTE systems using ns-3. The implementation covers two MLB algorithms that dynamically adjust handover (HO) parameters based on Reference Signal Received Power (RSRP) measurements. The adjustments take into account the loads of both an overloaded cell and those of its neighbouring cells with enough available resources to achieve load balancing. Numerical investigations of selected key performance indicators (KPIs), comparing the proposed MLB algorithms with another HO algorithm already implemented in ns-3 and based on the A3 event, highlight significant MLB gains in terms of global network throughput, packet loss rate, and the number of successful HOs, without incurring significant overhead.
2025, Network Protocols and Algorithms
One of the major issues in current routing and MAC layer protocols for mobile ad hoc networks (MANETs) is the high energy consumed by route discovery and collision avoidance, respectively. Proper use of location information and dynamic adjustment of intermediate nodes' retransmission probabilities, adopted by a number of algorithms, contribute to a reduction in the number of retransmissions and consequently reduce bandwidth and power consumption, but this is achieved at a price in network reachability. Many other efforts have been made by various authors to achieve greater power conservation. This paper reviews recent literature proposing improvements to energy conservation in MANETs at both the MAC and routing layers, and it also highlights the performance demands required of these protocols, giving researchers in MANET energy conservation a good starting point for developing energy conservation algorithms.
2025, International Journal of Communication Systems
Supporting quality of service (QoS) over the Internet is a very important issue and many mechanisms have already been devised or are under way towards achieving this goal. One of the most important approaches is the so‐called Differentiated Services (DiffServ) architecture, which provides a scalable mechanism for QoS support in a TCP/IP network. The main concept underlying DiffServ is the aggregation of traffic flows at an ingress (or egress) point of a network and the marking of the IP packets of each traffic flow according to several classification criteria. DiffServ is classified under two taxonomies: the absolute and the relative. In the absolute DiffServ architecture, an admission control scheme is utilized to provide QoS as absolute bounds on specific QoS parameters. The relative DiffServ model also offers QoS guarantees per class, but in reference to the guarantees given to the other classes defined. In this paper, relative proportional delay differentiation is achieved based on c...
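Proportional delay differentiation, which the abstract builds on, keeps the ratio of average per-class delays close to the ratio of per-class differentiation parameters. One common way to approximate this is a waiting-time-priority style scheduler; the sketch below is an illustration under assumed class weights and a simplified queue structure, not the specific scheme the (truncated) abstract describes.

```python
import time
from collections import deque

# Target of proportional delay differentiation: avg_delay[i] / avg_delay[j]
# approaches DELTAS[i] / DELTAS[j].  A waiting-time-priority scheduler serves
# the head-of-line packet with the largest normalized waiting time w / delta.
DELTAS = {0: 1.0, 1: 2.0, 2: 4.0}            # illustrative differentiation parameters
queues = {c: deque() for c in DELTAS}

def enqueue(cls, packet):
    queues[cls].append((time.monotonic(), packet))

def dequeue():
    best_cls, best_score = None, -1.0
    now = time.monotonic()
    for cls, q in queues.items():
        if q:
            arrival, _ = q[0]
            score = (now - arrival) / DELTAS[cls]   # normalized head-of-line wait
            if score > best_score:
                best_cls, best_score = cls, score
    return queues[best_cls].popleft()[1] if best_cls is not None else None
```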
2025, 23rd International Conference on Distributed Computing Systems, 2003. Proceedings.
In this paper, we present a real-time communication protocol for sensor networks, called SPEED. The protocol provides three types of real-time communication services, namely, real-time unicast, real-time area-multicast and real-time area-anycast. SPEED is specifically tailored to be a stateless, localized algorithm with minimal control overhead. End-to-end soft real-time communication is achieved by maintaining a desired delivery speed across the sensor network through a novel combination of feedback control and non-deterministic geographic forwarding. SPEED is a highly efficient and scalable protocol for sensor networks where the resources of each node are scarce. Theoretical analysis, simulation experiments and a real implementation on Berkeley motes are provided to validate our claims.
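The core forwarding idea in SPEED, maintaining a delivery speed by choosing non-deterministically among neighbors that make sufficient geographic progress per unit delay, can be sketched roughly as below. The neighbor-table fields, the setpoint value, and the weighting rule are illustrative assumptions, not the protocol's actual formats or constants.

```python
import math
import random

SETPOINT_SPEED = 1000.0   # illustrative desired delivery speed (m/s)

def advance(node_pos, neighbor_pos, dest_pos):
    """Geographic progress a neighbor makes toward the destination."""
    return math.dist(node_pos, dest_pos) - math.dist(neighbor_pos, dest_pos)

def pick_next_hop(node_pos, dest_pos, neighbors):
    """neighbors: list of (position, estimated_hop_delay_s).

    Keep only neighbors whose relay speed (progress / hop delay) meets the
    setpoint, then choose among them with probability weighted by speed,
    i.e. the non-deterministic geographic forwarding step.
    """
    candidates = []
    for pos, delay in neighbors:
        progress = advance(node_pos, pos, dest_pos)
        if progress > 0 and delay > 0:
            speed = progress / delay
            if speed >= SETPOINT_SPEED:
                candidates.append((pos, speed))
    if not candidates:
        return None                      # would trigger backpressure/rerouting
    positions, speeds = zip(*candidates)
    return random.choices(positions, weights=speeds, k=1)[0]
```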
2025, International Journal of Information Technology and Web Engineering
Domain Name System (DNS) is the system for the mapping between easily memorizable host names and their IP addresses. Due to its criticality, the Internet Engineering Task Force (IETF) has defined the DNS Security Extensions (DNSSEC) to provide data-origin authentication. In this paper, we point out two drawbacks of the DNSSEC standard in its handling of DNS dynamic updates: 1) the on-line storage of a zone security key, creating a single point of attack for both inside and outside attackers, and 2) the violation of the role separation principle, which in the context of DNSSEC requires the separation of the roles of zone security managers from DNS name server administrators. To address these issues, we propose an alternative secure DNS architecture based on threshold cryptography. Unlike DNSSEC, this architecture adheres to the role separation principle without presenting any single point of attack. To show the feasibility of the proposed architecture, we developed a threshold cryptography toolkit based on the Java Cryptography Architecture (JCA) and built a proof-of-concept prototype with the toolkit. Running the prototype on a representative platform shows that the performance of our proposed architecture ranges from one to four times that of DNSSEC. Thus, at a small performance overhead, our proposed architecture can achieve a very high level of security.
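The role-separation argument rests on threshold cryptography: the zone key is split so that any t of n holders can act, while fewer learn nothing. A minimal Shamir-style secret-sharing sketch over a prime field is shown below as an illustration of that threshold principle only; it is not the paper's JCA-based toolkit, and the prime and parameters are toy values.

```python
import random

P = 2**127 - 1          # toy prime modulus (Mersenne prime), illustrative only

def split_secret(secret, n, t):
    """Split `secret` into n shares; any t of them reconstruct it (Shamir)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split_secret(secret=123456789, n=5, t=3)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
```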
2025
This document specifies PB-TNC, a Posture Broker protocol identical to the Trusted Computing Group’s IF-TNCCS 2.0 protocol. The document then evaluates PB-TNC against the requirements defined in the NEA Requirements specification. Status of This Memo This is an Internet Standards Track document. This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the
2025
Software Defined Networking (SDN) defines a new paradigm in network architecture that separates the control plane and the data plane. In SDN, the Neighbor Discovery Protocol (NDP) process requires forging packets that must be handled on the controller side, because the amount of multicast traffic is otherwise high. A common approach is the Reactive Proxy, in which the controller replies to a Neighbor Solicitation (NS) request directly on behalf of the hosts with a Neighbor Advertisement (NA) packet. To achieve higher scalability, this approach was developed into the Semi-Reactive SProxy, which offloads the proxy functionality that previously ran in the controller to the switches in the SDN network, without violating SDN principles and while remaining fully OpenFlow compliant. In this paper, experiments were carried out using a RYU controller simulated in the Mininet environment. The test results indicate that the approach...
2025, ACM SIGCOMM Computer Communication Review
We describe a new scalable application-layer multicast protocol, specifically designed for low-bandwidth, data streaming applications with large receiver sets. Our scheme is based upon a hierarchical clustering of the application-layer multicast peers and can support a number of different data delivery trees with desirable properties. We present extensive simulations of both our protocol and the Narada application-layer multicast protocol over Internet-like topologies. Our results show that for groups of size 32 or more, our protocol has lower link stress (by about 25%), improved or similar end-to-end latencies and similar failure recovery properties. More importantly, it is able to achieve these results by using orders of magnitude lower control traffic. Finally, we present results from our wide-area testbed in which we experimented with 32-100 member groups distributed over 8 different sites. In our experiments, average group members established and maintained low-latency paths and ...
2025
Distributed Denial-of-Service (DDoS) attacks have posed a challenge to Domain Name System (DNS) servers for many years. Surprisingly, no comprehensive solution has been found to protect against such attacks. In this paper, we propose an approach that leverages RAID technology to protect the DNS network. Unlike DNS systems relying on one DNS server, our system incorporates multiple independent servers, each housing sets of servers holding distinct coded DNS records. This ensures that even if one server is targeted or damaged, the other servers can continue functioning without any data loss or disruption in operations. Our findings demonstrate that this approach presents an answer to the issue of DNS vulnerability to DDoS attacks. Through the implementation of RAID technology and the distribution of DNS records across servers, the suggested solution holds promising potential for creating a more reliable DNS system capable of withstanding DDoS attacks while enhancing internet security and stability.
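A minimal illustration of the RAID-style idea is given below, assuming a simple XOR-parity scheme across record shards (the paper's actual coding may differ): each shard would live on an independent server, and any single lost shard can be rebuilt from the rest.

```python
def xor_bytes(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def encode_record(record: bytes, n_data: int):
    """Split a DNS record into n_data equal shards plus one XOR parity shard
    (RAID-4/5 style).  Each shard would be stored on an independent server."""
    record += b"\x00" * (-len(record) % n_data)          # pad to equal shard size
    size = len(record) // n_data
    shards = [record[i * size:(i + 1) * size] for i in range(n_data)]
    return shards + [xor_bytes(shards)]

def recover_missing(shards, missing_index):
    """Rebuild the shard lost with a failed or attacked server from the others."""
    rest = [s for i, s in enumerate(shards) if i != missing_index and s is not None]
    return xor_bytes(rest)

shards = encode_record(b"example.com A 93.184.216.34", n_data=3)
shards[1] = None                                         # one server knocked out
assert recover_missing(shards, 1) == encode_record(b"example.com A 93.184.216.34", 3)[1]
```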
2025, IEEE
Traditional password-based authentication systems pose security challenges due to the vulnerabilities associated with weak passwords and the storage of password representations on servers. In this paper, I investigate an alternative approach to user authentication using device fingerprinting. Device fingerprinting involves collecting information about a device's attributes and configuration to create a unique identifier. Leveraging device fingerprints, I propose a password-less authentication system where users only need to remember their username. By comparing the computed device fingerprint with the stored identifier in the system's database, users can gain access if the fingerprints match above a certain threshold. I discuss the origins and evolution of device fingerprinting, highlighting its use by online analytics and advertising companies for user tracking purposes. Despite privacy concerns, I argue that device fingerprinting offers advantages in authentication by eliminating the need for user input and providing enhanced security against brute-force attacks. I also examine various fingerprinting techniques and their effectiveness in identifying devices across different browsers and system configurations. Finally, I conclude by discussing the potential of device fingerprinting as a viable authentication method and suggest avenues for future research in security implementations.
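The compare-against-threshold step described above can be sketched as follows; the attribute set, the unweighted matching, and the 0.8 threshold are illustrative assumptions rather than the paper's concrete design.

```python
FINGERPRINT_ATTRS = ["user_agent", "screen_resolution", "timezone",
                     "installed_fonts", "canvas_hash", "webgl_vendor"]

def fingerprint_similarity(stored: dict, observed: dict) -> float:
    """Fraction of fingerprint attributes that match exactly.

    Real systems weight attributes by entropy and tolerate drift; this only
    shows the structure of comparing a computed fingerprint against the
    stored identifier.
    """
    matches = sum(1 for a in FINGERPRINT_ATTRS if stored.get(a) == observed.get(a))
    return matches / len(FINGERPRINT_ATTRS)

def authenticate(username, observed_fp, db, threshold=0.8):
    """Grant access when the observed device fingerprint matches the stored
    one above the threshold; the user only supplies a username."""
    stored_fp = db.get(username)
    return stored_fp is not None and fingerprint_similarity(stored_fp, observed_fp) >= threshold
```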
2025
In this paper, the authors propose a solution to improve the security of digital signature schemes; the solution is implemented at two levels of digital signature scheme construction. At the first level, the authors propose a new hard problem, different from the hard problems used before; importantly, this hard problem belongs to the class of hard problems for which there is currently no solution (except by the brute-force attack method). At the second level, the authors propose a method to construct new digital signature algorithms based on this hard problem.
2025
In this article, the author proposes a method for constructing digital signature schemes based on a new type of hard problem. The proposed solution implements the construction of a digital signature scheme at two levels: a mathematical basis and an algorithm design method built on that basis. At the first level, the author proposes a new type of hard problem for which there are currently no known solutions. At the second level, the author proposes a method to construct new digital signature algorithms based on this type of hard problem.
2025, … Journal of Computer Science and Network …
Conventional TCP suffers from poor performance on high bandwidth-delay product links meant to support transmission rates of multiple gigabits per second (Gbps). This is largely due to TCP's congestion control algorithm, which can be slow in taking advantage of large amounts of available bandwidth. A number of high-speed variants have been proposed recently, the major ones being BIC TCP, CUBIC, FAST, High-Speed TCP, Layered TCP, Scalable TCP and XCP. In this paper an effort has been made to comparatively analyze the aforementioned protocols based on various parameters, viz. throughput, fairness, stability, performance, bandwidth utilization and responsiveness, and to study the limitations of these protocols for high-speed networks.
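As one concrete example of the variants compared in the paper, CUBIC grows its congestion window as a cubic function of the time since the last loss, which makes ramp-up largely independent of RTT. The sketch below uses the window-growth formula from RFC 8312 with its commonly cited default constants; treating those defaults as representative is our assumption, not a claim from this paper.

```python
def cubic_window(t, w_max, C=0.4, beta=0.7):
    """CUBIC congestion window (in packets) t seconds after the last loss.

    W(t) = C * (t - K)^3 + w_max,  with  K = cbrt(w_max * (1 - beta) / C).
    The window starts at beta * w_max, climbs back toward w_max quickly,
    plateaus near it, then probes beyond it.
    """
    K = (w_max * (1 - beta) / C) ** (1.0 / 3.0)
    return C * (t - K) ** 3 + w_max

# Hypothetical trace after a loss at w_max = 1000 packets:
for t in (0, 2, 5, 10):
    print(t, round(cubic_window(t, 1000)))
```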
2025, MobiArch 2006 - Proceedings of First ACM/IEEE International Workshop on Mobility in the Evolving Internet Architecture, in conjunction with IEEE GLOBECOM 2006
This paper presents a novel architecture, MobiSplit, to manage mobility in future IP based networks. The proposed architecture separates mobility management in two levels, local and global, that are managed in completely independent ways. The paper describes the flexibility advantages that this architecture brings to operators, and how it is appropriate for the current trend to multiple and very different access providers and operators. Heterogeneity, support for seamless handovers and multihoming, and scalability issues are analyzed in the paper.
2025
This paper presents an architecture based on the MANET (Mobile Ad Hoc Network) paradigm as an emergency communication system between users of electric bicycles. The solution consists of 4 mobile nodes representing the users and a main fixed node, which emulates a bicycle docking station. This architecture allows multi-hop communication between the nodes, using the proactive routing protocols OLSR (Optimized Link State Routing) and BATMAN (Better Approach to Mobile Ad Hoc Networking). The study was divided into 3 main stages. First, an analysis of the wireless medium was performed to determine the maximum transmission distance and the maximum bitrate between 2 nodes. Subsequently, the throughput behavior was characterized in a multi-hop configuration consisting of 4 nodes in order to establish the network capacity in terms of bandwidth. Finally, a web application was implemented for the transmission of audio and text traffic. Regarding the evaluation of the proposal, two scenarios were designed to emulate the integration of a new cyclist into the network and the communication between two users in motion. The results reveal that OLSR provides better system operation, with a throughput of 2.54 Mbps at 3 hops and a PRR (Packet Reception Rate) higher than 96%. In addition, it guarantees a delay within the ITU-T (International Telecommunication Union-Telecommunication) G.114 recommendation for bidirectional communication.
2025, Conference on Electrical Engineering, Telematics, Industrial technology, and Creative Media (CENTIVE)
Abstract - Telecommunication technology, especially the internet, keeps developing, both in terms of technology and users. One widely used internet service is the website, which provides needed information and is used very broadly in sectors such as education, health, government, and others. However, an increase in the number of website visitors can cause the web server to go down. To overcome this problem, load balancing technology is used to help distribute traffic across several servers, thereby minimizing the load on each server. Based on this, research is needed to demonstrate the performance improvement a web server experiences when load balancing is used. To demonstrate this, two scenarios were created in this study: using a load balancing server with the least-connection algorithm, and not using a load balancing server. Traffic distribution in the load balancing system was tested using 30 users accessing simultaneously, each performing 5 accesses. The test results show a traffic share of 36% on web server 1, 32.67% on web server 2, and 31.33% on web server 3. The error rate, average click time, and throughput parameters were tested by requesting the web server service with 100, 250, 500, 1000, and 2000 users. The tests show performance improvements when using the load balancing server of 32.51% in error rate, 25.39% in average click time, and 12.97% in throughput.
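The least-connection policy used in the first scenario routes each new request to the backend currently serving the fewest active connections; a minimal sketch with hypothetical backend names follows.

```python
class LeastConnectionBalancer:
    """Route each new request to the web server with the fewest active connections."""

    def __init__(self, backends):
        self.active = {b: 0 for b in backends}

    def acquire(self):
        backend = min(self.active, key=self.active.get)   # fewest active connections
        self.active[backend] += 1
        return backend

    def release(self, backend):
        self.active[backend] -= 1                         # call when the request completes

lb = LeastConnectionBalancer(["webserver1", "webserver2", "webserver3"])
picks = [lb.acquire() for _ in range(6)]                  # spreads load roughly evenly
print(picks)
```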
2025
Assessing the performance of peer-to-peer algorithms is impossible without simulations, since testing new algorithms by deploying them in an existing P2P network is prohibitively expensive. However, some P2P algorithms are sensitive to the network and traffic models that are used in the simulations. In order to produce realistic results, we therefore require simulations that resemble real-world P2P networks as closely as possible. We describe the Query-Cycle Simulator, a simulator for file-sharing P2P networks. We link the Query-Cycle Simulator to measurements on existing P2P networks and discuss some open issues in simulating these networks.
2025
Some of the fastest practical algorithms for IP route lookup are based on space-efficient encodings of multi-bit tries. Unfortunately, the time required by these algorithms grows in proportion to the address length, making them less attractive for IPv6. This paper describes and evaluates a new data structure called a shape-shifting trie, in which the data structure nodes correspond to arbitrarily shaped subtrees of the underlying binary trie for a given set of address prefixes. The ability to adapt the node shape to the trie reduces the number of nodes that must be accessed to perform a lookup, especially for tries with large sparse regions. We give a fast algorithm for optimally dividing a trie into nodes so as to minimize the maximum lookup depth. We show that seven data structure accesses are sufficient for route tables with more than 150,000 IPv6 prefixes. This makes it possible to achieve wire-speed processing for an OC-192 link using a single QDRII SRAM chip.
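For context, the shape-shifting trie compresses the plain binary trie used for longest-prefix matching; the uncompressed version is sketched below so the baseline is clear. This is a generic textbook structure, not the paper's encoding, and the prefixes in the example are hypothetical.

```python
class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]   # branch on bit 0 / bit 1
        self.next_hop = None

def insert(root, prefix_bits, next_hop):
    node = root
    for bit in prefix_bits:
        if node.children[bit] is None:
            node.children[bit] = TrieNode()
        node = node.children[bit]
    node.next_hop = next_hop

def lookup(root, addr_bits):
    """Longest-prefix match: remember the last next_hop seen along the path."""
    node, best = root, None
    for bit in addr_bits:
        if node.next_hop is not None:
            best = node.next_hop
        node = node.children[bit]
        if node is None:
            return best
    return node.next_hop if node.next_hop is not None else best

root = TrieNode()
insert(root, [1, 0], "A")           # hypothetical prefixes 10/2 and 101/3
insert(root, [1, 0, 1], "B")
print(lookup(root, [1, 0, 1, 1]))   # -> "B" (the longer match wins)
```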
2025, Proceedings of the 2014 ACM conference on SIGCOMM
2024, Lecture Notes in Computer Science
Clustering is a widely used approach to ease implementation of various problems such as routing and resource management in mobile ad hoc networks (MANETs). We first look at minimum spanning tree (MST) based algorithms and then propose a new algorithm for clustering in MANETs. The algorithm we propose merges clusters to form higher-level clusters by increasing their levels. We show the operation of the algorithm and analyze its time and message complexities.
2024
This paper discusses the Quality of Service metrics in the integration of FHAMIPv6/MPLS/Diffserv protocols. These metrics are analyzed using the load balancing algorithm in the presence of network congestion. The metrics used are delay, jitter, and throughput. These metrics are analyzed when a handoff occurs in the ad hoc network. The results obtained show that the load balancing algorithm is a great option in the integration to optimize the Quality of Service in ad hoc and hybrid network when a handoff occurs.
2024, Very Large Data Bases
We study the problem of applying adaptive filters for approximate query processing in a distributed stream environment. We propose filter bound assignment protocols with the objective of reducing communication cost. Most previous works focus on value-based queries (e.g., average) with numerical error tolerance. In this paper, we cover entity-based queries (e.g., nearest neighbor) with non-value-based error tolerance. We investigate different nonvalue-based error tolerance definitions and discuss how they are applied to two classes of entity-based queries: non-rank-based and rank-based queries. Extensive experiments show that our protocols achieve significant savings in both communication overhead and server computation.
2024, International Workshop on Distributed Algorithms
In this paper we offer a formal, rigorous proof of the correctness of Awerbuch's algorithm for network synchronization. We specify both the algorithm and the correctness condition using the I/O automaton model, which has previously been used to describe and verify algorithms for concurrency control and resource allocation. We show that the model is also a powerful tool for reasoning about distributed graph algorithms. Our proof of correctness follows closely the intuitive arguments made by the designer of the algorithm by exploiting the model's natural support for such important design techniques as stepwise refinement and modularity. In particular, since the algorithm uses simpler algorithms for synchronization within and between 'clusters' of nodes, our proof can import as lemmas the correctness of these simpler algorithms. 1 Overview. 1.1 Verification methods and models. As computer science has matured as a discipline, its activity has broadened from writing programs to include reasoning about those programs: proving their correctness and efficiency, and proving bounds on the performance of any program that accomplishes the same task. Recently distributed computing has begun to broaden in this way (albeit a decade or two later than the part of computer science concerned with sequential, uniprocessor algorithms). There are several reasons why particular care is necessary to prove the correctness of algorithms when the algorithms
2024
In this article, the author proposes a method for constructing digital signature schemes based on a new type of hard problem. The proposed solution implements the construction of a digital signature scheme at two levels: a mathematical basis and an algorithm design method built on that basis. At the first level, the author proposes a new type of hard problem for which there are currently no known solutions. At the second level, the author proposes a method to construct new digital signature algorithms based on this type of hard problem.
2024, Computer Communications
This paper presents a method to generate, analyse and represent test cases from protocol specification. The language of temporal ordering specification (LOTOS) is mapped into an extended finite state machine (EFSM). Test cases are generated from EFSM. The generated test cases are modelled as a dependence graph. Predicate slices are used to identify infeasible test cases that must be eliminated. Redundant assignments and predicates in all the feasible test cases are removed by reducing the test case dependence graph. The reduced test case dependence graph is adapted for a local single-layer (LS) architecture. The reduced test cases for the LS architecture are enhanced to represent the tester's behaviour. The dynamic behaviour of the test cases is represented in the form of control graphs by inverting the events, assigning verdicts to the events in the enhanced dependence graph.
2024
This part presents the optimized algorithms. The algorithms section is structured to allow either a simpler or a more in-depth reading. For each algorithm there is first a part that describes in general terms the characteristics of the ...
2024
The multiterminal source coding problem has gained prominence recently due to its relevance to sensor networks. In general, rate-distortion in multiterminal source coding is an open problem. However, variations of multiterminal source coding, such as the chief executive officer problem, are addressed in the literature. In addition, full and partial cooperation among the sensors, and reliability and robustness against sensor failures are also being investigated in the literature. In this thesis, multiterminal Gaussian source coding was investigated in the context of sensor networks. The rate-distortion region was parameterized for a pair of correlated Gaussian sources using an implicit transform parameter. The approach made use of established results in multiple descriptor coding using pairwise orthogonal transforms.
2024
The multiterminal source coding problem has gained prominence recently due to its relevance to sensor networks. In general, rate-distortion in multiterminal source coding is an open problem. However, variations of multiterminal source coding, such as the chief executive officer problem, are addressed in the literature. In addition, full and partial cooperation among the sensors, and reliability and robustness against sensor failures are also being investigated in the literature. In this thesis, multiterminal Gaussian source coding was investigated in the context of sensor networks. The rate-distortion region was parameterized for a pair of correlated Gaussian sources using an implicit transform parameter. The approach made use of established results in multiple descriptor coding using pairwise orthogonal transforms.
2024, ipcsit.com
Image transfer in Wireless Sensor Networks (WSNs) is very useful for information gathering, since information obtained from analysis of the images is significant in certain situations. In general, WSNs utilize an eight-bit microcontroller and an IEEE 802.15.4 compliant radio for processing and transferring data to a remote location. These specifications pose a serious restriction on transferring multimedia data, since it requires large bandwidth and processing capabilities. The purpose of this project is to develop an image transfer mechanism for realizing JPEG motion data transfer in a WSN, where the video is essentially generated from a sequence of compressed images. Hence, image sequence transfer with a data buffering mechanism is implemented. The scope covers developing a sensor node circuit equipped with external memory and designing software for the sensor node that is capable of functioning as intended. The image sequence is produced at the control station after processing the received data.
2024, Proceedings of the 2013 workshop on Programming based on actors, agents, and decentralized control
In distributed object systems, it is desirable to enable migration of objects between locations, e.g., in order to support efficient resource allocation. Existing approaches build complex routing infrastructures to handle object-to-object communication, typically on top of IP, using, e.g., message forwarding chains or centralized object location servers. These solutions are costly and problematic in terms of efficiency, overhead, and correctness. We show how location independent routing can be used to implement object overlays with complex messaging behavior in a sound, fully abstract, and efficient way, on top of an abstract network of processing nodes connected point-to-point by asynchronous channels. We consider a distributed object language with futures, essentially lazy return values. Futures are challenging in this context due to the strong global consistency requirements they impose. The key conclusion is that execution in a decentralized, asynchronous network can preserve the standard, network-oblivious behavior of objects with futures, in the sense of contextual equivalence. To the best of our knowledge, this is the first such result in the literature. We also believe the proposed execution model may be of interest in its own right in the context of large-scale distributed computing.
2024
Packet error in the IEEE 802.11 network is one source of performance degradation and its variability. Most previous works study how collision avoidance and hidden terminals affect 802.11 performance metrics, such as the probability of a collision and saturation throughput. In this paper we focus on the effect of packet errors on the capacity and variability of the 802.11 MAC protocol. We develop a new analytical model, called the p-Model, by extending an existing model (Tay and Chua's model) to incorporate the packet error probability p. With the p-Model, we successfully analyze the capacity and variability of the 802.11 MAC protocol. The variability analysis shows that increasing the packet error probability by Δp has a larger effect on saturation throughput than adding a number of stations proportional to W·Δp, where W is the minimum contention window size. We also show the numerical validation of the p-Model with an 802.11 MAC-level simulator.
2024
Intrusion Detection Systems (IDS) have been developed to solve the problem of detecting the attacks on several network systems. In small-scale networks a single IDS is sufficient to detect attacks but this is inadequate in large-scale networks, where the number of packets across the network is enormous. In this paper, we present an Architectural Framework considering the large-scale network environment. We designed and implemented a Distributed Intrusion Detection system that relies on Smart Agents which monitor network traffic and report intrusion alerts to a central management node. Distribution is handled through the introduction of multiple sensors and the use of Smart Agents who are responsible for reporting and rate limiting of messages. Finally, we extended the IDMEF (Intrusion Detection Message Exchange Format) data model to support digital signatures and to strengthen the authentication of the system.
2024, 23rd International Conference on Distributed Computing Systems, 2003. Proceedings.
In this paper, we present a real-time communication protocol for sensor networks, called SPEED. The protocol provides three types of real-time communication services, namely, real-time unicast, real-time area-multicast and real-time area-anycast. SPEED is specifically tailored to be a stateless, localized algorithm with minimal control overhead. End-to-end soft real-time communication is achieved by maintaining a desired delivery speed across the sensor network through a novel combination of feedback control and non-deterministic geographic forwarding. SPEED is a highly efficient and scalable protocol for sensor networks where the resources of each node are scarce. Theoretical analysis, simulation experiments and a real implementation on Berkeley motes are provided to validate our claims.
2024, Proceedings of first ACM/IEEE international workshop on Mobility in the evolving internet architecture
We present an architectural foundation for research into architectural tradeoffs and protocol design approaches for cognitive radio networks at both local network and the global internetwork levels. We provide facilities for the evaluation of a number of architectural issues including control and management protocols, support for collaborative PHY, dynamic spectrum coordination, flexible MAC layer protocols, ad hoc group formation and cross-layer adaptation. This architectural effort is intended to lead to the design of control/management and data interfaces between cognitive radio nodes in a local network, and also between cognitive radio subnetworks and the global Internet. Future work in protocol design and implementation based on this foundation will result in the CogNet architecture, a prototype open-source cognitive radio protocol, and extensive experimental evaluations on emerging cognitive radio platforms, first in a wireless local-area radio network scenario with moderate numbers of cognitive radio nodes, and later as part of several end-to-end experiments using a wide-area network testbed such as PlanetLab (and GENI in the future).
2024
Many multicast overlay networks maintain application-specific performance goals such as bandwidth, latency, jitter and loss rate by dynamically changing the overlay structure using measurement-based adaptation mechanisms. This results in an unstructured overlay where no neighbor selection constraints are imposed. Although such networks provide resilience to benign failures, they are susceptible to attacks conducted by adversaries that compromise overlay nodes. Previous defense solutions proposed to address attacks against overlay networks rely on strong organizational constraints and are not effective for unstructured overlays. In this work, we identify, demonstrate and mitigate insider attacks against measurement-based adaptation mechanisms in unstructured multicast overlay networks. The attacks target the overlay network construction, maintenance, and availability and allow malicious nodes to control significant traffic in the network, facilitating selective forwarding, traffic analysis, and overlay partitioning. We propose techniques to decrease the number of incorrect or unnecessary adaptations by using outlier detection. We demonstrate the attacks and mitigation techniques in the context of a mature, operationally deployed overlay multicast system, ESM, through real-life deployments and emulations conducted on the PlanetLab and DETER testbeds, respectively.
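The mitigation described above filters adaptation decisions through outlier detection on measured or advertised metrics. A rough median-absolute-deviation sketch follows as one plausible instantiation; the modified z-score and the 3.5 cutoff are a common rule of thumb used here as an assumption, not the paper's specific detector.

```python
import statistics

def is_outlier(value, history, cutoff=3.5):
    """Flag a reported metric (e.g., advertised bandwidth or probe latency)
    as an outlier via the modified z-score on the median absolute deviation,
    so a few malicious reports do not trigger an overlay adaptation."""
    if len(history) < 5:
        return False                      # not enough evidence yet
    med = statistics.median(history)
    mad = statistics.median(abs(x - med) for x in history)
    if mad == 0:
        return value != med
    modified_z = 0.6745 * (value - med) / mad
    return abs(modified_z) > cutoff

history = [10.1, 9.8, 10.4, 10.0, 9.9, 10.2]
print(is_outlier(10.3, history))   # False: consistent with past measurements
print(is_outlier(95.0, history))   # True: suspiciously inflated report
```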
2024
Wireless mesh networks are a rapidly growing domain, and this brings many challenges. In particular, a difficult and immediate challenge is effective routing, due to the traffic volatility typical of complex topologies. Recent work has shown that wireless traffic is highly variable and difficult to characterize. Understanding the impact of demand uncertainty on routing, and designing routing algorithms that provide robustness, is a relatively unexplored research problem. Nevertheless, it has a great impact on the performance of a network and will be essential to its development in the coming years. The routing algorithm used should always ensure that information takes the most appropriate path according to a metric. We therefore take this as our main research objective: to characterize and solve the routing problem in robust wireless mesh networks.