Network Capacity Research Papers - Academia.edu
2025, 2011 IEEE International Conference on Smart Grid Communications (SmartGridComm)
A key element to realizing the smart energy grid of the future is the deployment of an efficient and reliable information network. An intelligent combination of wired networks (the Internet), wireless networks and power line communication networks can be used to deliver control and application messages generated by the smart grid. Integration of these three network types is non-trivial due to the distinct differences in deliverable quality of service and financial cost. Traffic assignment across these distinct networks poses a novel research problem which must be solved to realize the smart grid. Herein, an algorithm which dynamically allocates traffic with different Quality of Service requirements in terms of throughput, delay and failure probability to information networks with different performance characteristics is proposed. A detailed queueing model for the system is defined which accounts for input queues buffering smart grid packets and external applications injecting traffic into the buffers of the networks. A Lyapunov-optimization-based algorithm selects the packet allocation strategy based on input/output queue states and guarantees the required QoS to the input queues while minimizing financial cost.
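The drift-plus-penalty idea behind such a queue-aware allocator can be sketched in a few lines: at each slot, traffic is steered to the network that minimizes a weighted combination of financial cost and backlog relief. This is only a toy illustration of the general Lyapunov-optimization pattern, not the paper's algorithm; the network names, costs, service rates and the trade-off parameter V below are made-up values.

```python
# Toy drift-plus-penalty allocator: each slot, send one traffic class over the
# network minimizing V*cost - queue_backlog*service. All numbers are illustrative.

QUEUES = {"control": 0.0, "metering": 0.0}          # input queue backlogs (packets)
ARRIVALS = {"control": 3.0, "metering": 8.0}        # mean arrivals per slot
NETWORKS = {                                        # hypothetical characteristics
    "fiber":    {"cost": 1.0, "service": 10.0},
    "cellular": {"cost": 3.0, "service": 6.0},
    "plc":      {"cost": 0.5, "service": 2.0},
}
V = 5.0  # cost/backlog trade-off: larger V favors cheaper networks

def allocate(queues):
    """Pick, per class, the network with the best drift-plus-penalty score."""
    decision = {}
    for cls, q in queues.items():
        decision[cls] = min(
            NETWORKS,
            key=lambda n: V * NETWORKS[n]["cost"] - q * NETWORKS[n]["service"],
        )
    return decision

for slot in range(5):
    choice = allocate(QUEUES)
    for cls in QUEUES:
        served = NETWORKS[choice[cls]]["service"]
        QUEUES[cls] = max(0.0, QUEUES[cls] + ARRIVALS[cls] - served)
    print(f"slot {slot}: choice={choice}, backlogs={QUEUES}")
```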
2025, Anais do 2002 International Telecommunications Symposium
Most work on Differentiated Services (DiffServ) handles Quality of Service (QoS) provisioning on a per node basis, which assumes that this strategy would provide QoS in the whole domain. Nevertheless, this approach could fail in large domains with multiple flow aggregation and unexpected input traffic. Therefore, provisioning techniques should be used to avoid unpredicted overloads that result in QoS fluctuations. A proposal using fuzzy controllers to reconfigure DiffServ nodes according to ingress traffic and achieved QoS was presented in [1]. However, it is not easy to specify fuzzy rule bases and membership functions that optimize the controllers' performance. Thus, we propose a methodology to choose optimized fuzzy controller parameters using the Wang-Mendel and genetic algorithms. Finally, we evaluate the performance of this methodology by simulation of voice over IP applications in DiffServ domains.
2025, Teletraffic Science and Engineering
The Differentiated Services architecture has been proposed to offer quality of service in the Internet. Most work on DiffServ (DS) handles QoS guarantees on a per node basis, which assumes that assuring QoS in a single node also leads to the desired QoS in the entire DS domain. Nevertheless, this is not always true. This paper proposes a framework that offers QoS in a DS domain using Policy-based Management and fuzzy logic techniques. The QoS controller reconfigures all DS nodes according to ingress traffic and domain policies. Policy-based Management is used in this framework to provide QoS in the DS domain, controlling heterogeneous equipment from different manufacturers. The performance and functionalities of a prototype are shown by simulation of a voice over IP application.
2025, Telecommunication Systems
Most work on Differentiated Services (DiffServ) handles Quality of Service (QoS) provisioning on a per node basis, which assumes that this strategy would provide QoS in the entire domain. Nevertheless, this approach could fail in large domains with multiple flow aggregation and unexpected input traffic. Therefore, provisioning techniques should be used to avoid unexpected overloads that result in QoS fluctuations. A proposal using fuzzy controllers to reconfigure DiffServ nodes according to both incoming traffic and the actual QoS is given. However, it is not easy to specify fuzzy rule bases and membership functions that optimize controller performance. Thus, we also propose a methodology to choose fuzzy controller parameters using the Wang-Mendel and genetic algorithms. Finally, we evaluate the performance of this model by simulation of an IP Telephony application in a DiffServ domain.
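To make the fuzzy-control idea concrete, the sketch below implements a triangular membership function and a two-rule, Sugeno-style controller that maps measured load and delay to a bandwidth-share adjustment. The breakpoints, rules and output values are illustrative assumptions; in the paper such parameters are the ones extracted and tuned with the Wang-Mendel method and a genetic algorithm.

```python
# Tiny fuzzy controller sketch: two inputs (load, delay) -> one output adjustment.
# Membership breakpoints and rules are illustrative, not the optimized parameters.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def controller(load, delay_ms):
    # Fuzzify inputs (degrees of membership in "high").
    load_high = tri(load, 0.5, 1.0, 1.5)        # utilization near 100% is "high"
    delay_high = tri(delay_ms, 50, 150, 250)    # ~150 ms one-way delay is "high"

    # Rule 1: IF load high AND delay high THEN increase EF bandwidth share (+0.20)
    # Rule 2: IF load high AND delay not high THEN small increase (+0.05)
    w1 = min(load_high, delay_high)
    w2 = min(load_high, 1.0 - delay_high)
    if w1 + w2 == 0.0:
        return 0.0
    # Weighted average of rule consequents (zero-order Sugeno style).
    return (w1 * 0.20 + w2 * 0.05) / (w1 + w2)

print(controller(load=0.9, delay_ms=180))   # heavily loaded, high delay
print(controller(load=0.9, delay_ms=60))    # heavily loaded, acceptable delay
```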
2025
The Differentiated Services architecture has been proposed to offer quality of service in the Internet. Most work on DiffServ (DS) handles QoS guarantees on a per node basis, which assumes that assuring QoS in a single node also leads to the desired QoS in the entire DS domain. Nevertheless, this is not always true. This paper proposes a framework that offers QoS in a DS domain using Policy-based Management and fuzzy logic techniques. The QoS controller reconfigures all DS nodes according to ingress traffic and domain policies. Policy-based Management is used in this framework to provide QoS in the DS domain, controlling heterogeneous equipment from different manufacturers. The performance and functionalities of a prototype are shown by simulation of a voice over IP application in different DS topologies.
2025, 2010 7th Annual IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks (SECON)
Directional antennas offer many potential advantages for wireless networks such as increased network capacity, extended transmission range and reduced energy consumption. Exploiting these advantages, however, requires new protocols and mechanisms at various communication layers to intelligently control the directional antenna system. With directional antennas, many trivial mechanisms, such as neighbor discovery, become more challenging since communicating parties must agree on where and when to point their directional beams to enable communication. In this paper, we propose a fully directional neighbor discovery protocol called Sectored-Antenna Neighbor Discovery (SAND). SAND is designed for sectored antennas, a low-cost and simple realization of directional antennas that utilizes multiple limited-beamwidth antennas. Unlike many proposed directional neighbor discovery protocols, SAND depends neither on omnidirectional antennas nor on time synchronization. In addition, SAND performs neighbor discovery in a serialized fashion allowing individual nodes to discover all potential neighbors within a predetermined time. Moreover, SAND guarantees the discovery of the best sector combination at both ends of a link, resulting in more robust and higher quality links between nodes. Finally, SAND gathers the neighborhood information in a centralized location, if needed, to be used by centralized networking protocols. The effectiveness of SAND has been assessed via simulation studies and real hardware implementation.
2025, International Journal of Communication Networks and Information Security
The Internet of things (IoT) comprises things interconnected through the internet with unique identities. Congestion management is one of the most challenging tasks in networks. The Constrained Application Protocol (CoAP) is a low-footprint protocol designed for IoT networks and has been defined by IETF. In IoT networks, CoAP nodes have limited network and battery resources. The CoAP standard has an exponential backoff congestion control mechanism. This backoff mechanism may not be adequate for all IoT applications. The characteristics of each IoT application would be different. Further, the events such as unnecessary retransmissions and packet collision caused due to links with high losses and packet transmission errors may lead to network congestion. Various congestion handling algorithms for CoAP have been defined to enrich the performance of IoT applications. Our paper presents a comprehensive survey on the evolution of the congestion control mechanism used in IoT networks. We have classified the protocols into RTO-based, queue-monitoring, and rate-based. We review congestion avoidance protocols for CoAP networks and discuss directions for future work.
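For reference, the default CoAP congestion control mentioned above follows RFC 7252: a confirmable message waits an initial timeout drawn uniformly from [ACK_TIMEOUT, ACK_TIMEOUT × ACK_RANDOM_FACTOR], and the timeout doubles for each of up to MAX_RETRANSMIT retransmissions. A minimal sketch of that schedule:

```python
import random

# Default CoAP congestion-control parameters from RFC 7252.
ACK_TIMEOUT = 2.0         # seconds
ACK_RANDOM_FACTOR = 1.5
MAX_RETRANSMIT = 4

def retransmission_schedule(rng=random.random):
    """Return the timeout used before each (re)transmission of a CON message."""
    initial = ACK_TIMEOUT + rng() * ACK_TIMEOUT * (ACK_RANDOM_FACTOR - 1.0)
    # Transmission 0 waits `initial`; each retransmission doubles the timeout.
    return [initial * (2 ** i) for i in range(MAX_RETRANSMIT + 1)]

schedule = retransmission_schedule()
print([round(t, 2) for t in schedule])   # e.g. [2.7, 5.4, 10.8, 21.6, 43.2]
print("give up after", round(sum(schedule), 1), "s with default parameters")
```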
2025, International Journal of Network Management
In this paper we study the scalability issue in the design of a centralized policy server controlling resources in the next generation of IP-based telecom networks. Policy servers are in charge of controlling and managing QoS, security and mobility in a centralized way in future IP-based telecom networks. Our study demonstrates that policy servers can be designed in such a manner that they scale with increases in network capacity.
2025, IEEE INFOCOM 2008 - The 27th Conference on Computer Communications
In this paper we study the issue of topology control under the physical Signal-to-Interference-Noise-Ratio (SINR) model, with the objective of maximizing network capacity. We show that existing graph-model-based topology control captures interference inadequately under the physical SINR model, and as a result, the interference in the topology thus induced is high and the network capacity attained is low. Towards bridging this gap, we propose a centralized approach, called Spatial Reuse Maximizer (MaxSR), which combines a power control algorithm T4P with a topology control algorithm P4T. T4P optimizes the assignment of transmit power given a fixed topology, where by optimality we mean that the transmit power is assigned so as to minimize the average interference degree (defined as the number of interfering nodes that may interfere with the ongoing transmission on a link) in the topology. P4T, on the other hand, constructs, based on the power assignment made in T4P, a new topology by deriving a spanning tree that gives the minimal interference degree. By alternately invoking the two algorithms, the power assignment quickly converges to an operational point that maximizes the network capacity. We formally prove the convergence of MaxSR. We also show via simulation that the topology induced by MaxSR outperforms that derived from existing topology control algorithms by 50%-110% in terms of maximizing the network capacity.
2025, IEEE WPMC
This paper presents a design formulation and evaluation of a wireless co-OFDMA probabilistic algorithm aimed at optimizing resource utilization in overlapping Wi-Fi networks. The design formulation is grounded in realistic industrial application requirements, and the evaluation is conducted using the network simulator v3.0 with its DetNetWiFi module, which is capable of modeling deterministic wireless networks. The evaluation focuses on latency, jitter, and packet loss. Our findings demonstrate that co-OFDMA probabilistic approaches offer significant benefits in terms of latency in deterministic wireless environments. However, they may also introduce increased jitter compared to a static primary channel sharing scheme, an aspect which may not be suitable for industrial wireless environments.
2025, Bài Toán Thông Minh (Various Authors), thuviensach
A collection of interesting logic problems for readers.
2025, IEEE Communications Surveys & Tutorials
The Transmission Control Protocol (TCP) carries most Internet traffic, so performance of the Internet depends to a great extent on how well TCP works. Performance characteristics of a particular version of TCP are defined by the congestion control algorithm it employs. This paper presents a survey of various congestion control proposals that preserve the original host-to-host idea of TCP: namely, that neither sender nor receiver relies on any explicit notification from the network. The proposed solutions focus on a variety of problems, starting with the basic problem of eliminating the phenomenon of congestion collapse, and also include the problems of effectively using the available network resources in different types of environments (wired, wireless, high-speed, long-delay, etc.). In a shared, highly distributed, and heterogeneous environment such as the Internet, effective network use depends not only on how well a single TCP-based application can utilize the network capacity, but also on how well it cooperates with other applications transmitting data through the same network. Our survey shows that over the last 20 years many host-to-host techniques have been developed that address several problems with different levels of reliability and precision. There have been enhancements allowing senders to detect fast packet losses and route changes. Other techniques have the ability to estimate the loss rate, the bottleneck buffer size, and level of congestion. The survey describes each congestion control alternative, its strengths and its weaknesses. Additionally, techniques that are in common use or available for testing are described.
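As a concrete baseline for the host-to-host mechanisms surveyed here, the sketch below shows the classic Reno-style AIMD window evolution (slow start below the threshold, additive increase above it, multiplicative decrease on loss). The constants are textbook defaults, not tied to any specific proposal in the survey.

```python
# Minimal Reno-style congestion window evolution (units: segments).
class AIMDWindow:
    def __init__(self, ssthresh=64.0):
        self.cwnd = 1.0
        self.ssthresh = ssthresh

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0                 # slow start: exponential growth per RTT
        else:
            self.cwnd += 1.0 / self.cwnd     # congestion avoidance: ~+1 segment per RTT

    def on_triple_dupack(self):
        self.ssthresh = max(self.cwnd / 2.0, 2.0)   # multiplicative decrease
        self.cwnd = self.ssthresh                   # fast recovery (simplified)

    def on_timeout(self):
        self.ssthresh = max(self.cwnd / 2.0, 2.0)
        self.cwnd = 1.0                             # restart from slow start

w = AIMDWindow()
for _ in range(200):
    w.on_ack()
w.on_triple_dupack()
print(round(w.cwnd, 1), round(w.ssthresh, 1))
```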
2025
Abstract—Although IEEE 802.11 provides several transmission rates, a suitable rate adaptation that takes into account the relative fairness among all competing stations according to the underlying channel quality remains a challenge in Mobile Ad hoc Networks (MANETs). The absence of any fixed infrastructure and any centralized control makes existing solutions for WLANs, such as CARA (collision-aware rate adaptation), inappropriate for MANETs. In this paper, we propose a new analytical model with a suitable approach to ensure relative fairness among all competing nodes of a particular channel. Our model deals with the channel quality perceived by the nodes, based on transmission successes and failures in a mobility context. Finally, each node calculates its own probability of accessing the channel in a distributed manner. We evaluate the performance of our scheme against others in the context of MANETs via extensive and detailed simulations. The performance differentials are analysed...
2025, IEEE Journal on Selected Areas in Communications
We consider the problem of determining the maximum capacity of the media access (MAC) layer in wireless ad hoc networks. Due to spatial contention for the shared wireless medium, not all nodes can concurrently transmit packets to each other in these networks. The maximum number of possible concurrent transmissions is, therefore, an estimate of the maximum network capacity, and depends on the MAC protocol being used. We show that for a large class of MAC protocols based on virtual carrier sensing using RTS/CTS messages, which includes the popular IEEE 802.11 standard, this problem may be modeled as a maximum Distance-2 matching (D2EMIS) in the underlying wireless network: given a graph G = (V, E), find a set of edges E' ⊆ E such that no two edges in E' are connected by another edge in E. D2EMIS is NP-complete. Our primary goal is to show that it can be approximated efficiently in networks that arise in practice. We do this by focusing on an admittedly simplistic, yet natural, graph-theoretic model for ad hoc wireless networks based on disk graphs, where a node can reach all other nodes within some distance (nodes may have unequal reach distances). We show that our approximation yields good capacity bounds. Our work is the first attempt at characterizing an important "maximum" measure of wireless network capacity, and can be used to shed light on previous topology formation protocols like Span and GAF that attempt to produce "good" or "capacity-preserving" topologies, while allowing nodes to alternate between sleep and awake states. Our work shows an efficient way to compute an upper bound on maximum wireless network capacity, thereby allowing topology formation algorithms to determine how close they are to optimal. We also outline a distributed algorithm for the problem for unit disk graphs, and briefly discuss extensions of our results to: 1) different node interference models; 2) directional antennas; and 3) other transceiver connectivity structures besides disk graphs.
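The distance-2 matching constraint can be illustrated with a short greedy pass: an edge may be selected only if neither endpoint coincides with, or is adjacent to, an endpoint of an already selected edge. This is only a feasibility illustration on a made-up path graph, not the approximation algorithm analysed in the paper.

```python
# Greedy feasibility sketch for distance-2 matching: an edge may be added only
# if neither endpoint coincides with, or is adjacent to, an endpoint of an
# already-selected edge. The example graph is arbitrary.

def greedy_distance2_matching(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    matching, forbidden = [], set()
    for u, v in edges:
        if u in forbidden or v in forbidden:
            continue
        matching.append((u, v))
        # Block the chosen endpoints and all of their neighbors.
        forbidden |= {u, v} | adj[u] | adj[v]
    return matching

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7)]
print(greedy_distance2_matching(edges))   # [(0, 1), (3, 4), (6, 7)]
```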
2025
Reduction of CO2 emissions is a major global environmental issue. Over the past few years, wireless and mobile communications have become increasingly popular with consumers. One of the most popular kinds of wireless access is the Wireless Mesh Network (WMN), which provides wireless connectivity through a much cheaper and more flexible backhaul infrastructure than wired solutions. The WMN is an emerging technology which has been adopted as the wireless internetworking solution for the near future. Due to the high energy consumption of the information and communication technology (ICT) industry, and its impact on the environment, energy efficiency has become a key factor in evaluating the performance of a communication network. This paper primarily focuses on a layer-based classification of the main existing approaches devoted to energy conservation. It also discusses the most interesting works on energy saving in WMNs.
2025, journal of computer and knowledge engineering
Since the genesis of layered networks, designing a proper MAC control protocol has been a major concern. Among the many protocols introduced, there is always a trade-off between utilization and overhead. ALOHA is one of the first MAC protocols and possesses virtually no overhead, but its maximum throughput is limited. Hence a new MAC protocol, named Hybrid ALOHA, was introduced on the basis of a multi-packet reception model. The original paper analyzed the stability and throughput of this algorithm for the two- and three-user cases. Although the stability region for those cases had been studied, there was no general form for the throughput nor any practical examination of stability. In this paper, besides extending the throughput formula to an arbitrary number of users, the throughput of the system is checked with a simple simulation of the probabilities of success and failure. The results show that, regardless of the additional overhead for more users, the throughput remains adequate and the system does not lose stability as the number of users grows.
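For context, the sketch below runs a plain slotted-ALOHA Monte Carlo estimate of the per-slot success probability, where a slot succeeds only when exactly one of the n users transmits. Hybrid ALOHA's multi-packet-reception model relaxes that single-reception assumption, so this is merely the kind of success/failure simulation referred to above, with illustrative parameters.

```python
import random

def slotted_aloha_throughput(n_users, p_tx, n_slots=100_000, seed=1):
    """Estimate throughput (successful packets per slot) of plain slotted ALOHA."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_slots):
        transmitters = sum(1 for _ in range(n_users) if rng.random() < p_tx)
        if transmitters == 1:        # classical model: simultaneous packets collide
            successes += 1
    return successes / n_slots

for n in (2, 5, 10, 20):
    p = 1.0 / n                      # transmit probability that maximizes throughput
    print(n, round(slotted_aloha_throughput(n, p), 3))   # approaches 1/e ~= 0.368
```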
2025, Lecture Notes in Computer Science
The paper studies the problem of allocating bandwidth resources of a Service Overlay Network, to optimize revenue. Clients bid for network capacity in periodically held auctions, under the condition that resources allocated in an auction are reserved for the entire duration of the connection, not subject to future contention. This makes the optimal allocation coupled over time, which we formulate as a Markov Decision Process (MDP). Studying first the single resource case, we develop a receding horizon approximation to the optimal MDP policy, using current revenue and the expected revenue in the next step to make bandwidth assignments. A second approximation is then found, suitable for generalization to the network case, where bids for different routes compete for shared resources. In that case we develop a distributed implementation of the auction, and demonstrate its performance through simulations.
2025
The NYNEX Corporation invests hundreds of millions of dollars each year to enhance the telecommunications services provided to its customers. Extensive planning and construction are required to meet the ever-increasing demand for better service and provide the latest in sophisticated equipment throughout the telephone network. Engineering groups plan changes to network facilities five years ahead, with constant adjustments for changes to forecasted service demand, changes in the economy, changes to NYNEX company policies, or the availability of new technologies. ARACHNE is an expert system that automates interoffice facilities (IOF) planning.
2025
This paper proposes a decentralized model for the allocation of modulation and coding schemes, subchannels and transmit power to users in OFDMA femtocell deployments. The proposed model does not rely on any exchanged information between cells, which is especially useful for femtocell networks. Coordination between femtocells is achieved through the intrinsic properties of minimising transmit power independently at each cell, which leads the network to self-organize into an efficient frequency reuse pattern. This paper also provides a two-level decomposition approach for solving this intricate resource assignment problem that is able to find optimal solutions at cell level in reduced periods of time. System-level simulations show a significant performance improvement in terms of user outages and network capacity when using the proposed distributed resource allocation in comparison with scheduling techniques based on uniform power distributions among subcarriers.
2025, IEEE Transactions on Vehicular Technology
This paper investigates the hidden-node phenomenon (HN) in IEEE 802.11 wireless networks. HN occurs when nodes outside the carrier-sensing range of each other are nevertheless close enough to interfere with each other. As a result, the carrier-sensing mechanism may fail to prevent packet collisions. HN can cause many performance problems, including throughput degradation, unfair throughput distribution among flows, and throughput instability. The contributions of this paper are threefold. 1) This is a first attempt to identify a set of conditions, which we call the Hidden-node-Free Design (HFD), that completely remove HN in 802.11 wireless networks. 2) We derive variations of HFD for large-scale cellular WiFi networks consisting of many wireless LAN cells. These HFDs are not only HN-free, but they also reduce exposed nodes at the same time so that the network capacity is improved. 3) We investigate the problem of frequency-channel assignment to adjacent cells. We find that with HFD, careful assignment in which adjacent cells use different frequency channels does not improve the overall network capacity (in units of bits per second per frequency channel). Indeed, given f frequency channels, a simple scheme with f overlaid cellular WiFi networks in which each cell uses all f frequencies yields near-optimal performance. Index Terms: Hidden-node problem (HN), IEEE 802.11, modeling, performance evaluation, protocol design.
2025
AbstractWhen an IEEE 802.11 ad-hoc network achieves capacity C by using a single channel, the targeted capacity by using two channels should be C∙ 2. However, most of the multichannel 802.11 protocols proposed in the literature only appear to be able to achieve ...
2025
A main distinguishing feature of a wireless network compared with a wired network is its broadcast nature, in which the signal transmitted by a node may reach several other nodes, and a node may receive signals from several other nodes simultaneously. Rather than a blessing, this feature is treated more as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). The goal of this paper is to show how the concept of network coding can be applied at the physical layer to turn the broadcast property into a capacity-boosting advantage in wireless ad hoc networks. Specifically, we propose a physical-layer network coding (PNC) scheme to coordinate transmissions among nodes. In contrast to "straightforward" network coding which performs coding arithmetic on digital bit streams after they have been received, PNC makes use of the additive nature of simultaneously arriving electromagnetic (EM) waves for equivalent coding operation. PNC can yield higher capacity than straightforward network coding when applied to wireless networks. We believe this is a first paper that ventures into EM-wave-based network coding at the physical layer and demonstrates its potential for boosting network capacity. PNC opens up a whole new research area because of its implications and new design requirements for the physical, MAC, and network layers of ad hoc wireless stations. The resolution of the many outstanding but interesting issues in PNC may lead to a revolutionary new paradigm for wireless ad hoc networking.
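The core PNC mapping for BPSK can be shown in a few lines: when two nodes transmit simultaneously, the relay observes (ideally) the sum of their antipodal symbols and maps it directly to the XOR of the underlying bits. The noiseless mapping below is a textbook simplification for illustration only.

```python
# Physical-layer network coding on BPSK (noiseless sketch).
# Bits 0/1 map to symbols +1/-1; the relay sees s1 + s2 and recovers b1 XOR b2:
#   |s1 + s2| == 2  ->  bits equal   -> XOR = 0
#   |s1 + s2| == 0  ->  bits differ  -> XOR = 1

def bpsk(bit):
    return 1.0 if bit == 0 else -1.0

def relay_map(superimposed):
    return 0 if abs(superimposed) > 1.0 else 1

for b1 in (0, 1):
    for b2 in (0, 1):
        rx = bpsk(b1) + bpsk(b2)          # simultaneous arrival adds the EM waves
        assert relay_map(rx) == b1 ^ b2
        print(b1, b2, rx, relay_map(rx))
```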
2025
Abstract—Wireless Mesh Network (WMN) has become a popular access network architecture in the community due to its low cost and readily deployable nature. However, it is well known that multi-hop transmission in WMN is vulnerable to bandwidth degradation, primarily due to contention and radio interference. A straightforward solution to this problem is to use mesh nodes with multiple radios and channels. In this paper, we demonstrate through real-world experiments that the use of multiple radios and channels alone cannot ...
2025
Urban traffic control systems evolved through three generations. The first generation of such systems was based on historical traffic data. The second generation took advantage of detectors, which enabled the collection of real-time traffic data, in order to re-adjust and select traffic signalization programs. The third generation provides the ability to forecast traffic conditions, so that traffic signalization programs and strategies can be pre-computed and applied at the most appropriate time frame for the optimal control of the current traffic conditions. Nowadays, a fourth generation of traffic control systems is already under development, based among others on principles of artificial intelligence, with capabilities of on-time information provision, traffic forecasting and incident detection, and built according to principles of large-scale integrated systems engineering. Although these systems benefit greatly from the developments of various information technology and computer science sectors, it is obvious that their performance is always related to that of the underlying optimization and control methods. Until recently, static traffic assignment (route choice) models were used in order to forecast future traffic flows, considering that the parameters which affect the network capacity are fixed over a given origin-destination matrix. Traffic engineering considers traffic flows as being constant and tries to adjust the control parameters in order to optimize certain measures of effectiveness. These two procedures, although they largely depend on each other and are jointly known as the combined traffic assignment and control problem, are usually handled separately. Recent scientific and research developments in the field of traffic assignment, with the rapid development of advantageous Dynamic Traffic Assignment models, new dynamic traffic control strategies and the evolution of ITS, tend to modify the way in which networks are modelled and their efficiency is measured. The current paper aims to present the major findings of a critical review of the existing scientific literature in the fields of dynamic traffic assignment and traffic control. Combined traffic assignment and traffic control models are discussed both in terms of the underlying mathematical formulations and in terms of algorithmic solutions, in order to better evaluate their applicability in large-scale networks. In addition, a generic and easily transferable scheme, in the form of a methodological framework for the combined Dynamic Traffic Assignment and Urban Traffic Control problem, is presented and applied to a realistic urban network, so as to provide numerical results and to highlight the applicability of such models in cases which differ from the standard test networks of the related bibliography, which are usually of a simple nature and form.
2025, 2006 IEEE International Conference on Mobile Ad Hoc and Sensor Sysetems
Optimizing spectral reuse is a major issue in large-scale IEEE 802.11 wireless networks. Power control is an effective means for doing so. Much previous work simply assumes that each transmitter should use the minimum transmit power needed to reach its receiver, and that this would maximize the network capacity by increasing spectral reuse. It turns out that this is not necessarily the case, primarily because of hidden nodes. In a network without power control, it is well known that hidden nodes give rise to unfair network bandwidth distributions and large bandwidth oscillations. Avoiding hidden nodes (by extending the carrier-sensing range), however, may cause the network to have lower overall network capacity. This paper shows that in a network with power control, reducing the instances of hidden nodes can not only prevent unfair bandwidth distributions, but also achieve higher overall network capacity compared with the minimum-transmit-power approach. We propose and investigate two distributed adaptive power control algorithms that minimize mutual interferences among links while avoiding hidden nodes. In general, our power control algorithms can boost the capacity of ordinary non-power-controlled 802.11 networks by more than two times while eliminating hidden nodes.
2025, annals of telecommunications - annales des télécommunications
This paper presents a fair and efficient rate control mechanism, referred to as congestion-aware fair rate control (CFRC), for IEEE 802.11s-based wireless mesh networks. Existing mechanisms usually concentrate on achieving fairness and achieve poor throughput. This mainly happens due to the synchronous rate reduction of neighboring links or nodes of a congested node without considering whether they actually share the same bottleneck or not. Furthermore, the achievable throughput depends on the network load, and an efficient fair rate is achievable when the network load is balanced. Therefore, existing mechanisms usually achieve a fair rate determined by the most loaded network region. CFRC uses an AIMD-based rate control mechanism which enforces a rate-bound on the links that use the same bottleneck. To achieve the maximum
2025
Human players and automated players (bots) interact in real time in a congested network. A player's revenue is proportional to the number of successful "downloads" and his cost is proportional to his total waiting time. Congestion arises because waiting time is an increasing random function of the number of uncompleted download attempts by all players. Surprisingly, some human players earn considerably higher profits than bots. Bots are better able to exploit periods of excess capacity, but they create endogenous trends in congestion that human players are better able to exploit. Nash equilibrium does a good job of predicting the impact of network capacity and noise amplitude. Overall efficiency is quite low, however, and players overdissipate potential rents, i.e., earn lower profits than in Nash equilibrium.
2025, Optical Switching and Networking
Keywords: Long-Reach (LR) Passive Optical Network (PON), Dynamic Bandwidth Allocation (DBA), Class of Service (CoS), Quality of Service (QoS), Service Level Agreement (SLA), delay guarantees. Abstract: In this paper a novel algorithm with delay guarantees for high-priority traffic, based on a Proportional (P) controller for Long-Reach Passive Optical Networks (LR-PONs), is proposed. We have recently demonstrated that Proportional-Integral-Derivative (PID) controllers are quite effective when controlling guaranteed bandwidth levels, and in this paper this functionality is adapted to jointly deal with Class of Service (CoS) and client differentiation in order to fulfill delay requirements. Therefore, it leads to an efficient control of the mean packet delay, which enhances the provided Quality of Service (QoS) inside the LR-PON. Simulation results show that the bandwidth allocation process performed by the P controller achieves this objective faster than other existing proposals. In fact, it stabilizes the priority delays in less than 2 min, compared with the 5 or 6 min obtained by other proposals. Furthermore, it has been demonstrated that its performance is more robust than that of other proposals, since it is independent of the initial network conditions, adapting the available resources very efficiently in order to comply with the established delay bounds of the most restrictive services.
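The proportional-control idea can be sketched as follows: each DBA cycle, the guaranteed bandwidth of the high-priority class is corrected in proportion to the error between the measured mean delay and the delay target. The gain, targets and the toy delay model below are assumptions for illustration, not the paper's tuning or simulation model.

```python
# Proportional controller adjusting guaranteed bandwidth to hit a delay target.
# The "measured_delay" model below is a toy stand-in for the LR-PON simulation.

TARGET_DELAY_MS = 10.0
KP = 2.0                       # Mbit/s of extra grant per ms of delay error (assumed gain)

def measured_delay(load_mbps, grant_mbps):
    """Toy delay model: grows sharply as the grant approaches the offered load."""
    if grant_mbps <= load_mbps:
        return 100.0
    return 1000.0 / (grant_mbps - load_mbps)    # ms

grant = 120.0                  # initial guaranteed bandwidth for the high-priority class
load = 100.0                   # offered high-priority load (Mbit/s)
for cycle in range(10):
    delay = measured_delay(load, grant)
    error = delay - TARGET_DELAY_MS
    grant = max(load * 1.01, grant + KP * error)   # proportional correction
    print(f"cycle {cycle}: delay={delay:6.2f} ms, grant={grant:7.2f} Mbit/s")
```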
2025
The performance characteristics of Wi-Fi networks have traditionally been studied and analysed using analytical models and simulations. Due to the complexity of wireless communication, the existing analytical Wi-Fi network models rely on certain network constraints and simplifications in order to be mathematically tractable. We set out to evaluate the practicality of using Wi-Fi performance models to estimate network performance by collecting the necessary model parameters directly from an access point. In order to evaluate this, we must also collect network metrics, such as packet payload size and number of nodes, for comparison with the model parameters. We explore different avenues to collect these parameters and metrics to find out if it is practical to apply the models in Wi-Fi networks. After performing three attempts, we conclude that this is difficult due to several aspects of the Linux kernel, such as batching optimization patterns, proprietary kernel modules and firmware blobs. W...
2025, Operations Research
In this paper we describe an efficient algorithm for solving novel optimization models arising in the context of multiperiod capacity expansion of optical networks. We assume that the network operator must make investment decisions over a multiperiod planning horizon while facing rapid changes in transmission technology, as evidenced by a steadily decreasing per-unit cost of capacity. We deviate from traditional and monopolistic models in which demands are given as input parameters, and the objective is to minimize capacity deployment costs. Instead, we assume that the carrier sets end-to-end prices of bandwidth at each period of the planning horizon. These prices determine the demands that are to be met, using a plausible and explicit price-demand relationship; the resulting demands must then be routed, requiring an investment in capacity. The objective of the optimization is now to simultaneously select end-to-end prices of bandwidth and network capacities at each period of the planning horizon.
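The coupling between prices, demand and required capacity can be illustrated with a constant-elasticity demand curve; this particular functional form and all of the numbers below are assumptions made here for illustration, since the paper only requires some explicit price-demand relationship.

```python
# Constant-elasticity price-demand sketch: d(p) = d0 * (p / p0) ** (-eps).
# The demand form, the numbers, and the single-period view are illustrative assumptions.

D0, P0, EPS = 40.0, 1.0, 1.3     # reference demand (Gbit/s), reference price, elasticity
UNIT_CAPACITY_COST = 1.0         # cost of deploying one Gbit/s this period

def demand(price):
    return D0 * (price / P0) ** (-EPS)

def profit(price):
    d = demand(price)
    return price * d - UNIT_CAPACITY_COST * d   # revenue minus capacity deployment cost

prices = [p / 20.0 for p in range(21, 200)]     # candidate prices 1.05 .. 9.95
best = max(prices, key=profit)
print(f"price={best:.2f}, demand={demand(best):.1f} Gbit/s, profit={profit(best):.1f}")
# Analytical optimum for this toy model: p* = cost * eps / (eps - 1), about 4.33 here.
```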
2025
As distributed generation (DG) becomes more widely deployed, distribution networks become more active and take on many of the same characteristics as transmission. We propose the use of nodal pricing that is often used in the pricing of short-term operations in transmission. As an economically efficient mechanism, nodal pricing would properly reward DG for reducing line losses through increased revenues at nodal prices, and signal prospective DG where it ought to connect with the distribution network. Applying nodal pricing to a model distribution network, we show significant price differences between buses reflecting high marginal losses. Moreover, we show the contribution of a DG resource located at the end of the network to significant reductions in losses and line loading. We also show the DG resource has significantly greater revenue under nodal pricing, reflecting its contribution to reduced line losses and loading.
2025
The profound change in the electric industry worldwide over the last twenty years assigns an increasing importance to the interaction of electric market agents, whether in competitive markets such as generation and commercialization, or in non-competitive transmission and distribution markets. The agents' cooperation and coordination through coalition formation in the cost allocation of investment and of electric network operation and maintenance arises as an attractive solution, provided one has an appropriate technical and economic model. The solutions obtained in such cases are efficient, fair and equitable to the participating agents. A transmission cost allocation method is presented, based on cooperative game theory and on the use of transmission network capacity by consumer agents. It is applied to the main Chilean interconnected system and the results obtained are compared with traditional methodologies.
2025, 2003 IEEE Bologna Power Tech Conference Proceedings,
The profound change in the electric industry worldwide over the last twenty years assigns an increasing importance to the interaction of electric market agents, whether in competitive markets such as generation and commercialization, or in non-competitive transmission and distribution markets. The agents' cooperation and coordination through coalition formation in the cost allocation of investment and of electric network operation and maintenance arises as an attractive solution, provided one has an appropriate technical and economic model. The solutions obtained in such cases are efficient, fair and equitable to the participating agents. A transmission cost allocation method is presented, based on cooperative game theory and on the use of transmission network capacity by consumer agents. It is applied to the main Chilean interconnected system and the results obtained are compared with traditional methodologies.
2025, IJSRA
The rapid evolution of data processing demands has led to innovative approaches in enterprise-scale data anonymization and protection. This comprehensive examination explores the implementation of Delphix across diverse cloud environments, focusing on its technical architecture, performance metrics, and compliance features. The platform demonstrates exceptional capabilities in handling sensitive data through advanced machine learning algorithms and sophisticated processing pipelines. The architecture incorporates robust security mechanisms, parallel processing capabilities, and intelligent resource optimization across multiple geographical regions. Integration with major cloud providers enables seamless scalability while maintaining strict data protection standards. The implementation showcases significant improvements in processing efficiency, reduced data breach risks, and enhanced compliance adherence through automated controls. Best practices and deployment guidelines ensure optimal performance through carefully calibrated infrastructure requirements and monitoring systems. The solution addresses the critical challenges of data privacy and security while maintaining high throughput rates and system availability across distributed environments.
2025, Wireless Personal Communications
This article proposes criteria and mechanisms that achieve seamless inter-working between the multiradio access technologies that will compose the fourth-generation (4G) wireless mobile environment. We address the problem of incorporating system interoperability in order to provide the user with seamless mobility across different radio access technologies; namely we focus on inter-working UMTS-High Speed Downlink Packet Access (HSDPA) and WLAN networks, as these two networks are believed to be major components of the 4G wireless network. Interoperability results in providing the user with a rich range of services across a wide range of propagation environment and mobility conditions, using a single terminal. Specifically, the article aims at defining the criteria and mechanisms for interoperability between the two networks. Our approach considers the use of Cost functions to monitor the essential parameters at the system level in order to trigger an interoperability procedure. Initial user assignment and inter-system handover are considered the incidents that initiate the interoperability algorithm execution. The overall objective of this work is to assess the performance of our developed interoperability platform and to optimize system performance by guaranteeing a minimum QoS requirement and maximizing network capacity.
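The cost-function approach to initial assignment and inter-system handover can be sketched as a weighted sum of normalized per-network metrics, evaluated for each candidate access network. The weights and example measurements below are hypothetical, not the article's calibrated parameters.

```python
# Weighted cost function for selecting between candidate access networks.
# Lower cost wins. Weights and the example measurements are assumptions.

WEIGHTS = {"load": 0.4, "delay": 0.3, "monetary_cost": 0.2, "power": 0.1}

def network_cost(metrics):
    """metrics: per-criterion values already normalized to [0, 1]."""
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

candidates = {
    "HSDPA": {"load": 0.7, "delay": 0.4, "monetary_cost": 0.8, "power": 0.6},
    "WLAN":  {"load": 0.3, "delay": 0.2, "monetary_cost": 0.1, "power": 0.4},
}
choice = min(candidates, key=lambda n: network_cost(candidates[n]))
for name, m in candidates.items():
    print(name, round(network_cost(m), 2))
print("assign/handover to:", choice)
```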
2024
SUMMARY: Within the geographic data processing domain, a broad range of problems exists that cannot be solved, or can only insufficiently be solved, using existing local computational resources. With the continuous set up of international spatial data infrastructures, the problem of intensive data exchange grows. Whereas network capacities have reached enormous scales in the industrial countries, the exchange of large XML encoded geographic data sets is still an obstacle in large parts of Asia, Africa, and South- and Central America. Today, more and more complex chains are used to extract valid information out of raw data sets. Workflow description languages are under development allowing a dynamic set up of complex chains, implying multiple steps of data accessing, data processing, and data visualization. Each step causes network traffic. If we measure the distance a single date has to cover before being delivered to the final user in number of geographically dispersed Web Services, it could be cer...
2024, IEEE Transactions on Wireless Communications
2024, Lecture Notes in Computer Science
TCP Westwood (TCPW) is a sender-side only modification of TCP Reno congestion control, which exploits end-to-end bandwidth estimation to properly set the values of the slow-start threshold and congestion window after a congestion episode. This paper aims at showing, via both mathematical modeling and extensive simulations, that TCPW significantly improves fair sharing of high-speed network capacity and that TCPW is friendly to TCP Reno. Moreover, we propose EASY RED, a simple Active Queue Management (AQM) scheme that improves fair sharing of network capacity, especially over high-speed networks. Simulation results show that TCP Westwood provides a remarkable Jain's fairness index increment of up to 200% with respect to TCP Reno and confirm that TCPW is friendly to TCP Reno. Finally, simulations show that EASY RED improves fairness of Reno connections more than RED, whereas the improvement in the case of Westwood connections is much smaller since Westwood already exhibits a fairer behavior by itself.
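The defining TCPW step, setting the slow-start threshold from the end-to-end bandwidth estimate instead of blindly halving the window, can be sketched as below; the segment size and the use of a ready-made bandwidth estimate are simplifications made here for illustration.

```python
# TCP Westwood-style window adjustment after three duplicate ACKs:
# ssthresh is set from the estimated rate and the minimum RTT, not to cwnd / 2.

SEG_SIZE = 1460            # bytes per segment (illustrative)

def westwood_after_dupacks(cwnd_segments, bwe_bps, rtt_min_s):
    """Return (new_cwnd, new_ssthresh) in segments after a fast-retransmit event."""
    ssthresh = max(2.0, (bwe_bps * rtt_min_s) / (8.0 * SEG_SIZE))   # BWE * RTTmin pipe size
    cwnd = min(cwnd_segments, ssthresh)
    return cwnd, ssthresh

# Example: 40 Mbit/s estimated rate, 50 ms minimum RTT, window of 400 segments.
print(westwood_after_dupacks(400, 40e6, 0.05))   # pipe is roughly 171 segments
```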
2024
The evolution from traditional IP-based networking to Named Data Networking (NDN) represents a paradigm shift to address the inherent limitations of current network architectures, such as scalability, mobility, and efficient data distribution. NDN introduces an information-centric approach where data is identified and retrieved based on names rather than locations, offering more efficient data dissemination and enhanced security. However, the transition to NDN, alongside the need to integrate it with existing IP infrastructures, necessitates the development of flexible and scalable testbeds that support diverse experimental scenarios across various physical media and networking protocol stacks.
In this paper, we present NetScaNDN, a scalable, flexible, and plug-and-play testbed designed to facilitate such experiments. NetScaNDN employs an automated process for node discovery, configuration, and installation, enabling seamless setup and execution of experiments on both wired and wireless infrastructures simultaneously. Additionally, it incorporates a central log repository using the syslog protocol, allowing comprehensive measurement and evaluation of user-defined metrics across different network layers. NetScaNDN offers a robust platform for researchers to explore and validate various networking scenarios, advancing the study of IP and NDN-based applications.
2024
In centralized-control sensor networks, tree-based multi-channel communication avoids frequent channel switching and makes it possible to transfer data simultaneously from different sources. In this paper, we propose a greedy algorithm, named NIT (Non-Intersecting Tree), that builds trees which avoid inter-tree interference. We also propose a channel switching technique by which trees can locally avoid link failures or area blocking due to external interference, without rerunning the algorithm and without interrupting the whole network. We first applied our algorithm to a random topology and then evaluated the performance of the network using the NS-2 simulator. The results show that with an increasing number of channels the throughput and delivery ratio increase significantly. We obtained better performance than a recently proposed Tree-based Multi-Channel Protocol (TMCP).
2024
2024, SSRN Electronic Journal
EU power market design has been focused on facilitating trading between countries and for this has defined interfaces for market participants and TSOs between countries. The operation of power systems and markets within countries was not the focus of these developments. This may have contributed to difficulties in defining or implementing a common perspective, in particular on intraday and balancing approaches. This motivated us to pursue an in-depth review of six European power markets to contribute to a better understanding of the common elements, the differences, and the physical and institutional reasons for these. With this paper we aim to present the main insights emerging from the reviews and to identify where there is a need for alignment of operational aspects and short-term trading arrangements, taking into account the system requirements individual member states face in operating their power systems.
2024, International Journal of Emerging Trends in Engineering Research
The growth of smartphone technology has resulted in an increased number of multimedia applications which require low-latency service delivery and coverage that existing wireless technology could not handle due to the power limitations imposed by the Wireless Local Area Network (WLAN) regulatory bodies. To address this, wireless mesh networks (WMNs) were standardized by Task Group S to support multihop-based communication. Reducing packet delivery latency in existing multihop WMNs is one of the key issues that need to be addressed. Existing methods incur high data delivery latency due to their inability to address the hidden- and exposed-node problems in a multihop WMN environment. Many approaches have been developed recently to optimize the Medium Access Control (MAC) layer and the hidden- and exposed-node detection algorithms in order to better utilize slots and reduce latency. Existing methods adopt contention-based techniques to handle the collisions caused by hidden-node problems, which results in improper slot utilization and bandwidth wastage. A node classification technique for reusing slots can improve network performance by reducing latency, and latency plays a significant part in the overall throughput of a network. This work proposes an efficient device-classification-based MAC scheduler that adopts a cross-layer design to reduce data delivery latency. Experiments are conducted to evaluate the mean network latency and data delivery latency by varying the depth of the tree for varied network sizes and densities. The results show that the proposed approach performs better than the existing CSMA/OCA in terms of data delivery latency.
2024, … Networking, 2009. ICOIN …
IPTV solutions are emerging in the market today, both competing with traditional distribution mediums, such as cable TV, and creating new markets, such as mobile TV viewing using cellular networks. The distribution of IPTV is usually performed using a single stream, resorting to multicast. This is not compatible with the growing number of devices which may be used to watch TV, from PDAs to laptops, desktops and High Definition TVs, each with different processing and networking capacities. This has prompted research into scalable video coding and adaptation techniques, where a single video stream may be used for a large number of differentiated clients. In this paper we propose and evaluate two different metrics for IP packet discard by a media-aware network element, allowing it to reduce the bandwidth of an original video stream in case of congestion on a downstream link. The first metric consists in discarding the last IP packet of P frames. The second consists in discarding entire P frames. Through experimental evaluation, using subjective viewing scores, we conclude that both of our proposals allow for a much better viewing experience than random packet dropping, although quality still deteriorates rapidly as the drop rate increases.
2024
The traditional global (i.e., submarine + terrestrial) network architecture needs to be revamped to meet contemporary needs. This paper deals with the validation of new global network concepts through a Test Bed, jointly provided by KDD-SCS and Lucent Technologies. This Test Bed consists of the latest products and technologies from the two companies, and is a unique and innovative collaboration between them.
2024, Kyklos
Most existing available-bandwidth measurement techniques are justified using a constant-rate fluid cross-traffic model. To achieve a better understanding of the performance of current bandwidth measurement techniques in general traffic conditions, this paper presents a queueing-theoretic foundation of single-hop packet-train bandwidth estimation under bursty arrivals of discrete cross-traffic packets. We analyze the statistical mean of the packet-train output dispersion and its mathematical relationship to the input dispersion, which we call the probing-response curve. This analysis allows us to prove that the single-hop response curve in bursty cross-traffic deviates from that obtained under fluid cross traffic of the same average intensity and to demonstrate that this may lead to significant measurement bias in certain estimation techniques based on fluid models. We conclude the paper by showing, both analytically and experimentally, that the response-curve deviation vanishes as the packet-train length or probing packet size increases, where the vanishing rate is decided by the burstiness of cross-traffic.
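The fluid-model baseline that the analysis deviates from can be stated compactly: for a single hop of capacity C carrying cross traffic of average rate λ, a packet pair of size s and input gap g_in leaves with output gap g_out = max(g_in, (s + λ·g_in)/C). The sketch below evaluates this curve for illustrative numbers; the bursty-traffic deviation studied in the paper is intentionally not modeled.

```python
# Single-hop fluid-model probing response curve for a packet pair.
#   g_out = g_in                      if the probe rate is below the available bandwidth
#   g_out = (s + lam * g_in) / C      otherwise (hop stays backlogged between the probes)

def fluid_output_gap(g_in, probe_size_bits, capacity_bps, cross_rate_bps):
    backlogged_gap = (probe_size_bits + cross_rate_bps * g_in) / capacity_bps
    return max(g_in, backlogged_gap)

C, LAM, S = 100e6, 40e6, 12000.0   # 100 Mbit/s link, 40 Mbit/s cross traffic, 1500 B probes
for rate_mbps in (20, 40, 60, 80, 100):
    g_in = S / (rate_mbps * 1e6)   # input gap that realizes this probing rate
    g_out = fluid_output_gap(g_in, S, C, LAM)
    print(rate_mbps, "Mbit/s in ->", round(S / g_out / 1e6, 1), "Mbit/s out")
```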
2024, International Journal of Wireless Networks and Broadband Technologies
Wireless Mesh Networks (WMNs) have emerged as a key technology for the next generation of wireless networking. Instead of being another type of ad-hoc networking, WMNs diversify the capabilities of ad-hoc networks. Several protocols that work over WMNs include IEEE 802.11a/b/g, 802.15, 802.16 and LTE-Advanced. To bring about a high throughput under varying conditions, these protocols have to adapt their transmission rate. This paper proposes a scheme to improve channel conditions by performing rate adaptation along with multiple packet transmission using packet loss and physical layer condition. Dynamic monitoring, multiple packet transmission and adaptation to changes in channel quality by adjusting the packet transmission rates according to certain optimization criteria provided greater throughput. The key feature of the proposed method is the combination of the following two factors: 1) detection of intrinsic channel conditions by measuring the fluctuation of the noise-to-signal ratio...
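A minimal sketch of one plausible reading of such a scheme: a smoothed per-packet SNR estimate drives threshold-based rate selection, with an extra step down when recent packet loss is high. The thresholds, rates, smoothing weight and loss rule below are illustrative assumptions, not the proposed method's tuned values.

```python
# Threshold-based rate adaptation from smoothed SNR and recent packet loss.
# SNR thresholds (dB) and the loss back-off rule are illustrative assumptions.

RATE_TABLE = [(6, 6.0), (9, 12.0), (14, 24.0), (20, 36.0), (25, 48.0), (28, 54.0)]

def select_rate(snr_db, loss_ratio):
    rate = RATE_TABLE[0][1]
    for threshold, mbps in RATE_TABLE:
        if snr_db >= threshold:
            rate = mbps
    if loss_ratio > 0.2 and rate > RATE_TABLE[0][1]:
        # High recent loss: step one rate down even if the SNR looks good.
        idx = [m for _, m in RATE_TABLE].index(rate)
        rate = RATE_TABLE[idx - 1][1]
    return rate

class SnrEstimator:
    """Exponentially weighted moving average of per-packet SNR samples."""
    def __init__(self, alpha=0.2):
        self.alpha, self.value = alpha, None
    def update(self, sample_db):
        self.value = sample_db if self.value is None else (
            self.alpha * sample_db + (1 - self.alpha) * self.value)
        return self.value

est = SnrEstimator()
for sample, loss in [(22, 0.0), (24, 0.05), (18, 0.3), (16, 0.3)]:
    print(round(est.update(sample), 1), "dB ->", select_rate(est.value, loss), "Mbit/s")
```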
2024, 2007 46th IEEE Conference on Decision and Control
In this work, a new transmission power control algorithm based on the use of Quantitative Feedback Theory (QFT) is proposed for CDMA wireless cellular networks. The QFT based loop-shaping framework that is considered fully compensates the effect of link round-trip time delay within the network. The design supports predefined levels of performance robustness in the presence of channel uncertainty and signal interference. A novel stability boundary is introduced based on the use of the Jury Array that informs the necessary trade-off between disturbance attenuation and system stability. Extensive simulation results are provided that illustrate the effectiveness of the proposed methodology.
2024
In this paper, a performance enhancement algorithm for channel allocation for voice and data transmission in cellular networks is proposed. Voice activity detection is applied to the dynamic channel allocation procedure to detect and separate the silence and speech periods within conversations. Hence, a data user can use the silent period of an active voice channel to transmit its information. To control the selection of channel allocation policies, information on the number of data packets in the transmission waiting queue is taken into account in the performance measurement. The simulation results show the performance improvement in terms of quality of service, namely the average queueing delay and the blocking probability, and present the impact of the proposed scheme on the system.