Network Capacity Research Papers - Academia.edu

2025, IEEE Communications Magazine

2025, Proceedings, Twenty-First Annual Joint Conference of the IEEE Computer and Communications Societies

We study the performance metrics associated with TCP-regulated traffic in multi-hop wireless networks that use a common physical channel (e.g., IEEE 802.11). In contrast to earlier analyses, we focus simultaneously on two key operating metrics: the energy efficiency and the session throughput. Using analysis and simulations, we show how these metrics are strongly influenced by the radio transmission range of individual nodes. Due to tradeoffs between the individual packet transmission energy and the likelihood of retransmissions, the total energy consumption is a convex function of the number of hops (and hence, of the transmission range). On the other hand, the TCP session throughput decreases supralinearly with a decrease in the transmission range. In certain scenarios, the overall network capacity can then be a concave function of the transmission range. Based on our analysis of the performance of an individual TCP session, we finally study how parameters such as the node density and the radio transmission range affect the overall network capacity under different operating conditions. Our analysis shows that capacity metrics at the TCP layer behave quite differently than corresponding idealized link-layer metrics.
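As an illustration of the convexity claim above, the following toy calculation (our own simplified model, not the paper's formulation) charges each hop a fixed electronics energy plus a radiated energy that grows with hop length, and inflates the cost of long hops by their higher loss probability; the minimum-energy hop count then lands at an intermediate transmission range. All parameter values are arbitrary assumptions.

# Toy model (not the paper's formulation): energy to push a packet over distance D
# using n hops of range r = D/n.  Each hop pays a fixed electronics energy plus a
# radiated energy ~ r**ALPHA, and longer hops are assumed to fail more often, which
# inflates the expected number of transmissions.  All constants are arbitrary.
D = 1000.0        # end-to-end distance (m), assumed
ALPHA = 3.0       # path-loss exponent, assumed
E_FIXED = 5e-6    # per-hop electronics energy (J/bit), assumed
K_AMP = 1e-12     # radiated-energy coefficient (J/bit/m^ALPHA), assumed

def hop_loss_prob(r, r0=250.0):
    """Toy per-hop loss probability that grows with hop length r."""
    return min(0.9, 0.05 * (r / r0) ** 2)

def total_energy(n_hops):
    r = D / n_hops
    per_hop = E_FIXED + K_AMP * r ** ALPHA
    expected_tx = 1.0 / (1.0 - hop_loss_prob(r))   # geometric retransmissions
    return n_hops * per_hop * expected_tx

energies = {n: total_energy(n) for n in range(1, 21)}
print(min(energies, key=energies.get))             # interior minimum: convex-looking curve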

2025, 2009 IEEE International Conference on Systems, Man and Cybernetics

We describe new concepts in broadband software defined radio architectures optimized for very high bandwidth wireless communication protocols. This is important since a trend in wireless communication transceivers is the adoption of increasingly sophisticated radio link algorithms which maximize utility functions consisting of network capacity, data rate reliability, and throughput. Many prior efforts in software defined radio suffer from either a presumption that sufficiently high clock rates exist to employ traditional single instruction multiple data (SIMD) multi-processor architectures or that renaissance programmers are available to convert thousand-page wireless protocol specifications into fine-grain data flow graphs at the operation level. The challenge for new generations of software defined radio is to maintain flexibility while simultaneously supporting computationally efficient broadband communication algorithms and ease of programming. Our system architecture is viable for a wide variety of communication protocols and physical data channels. We focus principally on OFDM transceiver styles due to their bandwidth scalability and their popularity in many next-generation wireless air interfaces. Our SDR system jointly minimizes power, maximizes algorithm flexibility, and enables rapid software re-programming. We feel that these concepts are critical for the adoption of software defined radio in 21st century broadband wireless networks.
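For readers unfamiliar with the OFDM transceiver style the authors target, the following minimal sketch shows the core modulation step (IFFT plus cyclic prefix) and its inverse; it is generic textbook OFDM in Python/NumPy, not the authors' SDR architecture, and the subcarrier count and prefix length are assumed values.

import numpy as np

# Generic OFDM symbol construction: IFFT of the frequency-domain symbols plus a
# cyclic prefix (CP).  N_SUBCARRIERS and CP_LEN are illustrative assumptions.
N_SUBCARRIERS = 64
CP_LEN = 16

def ofdm_symbol(freq_symbols):
    """Map frequency-domain symbols to one time-domain OFDM symbol with CP."""
    assert len(freq_symbols) == N_SUBCARRIERS
    time_domain = np.fft.ifft(freq_symbols) * np.sqrt(N_SUBCARRIERS)
    return np.concatenate([time_domain[-CP_LEN:], time_domain])   # prepend CP

def ofdm_demod(rx_symbol):
    """Strip the cyclic prefix and return to the frequency domain."""
    return np.fft.fft(rx_symbol[CP_LEN:]) / np.sqrt(N_SUBCARRIERS)

# Round trip over an ideal channel: QPSK data on every subcarrier is recovered.
bits = np.random.randint(0, 2, (N_SUBCARRIERS, 2))
tx_freq = (2 * bits[:, 0] - 1 + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
rx_freq = ofdm_demod(ofdm_symbol(tx_freq))
print(np.allclose(tx_freq, rx_freq))   # True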

2025

Strategic alliances have become a core component of modern business strategy, enabling organizations to access new markets, share risks, and foster innovation in an increasingly volatile and interconnected global economy. Over the past four decades, the research domain of strategic alliances has evolved from descriptive case studies to a sophisticated, multidisciplinary field encompassing economics, management, sociology, and organizational theory. This paper provides a review of the strategic alliance literature, tracing its historical roots, theoretical development, measurement evolution, antecedents, and unresolved research issues. Drawing on over 100 scholarly sources and recent bibliometric analyses, the paper highlights major contributors, methodological advancements, and the dynamic, co-evolutionary nature of alliances. It concludes by identifying persistent gaps and suggesting directions for future research.

2025, 2007 IEEE Wireless Communications and Networking Conference

Mobile WiMAX systems are based on the IEEE 802.16e specifications, which include two mandatory MIMO profiles for the downlink. One of these is Alamouti's space-time code (STC) for transmit diversity, and the other is a 2x2 spatial multiplexing MIMO scheme. In this paper, we compare the two schemes assuming that the latter employs maximum-likelihood detection. The analysis shows that at the same spectral efficiency, Alamouti's STC combined with maximum-ratio combining (MRC) at the receiver significantly outperforms the 2x2 spatial multiplexing scheme at high values of the signal-to-noise ratio (SNR). Next, selection of a MIMO option is included in link adaptation to maximize network capacity, and operating SNR regions are determined for different modulation, coding and MIMO combinations.
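A minimal sketch of the first of the two mandatory profiles, Alamouti's 2x1 space-time code with maximum-ratio-style combining, is given below (illustrative only; the paper's comparison additionally covers 2x2 spatial multiplexing with maximum-likelihood detection, which is not reproduced here).

import numpy as np

# Alamouti 2x1 space-time code: two symbols sent over two antennas in two slots,
# then linearly combined at the single-antenna receiver.  Noise level is an
# illustrative assumption.
rng = np.random.default_rng(0)

def alamouti_2x1(s1, s2, h1, h2, noise_std=0.1):
    n = (rng.normal(size=2) + 1j * rng.normal(size=2)) * noise_std / np.sqrt(2)
    r1 = h1 * s1 + h2 * s2 + n[0]                        # slot 1
    r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + n[1]     # slot 2
    gain = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / gain
    s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / gain
    return s1_hat, s2_hat

# Rayleigh-like channel draw; the estimates come out close to the sent symbols.
h1 = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
h2 = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
print(alamouti_2x1(1 + 1j, -1 + 1j, h1, h2))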

2025, IEEE GLOBECOM 2008 - 2008 IEEE Global Telecommunications Conference

To date, most analysis of WLANs has focused on their operation under saturation conditions. This work is an attempt to understand the fundamental performance of WLANs under unsaturated conditions. In particular, we are interested in the delay performance when collisions of packets are resolved by an exponential backoff mechanism. Using a multiple-vacation queueing model, we derive an explicit expression for the packet delay distribution, from which necessary conditions for finite mean delay and delay jitter are established. It is found that under some circumstances, mean delay and delay jitter may approach infinity even when the traffic load is way below the saturation throughput. Saturation throughput is therefore not a sound measure of WLAN capacity when the underlying applications are delay sensitive. To bridge the gap, we define safe-bounded-mean-delay (SBMD) throughput and safe-bounded-delay-jitter (SBDJ) throughput that reflect the actual network capacity users can enjoy when they require bounded mean delay and delay jitter, respectively. The analytical model in this paper is general enough to cover both single-packet reception (SPR) and multi-packet reception (MPR) WLANs, as well as carrier-sensing and non-carrier-sensing networks. We show that the SBMD and SBDJ throughputs scale super-linearly with the MPR capability of a network. Together with our earlier work that proves super-linear throughput scaling under saturation conditions, our results here complete the demonstration of MPR as a powerful capacity-enhancement technique for both delay-sensitive and delay-tolerant applications.

2025, IEEE Transactions on Mobile Computing

With the rapid proliferation of broadband wireless services, it is of paramount importance to understand how fast data can be sent through a wireless local area network (WLAN). Thanks to a large body of research following the seminal work of Bianchi, WLAN throughput under saturated traffic conditions has been well understood. By contrast, prior investigations of throughput performance under unsaturated traffic conditions were largely based on phenomenological observations, which led to a common misconception that a WLAN can support a traffic load as high as the saturation throughput, if not higher, under non-saturation conditions. In this paper, we show through rigorous analysis that this misconception may result in unacceptable quality of service: mean packet delay and delay jitter may approach infinity even when the traffic load is far below the saturation throughput. Hence, saturation throughput is not a sound measure of WLAN capacity under non-saturation conditions. To bridge the gap, we define safe-bounded-mean-delay (SBMD) throughput and safe-bounded-delay-jitter (SBDJ) throughput that reflect the actual network capacity users can enjoy when they require finite mean delay and delay jitter, respectively. Our earlier work proved that in a WLAN with multi-packet reception (MPR) capability, saturation throughput scales super-linearly with the MPR capability of the network. This paper extends the investigation to the non-saturation case and shows that super-linear scaling also holds for the SBMD and SBDJ throughputs. Our results here complete the demonstration of MPR as a powerful capacity-enhancement technique for WLAN under both saturation and non-saturation conditions.

2025

Most current wireless IEEE 802.11 networks rely on a power-threshold-based carrier-sensing multiple-access (CSMA) mechanism to prevent packet collisions, in which a transmitter permits its transmission only if the locally measured aggregate interference power from all existing transmissions is below a prespecified power-sensing threshold. However, such a mechanism cannot completely guarantee interference-safe transmissions, leading to the so-called hidden-node problem, which causes degradation in throughput and fairness performance. Traditionally, ensuring interference-safe transmissions was addressed with simple conflict-graph models rather than with the realistic signal-to-interference-and-noise ratio (SINR) model. This paper presents the first viable solution for fully interference-safe transmissions that (1) assumes an accurate SINR model, and (2) is compatible with the carrier-sensing mechanism in existing CSMA networks. Specifically, we determine a proper interference-safe power-sensing threshold by considering both the effects of (i) arbitrary ordering of local interference power measurements, and (ii) ACK frames. We compare our interference-safe solution with other solutions, and provide an extensive evaluation of its throughput and fairness performance.
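The toy functions below contrast the two notions the abstract distinguishes: the power-threshold test a CSMA transmitter actually runs, and the SINR condition at the receiver that decides whether a reception is truly interference-safe. The path-loss model, threshold and numbers are our own assumptions rather than the paper's derivation; the example simply shows how a sensed power below the threshold can coexist with a violated SINR, i.e., a hidden-node case.

# Toy contrast between threshold-based carrier sensing and the SINR condition.
# Path-loss model, powers and thresholds are illustrative assumptions.
PATHLOSS_EXP = 4.0
TX_POWER = 0.1          # watts, assumed
NOISE = 1e-13           # watts, assumed
SINR_REQ = 10.0         # linear SINR needed for correct decoding, assumed

def rx_power(tx_power, distance):
    return tx_power / (distance ** PATHLOSS_EXP)

def carrier_sense_allows(aggregate_interference, sensing_threshold):
    """What CSMA does: transmit only if locally sensed power is below a threshold."""
    return aggregate_interference < sensing_threshold

def reception_is_safe(link_distance, interferer_distances):
    """What actually matters: SINR at the receiver with all interferers active."""
    signal = rx_power(TX_POWER, link_distance)
    interference = sum(rx_power(TX_POWER, d) for d in interferer_distances)
    return signal / (NOISE + interference) >= SINR_REQ

# The sender hears little from a far interferer and would transmit, yet the SINR
# at its own receiver is violated by that same interferer.
sensed = rx_power(TX_POWER, 400.0)
print(carrier_sense_allows(sensed, sensing_threshold=1e-9))            # True
print(reception_is_safe(link_distance=200.0, interferer_distances=[250.0]))  # False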

2025, Texila International Journal of Academic Research

The fast growth of and demand for mobile broadband services has driven telecom network operators to adopt innovative approaches to improve network capacity and enhance Quality of Service (QoS) in mobile telephony. The high demand for mobile broadband paves the way for spectrum refarming, which involves reallocating underutilized frequency bands (e.g., 2G/3G) to more advanced networks such as 4G LTE. The research focus and objectives were to investigate the impact of spectrum refarming on 4G capacity and QoS improvement and to examine the technical, economic, social and regulatory challenges associated with spectrum refarming. The study followed a quantitative and comparative approach; data collection was conducted through structured questionnaires distributed to telecom engineers, IT specialists with hands-on experience in telecom, regulatory authorities, and supporting staff outside the IT and telecom space. The findings indicate that spectrum refarming can significantly enhance 4G network capacity and improve quality of service and user experience, though its implementation is often limited by technical and regulatory challenges. Per the survey results, 88.6% of respondents acknowledged that spectrum refarming will significantly improve 4G coverage and capacity, 6.85% believed it is not possible, and 4.5% were uncertain. The study also contributes actionable recommendations for telecom operators and policymakers to improve spectrum management practices. Additionally, 81.8% of the respondents reported that a spectrum sharing policy should be introduced to facilitate refarming, 11.4% did not support the idea of spectrum sharing, and 6.8% of respondents were not sure; the total number of responses was 40.

2025, Journal of Computer Science and Technology

Providing each node with one or more multi-channel radios offers a promising avenue for enhancing network capacity by simultaneously exploiting multiple non-overlapping channels through different radio interfaces and mitigating interference through proper channel assignment. However, it is quite challenging to effectively utilize multiple channels and/or multiple radios to maximize throughput capacity. The National Natural Science Foundation of China (NSFC) Project 61128005 conducted comprehensive algorithmic-theoretic and queueing-theoretic studies of maximizing wireless networking capacity in multi-channel multi-radio (MC-MR) wireless networks under the protocol interference model and fundamentally advanced the state of the art. In addition, under the notoriously hard physical interference model, this project has taken initial algorithmic steps toward maximizing the network capacity, with or without power control. We expect the new techniques and tools developed in this project to have wide applications in capacity planning, resource allocation and sharing, and protocol design for wireless networks, and to serve as the basis for future algorithm developments in wireless networks with advanced features, such as multi-input multi-output (MIMO) wireless networks.

2025

Due to unreliable link quality in Wireless Mesh Networks (WMNs), Power Outage Notification (PON) and Power Restoration Notification (PRN) messages are often dropped or delayed en route, which may fail to satisfy customer requirements in practice. Therefore, proposed herein are techniques that use machine learning and Fog computing to efficiently deduce missing PON/PRN messages. DETAILED DESCRIPTION Vendors are developing multi-hop wireless mesh networks (WMNs) for smart grid business use and to provide interoperability with Advanced Metering Infrastructure (AMI) and Distributed Automation (DA) devices. These WMNs utilize the IPv6 Routing Protocol for LLNs (RPL) to establish a tree-based multi-hop topology over the RF and PLC media using the IEEE 802.15.4g and P1901.2 protocols. In general, these networks are usually constrained by limited power/energy, bandwidth, and memory resources, are often deployed in hostile environments, and utilize wireless communication. In WMNs...

2025

The proposed technique attempts to recognize and predict the working state of a reachable destination node using a Markov Chain method in Low-Power and Lossy Networks (LLNs). DETAILED DESCRIPTION Considering the cost of large-scale (up to millions) wireless mesh networks (WMNs), such as 6TiSCH, CG-Mesh, or Wi-SUN-based WMNs, these networks all use half-duplex wireless nodes as basic components. That means a node is available to be connected by neighbors only if it is in receive (Rx) mode. Consequently, a requester in a WMN needs a destination responder to be operating in Rx mode when it wants to initiate a transmission. Unfortunately, the requester has to assume the destination node is always in Rx mode, because the requester has no visibility into the exact state of the responder. Carrier Sense Multiple Access/Collision Avoidance could help to avoid collisions among neighboring requesters at the same time, but it provides no benefit when the destination node is not in Rx mode (e.g....

2025, 2009 IEEE International Conference on Communications

Most researchers conduct wireless networking experiments in their laboratory or similar indoor environments. Such environments are veritable RF jungles, especially when we consider the ISM bands. In this paper we examine and test several common explicit and implicit assumptions that researchers tend to make about the wireless environment. Although these assumptions are acknowledged by most researchers, the extent of their impact is often underestimated. We find that because the environment is always in flux, it is almost impossible to reproduce the results of an experiment. Hence, there is a high risk of misinterpreting the data obtained from such experiments. Through this paper we try to caution experimenters against such risky assumptions when they venture into the RF jungle. After a successful proof-of-concept experiment, we advocate the use of wireless networking testbeds that provide experimenters better control over the RF environment by using coaxial cables, programmable attenuators and power dividers/combiners.

2025

There has been tremendous growth in the deployment and usage of Wireless Local Area Networks (WLANs) in recent years. There is also an increasing interest in supporting voice applications over WLANs. In this paper we use analytical models presented in the literature for saturation throughput, average packet delay and packet drop probability, together with simulation results for jitter, to evaluate voice traffic over WLANs. We propose a methodology that determines the voice capacity of WLANs and we examine the effect of the packetization interval and transmission rate on voice capacity using the G.711 codec. Analytical results for voice capacity match simulation results presented in the literature that were obtained using extensive simulations.

2025, 2011 IEEE International Conference on Smart Grid Communications (SmartGridComm)

A key element to realizing the smart energy grid of the future is the deployment of an efficient and reliable information network. An intelligent combination of wired networks (the Internet), wireless networks and power line communication networks can be used to deliver control and application messages generated by the smart grid. Integration of these three network types is non-trivial due to the distinct differences in deliverable quality of service and financial cost. Traffic assignment across these distinct networks poses a novel research problem which must be solved to realize the smart grid. Herein, an algorithm which dynamically allocates traffic with different Quality of Service requirements in terms of throughput, delay and failure probability to information networks with different performance characteristics is proposed. A detailed queueing model for the system is defined which accounts for input queues buffering smart grid packets and external applications injecting traffic into the buffers of the networks. A Lyapunov-optimization-based algorithm selects the packet allocation strategy based on input/output queue states and guarantees the required QoS to the input queues while minimizing financial cost.
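A generic drift-plus-penalty rule of the kind used in Lyapunov-optimization-based scheduling is sketched below: each slot, a traffic class is mapped to the network maximizing backlog times service rate minus V times cost, trading queue stability against financial cost through the parameter V. This is an illustration of the technique, not the paper's exact algorithm, and all rates and costs are assumed.

# Drift-plus-penalty network selection sketch (illustrative, assumed parameters).
from dataclasses import dataclass

@dataclass
class Network:
    name: str
    rate: float   # packets the network can drain this slot (assumed)
    cost: float   # monetary cost per slot of using it (assumed)

def choose_network(queue_backlog, networks, v_param):
    """Pick the network with the largest drift-plus-penalty weight for this queue."""
    return max(networks, key=lambda n: queue_backlog * n.rate - v_param * n.cost)

nets = [Network("internet", rate=8.0, cost=1.0),
        Network("wireless", rate=4.0, cost=0.3),
        Network("plc", rate=2.0, cost=0.1)]
backlog, v = 3.0, 20.0
for _ in range(4):
    chosen = choose_network(backlog, nets, v)
    backlog = max(0.0, backlog + 5.0 - chosen.rate)   # arrivals of 5 pkts/slot assumed
    print(chosen.name, round(backlog, 1))             # cheap networks when backlog is low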

2025, Anais do 2002 International Telecommunications Symposium

Most work on Differentiated Services (DiffServ) handles Quality of Service (QoS) provisioning on a per-node basis, which assumes that this strategy would provide QoS in the whole domain. Nevertheless, this approach could fail in large domains with multiple flow aggregation and unexpected input traffic. Therefore, provisioning techniques should be used to avoid unpredicted overloads that result in QoS fluctuations. A proposal using fuzzy controllers to reconfigure DiffServ nodes according to ingress traffic and the achieved QoS was presented in [1]. However, it is not easy to specify fuzzy rule bases and membership functions that optimize the controllers' performance. Thus, we propose a methodology to choose optimized fuzzy controller parameters using the Wang-Mendel and genetic algorithms. Finally, we evaluate the performance of this methodology by simulation of voice over IP applications in DiffServ domains.

2025, Teletraffic Science and Engineering

The Differentiated Services architecture has been proposed to offer quality of service in the Internet. Most work on Diffserv (DS) handles QoS guarantees on a per-node basis, which assumes that assuring QoS in a single node also leads to the desired QoS in the entire DS domain. Nevertheless, this is not always true. This paper proposes a framework that offers QoS in a DS domain using Policy-based Management and fuzzy logic techniques. The QoS controller reconfigures all DS nodes according to ingress traffic and domain policies. Policy-based Management is used in this framework to provide QoS in the DS domain, controlling heterogeneous equipment from different manufacturers. The performance and functionalities of a prototype are shown by simulation of a voice over IP application.

2025, Telecommunication Systems

Most work on Differentiated Services (DiffServ) handles Quality of Service (QoS) provisioning on a per node basis, which assumes that this strategy would provide QoS in the entire domain. Nevertheless, this approach could fail in large domains with multiple flow aggregation and unexpected input traffic. Therefore, provisioning techniques should be used to avoid unexpected overloads that result in QoS fluctuations. A proposal using fuzzy controllers to reconfigure DiffServ nodes according to both incoming traffic and the actual QoS is given. However, it is not easy to specify fuzzy rule bases and membership functions that optimize controller performance. Thus, we also propose a methodology to choose fuzzy controller parameters using the Wang-Mendel and genetic algorithms. Finally, we evaluate the performance of this model by simulation of an IP Telephony application in a DiffServ domain.
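A minimal genetic-algorithm skeleton of the kind used for such parameter tuning is shown below. The fitness function is a stand-in toy; in the papers above, candidate membership-function parameters would instead be scored by simulating the DiffServ domain (e.g., VoIP delay and loss), which is not reproduced here.

import random

# Minimal GA skeleton for tuning two fuzzy-controller parameters.  The quadratic
# "fitness" is a placeholder for a DiffServ/VoIP simulation score.
random.seed(1)
POP, GENS, MUT = 20, 40, 0.2

def fitness(params):
    a, b = params                                   # e.g., membership-function breakpoints
    return -((a - 0.3) ** 2 + (b - 0.7) ** 2)       # toy objective: best at (0.3, 0.7)

def mutate(params):
    return tuple(min(1.0, max(0.0, p + random.gauss(0, MUT))) for p in params)

def crossover(p1, p2):
    return tuple(random.choice(pair) for pair in zip(p1, p2))

population = [(random.random(), random.random()) for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

print(max(population, key=fitness))                 # converges near (0.3, 0.7)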

2025

The Differentiated Services architecture has been proposed to offer quality of service in the Internet. Most work on Diffserv (DS) handles QoS guarantees on a per-node basis, which assumes that assuring QoS in a single node also leads to the desired QoS in the entire DS domain. Nevertheless, this is not always true. This paper proposes a framework that offers QoS in a DS domain using Policy-based Management and fuzzy logic techniques. The QoS controller reconfigures all DS nodes according to ingress traffic and domain policies. Policy-based Management is used in this framework to provide QoS in the DS domain, controlling heterogeneous equipment from different manufacturers. The performance and functionalities of a prototype are shown by simulation of a voice over IP application in different DS topologies.

2025, 2010 7th Annual IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks (SECON)

Directional antennas offer many potential advantages for wireless networks such as increased network capacity, extended transmission range and reduced energy consumption. Exploiting these advantages, however, requires new protocols and mechanisms at various communication layers to intelligently control the directional antenna system. With directional antennas, many trivial mechanisms, such as neighbor discovery, become more challenging since communicating parties must agree on where and when to point their directional beams to enable communication. In this paper, we propose a fully directional neighbor discovery protocol called Sectored-Antenna Neighbor Discovery (SAND) protocol. SAND is designed for sectored-antennas, a low-cost and simple realization of directional antennas, that utilize multiple limited beamwidth antennas. Unlike many proposed directional neighbor discovery protocols, SAND depends neither on omnidirectional antennas nor on time synchronization. In addition, SAND performs neighbor discovery in a serialized fashion allowing individual nodes to discover all potential neighbors within a predetermined time. Moreover, SAND guarantees the discovery of the best sector combination at both ends of a link, resulting in more robust and higher quality links between nodes. Finally, SAND gathers the neighborhood information in a centralized location, if needed, to be used by centralized networking protocols. The effectiveness of SAND has been assessed via simulation studies and real hardware implementation.

2025, International Journal of Communication Networks and Information Security

The Internet of Things (IoT) comprises things interconnected through the Internet with unique identities. Congestion management is one of the most challenging tasks in networks. The Constrained Application Protocol (CoAP) is a low-footprint protocol designed for IoT networks and has been defined by the IETF. In IoT networks, CoAP nodes have limited network and battery resources. The CoAP standard has an exponential backoff congestion control mechanism. This backoff mechanism may not be adequate for all IoT applications, since the characteristics of each IoT application are different. Further, events such as unnecessary retransmissions and packet collisions caused by lossy links and packet transmission errors may lead to network congestion. Various congestion handling algorithms for CoAP have been defined to enrich the performance of IoT applications. Our paper presents a comprehensive survey on the evolution of the congestion control mechanisms used in IoT networks. We classify the protocols into RTO-based, queue-monitoring, and rate-based approaches. We review congestion avoidance protocols for CoAP networks and discuss directions for future work.
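For reference, the default CoAP retransmission backoff that the surveyed RTO-based proposals modify looks roughly as follows (defaults per the CoAP base specification, RFC 7252: ACK_TIMEOUT = 2 s, ACK_RANDOM_FACTOR = 1.5, MAX_RETRANSMIT = 4); the sketch only reproduces the timer schedule, not a full CoAP stack.

import random

# Default CoAP binary exponential backoff for confirmable messages (RFC 7252 values).
ACK_TIMEOUT = 2.0
ACK_RANDOM_FACTOR = 1.5
MAX_RETRANSMIT = 4

def retransmission_intervals():
    """Waits between successive (re)transmissions of one confirmable message."""
    timeout = random.uniform(ACK_TIMEOUT, ACK_TIMEOUT * ACK_RANDOM_FACTOR)
    intervals = []
    for _ in range(MAX_RETRANSMIT):     # up to 4 retransmissions after the first send
        intervals.append(timeout)
        timeout *= 2                    # binary exponential backoff
    return intervals

print([round(t, 2) for t in retransmission_intervals()])   # e.g. ~[2.5, 5.0, 10.0, 20.0]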

2025, International Journal of Network Management

In this paper we study the scalability issue in the design of a centralized policy server controlling resources in the future IP-based telecom network generation. The policy servers are in charge of controlling and managing QoS, security and mobility in a centralized way in future IP-based telecom networks. Our study demonstrates that the policy servers can be designed in such a manner that they scale with increase in network capacity.

2025, IEEE INFOCOM 2008 - The 27th Conference on Computer Communications

In this paper we study the issue of topology control under the physical Signal-to-Interference-Noise-Ratio (SINR) model, with the objective of maximizing network capacity. We show that existing graph-model-based topology control captures interference inadequately under the physical SINR model, and as a result, the interference in the topology thus induced is high and the network capacity attained is low. Towards bridging this gap, we propose a centralized approach, called Spatial Reuse Maximizer (MaxSR), that combines a power control algorithm T4P with a topology control algorithm P4T. T4P optimizes the assignment of transmit power given a fixed topology, where by optimality we mean that the transmit power is so assigned that it minimizes the average interference degree (defined as the number of interfering nodes that may interfere with the ongoing transmission on a link) in the topology. P4T, on the other hand, constructs, based on the power assignment made in T4P, a new topology by deriving a spanning tree that gives the minimal interference degree. By alternately invoking the two algorithms, the power assignment quickly converges to an operational point that maximizes the network capacity. We formally prove the convergence of MaxSR. We also show via simulation that the topology induced by MaxSR outperforms that derived from existing topology control algorithms by 50%-110% in terms of maximizing the network capacity.

2025, IEEE WPMC

This paper presents a design formulation and evaluation of a wireless co-OFDMA probabilistic algorithm aimed at optimizing resource utilization in overlapping Wi-Fi networks. The design formulation is grounded in realistic industrial application requirements, and the evaluation is conducted using the network simulator v3.0 with its DetNetWiFi module, which is capable of modeling deterministic wireless networks. The evaluation focuses on latency, jitter, and packet loss. Our findings demonstrate that co-OFDMA probabilistic approaches offer significant benefits in terms of latency in deterministic wireless environments. However, they may also introduce increased jitter compared to a static primary channel sharing scheme, an aspect which may not be suitable for industrial wireless environments.

2025, Bài Toán Thông Minh (Nhiều Tác Giả) thuviensach

A document collecting interesting logic problems for readers.

2025, IEEE Communications Surveys & Tutorials

The Transmission Control Protocol (TCP) carries most Internet traffic, so performance of the Internet depends to a great extent on how well TCP works. Performance characteristics of a particular version of TCP are defined by the congestion control algorithm it employs. This paper presents a survey of various congestion control proposals that preserve the original host-to-host idea of TCP, namely that neither sender nor receiver relies on any explicit notification from the network. The proposed solutions focus on a variety of problems, starting with the basic problem of eliminating the phenomenon of congestion collapse, and also include the problems of effectively using the available network resources in different types of environments (wired, wireless, high-speed, long-delay, etc.). In a shared, highly distributed, and heterogeneous environment such as the Internet, effective network use depends not only on how well a single TCP-based application can utilize the network capacity, but also on how well it cooperates with other applications transmitting data through the same network. Our survey shows that over the last 20 years many host-to-host techniques have been developed that address several problems with different levels of reliability and precision. There have been enhancements allowing senders to detect fast packet losses and route changes. Other techniques have the ability to estimate the loss rate, the bottleneck buffer size, and level of congestion. The survey describes each congestion control alternative, its strengths and its weaknesses. Additionally, techniques that are in common use or available for testing are described.
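The common core of most of the surveyed host-to-host proposals is an AIMD window update driven purely by end-point observations; a simplified Reno-style trace generator is sketched below (fast recovery, timeout handling and pacing details are deliberately omitted, so this is an illustration rather than any particular TCP variant).

# Simplified Reno-style congestion window evolution over RTT rounds.
def aimd_trace(loss_events, cwnd=1.0, ssthresh=64.0, rounds=40):
    """loss_events is a set of round indices at which a loss is detected."""
    trace = []
    for rtt in range(rounds):
        if rtt in loss_events:            # multiplicative decrease on loss
            ssthresh = max(cwnd / 2.0, 2.0)
            cwnd = ssthresh
        elif cwnd < ssthresh:             # slow start: exponential growth
            cwnd *= 2.0
        else:                             # congestion avoidance: additive increase
            cwnd += 1.0
        trace.append(cwnd)
    return trace

print(aimd_trace(loss_events={10, 25}))   # the familiar sawtooth shape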

2025

Although IEEE 802.11 provides several transmission rates, suitable rate adaptation that takes into account the relative fairness among all competing stations according to the underlying channel quality remains a challenge in Mobile Ad hoc Networks (MANETs). The absence of any fixed infrastructure and any centralized control makes existing solutions for WLANs, such as CARA (collision-aware rate adaptation), inappropriate for MANETs. In this paper, we propose a new analytical model with a suitable approach to ensure relative fairness among all competing nodes of a particular channel. Our model deals with the channel quality as perceived by the nodes, based on transmission successes and failures in a mobility context. Finally, each node calculates its own probability of accessing the channel in a distributed manner. We evaluate the performance of our scheme against others in the context of MANETs via extensive and detailed simulations. The performance differentials are analyse...

2025, IEEE Journal on Selected Areas in Communications

We consider the problem of determining the maximum capacity of the media access (MAC) layer in wireless ad hoc networks. Due to spatial contention for the shared wireless medium, not all nodes can concurrently transmit packets to each other in these networks. The maximum number of possible concurrent transmissions is, therefore, an estimate of the maximum network capacity, and depends on the MAC protocol being used. We show that for a large class of MAC protocols based on virtual carrier sensing using RTS/CTS messages, which includes the popular IEEE 802.11 standard, this problem may be modeled as a maximum Distance-2 matching (D2EMIS) in the underlying wireless network: given a graph G = (V, E), find a maximum set of edges M ⊆ E such that no two edges in M are connected by another edge in E. D2EMIS is NP-complete. Our primary goal is to show that it can be approximated efficiently in networks that arise in practice. We do this by focusing on an admittedly simplistic, yet natural, graph-theoretic model for ad hoc wireless networks based on disk graphs, where a node can reach all other nodes within some distance (nodes may have unequal reach distances). We show that our approximation yields good capacity bounds. Our work is the first attempt at characterizing an important "maximum" measure of wireless network capacity, and can be used to shed light on previous topology formation protocols like Span and GAF that attempt to produce "good" or "capacity-preserving" topologies, while allowing nodes to alternate between sleep and awake states. Our work shows an efficient way to compute an upper bound on maximum wireless network capacity, thereby allowing topology formation algorithms to determine how close they are to optimal. We also outline a distributed algorithm for the problem for unit disk graphs, and briefly discuss extensions of our results to: 1) different node interference models; 2) directional antennas; and 3) other transceiver connectivity structures besides disk graphs.
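A small self-contained sketch of the distance-2 matching object on a random unit-disk graph follows; the greedy rule is a simple illustrative heuristic for finding such a matching, not the approximation algorithm analyzed in the paper.

import itertools, math, random

# Greedy distance-2 (induced) matching on a random unit-disk graph: a set of edges,
# no two of which share an endpoint or are joined by another edge of the graph.
# Node count and radius are arbitrary assumptions.
random.seed(7)

def unit_disk_graph(n=30, radius=0.25):
    pts = [(random.random(), random.random()) for _ in range(n)]
    edges = [(i, j) for i, j in itertools.combinations(range(n), 2)
             if math.dist(pts[i], pts[j]) <= radius]
    adj = {i: set() for i in range(n)}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    return adj, edges

def greedy_d2_matching(adj, edges):
    matched, blocked = [], set()           # blocked = endpoints of accepted edges
    for u, v in edges:
        closed = {u, v} | adj[u] | adj[v]
        if not (closed & blocked):         # no accepted endpoint is on or next to (u, v)
            matched.append((u, v))
            blocked |= {u, v}
    return matched

adj, edges = unit_disk_graph()
print(len(greedy_d2_matching(adj, edges)), "concurrent transmissions out of", len(edges), "links")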

2025

Reduction of CO2 emissions is a major global environmental issue. Over the past few years, wireless and mobile communications have become increasingly popular with consumers. One of the most popular kinds of wireless access is the Wireless Mesh Network (WMN), which provides wireless connectivity through a much cheaper and more flexible backhaul infrastructure than wired solutions. The Wireless Mesh Network is an emerging technology which has been adopted as the wireless internetworking solution for the near future. Due to the high energy consumption of the information and communication technology (ICT) industry, and its impact on the environment, energy efficiency has become a key factor in evaluating the performance of a communication network. This paper primarily focuses on classifying, by layer, the most significant existing approaches devoted to the conservation of energy. It also discusses the most interesting works on energy saving in WMNs.

2025, Journal of Computer and Knowledge Engineering

Since the genesis of layered networks, designing a proper MAC control protocol has been a major concern. Among the many protocols introduced so far, there is always a trade-off between utilization and load overhead. ALOHA is one of the first MAC protocols and possesses virtually no overhead, but its maximum throughput is limited. Hence, a new MAC protocol based on a multi-packet reception model, named Hybrid ALOHA, was introduced. In the original paper, the stability and throughput of this algorithm were analyzed for the two- and three-user cases. Although the stability region for more than two users had been studied, there was no general form for the throughput nor any practical examination of stability. In this paper, besides extending the throughput formula to an arbitrary number of users, the throughput of the system is checked with a simple simulation of the probabilities of success and failure. The results show that, regardless of the additional overhead for more users, the throughput remains acceptable and the system does not lose stability with a larger number of users.
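In the spirit of the "simple simulation of successes and failures" mentioned above, the following Monte-Carlo sketch estimates throughput for a generic slotted random-access channel with multi-packet reception, where a slot delivers all packets if at most K users transmit; it is an illustrative MPR model rather than Hybrid ALOHA itself, and all parameters are assumed.

import random

# Monte-Carlo throughput of a generic slotted random-access channel with
# multi-packet reception capability K (illustrative model, assumed parameters).
random.seed(3)

def mpr_throughput(n_users, tx_prob, mpr_k, slots=200_000):
    delivered = 0
    for _ in range(slots):
        transmitters = sum(random.random() < tx_prob for _ in range(n_users))
        if 1 <= transmitters <= mpr_k:     # slot succeeds for all if at most K transmit
            delivered += transmitters
    return delivered / slots               # packets delivered per slot

for k in (1, 2, 4):
    print(f"K={k}: throughput ~ {mpr_throughput(n_users=20, tx_prob=0.1, mpr_k=k):.3f}")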

2025, Lecture Notes in Computer Science

The paper studies the problem of allocating bandwidth resources of a Service Overlay Network, to optimize revenue. Clients bid for network capacity in periodically held auctions, under the condition that resources allocated in an auction are reserved for the entire duration of the connection, not subject to future contention. This makes the optimal allocation coupled over time, which we formulate as a Markov Decision Process (MDP). Studying first the single resource case, we develop a receding horizon approximation to the optimal MDP policy, using current revenue and the expected revenue in the next step to make bandwidth assignments. A second approximation is then found, suitable for generalization to the network case, where bids for different routes compete for shared resources. In that case we develop a distributed implementation of the auction, and demonstrate its performance through simulations.

2025

The NYNEX Corporation invests hundreds of millions of dollars each year to enhance the telecommunications services provided to its customers. Extensive planning and construction are required to meet the ever-increasing demand for better service and provide the latest in sophisticated equipment throughout the telephone network. Engineering groups plan changes to network facilities five years ahead, with constant adjustments for changes to forecasted service demand, changes in the economy, changes to NYNEX company policies, or the availability of new technologies. ARACHNE is an expert system that automates interoffice facilities (IOF)

2025

This paper proposes a decentralized model for the allocation of modulation and coding schemes, subchannels and transmit power to users in OFDMA femtocell deployments. The proposed model does not rely on any exchanged information between cells, which is especially useful for femtocell networks. Coordination between femtocells is achieved through the intrinsic properties of minimising transmit power independently at each cell, which leads the network to self-organize into an efficient frequency reuse pattern. This paper also provides a two-level decomposition approach for solving this intricate resource assignment problem that is able to find optimal solutions at cell level in reduced periods of time. System-level simulations show a significant performance improvement in terms of user outages and network capacity when using the proposed distributed resource allocation in comparison with scheduling techniques based on uniform power distributions among subcarriers.

2025, IEEE Transactions on Vehicular Technology

This paper investigates the hidden-node phenomenon (HN) in IEEE 802.11 wireless networks. HN occurs when nodes outside the carrier-sensing range of each other are nevertheless close enough to interfere with each other. As a result, the carrier-sensing mechanism may fail to prevent packet collisions. HN can cause many performance problems, including throughput degradation, unfair throughput distribution among flows, and throughput instability. The contributions of this paper are threefold. 1) This is a first attempt to identify a set of conditions, which we called Hidden-node-Free Design (HFD), that completely remove HN in 802.11 wireless networks. 2) We derive variations of HFD for large-scale cellular WiFi networks consisting of many wireless LAN cells. These HFDs are not only HN-free, but they also reduce exposed nodes at the same time so that the network capacity is improved. 3) We investigate the problem of frequency-channel assignment to adjacent cells. We find that with HFD, careful assignment in which adjacent cells use different frequency channels does not improve the overall network capacity (in units of bits per second per frequency channel). Indeed, given f frequency channels, a simple scheme with f overlaid cellular WiFi networks in which each cell uses all f frequencies yields near-optimal performance. Index Terms: Hidden-node problem (HN), IEEE 802.11, modeling, performance evaluation, protocol design.
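A toy geometric version of the hidden-node condition is sketched below: a transmitter is hidden if it cannot be carrier-sensed by the sender yet lies within interference range of the receiver. The ranges are arbitrary assumptions rather than the paper's HFD constants; making the carrier-sensing range sufficiently large (the idea formalized by HFD) rules such configurations out.

import math

# Toy hidden-node check on 2-D positions (ranges are illustrative assumptions).
CS_RANGE = 400.0          # carrier-sensing range (m), assumed
INTERF_RANGE = 300.0      # interference range around a receiver (m), assumed

def is_hidden(tx_a, rx_a, tx_b):
    cannot_sense = math.dist(tx_a, tx_b) > CS_RANGE      # A does not hear B
    can_corrupt = math.dist(rx_a, tx_b) <= INTERF_RANGE  # yet B can hit A's receiver
    return cannot_sense and can_corrupt

tx_a, rx_a = (0.0, 0.0), (250.0, 0.0)
print(is_hidden(tx_a, rx_a, tx_b=(500.0, 0.0)))   # True: classic hidden node
print(is_hidden(tx_a, rx_a, tx_b=(350.0, 0.0)))   # False: A senses B and defers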

2025

When an IEEE 802.11 ad-hoc network achieves capacity C by using a single channel, the targeted capacity when using two channels should be 2C. However, most of the multichannel 802.11 protocols proposed in the literature only appear to be able to achieve ...

2025

A main distinguishing feature of a wireless network compared with a wired network is its broadcast nature, in which the signal transmitted by a node may reach several other nodes, and a node may receive signals from several other nodes simultaneously. Rather than a blessing, this feature is treated more as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). The goal of this paper is to show how the concept of network coding can be applied at the physical layer to turn the broadcast property into a capacity-boosting advantage in wireless ad hoc networks. Specifically, we propose a physical-layer network coding (PNC) scheme to coordinate transmissions among nodes. In contrast to "straightforward" network coding which performs coding arithmetic on digital bit streams after they have been received, PNC makes use of the additive nature of simultaneously arriving electromagnetic (EM) waves for equivalent coding operation. PNC can yield higher capacity than straightforward network coding when applied to wireless networks. We believe this is a first paper that ventures into EM-wave-based network coding at the physical layer and demonstrates its potential for boosting network capacity. PNC opens up a whole new research area because of its implications and new design requirements for the physical, MAC, and network layers of ad hoc wireless stations. The resolution of the many outstanding but interesting issues in PNC may lead to a revolutionary new paradigm for wireless ad hoc networking.
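The following toy shows the PNC idea for BPSK in a two-way relay exchange: the relay maps the superimposed signal it hears directly to the XOR of the two source bits rather than decoding each stream separately. A noise-free channel and perfect symbol alignment are assumed purely for clarity.

import random

# PNC mapping for BPSK in a two-way relay: |s1 + s2| is ~2 when the bits are equal
# and ~0 when they differ, so the relay can read off the XOR directly.
random.seed(5)

def bpsk(bit):
    return 1.0 if bit else -1.0

def relay_map_to_xor(superimposed):
    return 0 if abs(superimposed) > 1.0 else 1

for _ in range(5):
    b1, b2 = random.randint(0, 1), random.randint(0, 1)
    heard_at_relay = bpsk(b1) + bpsk(b2)          # EM waves add over the air
    xor_bit = relay_map_to_xor(heard_at_relay)    # broadcast back to both ends
    recovered_b2_at_node1 = xor_bit ^ b1          # node 1 cancels its own bit
    print(b1, b2, xor_bit, recovered_b2_at_node1 == b2)   # always True here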

2025

Wireless Mesh Networks (WMNs) have become a popular access network architecture in the community due to their low cost and readily deployable nature. However, it is well known that multi-hop transmission in WMNs is vulnerable to bandwidth degradation, primarily due to contention and radio interference. A straightforward solution to this problem is to use mesh nodes with multiple radios and channels. In this paper, we demonstrate through real-world experiments that the use of multiple radios and channels alone cannot ...

2025

Urban traffic control systems have evolved through three generations. The first generation of such systems was based on historical traffic data. The second generation took advantage of detectors, which enabled the collection of real-time traffic data, in order to re-adjust and select traffic signalization programs. The third generation provides the ability to forecast traffic conditions, so that traffic signalization programs and strategies can be pre-computed and applied at the most appropriate time frame for the optimal control of the current traffic conditions. Nowadays, the fourth generation of traffic control systems is already under development; based among others on principles of artificial intelligence, and having capabilities of on-time information provision, traffic forecasting and incident detection, it is being developed according to principles of large-scale integrated systems engineering. Although these systems benefit greatly from developments in various information technology and computer science sectors, it is obvious that their performance is always related to that of the underlying optimization and control methods. Until recently, static traffic assignment (route choice) models were used in order to forecast future traffic flows, considering that the parameters which affect the network capacity are fixed over a given origin-destination matrix. Traffic engineering considers traffic flows as constant and tries to tune the control parameters in order to optimize certain measures of effectiveness. These two procedures, although they largely depend on each other and are jointly known as the combined traffic assignment and control problem, are usually handled separately. Recent scientific and research developments in the field of traffic assignment, with the rapid development of advantageous Dynamic Traffic Assignment models, new dynamic traffic control strategies and the evolution of ITS, tend to modify the way in which networks are modelled and their efficiency is measured. The current paper aims to present the major findings of a critical review of the existing scientific literature in the fields of dynamic traffic assignment and traffic control. Combined traffic assignment and traffic control models are discussed both in terms of the underlying mathematical formulations and in terms of algorithmic solutions, in order to better evaluate their applicability in large-scale networks. In addition, a generic and easily transferable scheme, in the form of a methodological framework for the Combined Dynamic Traffic Assignment and Urban Traffic Control problem, is presented and applied on a realistic urban network, so as to provide numerical results and to highlight the applicability of such models in cases which differ from the standard test networks of the related bibliography, which are usually of simple nature and form.

2025, 2006 IEEE International Conference on Mobile Ad Hoc and Sensor Sysetems

Optimizing spectral reuse is a major issue in large-scale IEEE 802.11 wireless networks. Power control is an effective means for doing so. Much previous work simply assumes that each transmitter should use the minimum transmit power needed to reach its receiver, and that this would maximize the network capacity by increasing spectral reuse. It turns out that this is not necessarily the case, primarily because of hidden nodes. In a network without power control, it is well known that hidden nodes give rise to unfair network bandwidth distributions and large bandwidth oscillations. Avoiding hidden nodes (by extending the carrier-sensing range), however, may cause the network to have lower overall network capacity. This paper shows that in a network with power control, reducing the instances of hidden nodes can not only prevent unfair bandwidth distributions, but also achieve higher overall network capacity compared with the minimum-transmit-power approach. We propose and investigate two distributed adaptive power control algorithms that minimize mutual interferences among links while avoiding hidden nodes. In general, our power control algorithms can boost the capacity of ordinary non-power-controlled 802.11 networks by more than two times while eliminating hidden nodes.

2025, annals of telecommunications - annales des télécommunications

This paper presents a fair and efficient rate control mechanism, referred to as congestion-aware fair rate control (CFRC), for IEEE 802.11s-based wireless mesh networks. Existing mechanisms usually concentrate on achieving fairness and achieve a poor throughput. This mainly happens due to the synchronous rate reduction of neighboring links or nodes of a congested node without considering whether they actually share the same bottleneck or not. Furthermore, the achievable throughput depends on the network load, and an efficient fair rate is achievable when the network load is balanced. Therefore, existing mechanisms usually achieve a fair rate determined by the mostly loaded network region. CFRC uses an AIMD-based rate control mechanism which enforces a rate-bound to the links that use the same bottleneck. To achieve the maximum

2025

Human players and automated players (bots) interact in real time in a congested network. A player's revenue is proportional to the number of successful "downloads" and his cost is proportional to his total waiting time. Congestion arises because waiting time is an increasing random function of the number of uncompleted download attempts by all players. Surprisingly, some human players earn considerably higher profits than bots. Bots are better able to exploit periods of excess capacity, but they create endogenous trends in congestion that human players are better able to exploit. Nash equilibrium does a good job of predicting the impact of network capacity and noise amplitude. Overall efficiency is quite low, however, and players overdissipate potential rents, i.e., earn lower profits than in Nash equilibrium.

2025, Optical Switching and Networking

Keywords: Long-Reach (LR) Passive Optical Network (PON); Dynamic Bandwidth Allocation (DBA); Class of Service (CoS); Quality of Service (QoS); Service Level Agreement (SLA); delay guarantees. In this paper a novel algorithm with delay guarantees for high-priority traffic, based on a Proportional (P) controller for Long-Reach Passive Optical Networks (LR-PONs), is proposed. We have recently demonstrated that Proportional-Integral-Derivative (PID) controllers are quite effective at controlling guaranteed bandwidth levels, and in this paper this functionality is adapted to deal jointly with Class of Service (CoS) and client differentiation in order to fulfill delay requirements. This leads to efficient control of the mean packet delay, which enhances the Quality of Service (QoS) provided inside the LR-PON. Simulation results show that the bandwidth allocation process performed by the P controller achieves this objective faster than other existing proposals; in fact, it stabilizes the priority delays in less than 2 min, compared with the 5 or 6 min obtained by other proposals. Furthermore, its performance is more robust than that of other proposals, since it is independent of the initial network conditions and adapts the available resources very efficiently to comply with the established delay bounds of the most restrictive services.
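The proportional controller described here adjusts the bandwidth guaranteed to high-priority traffic so that the measured mean delay tracks a delay bound. The sketch below is a generic P-control update under that reading of the abstract; the gain, units, and limits are assumptions, not values from the paper.

```python
# Generic proportional (P) controller driving a bandwidth guarantee toward a
# delay bound (illustrative; gain and limits are assumptions, not the paper's).

def p_control_bandwidth(guaranteed_bw, measured_delay_ms, delay_bound_ms,
                        kp=0.01, min_bw=0.0, max_bw=1.0):
    """If the measured mean delay exceeds the bound, grant proportionally more
    bandwidth to the high-priority class; if delay is below the bound, release
    bandwidth for other classes. guaranteed_bw is a normalized share in [0, 1]."""
    error = measured_delay_ms - delay_bound_ms
    guaranteed_bw += kp * error          # positive error -> more bandwidth
    return max(min_bw, min(max_bw, guaranteed_bw))
```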

2025

The performance characteristics of Wi-Fi networks have traditionally been studied and analysed using analytical models and simulations. Due to the complexity of wireless communication the existing analytical Wi-Fi network models rely on... more

The performance characteristics of Wi-Fi networks have traditionally been studied and analysed using analytical models and simulations. Due to the complexity of wireless communication, the existing analytical Wi-Fi network models rely on certain network constraints and simplifications in order to be mathematically tractable. We set out to evaluate the practicality of using Wi-Fi performance models to estimate network performance by collecting the parameters the models require directly from an access point. To evaluate the models, we must also collect network metrics, such as packet payload size and number of nodes, for comparison with the model parameters. We explore different avenues for collecting these parameters and metrics to find out whether it is practical to apply the models in Wi-Fi networks. After three attempts, we conclude that this is difficult due to several aspects of the Linux kernel, such as batching optimization patterns, proprietary kernel modules and firmware blobs. W...
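One of the metrics named in the abstract, the number of associated nodes, can in principle be read from a Linux-based access point with the standard `iw` tool. The sketch below simply counts "Station" entries in `iw dev <iface> station dump` output; the interface name is a placeholder, and this says nothing about the payload-size metric or the kernel-level obstacles the paper reports.

```python
# Count associated stations on a Linux-based AP by parsing `iw` output.
# Illustrative only; "wlan0" is a placeholder interface name.
import subprocess

def count_associated_stations(iface="wlan0"):
    out = subprocess.run(["iw", "dev", iface, "station", "dump"],
                         capture_output=True, text=True, check=True).stdout
    # Each associated client begins a block with a line like
    # "Station aa:bb:cc:dd:ee:ff (on wlan0)".
    return sum(1 for line in out.splitlines() if line.startswith("Station "))

if __name__ == "__main__":
    print("associated stations:", count_associated_stations())
```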

2025, Operations Research

In this paper we describe an efficient algorithm for solving novel optimization models arising in the context of multiperiod capacity expansion of optical networks. We assume that the network operator must make investment decisions over a... more

In this paper we describe an efficient algorithm for solving novel optimization models arising in the context of multiperiod capacity expansion of optical networks. We assume that the network operator must make investment decisions over a multiperiod planning horizon while facing rapid changes in transmission technology, as evidenced by a steadily decreasing per-unit cost of capacity. We deviate from traditional and monopolistic models in which demands are given as input parameters, and the objective is to minimize capacity deployment costs. Instead, we assume that the carrier sets end-to-end prices of bandwidth at each period of the planning horizon. These prices determine the demands that are to be met, using a plausible and explicit price-demand relationship; the resulting demands must then be routed, requiring an investment in capacity. The objective of the optimization is now to simultaneously select end-to-end prices of bandwidth and network capacities at each period of the pl...
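The model couples price-driven demand with capacity investment over the planning horizon; a stylized single-commodity version of that coupling, with notation chosen here for illustration rather than taken from the paper, is:

```latex
% Stylized multiperiod pricing/expansion model (illustrative notation):
% p_t    : end-to-end bandwidth price in period t
% D(p_t) : demand induced by that price (the price-demand relationship)
% x_t    : capacity installed in period t at per-unit cost c_t (decreasing in t)
\max_{p_t,\; x_t \ge 0} \; \sum_{t=1}^{T} \Big[ p_t\, D(p_t) - c_t\, x_t \Big]
\quad \text{s.t.} \quad D(p_t) \le \sum_{\tau \le t} x_\tau \quad \forall t .
```

In the paper's richer setting the induced demands must additionally be routed over the network, so the capacity constraint holds link by link rather than in aggregate.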

2025

As distributed generation (DG) becomes more widely deployed, distribution networks become more active and take on many of the same characteristics as transmission. We propose the use of nodal pricing that is often used in the pricing of... more

As distributed generation (DG) becomes more widely deployed, distribution networks become more active and take on many of the same characteristics as transmission. We propose the use of nodal pricing, which is often used to price short-term operations in transmission. As an economically efficient mechanism, nodal pricing would properly reward DG for reducing line losses through increased revenues at nodal prices, and would signal prospective DG where it ought to connect to the distribution network. Applying nodal pricing to a model distribution network, we show significant price differences between buses, reflecting high marginal losses. Moreover, we show that a DG resource located at the end of the network contributes to significant reductions in losses and line loading. We also show that the DG resource earns significantly greater revenue under nodal pricing, reflecting its contribution to reduced line losses and loading.
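The revenue effect described here follows from the textbook loss-adjusted nodal price (congestion ignored); this is a standard form, not necessarily the exact formulation used in the paper:

```latex
% Textbook loss-adjusted nodal price (illustrative, losses only):
% \lambda : marginal energy price at the reference bus
% L       : total network losses, d_i : withdrawal (demand) at bus i
\pi_i = \lambda \left( 1 + \frac{\partial L}{\partial d_i} \right).
```

A bus at the end of a lossy feeder has a large marginal loss term and hence a high nodal price, so DG injecting there is paid more, which is the revenue signal the abstract describes.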

2025, 2003 IEEE Bologna Power Tech Conference Proceedings,

The profound change in the electric industry worldwide over the last twenty years has given increasing importance to the interaction of electric market agents, whether these are competitive markets like generation and commercialization, or non... more

The profound change in the electric industry worldwide over the last twenty years has given increasing importance to the interaction of electric market agents, whether these are competitive markets, like generation and commercialization, or non-competitive transmission and distribution markets. Cooperation and coordination among agents through coalition formation, for allocating the costs of investment and of electric network operation and maintenance, emerges as an attractive solution provided that appropriate technical and economic modeling is available. The solutions obtained in such cases are efficient, fair, and equitable to the participating agents. A transmission cost allocation method is presented, based on cooperative game theory and on the use of transmission network capacity by consumer agents. It is applied to the main Chilean interconnected system, and the results obtained are compared with traditional methodologies.
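The abstract does not name the specific cooperative-game solution concept, so the sketch below uses the Shapley value only as a representative way of splitting a shared network cost among consumer agents; the characteristic function is a made-up stand-alone-cost example, not data from the Chilean system.

```python
# Shapley-value cost allocation (illustrative; the paper's actual solution
# concept and characteristic function are not stated in the abstract).
from itertools import permutations

def shapley(players, cost):
    """cost(frozenset_of_players) -> cost of serving that coalition."""
    shares = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            shares[p] += cost(with_p) - cost(coalition)  # marginal cost of p
            coalition = with_p
    return {p: s / len(orders) for p, s in shares.items()}

# Hypothetical example: three consumer agents sharing one transmission asset.
standalone = {"A": 60.0, "B": 40.0, "C": 30.0}

def link_cost(coalition):
    # Toy sub-additive cost: 80% of summed stand-alone costs, capped at 100.
    if not coalition:
        return 0.0
    return min(100.0, 0.8 * sum(standalone[p] for p in coalition))

print(shapley(["A", "B", "C"], link_cost))
```

Averaging marginal contributions over all arrival orders is what makes the resulting split symmetric and efficient, which is the "fair and equitable" property the abstract emphasizes.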