Performance Analysis of Feedback Controlled Data Packet Transmission over High-Speed Networks

Analysis of discarding policies in high-speed networks

IEEE Journal on Selected Areas in Communications, 1998

Networked applications generate messages that are segmented into smaller, fixed- or variable-size packets before they are sent through the network. In high-speed networks, acknowledging individual packets is impractical, so when congestion builds up and packets have to be dropped, entire messages are lost. For a message to be useful, all packets comprising it must arrive successfully at the destination. The problem is therefore which packets to discard so that as many complete messages as possible are delivered and so that congestion is alleviated or avoided altogether.
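The all-or-nothing property described above can be made concrete with a toy goodput calculation (the helper name and packet-id representation are illustrative assumptions, not from the paper):

```python
from typing import List, Set

def message_goodput(messages: List[List[int]], dropped: Set[int]) -> float:
    """Fraction of messages delivered intact: a message counts only if
    every one of its packets arrives (none of its packet ids was dropped)."""
    if not messages:
        return 0.0
    intact = sum(1 for pkts in messages if not any(p in dropped for p in pkts))
    return intact / len(messages)

# Two 3-packet messages. Spreading two drops across both messages wastes
# all six packets, while concentrating the same two drops in one message
# still delivers the other intact.
msgs = [[0, 1, 2], [3, 4, 5]]
print(message_goodput(msgs, dropped={2, 5}))  # spread drops -> 0.0
print(message_goodput(msgs, dropped={1, 2}))  # concentrated drops -> 0.5
```

This is why discard policies that drop whole messages rather than random packets can raise goodput even at the same raw loss rate.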

A Dynamic Rate Control Mechanism for Source Coded Traffic in a Fast Packet Network

IEEE Journal on Selected Areas in Communications, 1991

To achieve better statistical gain for voice and video traffic and to relieve congestion in fast packet networks, a dynamic rate control mechanism is proposed. An analytical model is developed to evaluate the performance of this control mechanism for voice traffic. The feedback delay for the source node to obtain the network congestion information is represented in the model. The study indicates that significant improvement in statistical gain can be realized for smaller capacity links (e.g., links that can accommodate less than 24 voice calls) with a reasonable feedback time (about 100 ms). The tradeoff for increasing the statistical gain is temporary degradation of voice quality to a lower rate. It is shown that whether the feedback delay is exponentially distributed or constant does not significantly affect performance in terms of fractional packet loss and average received coding rate. It is also shown that using the number of calls in talkspurt or the packet queue length as measures of congestion provides comparable performance.

On the performance of early packet discard

IEEE Journal on Selected Areas in Communications, 1997

In a previous paper [3], one of the authors gave a worst-case analysis of the Early Packet Discard (EPD) technique for maintaining packet integrity during overload in ATM switches. That analysis showed that ensuring 100% goodput during overload under worst-case conditions requires a buffer with enough storage for one maximum-length packet from every active virtual circuit. This paper refines that analysis, using assumptions that are closer to what we expect to see in practice, and examines how EPD performs when the buffer is not large enough to achieve 100% goodput. We show that 100% goodput can be achieved with substantially smaller buffers than predicted by the worst-case analysis, although the required buffer space can be significant when the link speed is substantially higher than the rate of the individual virtual circuits. We also show that high goodputs can be achieved with more modest buffer sizes, but that EPD exhibits anomalies with respect to buffer capacity, in that there are situations in which increasing the amount of buffering can cause the goodput to decrease. These results are validated by comparison with simulation.
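The EPD mechanism analyzed above can be sketched as a per-cell admission rule (an illustrative toy under assumed names and arguments, not the authors' analysis):

```python
def epd_admit(buffer_occupancy: int, threshold: int, buffer_size: int,
              at_packet_start: bool, packet_accepted_so_far: bool) -> bool:
    """Early Packet Discard sketch: decide whether to admit one ATM cell.
    - At the first cell of a packet: admit only if buffer occupancy is
      below the EPD threshold; otherwise the whole packet is discarded
      up front, since a partial packet is useless to the receiver.
    - Mid-packet: keep admitting cells of an already-accepted packet as
      long as the buffer physically has room.
    """
    if at_packet_start:
        return buffer_occupancy < threshold
    return packet_accepted_so_far and buffer_occupancy < buffer_size

# Below threshold: a new packet is accepted.
print(epd_admit(10, threshold=50, buffer_size=100,
                at_packet_start=True, packet_accepted_so_far=False))  # True
# Above threshold: the new packet is dropped in its entirety.
print(epd_admit(60, threshold=50, buffer_size=100,
                at_packet_start=True, packet_accepted_so_far=False))  # False
```

The gap between `threshold` and `buffer_size` is exactly the headroom the paper's analysis sizes: it must absorb the tails of all packets already accepted.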

Generalized stochastic performance models for loss-based congestion control

2010

In this paper, we propose a generalized framework for modeling the behavior of prominent congestion-control protocols. Specifically, we define a general class of loss-based congestion-control (LB-CC) mechanisms and demonstrate that many variants of TCP, including those being proposed for high-speed networks, belong to this class.

Window-based control in lossy packet networks

This paper addresses the problem of fair allocation of bandwidth resources on lossy channels and heterogeneous networks. It focuses in particular on the ability of window-based congestion control to support non-congestion-related losses. We investigate methods for efficient packet loss recovery by retransmission and build on explicit congestion control mechanisms to decouple packet loss detection from the congestion feedback signals. For different retransmission strategies that respectively rely on conventional cumulative acknowledgments or accurate loss monitoring, we show how the principles underlying the TCP retransmission mechanisms have to be adapted in order to take advantage of an explicit congestion control framework. A novel retransmission timer is proposed to deal with multiple losses of data segments and, in consequence, to allow for aggressive reset of the connection recovery timer. It ensures significant benefit from temporary inflation of the send-out window, and hence a fair share of bottleneck bandwidth between loss-prone and lossy connections. Extensive simulations demonstrate the performance of the new loss monitoring and recovery strategies when used with two distinct explicit congestion control mechanisms. The first one proposes a simple modification of TCP to support explicit congestion control, based on a coarse binary congestion notification from the routers. The second one, introduced in [1], relies on accurate feedback about congestion to compute fine congestion window adjustments. For both congestion control mechanisms, we observe that retransmissions triggered by precise monitoring of losses allow for efficient utilization of lossy links and provide a fair share of the bottleneck bandwidth between heterogeneous connections, even for high loss ratios and bursty loss processes.
Explicit congestion control, combined with appropriate error control strategies, can therefore provide a valid solution for reliable and controlled connections over lossy network infrastructures. In addition, our simulations also reveal that triggering retransmissions based on cumulative acknowledgments is only efficient, in terms of bottleneck utilization and fairness, at high loss rates when used in conjunction with an accurate and finely tuned congestion control. Therefore, we finally recommend the implementation of accurate feedback mechanisms either in the routers (about the level of congestion) or at the receivers (about packet arrivals), in order to provide a fair bandwidth allocation in hybrid networks with explicit window-based control.

Improved analysis of early packet discard

Teletraffic Science and Engineering, 1997

In a previous paper, one of the authors gave a worst-case analysis of the Early Packet Discard technique for maintaining packet integrity during overload in ATM switches. That analysis showed that ensuring 100% goodput during overload under worst-case conditions requires a buffer with enough storage for one maximum-length packet from every active virtual circuit. This paper refines that analysis, using assumptions that are closer to what we expect to see in practice. Our principal result is that 100% goodput can be achieved with substantially smaller buffers, although the required buffer space can be significant when the link speed is substantially higher than the rate of the individual virtual circuits. These results are validated by comparison with simulation. We also give a simple analysis to determine the amount of buffering needed to bound the probability of buffer overflow and underflow.

Explicit Window-Based Control in Lossy Packet Networks

2006

This paper addresses the problem of fair allocation of bandwidth resources on lossy channels and heterogeneous networks. It focuses in particular on the ability of window-based congestion control to support non-congestion-related losses. We investigate methods for efficient packet loss recovery by retransmission, and build on explicit congestion control mechanisms to decouple packet loss detection from the congestion feedback signals. For different retransmission strategies that respectively rely on conventional cumulative acknowledgments or accurate loss monitoring, we show how the principles underlying the TCP retransmission mechanisms have to be adapted in order to take advantage of an explicit congestion control framework. A novel retransmission timer is proposed to deal with multiple losses of data segments and, in consequence, to allow for aggressive reset of the connection recovery timer. It ensures significant benefit from temporary inflation of the send-out wind...

Remarks Regarding Queuing Model And Packet Loss Probability For The Traffic With Self-Similar Characteristics

2008

Network management techniques have long been of interest to the networking research community. Queue size plays a critical role in network performance: an adequate queue size maintains Quality of Service (QoS) requirements within limited network capacity for as many users as possible. Appropriate estimation of the queuing-model parameters is crucial both for the initial size estimate and during the process of resource allocation, and an accurate resource-allocation model for the management system increases network utilization. The present paper demonstrates the results of empirical observation of memory allocation for packet-based services.

TCP performance over end-to-end rate control and stochastic available capacity

IEEE/ACM Transactions on Networking, 2001

Motivated by TCP over end-to-end ABR, we study the performance of adaptive window congestion control, when it operates over an explicit feedback rate-control mechanism, in a situation in which the bandwidth available to the elastic traffic is stochastically time varying. It is assumed that the sender and receiver of the adaptive window protocol are colocated with the rate-control endpoints. The objective of the study is to understand if the interaction of the rate-control loop and the window-control loop is beneficial for end-to-end throughput, and how the parameters of the problem (propagation delay, bottleneck buffers, and rate of variation of the available bottleneck bandwidth) affect the performance. The available bottleneck bandwidth is modeled as a two-state Markov chain. We develop an analysis that explicitly models the bottleneck buffers, the delayed explicit rate feedback, and TCP's adaptive window mechanism. The analysis, however, applies only when the variations in the available bandwidth occur over periods larger than the round-trip delay. For fast variations of the bottleneck bandwidth, we provide results from a simulation on a TCP testbed that uses Linux TCP code, and a simulation/emulation of the network model inside the Linux kernel. We find that, over end-to-end ABR, the performance of TCP improves significantly if the network bottleneck bandwidth variations are slow as compared to the round-trip propagation delay. Further, we find that TCP over ABR is relatively insensitive to bottleneck buffer size. These results are for a short-term average link capacity feedback at the ABR level (INSTCAP). We use the testbed to study EFFCAP feedback, which is motivated by the notion of the effective capacity of the bottleneck link. We find that EFFCAP feedback is adaptive to the rate of bandwidth variations at the bottleneck link, and thus yields good performance (as compared to INSTCAP) over a wide range of the rate of bottleneck bandwidth variation. 
Finally, we study whether TCP over ABR, with EFFCAP feedback, provides throughput fairness even when the connections have different round-trip propagation delays.
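The paper's two-state Markov model of the available bottleneck bandwidth can be sketched as a simple trace generator (parameter names and values here are illustrative assumptions, not the paper's):

```python
import random

def bandwidth_trace(steps: int, high: float, low: float,
                    p_stay_high: float, p_stay_low: float,
                    seed: int = 0) -> list:
    """Sample available bottleneck bandwidth from a two-state Markov chain:
    the link alternates between a high-rate and a low-rate state, with
    geometrically distributed holding times in each state."""
    rng = random.Random(seed)
    state_high = True
    trace = []
    for _ in range(steps):
        trace.append(high if state_high else low)
        stay = p_stay_high if state_high else p_stay_low
        if rng.random() >= stay:
            state_high = not state_high
    return trace

# 1000 samples of a link that mostly offers 10 Mb/s but dips to 2 Mb/s.
trace = bandwidth_trace(1000, high=10.0, low=2.0,
                        p_stay_high=0.95, p_stay_low=0.9)
```

Whether the holding times of this chain are long or short relative to the round-trip delay is exactly the regime distinction the paper's analysis versus testbed results hinge on.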

Progression Towards Controlling of Packet Loss in Networks

Ijitr, 2014

Computer networks are designed to carry a certain amount of traffic at an acceptable level of network performance. Packets undergo long queuing delays at congested nodes, and suffer packet loss if buffers overflow. Traffic management refers to the set of traffic controls within the network that regulate traffic flows in order to keep the network usable during conditions of congestion. Congestion control is the keystone of packet-switched networks: it should prevent congestion collapse, provide fairness among competing flows, and optimize the transport performance indexes. To improve fairness in high-speed networks, Core-Stateless Fair Queuing establishes an open-loop control system at the network layer, which stamps the flow arrival rate into the packet header at edge routers and drops packets at core routers on the basis of the rate label when congestion occurs. To address the oscillation problem, Stable Token-Limited Congestion Control was introduced, under which almost no packets are lost at the congested link.
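The Core-Stateless Fair Queuing drop rule mentioned above can be sketched in a few lines: a core router compares the edge-labeled arrival rate of a flow against its current fair-share estimate and drops probabilistically, with no per-flow state (a minimal sketch of the classic CSFQ rule; the function name is an assumption):

```python
def csfq_drop_prob(labeled_rate: float, fair_rate: float) -> float:
    """Core-Stateless Fair Queueing drop decision at a core router:
    packets of a flow whose edge-labeled arrival rate exceeds the current
    fair-share estimate are dropped with probability 1 - fair/labeled,
    which throttles every flow to roughly the fair rate."""
    if labeled_rate <= fair_rate:
        return 0.0
    return 1.0 - fair_rate / labeled_rate

# A flow sending at twice its fair share loses half its packets on average,
# so its surviving rate converges to the fair rate.
print(csfq_drop_prob(2.0, 1.0))  # -> 0.5
print(csfq_drop_prob(0.5, 1.0))  # within fair share -> 0.0
```

All per-flow accounting happens once, at the edge; the core needs only the label in the header and the fair-rate estimate.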

TCP-Illinois: A loss- and delay-based congestion control algorithm for high-speed networks

Performance Evaluation, 2008

We introduce a new congestion control algorithm, called TCP-Illinois, which has many desirable properties for implementation in (very) high-speed networks. TCP-Illinois is a sender side protocol, which modifies the AIMD algorithm of the standard TCP (Reno, NewReno or SACK) by adjusting the increment/decrement amounts based on delay information. By using both loss and delay as congestion signals, TCP-Illinois achieves a better throughput than the standard TCP for high-speed networks. To study its fairness and stability properties, we extend recently developed stochastic matrix models of TCP to accommodate window size backoff probabilities that are proportional to arrival rates when the network is congested. Using this model, TCP-Illinois is shown to allocate the network resource fairly as in the standard TCP. In addition, TCP-Illinois is shown to be compatible with the standard TCP when implemented in today's networks, and is shown to provide the right incentive for transition to the new protocol. We finally perform ns-2 simulations to validate its properties and demonstrate its performance.
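The delay-adjusted AIMD idea can be sketched schematically: the additive increase shrinks and the multiplicative decrease grows as measured queueing delay rises. The linear shapes and constants below are illustrative assumptions only; TCP-Illinois uses specific piecewise curves given in the paper:

```python
def illinois_update(cwnd: float, avg_queue_delay: float, max_delay: float,
                    loss: bool) -> float:
    """Schematic TCP-Illinois-style window update. High queueing delay
    signals proximity to congestion, so the increase per RTT is small and
    a loss triggers a large backoff; at low delay the reverse holds."""
    d = min(max(avg_queue_delay / max_delay, 0.0), 1.0)  # normalized delay
    alpha = 10.0 * (1.0 - d) + 0.1        # additive increase per RTT
    beta = 0.125 + (0.5 - 0.125) * d      # backoff factor between 1/8 and 1/2
    if loss:
        return cwnd * (1.0 - beta)
    return cwnd + alpha / cwnd            # spread over the ACKs of one RTT

# A loss at high delay costs half the window; at low delay, only one eighth.
print(illinois_update(10.0, 1.0, 1.0, loss=True))   # -> 5.0
print(illinois_update(10.0, 0.0, 1.0, loss=True))   # -> 8.75
```

Using delay to modulate the amounts, while keeping loss as the backoff trigger, is what lets the protocol stay compatible with standard TCP.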

Congestion Control in TCP/IP Routers Based on Sampled-Data Systems Theory

Journal of Control, Automation and Electrical Systems, 2020

A methodology for designing congestion controllers, based on active queue management (AQM), is presented here. The congestion control law is derived using sampled-data H∞ systems theory. More precisely, a sampled-data state feedback that guarantees the stability of the closed-loop system and satisfies an H∞ disturbance-attenuation level is derived, based on sufficient conditions expressed in terms of linear matrix inequalities. The effectiveness of the developed technique is validated on two examples.

Keywords: AQM; network congestion control; sampled-data controller; saturating input.

From the introduction: network congestion in communication networks is being paid significant attention because of its wide range of applications, from Web servers to industrial control systems. In order to manage network congestion, active queue management (AQM) techniques (Braden et al. 1998) are frequently used, which aim to reduce packet drops and improve overall network utilization. Among the most well-known AQM policies are random early detection (RED) (Floyd and Jacobson 1993), adaptive RED (ARED) (La et al. 2003) and nonlinear RED (NLRED) (Rastogi and Zaheer 2016). Moreover, the fundamentals of control theory have been used to derive new AQM schemes, such as proportional integral (PI) (Hollot et al. 2002), proportional integral derivative (PID) (Zhang and Papachristodoulou 2014), adaptive PI (Zhang et al. 2003) and PI enhanced (PIE) (Pan et al. 2013). Also, Bender (2013) proposed the use of a dynamic anti-windup gain matrix to improve the performance of a controller used for AQM in congested TCP/IP routers, an approach further developed in El Fezazi et al.
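Of the AQM policies named above, RED is the simplest to state: the marking probability grows linearly with the averaged queue length between two thresholds. A minimal sketch of that rule (parameter names follow common usage, not this paper):

```python
def red_drop_prob(avg_queue: float, min_th: float, max_th: float,
                  max_p: float) -> float:
    """Random Early Detection (Floyd & Jacobson 1993) marking probability:
    zero below min_th, rising linearly to max_p at max_th, and 1 beyond.
    Dropping early and probabilistically signals TCP senders to back off
    before the buffer actually overflows."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

# Thresholds of 15 and 45 packets, ceiling probability 0.1:
print(red_drop_prob(5, 15, 45, 0.1))   # below min_th -> 0.0
print(red_drop_prob(30, 15, 45, 0.1))  # halfway -> 0.05
print(red_drop_prob(50, 15, 45, 0.1))  # past max_th -> 1.0
```

Control-theoretic AQM designs like the sampled-data H∞ controller of this paper replace such hand-tuned curves with a feedback law carrying explicit stability guarantees.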

Performance analysis and stochastic stability of congestion control protocols

INFOCOM 2005. 24th …, 2005

We study an adaptive window protocol (AWP) with a general increase and decrease profile in the presence of window dependent random losses. We derive a steady-state Kolmogorov equation and obtain its solution in analytic form. We obtain some stochastic ordering relations for a protocol with different bounds on window. A closed form necessary and sufficient stability condition using the stochastic ordering for the window process is established. Finally, we apply the general results to particular TCP versions ...

Chapter 9 Modeling of Packet Streaming Services in Information Communication Networks

2018

In contemporary usage, the term video streaming denotes the compression techniques and data buffering that allow video to be transmitted in real time over the network. There is currently rapid growth and development of technologies using wireless broadband as a transport, which is a serious alternative to cellular communication systems. An adverse effect of the aggressive environment in wireless network transmission is that data packets undergo serious distortions and often get lost in transit. Existing research in this area investigates the known types of errors separately; at present there are no standard approaches to determining the effect of errors on the transmission quality of services. Besides, the surge in popularity of multimedia applications has led to the need to optimize bandwidth allocation and usage in telecommunication networks. Modern telecommunication networks should by their definition be able to maintain the quality of different ...

Analysis of packet loss processes in high-speed networks

1993

For the same long-term loss ratio, different loss patterns lead to different application-level Quality of Service (QoS) as perceived by the users (short-term QoS). While basic packet loss measures like the mean loss rate are widely used in the literature, much less work has been devoted to capturing a more detailed characterization of the loss process. In this paper, we provide means for a comprehensive characterization of loss processes by employing a model that captures loss burstiness and distances between loss bursts. Model parameters can be approximated based on run-lengths of received and lost packets. We show how the model serves as a framework in which packet loss metrics existing in the literature can be described as model parameters and thus integrated into the loss process characterization. Variations of the model with different complexity are introduced, including the well-known Gilbert model as a special case. Finally, we show how our loss characterization can be used by applying it to actual Internet loss traces.
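The Gilbert model mentioned as a special case can be sketched in a few lines: a two-state chain in which the Bad state loses packets and state holding times create bursty loss patterns that a mean loss rate alone cannot express (an illustrative sketch; names and parameters are assumptions):

```python
import random

def gilbert_loss_trace(n: int, p_good_to_bad: float, p_bad_to_good: float,
                       seed: int = 0) -> list:
    """Two-state Gilbert loss model: in the Good state packets arrive,
    in the Bad state they are lost. Returns 1 for a lost packet and 0 for
    a received one; small transition probabilities yield long bursts."""
    rng = random.Random(seed)
    bad = False
    trace = []
    for _ in range(n):
        trace.append(1 if bad else 0)
        if bad:
            if rng.random() < p_bad_to_good:
                bad = False
        else:
            if rng.random() < p_good_to_bad:
                bad = True
    return trace

# Same long-term loss ratio can hide very different burst structure:
# lowering both transition probabilities lengthens the loss bursts.
trace = gilbert_loss_trace(10_000, p_good_to_bad=0.01, p_bad_to_good=0.09)
```

Fitting such a model's parameters from the run-lengths of received and lost packets is exactly the estimation step the paper describes.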

A control theoretical approach to congestion control in packet networks

2004

In this paper, we introduce a control theoretical analysis of the closed-loop congestion control problem in packet networks. The control theoretical approach is used in a proportional rate controller, where packets are admitted into the network in accordance with network buffer occupancy. A Smith Predictor is used to deal with large propagation delays, common to high-speed backbone networks.
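The proportional rate control idea can be sketched as follows: the admitted rate shrinks in proportion to how far buffer occupancy exceeds its target. This is a minimal sketch of the proportional part only; the Smith Predictor, which compensates for the feedback's propagation delay, is omitted, and all names and gains are assumptions:

```python
def proportional_rate(buffer_occupancy: float, target_occupancy: float,
                      gain: float, max_rate: float) -> float:
    """Proportional rate controller: the source admission rate is set in
    proportion to the gap between the target and the measured network
    buffer occupancy, clipped to the physically feasible range."""
    rate = gain * (target_occupancy - buffer_occupancy)
    return min(max(rate, 0.0), max_rate)

# Empty buffer: send at full rate. Overfull buffer: stop admitting.
print(proportional_rate(0.0, 100.0, gain=0.1, max_rate=5.0))    # -> 5.0
print(proportional_rate(150.0, 100.0, gain=0.1, max_rate=5.0))  # -> 0.0
```

Without delay compensation, the long feedback lag of a high-speed backbone makes such a loop oscillate, which is precisely why the paper adds a Smith Predictor.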

Congestion controlling schemes for high-speed data networks: A survey

Journal of High Speed Networks, 2019

Data networks are designed with the aim of maximizing throughput and allocating resources fairly by managing the available resources. The transport layer plays an important role in throughput and fairness with the help of congestion control algorithms (variants). This survey mainly targets congestion issues in high-speed data networks, with the goal of improving efficiency at the connection or flow level. Transmission Control Protocol (TCP) is the dominant transport layer protocol in existing networks because of its reliable service and deployment in most routers. The causes of congestion differ between wired and wireless networks and need to be handled separately: in a wireless network, packet delay, packet loss and timeout (RTO) are not necessarily caused by congestion, and this has been taken into account in our classification. To overcome the dominance of TCP, Google proposed a UDP-based solution to handle congestion control and reliable service with minimum latency and control overhead. In the literature, several methods have been proposed to classify transport layer protocols. In this survey, congestion control proposals are classified by the situation the algorithm handles, such as pure congestion, link loss, packet reordering and path optimization, and congestion control at the flow level is addressed at the end.