Mahmoud Elhaddad - Academia.edu

Papers by Mahmoud Elhaddad

Decoupling packet loss from blocking in proactive reservation-based switching

First International Conference on Broadband Networks

We consider the maximization of network throughput in buffer-constrained optical networks using aggregate bandwidth allocation and reservation-based transmission control. Assuming that all flows are subject to loss-based TCP congestion control, we quantify the effects of buffer capacity constraints on bandwidth utilization efficiency through contention-induced packet loss. The analysis shows that the ability of TCP flows to efficiently utilize successful reservations is highly sensitive to the available buffer capacity. Maximizing the bandwidth utilization efficiency under buffer capacity constraints thus requires decoupling packet loss from contention-induced blocking of transmission requests. We describe a confirmed (two-way) reservation scheme that eliminates contention-induced loss, so that no packets are dropped at the network's core, and loss can be incurred only at the adequately buffer-provisioned ingress routers, where it is exclusively congestion-induced. For the confirmed signaling scheme, analytical and simulation results indicate that TCP aggregates are able to efficiently utilize the successful reservations independently of buffer constraints.
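
To make the loss-versus-blocking distinction concrete, the following is a minimal sketch (hypothetical names and parameters, not the paper's actual signaling protocol): under confirmed two-way reservation, a packet whose request is blocked simply stays in the ingress buffer and is retried, so the only possible loss is ingress buffer overflow, whereas under one-way signaling the data follows the request into the core and a blocked request becomes a core drop.

```python
from collections import deque
import random

def try_reserve(blocking_prob: float) -> bool:
    """Stand-in for a core reservation attempt; blocked with some probability."""
    return random.random() >= blocking_prob

def ingress_step(queue: deque, buffer_cap: int, arrivals: int,
                 blocking_prob: float, confirmed: bool) -> dict:
    """One time slot at an ingress router (illustrative only).

    confirmed=True  -> two-way signaling: a blocked packet stays queued;
                       loss happens only when the ingress buffer overflows.
    confirmed=False -> one-way signaling: the packet is already in flight
                       when the reservation fails, so blocking becomes a core drop.
    """
    core_drops = ingress_drops = sent = 0
    for _ in range(arrivals):
        if len(queue) < buffer_cap:
            queue.append(object())
        else:
            ingress_drops += 1          # congestion-induced loss at the ingress
    if queue:
        if try_reserve(blocking_prob):
            queue.popleft()
            sent += 1
        elif not confirmed:
            queue.popleft()
            core_drops += 1             # contention-induced loss in the core
        # confirmed: the head-of-line packet simply waits for the next attempt
    return {"sent": sent, "core_drops": core_drops, "ingress_drops": ingress_drops}
```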

Traffic shaping and scheduling for OBS-based IP/WDM backbones

SPIE Proceedings, 2003

We introduce Proactive Reservation-based Switching (PRS), a switching architecture for IP/WDM networks based on Labeled Optical Burst Switching. PRS achieves packet delay and loss performance comparable to that of packet-switched networks, without requiring large buffering capacity or burst scheduling across a large number of wavelengths at the core routers. PRS combines proactive channel reservation with periodic shaping of ingress-egress traffic aggregates to hide the offset latency and approximate the utilization/buffering characteristics of discrete-time queues with periodic arrival streams. A channel scheduling algorithm imposes constraints on burst departure times to ensure efficient utilization of wavelength channels and to maintain the distance between consecutive bursts through the network. Results obtained from simulation using TCP traffic over carefully designed topologies indicate that PRS consistently achieves channel utilization above 90% with modest buffering requirements.
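
The role of periodic shaping can be pictured with a small sketch (illustrative only; the period, burst size, and staggering rule are assumptions, and PRS additionally ties burst departures to channel reservations, which is not modeled here): each ingress-egress aggregate is released as at most one fixed-size burst per period, so the core sees near-periodic arrival streams.

```python
from collections import deque, defaultdict

class PeriodicShaper:
    """Releases at most one fixed-size burst per aggregate every `period` slots.

    This only illustrates turning bursty ingress traffic into a periodic burst
    stream; it is not the PRS channel scheduler itself.
    """
    def __init__(self, period: int, burst_size: int):
        self.period = period
        self.burst_size = burst_size
        self.queues = defaultdict(deque)   # one queue per ingress-egress aggregate

    def enqueue(self, aggregate: str, packet) -> None:
        self.queues[aggregate].append(packet)

    def release(self, slot: int):
        """Called every slot; each aggregate gets one release opportunity per period."""
        bursts = []
        for i, (aggregate, q) in enumerate(sorted(self.queues.items())):
            # Stagger aggregates across the period so departures stay evenly spaced.
            if slot % self.period == i % self.period and q:
                burst = [q.popleft() for _ in range(min(self.burst_size, len(q)))]
                bursts.append((aggregate, burst))
        return bursts
```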

Analysis of a transmission scheduling algorithm for supporting bandwidth guarantees in bufferless networks

ACM SIGMETRICS Performance Evaluation Review, 2006

In a network of bufferless packet multiplexers, the user-perceived capacity of an ingress-egress tunnel (connection) may degrade quickly with increasing path length. This is due to the compounding of transmission blocking probabilities along the path of the connection, even when the links are not overloaded. In such an environment, providing users (e.g., client ISPs) with tunnels of statistically guaranteed bandwidth may limit the network's connection-carrying capacity. In this paper, we introduce and analyze a transmission-scheduling algorithm that employs randomization and traffic regulation at the ingress, and batch scheduling at the links. The algorithm ensures that a fraction of transmissions from each connection is consistently subject to small blocking probability at every link, so that these transmissions are likely to survive long paths. For this algorithm, we obtain tight bounds on the expectation and tail probability of the blocking rate of any ingress-egress connection...
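
The compounding effect described above is just the product of per-hop survival probabilities; a short calculation (assuming independent blocking at each link, purely for illustration) shows how quickly a fixed per-hop blocking probability erodes end-to-end throughput with path length.

```python
def end_to_end_success(per_hop_blocking: float, hops: int) -> float:
    """Probability that a transmission survives every hop, assuming independent
    blocking decisions at each link (a simplifying assumption for illustration)."""
    return (1.0 - per_hop_blocking) ** hops

# Even moderate per-hop blocking compounds quickly with path length:
for p in (0.01, 0.05, 0.10):
    print(p, [round(end_to_end_success(p, h), 3) for h in (1, 4, 8, 16)])
# With p = 0.10 over 16 hops, only ~18.5% of transmissions go unblocked, which
# is why keeping some transmissions at a consistently small per-hop blocking
# probability matters for long paths.
```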

On the Emulation of Finite-Buffered Output Queued Switches Using Combined Input-Output Queuing

Lecture Notes in Computer Science

Emulation of Output Queuing (OQ) switches using Combined Input-Output Queuing (CIOQ) switches has been studied extensively in the setting where the switch buffers have unlimited capacity. In this paper we study the general setting where the OQ switch and the CIOQ switch have finite buffer capacity B ≥ 1 packets at every output. We analyze the resource requirements of CIOQ policies in terms of the required fabric speedup and the additional buffer capacity needed at the CIOQ inputs: a CIOQ policy is said to be (s, b)-valid (for OQ emulation) if a CIOQ switch employing this policy can emulate an OQ switch using fabric speedup s ≥ 1, without exceeding buffer occupancy b at any input port. For the family of work-conserving scheduling algorithms, we find that whereas every greedy CIOQ policy is valid at speedup B, no CIOQ policy is valid at speedup s < ∛(B − 2) when preemption is allowed. We also find that CCF in particular is not valid at any speedup s < B. We then introduce a CIOQ policy, CEH, that is valid at speedup s ≥ √(2(B − 1)). Under CEH, the buffer occupancy at any input never exceeds 1 + ⌊((B − 1)/(s − 1)) · ((B − 1)/(s − 2))⌋. We also show that a greedy variant of the CCF policy is (2, B)-valid for the emulation of non-preemptive OQ algorithms with PIFO service disciplines.
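
For a concrete sense of the bounds, the sketch below evaluates the CEH speedup threshold and its input occupancy bound for one buffer size. The occupancy formula is reconstructed from the garbled abstract text (floor of the product of the two fractions), so treat its exact form as an assumption to be checked against the paper.

```python
import math

def ceh_speedup_threshold(B: int) -> float:
    """Speedup at which the CEH policy is stated to be valid: s >= sqrt(2(B - 1))."""
    return math.sqrt(2 * (B - 1))

def ceh_input_occupancy_bound(B: int, s: float) -> int:
    """Input buffer occupancy bound under CEH as reconstructed from the abstract:
    1 + floor((B-1)/(s-1) * (B-1)/(s-2)). The grouping of the floor and the two
    fractions is a reconstruction of garbled text, not a verified formula."""
    return 1 + math.floor((B - 1) / (s - 1) * (B - 1) / (s - 2))

B = 64
s = ceh_speedup_threshold(B)            # ~11.2 for B = 64, versus speedup B = 64 for greedy policies
print(f"greedy CIOQ policies valid at speedup {B}")
print(f"CEH valid at speedup >= {s:.2f}")
print(f"CEH input occupancy bound at that speedup: {ceh_input_occupancy_bound(B, s)}")
```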

Scheduling to Minimize the Worst-Case Loss Rate

27th International Conference on Distributed Computing Systems (ICDCS '07), 2007

We study link scheduling in networks with small router buffers, with the goal of minimizing the guaranteed packet loss rate bound for each ingress-egress traffic aggregate (connection). Given a link scheduling algorithm (a service discipline and a packet drop policy), the guaranteed loss rate for a connection is the loss rate under worst-case routing and bandwidth allocations for competing traffic. Under simplifying assumptions, we show that a local min-max fairness property with respect to apportioning loss events among the connections sharing each link, and a condition on the correlation of scheduling decisions at different links, are two necessary and (together) sufficient conditions for optimality in the minimization problem. Based on these conditions, we introduce a randomized link-scheduling algorithm called Rolling Priority (RP), in which packet scheduling at each link relies exclusively on local information. We show that RP satisfies both conditions and is therefore optimal.

Supporting bandwidth guarantees in buffer-limited networks

Flows traversing a buffer-limited network can suffer rapid throughput deterioration with increasing path length. This paper introduces a distributed transmission scheduling scheme that supports strong bandwidth guarantees for flows routed over a large number of hops, without compromising network utilization. The scheme is defined within a reservation-based transmission control framework. Through randomization at the ingress routers and batch scheduling of transmission requests at the links, it ensures that a fraction of transmissions from each ingress-egress connection consistently experience low blocking probability at every hop. These requests are therefore likely to survive long paths. The scheme supports provable bounds on the expected blocking rate and its tail probability. We compare the bounds achieved under this scheme to those obtained when the blocking probability of each request at a link is equal to the link's blocking rate. We find that the proposed scheme improves the bounds for connections experiencing blocking at multiple hops without weakening the bounds for other connections.
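
One way to picture the per-link batch scheduling step is the sketch below (the names and the random tie-breaking rule are illustrative assumptions, not the paper's algorithm): requests collected during a batch window are granted up to the link's per-batch capacity, and when a batch is overfull the blocked excess is chosen at random, which, together with randomized offsets at the ingress, keeps any single connection from being blocked disproportionately.

```python
import random

def schedule_batch(requests, capacity: int):
    """Grant up to `capacity` transmission requests from one batch at a link.

    `requests` is a list of (connection_id, request_id) pairs collected during
    the batch window. When the batch is overfull, the excess requests are
    blocked; random tie-breaking spreads blocking across connections instead of
    always penalizing the same ones. Purely illustrative.
    """
    if len(requests) <= capacity:
        return list(requests), []                         # everything fits: no blocking
    shuffled = requests[:]
    random.shuffle(shuffled)
    return shuffled[:capacity], shuffled[capacity:]       # (granted, blocked)

granted, blocked = schedule_batch([("c1", 0), ("c2", 0), ("c1", 1), ("c3", 0)], capacity=3)
```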

Supporting Loss Guarantees in Buffer-Limited Networks

14th IEEE International Workshop on Quality of Service, 2006

We consider the problem of packet scheduling in a network with small router buffers. The objective is to provide a statistical bound on the worst-case packet loss rate for a traffic aggregate (connection) routed along any network path, given a maximum permissible link utilization (load). This problem is argued to be of interest in networks providing statistical loss-rate guarantees to ingress-egress connections with fixed bandwidth demands. We introduce a scheduling algorithm for networks using per-packet transmission reservation. Reservations allow loss guarantees at the aggregate level to hold for individual flows within the aggregate. The algorithm employs randomization and traffic regulation at the ingress, and batch local scheduling at the links. It ensures that a large fraction of packets from each connection are consistently subject to small loss probability at every link. These packets are therefore likely to survive long paths. To obtain the desired loss-rate bound, we analyze the performance of the algorithm under global routing and bandwidth allocation scenarios that maximize the loss rate of a connection routed along an arbitrary network path. We compare the bound to that obtained using the scheduling algorithm that combines the FCFS service discipline and the drop-tail policy. We find that the proposed algorithm significantly improves the constraints on link utilization and path length necessary to achieve strong loss-rate guarantees.
