Taieb Znati

Papers by Taieb Znati

End-to-end pairwise key establishment using multi-path in wireless sensor network

GLOBECOM '05. IEEE Global Telecommunications Conference, 2005., 2005

RoMR: robust multicast routing in mobile wireless networks

Wireless Communications and Mobile Computing, 2010

... a relay node does not save a copy of the tree in order to reduce the demands ... Several additional path loss models based on the size of the radio cells are discussed in Reference ... When a MM receives a message from the unicast protocol that the topology has changed, it needs ...

Robust multicasting using an underlying link state unicast protocol

37th Annual Hawaii International Conference on System Sciences, 2004. Proceedings of the, 2004

... fold increase occurs in the number of packets being sent, but RoMR finds a value of k > 1, when possible, reducing the overhead incurred with ... It was enhanced to include the weights associated with the links in the network in the Hello messages as well as in the Topology ...

Increasing DHT Data Security by Scattering Data (Invited Paper)

2008 Proceedings of 17th International Conference on Computer Communications and Networks, 2008

This paper describes methods for increasing the security of data being stored in a distributed hash table (DHT), leveraging the inherent properties of the DHT to provide a secure storage substrate. The methods presented are based upon a framework referred to as "scatter, conceal, and recover" (SCAR). The standard method of securing data in a DHT is to encrypt the data using symmetric encryption before storing it in the network. SCAR provides this level of security, but also prevents any known cryptanalysis from being performed. It does this by dividing data into multiple blocks and scattering these blocks within the DHT. The security of SCAR is provided by the property that an attacker is unable to obtain and reassemble the data blocks correctly. However, if the attacker has access to the network communication, the likelihood of a successful attack is significantly increased. This paper defines how such attacks can be executed and provides methods for ensuring data security in spite of such attacks.
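
Below is a minimal sketch of the scatter-and-recover idea described in the abstract, assuming the data has already been symmetrically encrypted: the ciphertext is split into fixed-size blocks and each block is stored under a key derived from a per-object secret. The `dht_put`/`dht_get` primitives, the block size, and the key derivation are illustrative placeholders, not SCAR's actual construction.

```python
import hashlib
import os

BLOCK_SIZE = 1024  # illustrative block size, not taken from the paper


def scatter(ciphertext: bytes, secret: bytes) -> list:
    """Split already-encrypted data into blocks and store each block in the DHT
    under a key derived from a per-object secret and the block index."""
    blocks = [ciphertext[i:i + BLOCK_SIZE] for i in range(0, len(ciphertext), BLOCK_SIZE)]
    keys = []
    for index, block in enumerate(blocks):
        # Hypothetical key derivation; without the secret an attacker cannot
        # locate the blocks or learn their order.
        key = hashlib.sha256(secret + index.to_bytes(4, "big")).hexdigest()
        dht_put(key, block)
        keys.append(key)
    return keys


def recover(keys: list) -> bytes:
    """Fetch the blocks in order and reassemble the ciphertext."""
    return b"".join(dht_get(k) for k in keys)


# Toy in-memory stand-in for the DHT so the sketch runs on its own.
_STORE = {}
def dht_put(key, value): _STORE[key] = value
def dht_get(key): return _STORE[key]


if __name__ == "__main__":
    secret = os.urandom(32)
    ciphertext = os.urandom(4000)  # stands in for symmetrically encrypted data
    assert recover(scatter(ciphertext, secret)) == ciphertext
```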

Congestion Control using Efficient Explicit Feedback

IEEE INFOCOM 2009 - The 28th Conference on Computer Communications, 2009

This paper proposes a framework for congestion control, called Binary Marking Congestion Control (BMCC), for high bandwidth-delay product networks. The basic components of BMCC are (i) a packet marking scheme for obtaining high-resolution congestion estimates using the existing bits available in the IP header for Explicit Congestion Notification (ECN), and (ii) a set of load-dependent control laws that use these congestion estimates to achieve efficient and fair bandwidth allocations on high bandwidth-delay product networks, while maintaining a low persistent queue length and a negligible packet loss rate. We present analytical models that predict and provide insights into the convergence properties of the protocol. Using extensive packet-level simulations, we assess the efficacy of BMCC and compare it with several proposed schemes. BMCC outperforms VCP, MLCP, XCP, SACK+RED/ECN and, in some cases, RCP in terms of average flow completion times for typical Internet flow sizes.
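
The following sketch illustrates the flavor of the approach, under stated assumptions: the fraction of ECN-marked packets seen over a window is treated as a congestion estimate, and a toy load-dependent rule adjusts the congestion window. The window size, thresholds, and update rule are hypothetical and are not BMCC's actual encoding or control laws.

```python
from collections import deque


class MarkFractionEstimator:
    """Illustrative estimator: fraction of ECN-marked packets in a sliding window."""

    def __init__(self, window=100):
        self.marks = deque(maxlen=window)

    def observe(self, ecn_marked):
        self.marks.append(1 if ecn_marked else 0)

    def estimate(self):
        return sum(self.marks) / len(self.marks) if self.marks else 0.0


def adjust_cwnd(cwnd, load):
    """Toy load-dependent control law: aggressive increase when underloaded,
    additive increase near capacity, multiplicative decrease when overloaded."""
    if load < 0.2:
        return cwnd * 1.25
    if load < 0.8:
        return cwnd + 1.0
    return cwnd * (1.0 - 0.5 * load)


if __name__ == "__main__":
    est = MarkFractionEstimator()
    for pkt in range(500):
        est.observe(ecn_marked=(pkt % 10 == 0))  # 10% marking rate, for illustration
    print("load estimate:", est.estimate(), "new cwnd:", adjust_cwnd(10.0, est.estimate()))
```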

Simulation Study of Firewalls to Aid Improved Performance

39th Annual Simulation Symposium (ANSS'06), 2006

The overall performance of a firewall is crucial in enforcing and administering security, especially when the network is under attack. The continuous growth of the Internet, coupled with the increasing sophistication of attacks, is placing stringent demands on firewall performance. Under such circumstances it becomes vital to understand the fundamentals of firewalls and their operation. In this paper, we describe a simulation framework for the study and analysis of firewalls. Based on this framework, we design methodologies to inspect and analyze both multi-dimensional firewall rules and traffic log information. The data used for this study was collected over a large set of firewall rules and traffic logs at tens of enterprise networks managed by a Tier-1 service provider. The analysis presented in the paper firmly establishes the importance of considering traffic information in the process of firewall optimization. To the best of our knowledge, ours is the first attempt to show the relevance of considering traffic characteristics to aid firewall optimization.

Traffic-Aware Firewall Optimization Strategies

2006 IEEE International Conference on Communications, 2006

... Based on this framework, we design a set of tools that inspect and analyze both multi-dimensional firewall rules and traffic logs and construct the optimal equivalent firewall rules based on the observed traffic characteristics. ...
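
A rough illustration of the traffic-aware idea, under simplifying assumptions: when rules do not overlap, reordering them by observed hit counts preserves first-match semantics while reducing the average number of rules inspected per packet. The single-field `Rule` structure below is a placeholder; the paper's tools handle multi-dimensional, potentially overlapping rules.

```python
from collections import Counter
from typing import NamedTuple


class Rule(NamedTuple):
    name: str
    src: str      # single-field match for illustration; real rules are multi-dimensional
    action: str


def reorder_by_hits(rules, traffic):
    """Count first-match hits per rule over a traffic log, then move hot rules up.
    Semantics are preserved only because these example rules do not overlap."""
    hits = Counter()
    for src in traffic:
        for rule in rules:                 # first-match evaluation
            if rule.src == src:
                hits[rule.name] += 1
                break
    return sorted(rules, key=lambda r: hits[r.name], reverse=True)


if __name__ == "__main__":
    rules = [Rule("r1", "10.0.0.1", "deny"),
             Rule("r2", "10.0.0.2", "allow"),
             Rule("r3", "10.0.0.3", "allow")]
    traffic = ["10.0.0.3"] * 90 + ["10.0.0.1"] * 10
    print([r.name for r in reorder_by_hits(rules, traffic)])   # ['r3', 'r1', 'r2']
```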

The Complexity of Channel Scheduling in Multi-Radio Multi-Channel Wireless Networks

IEEE INFOCOM 2009 - The 28th Conference on Computer Communications, 2009

The complexity of channel scheduling in Multi-Radio Multi-Channel (MR-MC) wireless networks is an open research topic. The problem asks for the set of edges that can support the maximum amount of simultaneous traffic over orthogonal channels under a given interference model. Two major interference models exist for channel scheduling, one based on a physical distance constraint and one based on a hop distance constraint. The complexity of channel scheduling under these two interference models serves as the foundation for many problems related to network throughput maximization. However, channel scheduling had been proved NP-Hard only under the hop distance constraint for single-radio single-channel (SR-SC) wireless networks. In this paper, we fill the void by proving that channel scheduling is NP-Hard under both models in MR-MC wireless networks. In addition, we propose a polynomial-time approximation scheme (PTAS) framework that is applicable to channel scheduling under both interference models in MR-MC wireless networks. Furthermore, we conduct a comparison study of the two interference models and identify conditions under which they are equivalent for channel scheduling.
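
To make the scheduling problem concrete, here is a simple greedy heuristic (not the paper's PTAS, and with no approximation guarantee): edges are assigned to the first channel on which they do not conflict with an already scheduled edge, subject to a per-node radio budget. The `conflicts` predicate abstracts the interference model; the one-hop rule used in the example is an assumption.

```python
def greedy_schedule(edges, channels, radios, conflicts):
    """Greedy channel assignment for MR-MC scheduling (illustrative heuristic)."""
    schedule = {c: [] for c in channels}
    radios_used = {}
    for e in edges:
        u, v = e
        if radios_used.get(u, 0) >= radios or radios_used.get(v, 0) >= radios:
            continue  # no free radio at an endpoint
        for c in channels:
            if any(conflicts(e, other) for other in schedule[c]):
                continue  # interference on this channel
            schedule[c].append(e)
            radios_used[u] = radios_used.get(u, 0) + 1
            radios_used[v] = radios_used.get(v, 0) + 1
            break
    return schedule


if __name__ == "__main__":
    edges = [(1, 2), (2, 3), (3, 4), (4, 5)]
    # Hop-distance-style interference: edges sharing an endpoint cannot share a channel.
    one_hop = lambda e1, e2: bool(set(e1) & set(e2))
    print(greedy_schedule(edges, channels=[0, 1], radios=2, conflicts=one_hop))
```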

Coopetition Spectrum Trading in Cognitive Radio Networks

2013 IEEE 77th Vehicular Technology Conference (VTC Spring), 2013

Spectrum trading is a promising method to improve spectrum usage efficiency. Several issues must be addressed, however, to enable spectrum trading that goes beyond conservative trading of idle bands and achieves cooperation between primary users (PUs) and secondary users (SUs). In this paper, we argue that spectrum holes should be explicitly endogenous and negotiated by spectrum trading participants. To this end, we propose a Vickrey-auction-based coopetitive framework that fosters cooperation while allowing competition for spectrum sharing. Incentive schemes and a penalty for revocable spectrum are proposed to increase spectrum access opportunities for SUs while protecting the PUs' spectrum value. A simulation study shows that the proposed framework outperforms conservative trading approaches in a variety of scenarios with different levels of cooperation and bidding strategies.
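
A toy single-item, second-price (Vickrey) rule, shown only to illustrate the auction primitive the framework builds on; the paper's coopetition mechanism, with incentives and penalties for revocable spectrum, is substantially richer. Bidder names and values are hypothetical.

```python
def vickrey_winner(bids):
    """Sealed-bid second-price auction: the highest bidder wins but pays the
    second-highest bid, which makes truthful bidding a dominant strategy."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price


if __name__ == "__main__":
    # Hypothetical secondary users bidding for access to a primary user's band.
    print(vickrey_winner({"SU1": 4.0, "SU2": 6.5, "SU3": 5.0}))  # ('SU2', 5.0)
```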

A Guided Tour Puzzle for Denial of Service Prevention

2009 Annual Computer Security Applications Conference, 2009

Various cryptographic puzzle schemes are proposed as a defense mechanism against denial of service attacks. But all these puzzle schemes face a dilemma when there is a large disparity between the computational power of attackers and legitimate clients: increasing the difficulty of puzzles might unnecessarily restrict legitimate clients too much, and lower difficulty puzzles cannot sufficiently block attackers with large computational resources. In this paper, we introduce the guided tour puzzle, a novel puzzle scheme that is not affected by such resource disparity. A guided tour puzzle requires a client to visit a predefined set of nodes, called tour guides, in a certain sequential order to retrieve an n-piece answer, one piece from each tour guide that appears in the tour. This puzzle solving process is non-parallelizable, thus cheating by trying to solve the puzzle in parallel is not possible. The guided tour puzzle not only achieves all previously defined desired properties of a cryptographic puzzle scheme, but it also satisfies more important requirements, such as puzzle fairness and minimum interference, that we identified. The number of tour guides required by the scheme can be as few as two, and this extra cost can be amortized by sharing the same set of tour guides among multiple servers.
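
A minimal sketch of the sequential tour, under the assumption that each guide's reply is a keyed hash that binds in the piece from the previous stop; because piece i is an input to piece i+1, the chain cannot be computed in parallel. The hash construction is illustrative, not the paper's exact protocol.

```python
import hashlib


def guide_reply(guide_secret, previous_piece, client_id):
    """One tour guide's answer piece, bound to the piece from the previous stop."""
    return hashlib.sha256(guide_secret + previous_piece + client_id).digest()


def solve_tour(guide_secrets, client_id):
    """The client visits the guides in order; each visit costs one round trip."""
    piece = b"start"
    for secret in guide_secrets:
        piece = guide_reply(secret, piece, client_id)
    return piece  # the final piece is presented to the server as the puzzle answer


if __name__ == "__main__":
    guides = [b"guide-A-secret", b"guide-B-secret"]  # as few as two guides suffice
    answer = solve_tour(guides, client_id=b"client-42")
    # A server sharing the guides' secrets verifies by recomputing the chain.
    assert answer == solve_tour(guides, client_id=b"client-42")
```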

Decoupling packet loss from blocking in proactive reservation-based switching

First International Conference on Broadband Networks, 2004

We consider the maximization of network throughput in buffer-constrained optical networks using aggregate bandwidth allocation and reservation-based transmission control. Assuming that all flows are subject to loss-based TCP congestion control, we quantify the effects of buffer capacity constraints on bandwidth utilization efficiency through contention-induced packet loss. The analysis shows that the ability of TCP flows to efficiently utilize successful reservations is highly sensitive to the available buffer capacity. Maximizing the bandwidth utilization efficiency under buffer capacity constraints thus requires decoupling packet loss from contention-induced blocking of transmission requests. We describe a confirmed (two-way) reservation scheme that eliminates contention-induced loss, so that no packets are dropped at the network's core, and loss can be incurred only at the adequately buffer-provisioned ingress routers, where it is exclusively congestion-induced. For the confirmed signaling scheme, analytical and simulation results indicate that TCP aggregates are able to efficiently utilize the successful reservations independently of buffer constraints.

Scheduling to Minimize the Worst-Case Loss Rate

We study link scheduling in networks with small router buffers, with the goal of minimizing the guaranteed packet loss rate bound for each ingress-egress traffic aggregate (connection). Given a link scheduling algorithm (a service discipline and a packet drop policy), the guaranteed loss rate for a connection is the loss rate under worst-case routing and bandwidth allocations for competing traffic.

Traffic shaping and scheduling for OBS-based IP/WDM backbones

OptiComm 2003: Optical Networking and Communications, 2003

We introduce Proactive Reservation-based Switching (PRS), a switching architecture for IP/WDM networks based on Labeled Optical Burst Switching. PRS achieves packet delay and loss performance comparable to that of packet-switched networks, without requiring large buffering capacity or burst scheduling across a large number of wavelengths at the core routers. PRS combines proactive channel reservation with periodic shaping of ingress-egress traffic aggregates to hide the offset latency and approximate the utilization/buffering characteristics of discrete-time queues with periodic arrival streams. A channel scheduling algorithm imposes constraints on burst departure times to ensure efficient utilization of wavelength channels and to maintain the distance between consecutive bursts through the network. Results obtained from simulation using TCP traffic over carefully designed topologies indicate that PRS consistently achieves channel utilization above 90% with modest buffering requirements.

Analysis of a transmission scheduling algorithm for supporting bandwidth guarantees in bufferless networks

ACM SIGMETRICS Performance Evaluation Review, 2006

In a network of bufferless packet multiplexers, the user-perceived capacity of an ingress-egress tunnel (connection) may degrade quickly with increasing path length. This is due to the compounding of transmission blocking probabilities along the path of the connection, even when the links are not overloaded. In such an environment, providing users (e.g., client ISPs) with tunnels of statistically guaranteed bandwidth may limit the network's connection-carrying capacity.
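
The compounding effect can be seen with a small numerical example, assuming (for illustration only) that blocking at each hop is independent: the end-to-end acceptance probability decays multiplicatively with path length.

```python
def end_to_end_acceptance(per_hop_blocking, hops):
    """Probability of not being blocked at any of `hops` bufferless multiplexers,
    assuming independent blocking at each hop (a simplifying assumption)."""
    return (1.0 - per_hop_blocking) ** hops


if __name__ == "__main__":
    for hops in (1, 4, 8, 16):
        print(hops, round(end_to_end_acceptance(0.05, hops), 3))
    # 5% blocking per hop leaves only about 44% acceptance over a 16-hop path.
```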

Scheduling to Minimize the Worst-Case Loss Rate

27th International Conference on Distributed Computing Systems (ICDCS '07), 2007

We study link scheduling in networks with small router buffers with the goal of minimizing the guaranteed packet loss rate bound for each ingress-egress traffic aggregate (connection). Given a link scheduling algorithm (a service discipline and a packet drop policy), the guaranteed loss rate for a connection is the loss rate under worst-case routing and bandwidth allocations for competing traffic. We show that a local min-max fairness property with respect to apportioning loss events among the connections sharing each link, and the correlation of scheduling decisions at different links are two necessary and (together) sufficient conditions for optimality in the minimization problem. Based on these conditions, we introduce and analyze a randomized link-scheduling algorithm called Rolling Priorities (RP) where packet scheduling at each link relies exclusively on local information. We show that RP satisfies both conditions and is therefore optimal. Furthermore, we show that the algorithm combining FCFS with the Random Drop policy (FCFS/RD) is locally fair and that it is nearly optimal under light link load. Under heavy load, the guaranteed loss rate under FCFS/RD deteriorates much faster as a function of path length compared to the optimal algorithm.
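
For concreteness, a toy FCFS queue with a Random Drop policy is sketched below: when an arrival finds the small buffer full, the packet to discard is chosen uniformly among the buffered packets and the arrival itself. The buffer size and arrival pattern are arbitrary; this is not the paper's Rolling Priorities algorithm.

```python
import random


def fcfs_random_drop(arrivals, buffer_size=4, seed=0):
    """FCFS buffering with Random Drop on overflow (illustrative)."""
    rng = random.Random(seed)
    queue, dropped = [], []
    for pkt in arrivals:
        if len(queue) < buffer_size:
            queue.append(pkt)
            continue
        victim = rng.randrange(buffer_size + 1)   # uniform over queued packets + arrival
        if victim == buffer_size:
            dropped.append(pkt)                   # the arrival itself is dropped
        else:
            dropped.append(queue.pop(victim))
            queue.append(pkt)
    return queue, dropped


if __name__ == "__main__":
    queued, dropped = fcfs_random_drop([f"p{i}" for i in range(10)])
    print("queued:", queued, "dropped:", dropped)
```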

Performance of redundancy methods in P2P networks under churn

2012 International Conference on Computing, Networking and Communications (ICNC), 2012

Design and analysis of a quantum-based QoS-aware fair share server for integrated services networks

Proceedings 32nd Annual Simulation Symposium, 1999

... Michael S. Boykin, University of Pittsburgh, Computer Science Department, Pittsburgh, PA 15260, boykin1@cs.pitt.edu ... (1) where m denotes channel m's maximum burst size, m denotes its minimum guaranteed rate, Smax denotes the maximum packet size and C denotes the ...

Message from the technical program committee co-chairs

Proceedings - 2009 4th Latin-American Symposium on Dependable Computing, LADC 2009, 2009

... and Vipin Kumar, three distinguished scientists who give an inspiring view of the frontiers of data mining, also with the eye of neighboring ... Finally, we would like to thank Sanjay Ranka and Philip S. Yu, the Conference General Chairs, who have been extremely helpful on all ...

Shadow Replication: An Energy-Aware, Fault-Tolerant Computational Model for Green Cloud Computing

Energies, 2014

As the demand for cloud computing continues to increase, cloud service providers face the daunting challenge of meeting the negotiated Service Level Agreement (SLA), in terms of reliability and timely performance, while achieving cost-effectiveness. This challenge is increasingly compounded by the growing likelihood of failure in large-scale clouds and the rising impact of energy consumption and CO2 emissions on the environment. This paper proposes Shadow Replication, a novel fault-tolerance model for cloud computing, which seamlessly addresses failure at scale, while minimizing energy consumption and reducing its impact on the environment. The basic tenet of the model is to associate a suite of shadow processes that execute concurrently with the main process, but initially at a much reduced execution speed, to overcome failures as they occur. Two computationally feasible schemes are proposed to achieve Shadow Replication. A performance evaluation framework is developed to analyze these schemes and compare their performance to traditional replication-based fault tolerance methods, focusing on the inherent tradeoff between fault tolerance, the specified SLA and profit maximization. The results show that Shadow Replication leads to significant energy reduction and is better suited for compute-intensive execution models, where a profit increase of up to 30% can be achieved due to reduced energy consumption.
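
The energy argument can be illustrated with a toy model, assuming dynamic power grows roughly with the cube of execution speed (an assumption for this sketch, not the paper's exact evaluation framework): in the failure-free case the slow shadow consumes far less energy than a full-speed replica.

```python
def energy(speed, work, alpha=3.0):
    """Toy model: power ~ speed**alpha and time = work / speed,
    so energy = speed**(alpha - 1) * work. The exponent is an assumption."""
    return (speed ** (alpha - 1)) * work


def traditional_replication(work):
    """Main process and replica both run at full speed."""
    return 2 * energy(1.0, work)


def shadow_replication(work, shadow_speed=0.5):
    """Failure-free case: the shadow runs slowly and is terminated when the main
    finishes, having completed only shadow_speed * work of progress."""
    return energy(1.0, work) + energy(shadow_speed, work * shadow_speed)


if __name__ == "__main__":
    w = 100.0
    print("traditional replication:", traditional_replication(w))  # 200.0
    print("shadow (failure-free):  ", shadow_replication(w))       # 112.5
```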

Shadow Computing: An energy-aware fault tolerant computing model

2014 International Conference on Computing, Networking and Communications (ICNC), 2014

The current response to fault tolerance relies upon either time or hardware redundancy in order to mask faults. Time redundancy implies re-execution of the failed computation after the failure has been detected; although this can be further optimized through the use of checkpoints, these solutions still impose a significant delay. In many mission-critical systems, hardware redundancy has traditionally been deployed in the form of process replication to provide fault tolerance, avoiding delay and maintaining tight deadlines. Both approaches have drawbacks: re-execution requires additional time and replication requires additional resources, especially energy. This forces the systems engineer to choose between time and hardware redundancy; cloud computing environments have largely chosen replication because response time is often critical. In this paper we propose a new computational model called shadow computing, which provides goal-based adaptive resilience through the use of dynamic execution. Using this general model we develop shadow replication, which enables a parameterized tradeoff between time and hardware redundancy to provide fault tolerance. We then build an analytical model to predict the expected energy savings and provide an analysis using that model.
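
The time-versus-hardware tradeoff named above can be made concrete with a toy expected-completion-time comparison, assuming an independent per-run failure probability; this is only an illustration, not the paper's analytical model.

```python
def expected_time_reexecution(run_time, fail_prob):
    """Time redundancy: rerun until success; expected attempts = 1 / (1 - p)."""
    return run_time / (1.0 - fail_prob)


def expected_time_replication(run_time, fail_prob):
    """Hardware redundancy: two independent replicas finish in one run unless
    both fail, in which case (simplification) the job falls back to re-execution."""
    return run_time + (fail_prob ** 2) * expected_time_reexecution(run_time, fail_prob)


if __name__ == "__main__":
    t, p = 10.0, 0.1
    print("re-execution:", round(expected_time_reexecution(t, p), 2))   # 11.11
    print("replication :", round(expected_time_replication(t, p), 2))   # 10.11
```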
