Murat Yuksel | University of Nevada, Reno

Papers by Murat Yuksel

LIGHTNETs: Smart LIGHTing and Mobile Optical Wireless NETworks – A Survey

Recently, the rapid increase in mobile devices has pushed radio frequency (RF)-based wireless technologies to their limits. Free-space-optical (FSO), a.k.a. optical wireless, communication has been considered one of the viable solutions to the ever-increasing wireless capacity demand. In particular, Visible Light Communication (VLC), which uses light-emitting-diode (LED)-based smart lighting technology, provides an opportunity and infrastructure for high-speed, low-cost wireless communication. Though stemming from the same core technology, smart lighting and FSO communication have inherent tradeoffs with each other. In this paper, we present a tutorial and survey of advances in these two technologies and explore the potential for integrating the two into a single field of study: LIGHTNETs. We focus our survey on mobile communications, given the recent pressing needs in mobile wireless networking. We deliberate on key challenges involved in designing technologies that jointly perform two functions simultaneously: LIGHTing and NETworking.

Distributed Dynamic Capacity Contracting: An overlay congestion pricing framework for differentiated-services architecture

Several congestion pricing proposals have been made in the last decade. Usually, however, those proposals studied optimal strategies and did not focus on implementation issues. Our main contribution in this paper is to address implementation issues for congestion-sensitive pricing over a single differentiated-services (diff-serv) domain. We propose a new congestion-sensitive pricing framework, Distributed Dynamic Capacity Contracting (Distributed-DCC), which is able to provide a range of fairness objectives (e.g. max-min, proportional) in rate allocation by using pricing as a tool. We develop a pricing scheme within the Distributed-DCC framework and investigate several issues, such as optimality of prices, fairness of rate allocation, and sensitivity to parameter changes.
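
As a hedged illustration of the core idea in such frameworks (not the Distributed-DCC or EEP scheme itself), the sketch below shows pricing used as a congestion-control and fairness tool: a single-link price rises when demand exceeds capacity, and users with logarithmic utility respond with rate = budget/price, which steers the allocation toward proportional fairness. All parameter values are made up for the example.

```python
# Illustrative congestion-sensitive pricing loop (not the paper's scheme):
# the price rises when aggregate demand exceeds capacity, and each user
# with log utility responds with rate = budget / price, which drives the
# allocation toward proportional fairness.

def allocate(budgets, capacity, steps=2000, gamma=0.01):
    price = 1.0
    rates = [0.0] * len(budgets)
    for _ in range(steps):
        rates = [b / price for b in budgets]       # utility-maximizing response
        excess = sum(rates) - capacity
        price = max(1e-6, price + gamma * excess)  # congestion-sensitive update
    return price, rates

price, rates = allocate(budgets=[1.0, 2.0, 3.0], capacity=12.0)
# At equilibrium, rates are proportional to budgets and sum to ~capacity.
```

The fixed point is price = sum(budgets)/capacity, so rate allocations scale with willingness to pay, which is the sense in which pricing acts as a fairness knob.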

Simulating the Smart Market pricing scheme on Differentiated Services architecture

A Platform for Large-Scale Network Performance Analysis

Performance analysis techniques are fundamental to the process of large-scale protocol design and network operations. There has been a tremendous explosion in the variety of tools and platforms available (e.g. ns-2, SSFNet, the Click Router toolkit, Emulab, PlanetLab). However, we still look at the sample results obtained from such tools with skepticism because they are isolated (potentially random) and may not be representative of the real world. The first issue (random, isolated results) can be addressed by large-scale experiment design techniques that extract maximum information and confidence from a minimum number of carefully designed experiments. Such techniques can be used to find "good" results fast, to guide either incremental protocol design or operational parameter tuning. The second issue (representativeness) is stickier and relates to formulating benchmarks. In this paper, we explore the former case, i.e. large-scale experiment design and black-box optimization (i.e. large-dimensional parameter state-space search). We propose a new platform, ROSS.Net, that combines large-scale network simulation with large-scale experiment design and XML interfaces to data sets (e.g. Rocketfuel, CAIDA) and models (traffic, topology, etc.). This is a step towards the broader problem of understanding meta-simulation methodology, and we speculate on how we could integrate these tools with testbeds like Emulab and PlanetLab. Examples of large-scale simulations (routing, TCP, multicast) and experiment design are presented.

Understanding OSPF and BGP Interactions Using Efficient Experiment Design

In this paper, we analyze the two dominant inter- and intra-domain routing protocols in the Internet: Open Shortest Path First (OSPFv2) and Border Gateway Protocol (BGP4). Specifically, we investigate interactions between these two routing protocols as well as overall (i.e. both OSPF and BGP) stability and dynamics. Our analysis is based on large-scale simulations of OSPF and BGP, and careful design of experiments to perform an efficient search for the best parameter settings of these two routing protocols. To optimize the overall response of OSPF and BGP, we define metrics based on the types of routing update messages generated by these protocols in the control plane. We then perform a search for the best setting of OSPF and BGP parameters to minimize these routing updates and the negative interactions between the inter- and intra-domain routing. We consider BGP attributes as well as OSPF and BGP timers in terms of their contribution to the total number of routing updates. Using this...

Distributed Dynamic Capacity Contracting: A Congestion Pricing Framework for Diff-Serv

Lecture Notes in Computer Science, 2002

Several congestion pricing proposals have been made in the last decade. Usually, however, those proposals studied optimal strategies and did not focus on implementation issues. Our main contribution in this paper is to address implementation issues for congestion-sensitive pricing over a single domain of the differentiated-services (diff-serv) architecture of the Internet. We propose a new congestion-sensitive pricing framework, Distributed Dynamic Capacity Contracting (Distributed-DCC), which is able to provide a range of fairness objectives (e.g. max-min, proportional) in rate allocation by using pricing as a tool. Within the Distributed-DCC framework, we develop an Edge-to-Edge Pricing Scheme (EEP) and present simulation experiments of it.

Multi-transceiver simulation modules for free-space optical mobile ad hoc networks

Modeling and Simulation for Defense Systems and Applications V, 2010

This paper presents realistic simulation modules to assess characteristics of multi-transceiver free-space-optical (FSO) mobile ad-hoc networks. We start with a physical propagation model for FSO communications in the context of mobile ad-hoc networks (MANETs). We specifically focus on the drop in power of the light beam and the probability of error in the decoded signal due to a number of parameters (such as the separation between transmitter and receiver and the visibility in the propagation medium), comparing our results with well-known theoretical models. Then, we provide details on simulating multi-transceiver mobile wireless nodes in Network Simulator 2 (NS-2), realistic obstacles in the medium, and communication between directional optical transceivers. We introduce new structures at the lower layers of the networking protocol stack to deliver such functionality. Finally, we provide our findings, resulting from detailed modeling and simulation of FSO-MANETs, regarding the effects of such directionality on higher layers in the networking stack.
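
For intuition on the kind of physical propagation model such modules implement, the sketch below combines two standard FSO effects: Beer-Lambert atmospheric attenuation (with the common Kruse visibility approximation) and a simplified geometric beam-spreading loss. It is an illustrative, assumption-laden stand-in, not a reproduction of the paper's NS-2 code, and all parameter values are invented for the example.

```python
import math

def attenuation_coeff(visibility_km, wavelength_nm=850.0):
    """Kruse approximation: attenuation sigma (1/km) from visibility."""
    if visibility_km > 50:
        q = 1.6
    elif visibility_km > 6:
        q = 1.3
    else:
        q = 0.585 * visibility_km ** (1.0 / 3.0)
    return (3.91 / visibility_km) * (wavelength_nm / 550.0) ** (-q)

def received_power(p_tx_mw, dist_km, divergence_mrad, rx_aperture_m, visibility_km):
    sigma = attenuation_coeff(visibility_km)
    beam_diam_m = divergence_mrad * 1e-3 * dist_km * 1e3      # beam footprint
    geo = (rx_aperture_m / (rx_aperture_m + beam_diam_m)) ** 2  # spreading loss
    return p_tx_mw * geo * math.exp(-sigma * dist_km)           # Beer-Lambert

# Received power drops with both separation and worsening visibility:
clear = received_power(10.0, 1.0, 2.0, 0.05, visibility_km=20.0)
foggy = received_power(10.0, 1.0, 2.0, 0.05, visibility_km=0.5)
```

Sweeping `dist_km` and `visibility_km` reproduces the qualitative behavior the abstract describes: power falls off with distance and degrades sharply in fog.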

ROSS.Net: optimistic parallel simulation framework for large-scale Internet models

Proceedings of the 2003 International Conference on Machine Learning and Cybernetics (IEEE Cat. No.03EX693), 2003

ROSS.Net brings together the four major areas of networking research: network modeling, simulation, measurement, and protocol design. ROSS.Net is a tool for computing large-scale designs of experiments through components such as a discrete-event simulation engine, default and extensible model designs, and a state-of-the-art XML interface. ROSS.Net reads in predefined descriptions of network topologies and traffic scenarios, which allows for in-depth analysis of and insight into emerging feature interactions, cascading failures, and protocol stability in a variety of situations. Developers are able to design and implement their own protocol designs, network topologies, and modeling scenarios, as well as implement existing platforms within the ROSS.Net platform. Using ROSS.Net, designers are also able to create experiments with varying levels of granularity, allowing for the highest degree of scalability.

A Case Study in Meta-Simulation Design and Performance Analysis for Large-Scale Networks

Proceedings of the 2004 Winter Simulation Conference, 2004

Simulation and emulation techniques are fundamental to the process of large-scale protocol design and network operations. However, the results from these techniques are often viewed with a great deal of skepticism by the networking community. Criticisms come in two flavors: (i) the study presents isolated and potentially random feature interactions, and (ii) the parameters used in the study may not be representative of real-world conditions. The first issue (random, isolated results) can be addressed by large-scale experiment design techniques that extract maximum information and confidence from a minimum number of carefully designed experiments. Such techniques can be used to find "good" results fast, to guide either incremental protocol design or operational parameter tuning. The second issue (representativeness) is more problematic and relates to formulating benchmarks that, to the greatest possible extent, characterize the structure of the system under study. In this paper, we explore both cases, i.e. large-scale experiment design and black-box optimization (i.e. large-dimensional parameter state-space search), using a realistic network topology with bandwidth and delay metrics to analyze the convergence of network route paths in the Open Shortest Path First (OSPFv2) protocol. By using the Recursive Random Search (RRS) approach to design of experiments, we find that: (i) the number of simulation experiments is reduced by an order of magnitude compared to a full-factorial design approach, (ii) unnecessary parameters can be eliminated, and (iii) key parameter interactions can be understood rapidly. From this design-of-experiments approach, we were able to abstract away large portions of the OSPF model, which resulted in a 100-fold improvement in execution time.
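
A recursive-random-search style optimizer can be sketched in a few lines. The version below is a simplification, not the authors' exact RRS algorithm: it samples uniformly, then repeatedly re-centers and shrinks the search box around the incumbent best point, which is how far fewer experiments than a full-factorial sweep can still locate good parameter settings. The two-parameter objective is a hypothetical stand-in for a simulation response surface.

```python
import random

def rrs_minimize(f, bounds, samples=40, rounds=8, shrink=0.5, seed=0):
    """Sample uniformly; each round, shrink the box around the best point."""
    rng = random.Random(seed)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    best_x, best_y = None, float("inf")
    for _ in range(rounds):
        for _ in range(samples):
            x = [rng.uniform(l, h) for l, h in zip(lo, hi)]
            y = f(x)
            if y < best_y:
                best_x, best_y = x, y
        # re-center a shrunken box on the incumbent best (clipped to bounds)
        span = [(h - l) * shrink for l, h in zip(lo, hi)]
        lo = [max(b[0], c - s / 2) for b, c, s in zip(bounds, best_x, span)]
        hi = [min(b[1], c + s / 2) for b, c, s in zip(bounds, best_x, span)]
    return best_x, best_y

# e.g. tune two hypothetical timer values against a made-up response surface:
x, y = rrs_minimize(lambda v: (v[0] - 3) ** 2 + (v[1] - 7) ** 2,
                    bounds=[(0, 10), (0, 10)])
```

With 8 rounds of 40 samples (320 evaluations), the search narrows to a small neighborhood of the optimum, whereas a full-factorial grid of similar resolution would need orders of magnitude more runs.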

Packet-based simulation for optical wireless communication

2010 17th IEEE Workshop on Local & Metropolitan Area Networks (LANMAN), 2010

This paper presents packet-based simulation tools for free-space-optical (FSO) wireless communication. We implement the well-known propagation models for free-space-optical communication as a set of modules in NS-2. Our focus is on accurately simulating the line-of-sight (LOS) requirement for two communicating antennas, the drop in received power with respect to the separation between antennas, and error behavior. In our simulation modules, we consider numerous factors affecting the performance of optical wireless communication, such as visibility in the medium, divergence angles of transmitters, field of view of photo-detectors, and surface areas of transceiver devices.
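
The LOS requirement mentioned above reduces to a geometric test. The sketch below is an illustration, not the paper's NS-2 module: a packet is receivable only if the receiver lies inside the transmitter's divergence cone and the transmitter lies inside the receiver's field of view. Positions, orientations, and angles are invented for the example.

```python
import math

def angle_between(v, w):
    """Angle in radians between two 2-D vectors."""
    dot = sum(a * b for a, b in zip(v, w))
    nv = math.sqrt(sum(a * a for a in v))
    nw = math.sqrt(sum(a * a for a in w))
    return math.acos(max(-1.0, min(1.0, dot / (nv * nw))))

def los_ok(tx_pos, tx_dir, rx_pos, rx_dir, divergence_rad, fov_rad):
    tx_to_rx = [r - t for t, r in zip(tx_pos, rx_pos)]
    rx_to_tx = [-c for c in tx_to_rx]
    within_beam = angle_between(tx_dir, tx_to_rx) <= divergence_rad / 2
    within_fov = angle_between(rx_dir, rx_to_tx) <= fov_rad / 2
    return within_beam and within_fov

# Facing transceivers 100 m apart: link up; turn the receiver away: link down.
up = los_ok((0, 0), (1, 0), (100, 0), (-1, 0), divergence_rad=0.04, fov_rad=0.5)
down = los_ok((0, 0), (1, 0), (100, 0), (1, 0), divergence_rad=0.04, fov_rad=0.5)
```

A per-packet check of this kind is what makes directionality visible to the upper layers of the stack.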

Effectiveness of Multi-Hop Negotiation on the Internet

2011 IEEE Global Telecommunications Conference - GLOBECOM 2011, 2011

Inter-domain routing has long been considered an ongoing negotiation on end-to-end paths between service providers. Such negotiations were often believed to be effective in their form of bilateral, single-hop interactions between neighboring ISPs in a rather hierarchical market structure. Traffic engineering policies, multi-homing schemes, and peering mechanisms have often been employed as the only service performance improvement methods within...

Inter-domain Multi-Hop Negotiation for the Internet

2011 IEEE International Symposium on Policies for Distributed Systems and Networks, 2011

Inter-domain connectivity in the Internet is currently established on policy-based shortest-path routing. Business relationships of the Internet entities are translated into routing decisions through policies. Although these policies are built on simple mechanisms provided by BGP, they give rise to a very complex market structure. Extensive research efforts have been made to understand common practices of inter-domain policies. In early stages, hierarchical models were thought to be adequate to explain negotiation between the entities over improving routing and expressing routing preferences in general. It was then realized that there are a significant number of local policy exceptions and much random decision making over the hierarchical structure. In this work, we examine how effective these local policy exceptions are in providing better-quality paths. Our analysis of traces captured from the Internet quantitatively shows that currently adopted local policies may not be as effective as multi-hop negotiations for the purpose of attaining better paths in terms of multiple path quality metrics.

Offloading routing complexity to the Cloud(s)

2013 IEEE International Conference on Communications Workshops (ICC), 2013

We propose a new architectural approach, Cloud-Assisted Routing (CAR), that leverages the high computation and memory power of cloud services to ease complex routing functions such as forwarding and flow-level policy management. We aim to offload the increasing routing complexity to the cloud and to seek an answer to the following question: "Can techniques leveraging the memory and computation resources of the cloud remedy the routing scalability issues?" The key contribution of our work is to outline a framework for integrating cloud computing with routers and to define the operational regions where such integration is beneficial.

Path-Vector Contract Routing

2012 IEEE International Conference on Communications (ICC), 2012

Many recently proposed clean-slate Internet architectures essentially depend on a more flexible and extended representation of the Internet topology on which next-generation routing protocols may operate. Representing neighboring relationships between Internet Service Providers (ISPs) at finer granularity is promising for overcoming many shortcomings of the current Internet architecture. Similarly, the contract-switching paradigm promotes an ISP to define itself as a set of edge-to-edge (g2g) links that connect the ingress and egress routers of its domain. Each link is represented by a contract, which defines not only neighboring relationships with other domains but also economic (e.g., price), performance (e.g., quality-of-service parameters), and temporal (e.g., lifetime of the dedicated link) attributes attached to this g2g link. In this work, we introduce the Path-Vector Contract Routing (PVCR) protocol, which allows multi-metric, multi-hop negotiation of end-to-end inter-domain paths by leveraging path-vector-style construction on top of g2g contract definitions. Our analysis on synthetic and real-world topologies shows that Path-Vector Contract Routing has many promising properties, such as rich route diversity, end-to-end multi-domain QoS, and low control traffic. We also investigate the inter-domain traffic engineering capabilities of PVCR, which inherently considers the economics of routing in its opportunistic settings.
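
The contract-based path-selection idea can be illustrated with a toy multi-metric search (a sketch of the concept, not the PVCR protocol itself): each hypothetical g2g contract carries a price and a delay, and we enumerate loop-free path vectors to pick the cheapest path that meets a delay bound. The contract list and all attribute values are invented for the example.

```python
# Hypothetical contracts: (from_ISP, to_ISP, price, delay_ms)
contracts = [
    ("A", "B", 5, 10), ("B", "D", 5, 30),
    ("A", "C", 2, 20), ("C", "D", 2, 25),
    ("B", "C", 1, 5),
]

def best_path(src, dst, delay_bound):
    """Enumerate loop-free path vectors; return the cheapest within bound."""
    frontier = [([src], 0, 0)]          # (path vector, total price, total delay)
    best = None
    while frontier:
        path, price, delay = frontier.pop()
        if path[-1] == dst:
            if best is None or price < best[1]:
                best = (path, price, delay)
            continue
        for u, v, p, d in contracts:
            if u == path[-1] and v not in path and delay + d <= delay_bound:
                frontier.append((path + [v], price + p, delay + d))
    return best

path, price, delay = best_path("A", "D", delay_bound=50)
```

Even in this tiny topology there are three loop-free candidates from A to D, which hints at the route diversity a contract-level view exposes compared with a single policy-preferred BGP path.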

Multi-element Free-Space-Optical spherical structures with intermittent connectivity patterns

IEEE INFOCOM 2008 - IEEE Conference on Computer Communications Workshops, 2008

Due to its high-bandwidth spectrum, Free-Space-Optical (FSO) communication has the potential to bridge the capacity gap between backbone fiber links and mobile ad-hoc links, especially in the last mile. Though FSO can solve the wireless capacity problem, it brings new challenges, like frequent disruption of the physical link (intermittent connectivity) and the line-of-sight (LOS) requirement. In this paper, we study a spherical FSO structure as a basic building block and examine the effects of such FSO structures on upper layers, especially on TCP behavior for stationary and mobile nodes.

Maintaining a free-space-optical communication link between two autonomous mobiles

2014 IEEE Wireless Communications and Networking Conference (WCNC), 2014

Seven-O'clock: a new distributed GVT algorithm using network atomic operations

International Journal of Simulation and Process Modelling, 2009

In this paper we introduce a new concept, network atomic operations (NAOs), to create a zero-cost consistent cut. Using NAOs, we define a wall-clock-time-driven GVT algorithm called the seven o'clock algorithm, which is an extension of Fujimoto's shared-memory GVT algorithm. Using this new GVT algorithm, we report good optimistic parallel performance on a cluster of state-of-the-art Itanium-II quad-processor systems as well as on a dated cluster of 40 dual Pentium III systems, for both benchmark applications such as PHOLD and real-world applications such as a large-scale TCP/Internet model. In some cases, super-linear speedup is observed. We present a new measure for determining the optimal performance achieved in a parallel and distributed simulation when the sequential case cannot be run due to model size.
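
For context, GVT (global virtual time) is a lower bound on the timestamp of any future rollback: the minimum over every processor's next unprocessed event time and every in-transit message timestamp. The snippet below shows only that textbook definition, with made-up values; the seven o'clock algorithm's contribution is computing a consistent cut for this minimum at zero message cost using synchronized wall clocks.

```python
def compute_gvt(next_event_times, in_transit_timestamps):
    """GVT = min over local next-event times and in-transit message stamps."""
    return min(list(next_event_times) + list(in_transit_timestamps))

# Three processors plus one message still in flight (illustrative values):
gvt = compute_gvt(next_event_times=[12.0, 9.5, 30.1],
                  in_transit_timestamps=[8.2])
# The in-transit message is the earliest possible cause of a rollback,
# so events with timestamps below it can be fossil-collected safely.
```

The hard part in a distributed setting is accounting for messages in flight without a global snapshot, which is exactly what NAOs address.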

Large-scale network simulation techniques

ACM SIGCOMM Computer Communication Review, 2003

Simulation of large-scale networks remains a challenge, although various network simulators are in place. In this paper, we identify fundamental issues for large-scale network simulation and propose new techniques that address them. First, we exploit optimistic parallel simulation techniques to enable fast execution on inexpensive hyper-threaded, multiprocessor systems. Second, we provide a compact, lightweight implementation framework that greatly reduces the amount of state required to simulate large-scale network models. Based on the proposed techniques, we provide sample simulation models for two networking protocols: TCP and OSPF. We implement these models in a simulation environment, ROSSNet, which is an extension to the previously developed optimistic simulator ROSS. We perform validation experiments for TCP and OSPF and present performance results of our techniques by simulating OSPF and TCP on a large and realistic topology, such as AT&T's US network based on Rocketfuel data. The end result of these innovations is that we are able to simulate million-node network topologies using inexpensive, commercial off-the-shelf, hyper-threaded multiprocessor systems consuming less than 1.4 GB of RAM in total.

Distributed dynamic capacity contracting: an overlay congestion pricing framework

Computer Communications, 2003

Several congestion pricing proposals have been made in the last decade. Usually, however, those proposals studied optimal strategies and did not focus on implementation issues. Our main contribution in this paper is to address implementation issues for congestion-sensitive pricing over a single differentiated-services (diff-serv) domain. We propose a new congestion-sensitive pricing framework, Distributed Dynamic Capacity Contracting (Distributed-DCC), which is able to provide a range of fairness objectives (e.g. max-min, proportional) in rate allocation by using pricing as a tool. We develop a pricing scheme within the Distributed-DCC framework and investigate several issues, such as optimality of prices, fairness of rate allocation, and sensitivity to parameter changes.

Research paper thumbnail of A Platform for Large-Scale Network Performance Analysis

Performance analysis techniques are fundamental to aid the process of large-scale protocol design... more Performance analysis techniques are fundamental to aid the process of large-scale protocol design and network operations. There has been a tremendous explosion in the variety of tools and platforms available (eg: ns-2, SSFNet, Click Router toolkit, Emulab, Planetlab). However, we still look at the sample results obtained from such tools with skepticism because they are isolated (potentially random) and may not be representative of the real-world. The first issue (random isolated results) can be addressed by large-scale experiment design techniques that extract maximum information and confidence from a minimum number of carefully designed experiments. Such techniques can be used to find "good" results fast to guide either incremental protocol design or operational parameter tuning. The second issue (representativeness) is more sticky and relates to formulating benchmarks. In this paper, we explore the former case, i.e. large-scale experiment design and black-box optimization (i.e. large-dimensional parameter state space search). We propose a new platform ROSS.Net that combines large-scale network simulation with large-scale experiment design and XML interfaces to data sets (eg: Rocketfuel, CAIDA) and models (traffic, topology etc). This is a step towards the broader problem of understanding meta-simulation methodology, and speculate how we could integrate these tools with testbeds like Emulab and Planetlab. Examples of large-scale simulations (routing, TCP, multicast) and experiment design are presented.

Research paper thumbnail of LIGHTNETs: Smart LIGHTing and Mobile Optical Wireless NETworks – A Survey

—Recently, rapid increase of mobile devices pushed the radio frequency (RF)-based wireless techno... more —Recently, rapid increase of mobile devices pushed the radio frequency (RF)-based wireless technologies to their limits. Free-space-optical (FSO), a.k.a. optical wireless, communication has been considered as one of the viable solutions to respond to the ever-increasing wireless capacity demand. Particularly , Visible Light Communication (VLC) which uses light emitting diode (LED) based smart lighting technology provides an opportunity and infrastructure for the high-speed low-cost wireless communication. Though stemming from the same core technology, the smart lighting and FSO communication have inherent tradeoffs amongst each other. In this paper, we present a tutorial and survey of advances in these two technologies and explore the potential for integration of the two as a single field of study: LIGHTNETs. We focus our survey to the context of mobile communications given the recent pressing needs in mobile wireless networking. We deliberate on key challenges involved in designing technologies jointly performing the two functions simultaneously: LIGHTing and NETworking.

Research paper thumbnail of Distributed Dynamic Capacity Contracting: An overlay congestion pricing framework for differentiated-services architecture

Several congestion pricing proposals have been made in the last decade. Usually, however, those p... more Several congestion pricing proposals have been made in the last decade. Usually, however, those proposals studied optimal strategies and did not focus on implementation issues. Our main contribution in this paper is to address implementation issues for congestion-sensitive pricing over a single differentiated-services (diff-serv) domain. We propose a new congestion-sensitive pricing framework Distributed Dynamic Capacity Contracting (Distributed-DCC), which is able to provide a range of fairness (e.g. maxmin, proportional) in rate allocation by using pricing as a tool. We develop a pricing scheme within the Distributed-DCC framework investigate several issues such as optimality of prices, fairness of rate allocation, sensitivity to parameter changes.

Research paper thumbnail of Simulating the Smart Market pricing scheme on Differentiated Services architecture

Research paper thumbnail of A Platform for Large-Scale Network Performance Analysis

Performance analysis techniques are fundamental to aid the process of large-scale protocol design... more Performance analysis techniques are fundamental to aid the process of large-scale protocol design and network operations. There has been a tremendous explosion in the variety of tools and platforms available (eg: ns-2, SSFNet, Click Router toolkit, Emulab, Planetlab). However, we still look at the sample results obtained from such tools with skepticism because they are isolated (potentially random) and may not be representative of the real-world. The first issue (random isolated results) can be addressed by large-scale experiment design techniques that extract maximum information and confidence from a minimum number of carefully designed experiments. Such techniques can be used to find "good" results fast to guide either incremental protocol design or operational parameter tuning. The second issue (representativeness) is more sticky and relates to formulating benchmarks. In this paper, we explore the former case, i.e. large-scale experiment design and black-box optimization (i.e. large-dimensional parameter state space search). We propose a new platform ROSS.Net that combines large-scale network simulation with large-scale experiment design and XML interfaces to data sets (eg: Rocketfuel, CAIDA) and models (traffic, topology etc). This is a step towards the broader problem of understanding meta-simulation methodology, and speculate how we could integrate these tools with testbeds like Emulab and Planetlab. Examples of large-scale simulations (routing, TCP, multicast) and experiment design are presented.

Research paper thumbnail of Understanding OSPF and BGP Interactions Using Efficient Experiment Design

In this paper, we analyze the two dominant inter- and intra-domain routing protocols in the Inter... more In this paper, we analyze the two dominant inter- and intra-domain routing protocols in the Internet: Open Short- est Path Forwarding (OSPFv2) and Border Gateway Protocol (BGP4). Specifically, we investigate interactions between these two routing protocols as well as overall (i.e. both OSPF and BGP) stability and dynamics. Our analysis is based on large-scale simulations of OSPF and BGP, and careful design of experiments to perform an efficient search for the best parameter setting s of these two routing protocols. To optimize the overall response of OSPF and BGP, we define metrics based on the types of routing update messages generated by these protocols in the control plane. We then perform a search for the best setting of OSPF and BGP parameters to minimize these routing updates and negative interactions between the inter- and intra-domain routing. We consider BGP attributes as well as OSPF and BGP timers in terms of contribution to the total number of routing updates. Using this...

Research paper thumbnail of Distributed Dynamic Capacity Contracting: A Congestion Pricing Framework for Diff-Serv

Lecture Notes in Computer Science, 2002

Several congestion pricing proposals have been made in the last decade. Usually, however, those p... more Several congestion pricing proposals have been made in the last decade. Usually, however, those proposals studied optimal strategies and did not focus on implementation issues. Our main contribution in this paper is to address implementation issues for congestion-sensitive pricing over a single domain of the differentiated-services (diff-serv) architecture of the Internet. We propose a new congestion-sensitive pricing framework Distributed Dynamic Capacity Contracting (Distributed-DCC), which is able to provide a range of fairness (e.g. max-min, proportional) in rate allocation by using pricing as a tool. Within the Distributed-DCC framework, we develop an Edge-to-Edge Pricing Scheme (EEP) and present simulation experiments of it.

Research paper thumbnail of <title>Multi-transceiver simulation modules for free-space optical mobile ad hoc networks</title>

Modeling and Simulation for Defense Systems and Applications V, 2010

This paper presents realistic simulation modules to assess characteristics of multi-transceiver f... more This paper presents realistic simulation modules to assess characteristics of multi-transceiver free-space-optical (FSO) mobile ad-hoc networks. We start with a physical propagation model for FSO communications in the context of mobile ad-hoc networks (MANETs). We specifically focus on the drop in power of the light beam and probability of error in the decoded signal due to a number of parameters (such as separation between transmitter and receiver and visibility in the propagation medium), comparing our results with well-known theoretical models. Then, we provide details on simulating multi-transceiver mobile wireless nodes in Network Simulator 2 (NS-2), realistic obstacles in the medium and communication between directional optical transceivers. We introduce new structures in the networking protocol stack at lower layers to deliver such functionality. At the end, we provide our findings resulted from detailed modeling and simulation of FSO-MANETs regarding effects of such directionality on higher layers in the networking stack.

Research paper thumbnail of ROSS.Net: optimistic parallel simulation framework for large-scale Internet models

Proceedings of the 2003 International Conference on Machine Learning and Cybernetics (IEEE Cat. No.03EX693), 2003

ROSS.Net brings together the four major areas of networking research: network modeling, simulatio... more ROSS.Net brings together the four major areas of networking research: network modeling, simulation, measurement and protocol design. ROSS.Net is a tool for computing large scale design of experiments through components such as a discrete-event simulation engine, default and extensible model designs, and a state of the art XML interface. ROSS.Net reads in predefined descriptions of network topologies and traffic scenarios which allows for in-depth analysis and insight into emerging feature interactions, cascading failures and protocol stability in a variety of situations. Developers will be able to design and implement their own protocol designs, network topologies and modeling scenarios, as well as implement existing platforms within the ROSS.Net platform. Also using ROSS.Net, designers are able to create experiments with varying levels of granularity, allowing for the highest-degree of scalability.

Research paper thumbnail of A Case Study in Meta-Simulation Design and Performance Analysis for Large-Scale Networks

Proceedings of the 2004 Winter Simulation Conference, 2004., 2004

Simulation and emulation techniques are fundamental aids to the process of large-scale protocol design and network operations. However, the results from these techniques are often viewed with a great deal of skepticism by the networking community. Criticisms come in two flavors: (i) the study presents isolated and potentially random feature interactions, and (ii) the parameters used in the study may not be representative of real-world conditions. The first issue (random, isolated results) can be addressed by large-scale experiment design techniques that extract maximum information and confidence from a minimum number of carefully designed experiments. Such techniques can be used to find "good" results fast, to guide either incremental protocol design or operational parameter tuning. The second issue (representativeness) is more problematic and relates to formulating benchmarks that, to the greatest possible extent, characterize the structure of the system under study. In this paper, we explore both cases, i.e., large-scale experiment design and black-box optimization (i.e., large-dimensional parameter state-space search), using a realistic network topology with bandwidth and delay metrics to analyze the convergence of network route paths in the Open Shortest Path First (OSPFv2) protocol. By using the Recursive Random Search (RRS) approach to design of experiments, we find that: (i) the number of simulation experiments is reduced by an order of magnitude compared to a full-factorial design approach, (ii) unnecessary parameters can be eliminated, and (iii) key parameter interactions can be understood rapidly. From this design-of-experiments approach, we were able to abstract away large portions of the OSPF model, which resulted in a 100-fold improvement in execution time.
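The experiment-design idea above can be illustrated with a minimal Recursive-Random-Search-style loop. This is a sketch of the general sample-and-shrink strategy under assumed parameters, not the authors' RRS implementation:

```python
import random

def recursive_random_search(objective, bounds, n_samples=30, rounds=5, shrink=0.5):
    """Sketch of an RRS-style black-box search (illustrative only).

    Each round draws random points in the current region, then
    re-centers and shrinks the region around the best point found,
    trading exhaustive (full-factorial) coverage for far fewer runs.
    """
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    best_x, best_y = None, float("inf")
    for _ in range(rounds):
        for _ in range(n_samples):
            x = [random.uniform(l, h) for l, h in zip(lo, hi)]
            y = objective(x)
            if y < best_y:
                best_x, best_y = x, y
        # Shrink each dimension's range around the current best point,
        # clamped to the original bounds.
        for i in range(len(bounds)):
            half = (hi[i] - lo[i]) * shrink / 2
            lo[i] = max(bounds[i][0], best_x[i] - half)
            hi[i] = min(bounds[i][1], best_x[i] + half)
    return best_x, best_y
```

With `rounds * n_samples` evaluations this explores a D-dimensional parameter space that a full-factorial design would cover only at exponential cost, which is the order-of-magnitude reduction the abstract refers to.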

Research paper thumbnail of Packet-based simulation for optical wireless communication

2010 17th IEEE Workshop on Local & Metropolitan Area Networks (LANMAN), 2010

This paper presents packet-based simulation tools for free-space-optical (FSO) wireless communication. We implement the well-known propagation models for free-space-optical communication as a set of modules in NS-2. Our focus is on accurately simulating the line-of-sight (LOS) requirement for two communicating antennas, the drop in received power with respect to the separation between antennas, and error behavior. In our simulation modules, we consider numerous factors affecting the performance of optical wireless communication, such as the visibility in the medium, the divergence angles of transmitters, the field of view of photo-detectors, and the surface areas of transceiver devices.
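For directional transceivers, the LOS requirement reduces to a cone test on both ends: the receiver must fall inside the transmitter's divergence cone, and the transmitter inside the receiver's field of view. A minimal 2-D sketch of such a test (function and parameter names are hypothetical, not the NS-2 module interface):

```python
import math

def in_line_of_sight(tx_pos, tx_dir, rx_pos, rx_dir,
                     divergence_deg=2.0, fov_deg=30.0):
    """Illustrative 2-D directional LOS test for two optical transceivers.

    tx_dir / rx_dir are pointing directions in radians; the link exists
    only if each endpoint lies within the other's angular aperture.
    """
    def angle_between(origin, direction, target):
        # Absolute angular offset between a pointing direction and
        # the bearing from origin to target, wrapped to [0, pi].
        dx, dy = target[0] - origin[0], target[1] - origin[1]
        diff = abs(math.atan2(dy, dx) - direction)
        return min(diff, 2 * math.pi - diff)

    tx_ok = angle_between(tx_pos, tx_dir, rx_pos) <= math.radians(divergence_deg) / 2
    rx_ok = angle_between(rx_pos, rx_dir, tx_pos) <= math.radians(fov_deg) / 2
    return tx_ok and rx_ok
```

The asymmetry between the narrow transmit divergence and the wider receive field of view mirrors the transceiver parameters listed in the abstract.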

Research paper thumbnail of Effectiveness of Multi-Hop Negotiation on the Internet

2011 IEEE Global Telecommunications Conference - GLOBECOM 2011, 2011

Inter-domain routing has long been considered an ongoing negotiation on end-to-end paths between service providers. Such negotiations were often believed to be effective in their form of bilateral, single-hop interactions between neighboring ISPs in a rather hierarchical market structure. Traffic engineering policies, multi-homing schemes, and peering mechanisms have often been employed as the only service-performance improvement methods within

Research paper thumbnail of Inter-domain Multi-Hop Negotiation for the Internet

2011 IEEE International Symposium on Policies for Distributed Systems and Networks, 2011

Inter-domain connectivity in the Internet is currently established via policy-based shortest-path routing. Business relationships of Internet entities are translated into routing decisions through policies. Although these policies are built on simple mechanisms provided by BGP, they give rise to a very complex market structure. Extensive research efforts have been made to understand common practices of inter-domain policies. In the early stages, hierarchical models were thought to be adequate to explain negotiation between entities over improving routing and expressing routing preferences in general. It was later realized that there are a significant number of local policy exceptions and much random decision making over the hierarchical structure. In this work, we examine how effective these local policy exceptions are in providing better-quality paths. Our analysis of traces captured from the Internet quantitatively shows that currently adopted local policies may not be as effective as multi-hop negotiations for the purpose of attaining better paths in terms of multiple path-quality metrics.

Research paper thumbnail of Offloading routing complexity to the Cloud(s)

2013 IEEE International Conference on Communications Workshops (ICC), 2013

We propose a new architectural approach, Cloud-Assisted Routing (CAR), that leverages the high computation and memory power of cloud services to ease complex routing functions such as forwarding and flow-level policy management. We aim to offload the increasing routing complexity to the cloud and to answer the following question: "Can techniques leveraging the memory and computation resources of the cloud remedy the routing scalability issues?" The key contribution of our work is to outline a framework for integrating cloud computing with routers and to define the operational regions where such integration is beneficial.

Research paper thumbnail of Path-Vector Contract Routing

2012 IEEE International Conference on Communications (ICC), 2012

Many recently proposed clean-slate Internet architectures essentially depend on a more flexible and extended representation of the Internet topology over which next-generation routing protocols may operate. Representing neighboring relationships between Internet Service Providers (ISPs) at a finer granularity promises to overcome many shortcomings of the current Internet architecture. Similarly, the contract-switching paradigm lets an ISP define itself as a set of edge-to-edge (g2g) links that connect the ingress and egress routers of its domain. Each link is represented by a contract, which defines not only neighboring relationships with other domains but also the economic (e.g., price), performance (e.g., quality-of-service parameters), and temporal (e.g., lifetime of the dedicated link) attributes attached to this g2g link. In this work, we introduce the Path-Vector Contract Routing (PVCR) protocol, which allows multi-metric, multi-hop negotiation of end-to-end inter-domain paths by leveraging path-vector-style construction on top of g2g contract definitions. Our analysis on synthetic and real-world topologies shows that PVCR has many promising properties, such as rich route diversity, end-to-end multi-domain QoS, and low control traffic. We also investigate the inter-domain traffic engineering capabilities of PVCR, which inherently considers the economics of routing in its opportunistic settings.
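The path-vector construction over g2g contracts can be pictured as loop-free path enumeration that accumulates price and QoS attributes edge by edge. The data layout and field names below are illustrative assumptions, not the PVCR message format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Contract:
    """A g2g contract: an edge-to-edge link plus its attributes
    (field names are assumptions for illustration)."""
    src: str
    dst: str
    price: float      # economic attribute
    delay_ms: float   # performance attribute

def cheapest_qos_path(contracts, src, dst, max_delay_ms):
    """Enumerate loop-free contract paths, path-vector style, pruning
    any path that exceeds the end-to-end delay bound, and keep the
    cheapest feasible one. Returns (path, price, delay) or None."""
    best = None
    stack = [(src, [], 0.0, 0.0)]  # node, path so far, total price, total delay
    while stack:
        node, path, price, delay = stack.pop()
        if node == dst:
            if best is None or price < best[1]:
                best = (path, price, delay)
            continue
        visited = {src} | {c.dst for c in path}  # loop prevention, as in path vectors
        for c in contracts:
            if c.src == node and c.dst not in visited and delay + c.delay_ms <= max_delay_ms:
                stack.append((c.dst, path + [c], price + c.price, delay + c.delay_ms))
    return best
```

Tightening the delay bound shifts the selection from a cheap multi-hop contract chain to a pricier but faster direct contract, which is the multi-metric trade-off the protocol negotiates.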

Research paper thumbnail of Multi-element Free-Space-Optical spherical structures with intermittent connectivity patterns

IEEE INFOCOM 2008 - IEEE Conference on Computer Communications Workshops, 2008

Due to its high-bandwidth spectrum, free-space-optical (FSO) communication has the potential to bridge the capacity gap between backbone fiber links and mobile ad-hoc links, especially in the last mile. Though FSO can solve the wireless capacity problem, it brings new challenges, such as frequent disruption of the physical link (intermittent connectivity) and the line-of-sight (LOS) requirement. In this paper, we study a spherical FSO structure as a basic building block and examine the effects of such FSO structures on upper layers, especially on TCP behavior for stationary and mobile nodes.

Research paper thumbnail of Maintaining a free-space-optical communication link between two autonomous mobiles

2014 IEEE Wireless Communications and Networking Conference (WCNC), 2014

Research paper thumbnail of Seven-O'clock: a new distributed GVT algorithm using network atomic operations

International Journal of Simulation and Process Modelling, 2009

In this paper, we introduce a new concept, network atomic operations (NAOs), to create a zero-cost consistent cut. Using NAOs, we define a wall-clock-time-driven GVT algorithm, called the Seven O'Clock algorithm, that is an extension of Fujimoto's shared-memory GVT algorithm. Using this new GVT algorithm, we report good optimistic parallel performance on a cluster of state-of-the-art Itanium-II quad-processor systems, as well as on a dated cluster of 40 dual Pentium III systems, for both benchmark applications such as PHOLD and real-world applications such as a large-scale TCP/Internet model. In some cases, super-linear speedup is observed. We also present a new measure for determining the optimal performance achieved in a parallel and distributed simulation when the sequential case cannot be run due to model size.

Research paper thumbnail of Large-scale network simulation techniques

ACM SIGCOMM Computer Communication Review, 2003

Simulation of large-scale networks remains a challenge, although various network simulators are in place. In this paper, we identify fundamental issues for large-scale network simulation and propose new techniques that address them. First, we exploit optimistic parallel simulation techniques to enable fast execution on inexpensive hyper-threaded multiprocessor systems. Second, we provide a compact, lightweight implementation framework that greatly reduces the amount of state required to simulate large-scale network models. Based on the proposed techniques, we provide sample simulation models for two networking protocols: TCP and OSPF. We implement these models in ROSSNet, a simulation environment that extends the previously developed optimistic simulator ROSS. We perform validation experiments for TCP and OSPF and present performance results of our techniques by simulating OSPF and TCP on a large and realistic topology, such as AT&T's US network based on Rocketfuel data. The end result of these innovations is that we are able to simulate million-node network topologies using inexpensive commercial off-the-shelf hyper-threaded multiprocessor systems consuming less than 1.4 GB of RAM in total.

Research paper thumbnail of Distributed dynamic capacity contracting: an overlay congestion pricing framework

Computer Communications, 2003

Several congestion pricing proposals have been made in the last decade. Usually, however, those proposals studied optimal strategies and did not focus on implementation issues. Our main contribution in this paper is to address implementation issues for congestion-sensitive pricing over a single differentiated-services (diff-serv) domain. We propose a new congestion-sensitive pricing framework, Distributed Dynamic Capacity Contracting (Distributed-DCC), which is able to provide a range of fairness criteria (e.g., max-min, proportional) in rate allocation by using pricing as a tool. We develop a pricing scheme within the Distributed-DCC framework and investigate several issues, such as the optimality of prices, the fairness of rate allocation, and sensitivity to parameter changes.
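A congestion-sensitive price update of the general kind described above can be sketched as a feedback loop: the per-unit price rises when link utilization exceeds a target and falls when it is below, steering elastic demand toward the target load. The update rule, constants, and demand model are illustrative assumptions, not the Distributed-DCC scheme itself:

```python
def update_price(price, utilization, target=0.9, gamma=0.1):
    """Illustrative congestion-sensitive price update (a sketch, not
    Distributed-DCC): multiplicative adjustment proportional to the
    gap between observed utilization and the target, with a floor."""
    return max(0.01, price * (1 + gamma * (utilization - target)))

def simulate(budget=100.0, capacity=50.0, steps=200):
    """Drive the price loop against a simple elastic-demand user
    (demand = budget / price, an assumed model) and return the final
    price and utilization."""
    price = 1.0
    for _ in range(steps):
        demand = budget / price
        utilization = demand / capacity
        price = update_price(price, utilization)
    return price, (budget / price) / capacity
```

In this sketch the loop settles where demand equals the target fraction of capacity; with the numbers above, utilization converges to the 0.9 target.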

Research paper thumbnail of A Platform for Large-Scale Network Performance Analysis

Performance analysis techniques are fundamental aids to the process of large-scale protocol design and network operations. There has been a tremendous explosion in the variety of tools and platforms available (e.g., ns-2, SSFNet, the Click Router toolkit, Emulab, PlanetLab). However, we still look at sample results obtained from such tools with skepticism because they are isolated (potentially random) and may not be representative of the real world. The first issue (random, isolated results) can be addressed by large-scale experiment design techniques that extract maximum information and confidence from a minimum number of carefully designed experiments. Such techniques can be used to find "good" results fast, to guide either incremental protocol design or operational parameter tuning. The second issue (representativeness) is more difficult and relates to formulating benchmarks. In this paper, we explore the former case, i.e., large-scale experiment design and black-box optimization (i.e., large-dimensional parameter state-space search). We propose a new platform, ROSS.Net, that combines large-scale network simulation with large-scale experiment design and XML interfaces to data sets (e.g., Rocketfuel, CAIDA) and models (traffic, topology, etc.). This is a step toward the broader problem of understanding meta-simulation methodology, and we speculate on how these tools could be integrated with testbeds like Emulab and PlanetLab. Examples of large-scale simulations (routing, TCP, multicast) and experiment design are presented.