Stephan Bohacek - Academia.edu
Papers by Stephan Bohacek
Third International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt'05)
Most standard implementations of TCP perform poorly when packets are reordered. In this paper, we propose a new version of TCP that maintains high throughput when reordering occurs and yet, when packet reordering does not occur, is friendly to other versions of TCP. The proposed TCP variant, TCP-PR, does not rely on duplicate acknowledgments to detect a packet loss. Instead, timers are maintained to keep track of how long ago a packet was transmitted. If the corresponding acknowledgment has not yet arrived and the elapsed time since the packet was sent is larger than a given threshold, the packet is assumed lost. Because TCP-PR does not rely on duplicate acknowledgments, packet reordering (including out-of-order acknowledgments) has no effect on TCP-PR's performance. Through extensive simulations, we show that TCP-PR performs consistently better than existing mechanisms that try to make TCP more robust to packet reordering. In the case that packets are not reordered, we ve...
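The timer-based loss detection described above can be sketched as follows. The class name, method names, and the fixed threshold are illustrative assumptions, not the paper's actual algorithm or parameters:

```python
class TimerLossDetector:
    """Sketch of timer-based loss detection in the spirit of TCP-PR:
    a packet is presumed lost only when its timer expires, never on
    duplicate acknowledgments (all names here are illustrative)."""

    def __init__(self, threshold):
        self.threshold = threshold  # max elapsed time before a packet is presumed lost
        self.in_flight = {}         # sequence number -> transmission timestamp

    def on_send(self, seq, now):
        self.in_flight[seq] = now

    def on_ack(self, seq):
        # Acks may arrive in any order; reordering has no effect on loss detection.
        self.in_flight.pop(seq, None)

    def check_losses(self, now):
        """Return sequence numbers whose elapsed time exceeds the threshold."""
        lost = sorted(s for s, t in self.in_flight.items() if now - t > self.threshold)
        for s in lost:
            del self.in_flight[s]
        return lost
```

Out-of-order acknowledgments simply clear their timers; only an expired timer triggers a loss inference, which is why reordering leaves the detector unaffected.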
Traffic properties relevant to Active Queue Management (AQM) and the resulting pole/zero and Bode limitations are examined. ARX models of TCP are derived from data collected in a fixed "dumbbell" simulation environment. The pole dynamics at short time scales can be traced to the RTT, which serves as a confirmation of the relevance of the models. It is shown that the weighted sensitivity function is a reasonable metric for AQM performance. However, H∞ design shows that the sensitivity can only be slightly reduced. The conclusion of this work is that the goal of smoothing TCP's oscillations (a principal design goal of AQM) will likely not be met in general.
In multihop wireless networks, the variability of channels results in some paths providing better performance than other paths. While it is well known that some paths are better than others, a significant number of routing protocols do not focus on utilizing optimal paths. However, cooperative diversity, an area of recent interest, provides techniques to efficiently exploit path and channel diversity. This paper examines the potential performance improvements offered by path diversity. Three settings are examined, namely, where the path loss and channel correlation are neglected, where path loss is considered, but channel correlation is neglected, and where path loss and channel correlation are both accounted for. It is shown that by exploiting path diversity, dramatic improvements in the considered route metric may be achieved. Furthermore, in some settings, if the link statistics are held constant, then when path diversity is exploited, the route metric improves with path length. ...
In this paper we present a general hybrid systems modeling framework to describe the flow of traffic in communication networks. To characterize network behavior, these models use averaging to continuously approximate discrete variables such as congestion window and queue size. Because averaging occurs over short time intervals, one still models discrete events such as the occurrence of a drop and the consequent reaction (e.g., congestion control). The proposed hybrid systems modeling framework fills the gap between packet-level and fluid-based models: by averaging discrete variables over a very short time scale (on the order of a round-trip time), our models are able to capture the dynamics of transient phenomena fairly accurately. This provides significant flexibility in modeling various congestion control mechanisms, different queuing policies, multicast transmission, etc. We validate our hybrid modeling methodology by comparing simulations of the hybrid models against packet-leve...
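The averaging idea can be illustrated with a minimal sketch for a single AIMD flow: the congestion window becomes a continuous state, while drops remain discrete events. The Euler step and all parameters are illustrative assumptions, not the paper's models:

```python
def simulate_hybrid_aimd(duration, rtt, drop_times, dt=0.01):
    """Sketch of a hybrid model: the congestion window is averaged into a
    continuous state (additive increase of 1 packet per RTT), while drops
    remain discrete events that halve it (multiplicative decrease).
    Parameters and the Euler step are illustrative assumptions."""
    cwnd = 1.0
    trace = [(0.0, cwnd)]
    pending_drops = sorted(drop_times)
    t = 0.0
    while t < duration:
        # Discrete event: a drop halves the averaged window.
        while pending_drops and pending_drops[0] <= t:
            pending_drops.pop(0)
            cwnd = max(cwnd / 2.0, 1.0)
        # Continuous dynamics: grow 1 packet per round-trip time.
        cwnd += dt / rtt
        t += dt
        trace.append((t, cwnd))
    return trace
```

The continuous ramp between drops is the averaged (fluid-like) part; the halving at each drop time is the discrete part that pure fluid models smooth away.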
2006 4th International Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks
MILCOM 2005 - 2005 IEEE Military Communications Conference
An optimal approach to mitigation of flooding denial of service attacks is presented. The objective is to minimize the effect of the mitigation while protecting the server. The approach relies on routers filtering enough packets so that the server is not overwhelmed, while ensuring that as little filtering is performed as possible. The optimal solution is to filter packets at routers through which the "attack packets" are passing. The identification of which routers the packets are passing through is carried out by routers filtering a small but time-varying fraction of the packets. The arrival of packets at the server is correlated with router filtering, providing an indication of which routers the attack packets are passing through. Once sufficient confidence in the identification is achieved, the routers that forward more attack packets filter more packets than routers that forward fewer attack packets.
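The correlation idea can be sketched in a toy simulation. The traffic volumes, the filtering range, and the correlation threshold below are all illustrative assumptions, not values from the paper:

```python
import random

def identify_attack_routers(routers, attack_routers, rounds=200, seed=1):
    """Toy sketch of correlation-based identification: each router filters a
    random, time-varying fraction of its packets, and the server's arrival
    count is correlated against each router's filtering fraction. Routers
    carrying heavy (attack) traffic show a strong negative correlation.
    Loads, the filtering range, and the -0.5 threshold are illustrative."""
    rng = random.Random(seed)
    fractions = {r: [] for r in routers}
    arrivals = []
    for _ in range(rounds):
        total = 0.0
        for r in routers:
            f = rng.random() * 0.5                        # filter up to 50% this round
            fractions[r].append(f)
            load = 100.0 if r in attack_routers else 1.0  # attack vs. benign volume
            total += load * (1.0 - f)
        arrivals.append(total)

    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # A strongly negative correlation flags a router as carrying attack traffic.
    return {r for r in routers if corr(fractions[r], arrivals) < -0.5}
```

Because only the attack-carrying router's filtering fraction materially moves the server's arrival count, its correlation stands out even though every router filters some packets.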
Capacity optimization by optimizing transmission schedules of wireless networks has been an active area of research for at least 20 years. The challenge is that the space over which the optimization is performed is exponential in the number of links in the network. For example, in the simple SISO case where no power control is used and only one bit-rate is available, the optimization must be performed over a space of size 2^L, where L is the number of links in the network. Thus, the optimization cannot be performed for even moderate-sized networks of a few tens of links. This abstract discusses recent advances that allow capacity maximization in realistic mesh networks. With these techniques, the maximum capacity of a 500-link network can be determined in approximately 6 minutes on a 2.8 GHz PC. This represents a dramatic improvement over the techniques of [1] that can only be applied to networks with fewer than 16 links. With tractable schedule optimization, it is possible to consider optimal...
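The exponential blow-up can be seen directly by brute-force enumeration over all 2^L link subsets. The pairwise conflict model below is an illustrative simplification, not the interference model used in the paper:

```python
from itertools import combinations

def feasible_schedules(links, conflicts):
    """Brute-force enumeration of all 2^L transmission sets, keeping only
    those with no pairwise conflict. This is exactly the exponential search
    that becomes intractable beyond a few tens of links; the pairwise
    interference model is an illustrative simplification (no power control,
    one bit-rate)."""
    feasible = []
    for r in range(len(links) + 1):
        for subset in combinations(links, r):
            ok = all((a, b) not in conflicts and (b, a) not in conflicts
                     for i, a in enumerate(subset) for b in subset[i + 1:])
            if ok:
                feasible.append(frozenset(subset))
    return feasible
```

With L = 3 links the loop visits 2^3 = 8 subsets; at L = 500 the same loop would visit 2^500 subsets, which is why tractable techniques like those the abstract describes are needed.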
Most standard implementations of TCP perform poorly when packets are reordered. In this paper, we propose a new version of TCP that maintains high throughput when reordering occurs and yet, when packet reordering does not occur, is friendly to other versions of TCP. The proposed TCP variant, TCP-PR, does not rely on duplicate acknowledgments to detect a packet loss. Instead, timers are maintained to keep track of how long ago a packet was transmitted. If the corresponding acknowledgment has not yet arrived and the elapsed time since the packet was sent is larger than a given threshold, the packet is assumed lost. Because TCP-PR does not rely on duplicate acknowledgments, packet reordering (including out-of-order acknowledgments) has no effect on TCP-PR's performance. Through extensive simulations, we show that TCP-PR performs consistently better than existing mechanisms that try to make TCP more robust to packet reordering. In the case that packets are not reordered...
In this paper a mobility model of people in urban areas for mobile wireless network simulation is presented. A three-layer hierarchical approach is taken, where the highest layer is an activity model that determines the high-level activity that the node is performing (e.g., working). The second level is a task model that models the specific task within an activity (e.g., meeting with three people). The third level is an agent model that determines how the person moves from one location to another. These three models are based on a number of surveys and data sources. The activity model is based on a recent US Department of Labor Bureau of Labor Statistics time use study. Such time use studies gather detailed information about how the interviewees spent their time. The task model mostly focuses on mobility of office workers and is based on the current findings from research on meetings analysis. The agent model is based on the work from urban planning that has collected extensive knowl...
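The three-layer hierarchy can be sketched as a chain of placeholder models. The activities, tasks, weights, and locations below are made up for illustration and are not the survey-derived distributions used in the paper:

```python
import random

def next_destination(rng):
    """Illustrative three-layer mobility sketch: activity -> task -> movement
    target. All choices and weights are placeholders, not the survey data."""
    # Layer 1 (activity model): the high-level activity the node performs.
    activity = rng.choices(["working", "at_home", "errands"], weights=[8, 10, 3])[0]
    # Layer 2 (task model): a specific task within the chosen activity.
    tasks = {"working": ["desk_work", "meeting"],
             "at_home": ["rest"],
             "errands": ["shop"]}
    task = rng.choice(tasks[activity])
    # Layer 3 (agent model): how the task maps to a destination to move toward.
    locations = {"desk_work": "office", "meeting": "conference_room",
                 "rest": "home", "shop": "store"}
    return activity, task, locations[task]
```

Each layer conditions the one below it, so replacing the placeholder tables with survey-derived distributions changes the behavior without changing the structure.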
Interference and collisions greatly limit the throughput of mesh networks that use contention-based MAC protocols such as 802.11. Significantly higher throughput is achievable if transmissions are scheduled. However, traditional methods to compute optimal schedules are computationally intractable (unless co-channel interference is neglected). This paper presents a tractable technique to compute optimal schedules and routing in multihop wireless networks. The resulting algorithm consists of three layers of optimization. The innermost optimization computes an estimate of the capacity. This optimization is a linear or nonlinear optimization with linear constraints. The middle iteration uses the Lagrange multipliers from the inner iteration to modify the space over which the inner optimization is performed. This is a graph-theoretic optimization known as the maximum weighted independent set problem. The outermost optimization uses the Lagrange multipliers from the innermost optimizati...
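The middle layer's subproblem, maximum weighted independent set, can be illustrated with a simple greedy heuristic. This is an illustrative stand-in, not the solver used in the paper:

```python
def greedy_mwis(weights, edges):
    """Greedy heuristic sketch for the maximum weighted independent set
    subproblem arising in the middle optimization layer: repeatedly take
    the heaviest remaining node and discard its neighbors. Illustrative
    only; not the paper's actual method."""
    adjacency = {v: set() for v in weights}
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    chosen, remaining = set(), set(weights)
    while remaining:
        v = max(remaining, key=lambda u: weights[u])  # heaviest node still allowed
        chosen.add(v)
        remaining -= {v} | adjacency[v]               # drop it and its neighbors
    return chosen
```

In the scheduling context, nodes are links, edges are interference conflicts, and weights come from the Lagrange multipliers of the inner capacity optimization.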
A model-based approach to bandwidth pricing is developed. The focus is not on how much an ISP should sell bandwidth for, but rather how much bandwidth a video service provider (VSP) will need to use beyond the bandwidth provided via best-effort. An algorithm is presented where the VSP sells the ability to transmit a movie. It is assumed that the end-user pays the VSP for this ability at the beginning of the download, whereas the VSP pays the ISP for the bandwidth at the end of the download. Hence, the VSP must predict how much bandwidth will be required. There has been extensive research focused on how QoS guarantees can be accommodated in data networks [1], [2]. Typically, these approaches call for a single network to accommodate several classes of QoS. The idea behind this multitiered approach is that if users pay a premium they will be granted better service. While this research has reached advanced stages, there has been less work focusing on how these QoS guarantees should bes...
The theory of random graphs is applied to study the impact of the degree distribution on the spread of email worms. First, the idea of the email address book graph is introduced. It is shown that the structure of this graph plays a critical role in the propagation of worms. Second, convincing evidence is provided that the email address book graph has a heavy-tailed degree distribution. The implications of this degree distribution are then investigated. It is shown that this distribution implies that there are some nodes with very high degree. Indeed, the degree is so high that a small fraction of nodes have enough degree that these nodes could be connected to every node in the graph. It is further shown that these high-degree nodes are connected. The result is that these high-degree nodes form a highly connected core. This core appears if the degree distribution is heavy-tailed and the nodes are connected without any special preference. However, it is shown that graphs such as the IP ...
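The heavy-tail effect can be illustrated by sampling a Pareto-like degree sequence. The exponent, minimum degree, and seed are illustrative parameters, not values fitted to address-book data:

```python
import random

def sample_heavy_tailed_degrees(n, alpha, dmin=1, seed=42):
    """Sample a Pareto-like degree sequence via inverse-CDF sampling,
    P(D >= d) ~ (dmin / d)^alpha. Illustrates the abstract's point: with a
    heavy tail, a few nodes receive degrees comparable to the network size
    while the typical degree stays tiny. Parameters are illustrative."""
    rng = random.Random(seed)
    return [int(dmin / (rng.random() ** (1.0 / alpha))) for _ in range(n)]
```

With a tail exponent near 1, the largest sampled degree is typically orders of magnitude above the median; that disparity between a few enormous hubs and the low-degree majority is what gives rise to the highly connected core.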
IEEE INFOCOM 2003. Twenty-second Annual Joint Conference of the IEEE Computer and Communications Societies (IEEE Cat. No.03CH37428), 2003
Proceedings of the 2004 American Control Conference, 2004