Rick McGeer - Academia.edu
Papers by Rick McGeer
2018 IEEE/ACM Symposium on Edge Computing (SEC)
IEEE INFOCOM 2019 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), 2019
We describe the deployment of an Internet measurement experiment to three testbeds that offer Linux containers hosted at widely distributed vantage points: the well-established PlanetLab Central and PlanetLab Europe platforms, and the new EdgeNet platform. The experiment results were published in the proceedings of ACM IMC 2018. We compare the capabilities of each testbed and their effect on the ease of deployment of the experiment. Because the software for this experiment has several library dependencies and requires a recent compiler, it was easiest to deploy on EdgeNet, which is based on Docker and Kubernetes. This extended abstract is accompanied by a demonstration of the reproducible deployment of a measurement tool on EdgeNet.
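Deployment on a Docker/Kubernetes platform such as EdgeNet reduces to declaring a container image and letting the orchestrator place it. As a minimal sketch, the snippet below builds a Kubernetes `apps/v1` Deployment manifest for a containerized measurement tool as a plain dictionary (serializable to YAML or JSON); the tool and image names are hypothetical, not the experiment's actual artifacts.

```python
# Sketch: a minimal Kubernetes Deployment manifest for a containerized
# measurement tool. The name and image below are illustrative placeholders.

def measurement_deployment(name, image, replicas=1):
    """Build an apps/v1 Deployment manifest as a plain dict."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},  # must match the pod template labels
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = measurement_deployment("probe-tool", "example/probe-tool:latest")
print(manifest["kind"])
```

Serialized to YAML, such a manifest is what `kubectl apply -f` would consume; the point is that the library dependencies and compiler version travel inside the image rather than being installed per vantage point.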
Proceedings of the 4th International Workshop on Edge Systems, Analytics and Networking, 2021
This report documents the program and the outcomes of Dagstuhl Seminar 15072 “Distributed Cloud Computing”. A distributed cloud connecting multiple, geographically distributed and smaller datacenters, can be an attractive alternative to today’s massive, centralized datacenters. A distributed cloud can reduce communication overheads, costs, and latencies by offering nearby computation and storage resources. Better data locality can also improve privacy. In this seminar, we revisit the vision of distributed cloud computing, and identify different use cases as well as research challenges.
Abstract—We propose an adaptive software routing overlay to improve the performance of TCP/IP-based internets over links with dynamic impairments. The routing overlay adaptively distributes traffic optimally over a set of alternative paths based on real-time link status information provided by a system of network sensors. The routing overlay also provides explicit rate feedback to rate-aware TCP clients, allowing them to achieve higher throughputs than legacy clients under static link loss and delay impairments. We present performance results from a collection of Emulab experiments on simple network topologies with dynamic link impairments. The results show that the adaptive routing overlay achieves significantly higher bulk file transfer throughput than legacy systems in the presence of link impairments. Sensor-based adaptive routing provides substantial gains in TCP throughput even for legacy TCP clients. When rate-aware TCP clients are used, the performance gain of the ad...
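The core of sensor-driven path selection can be sketched with the well-known Mathis model for steady-state TCP throughput, rate ≈ (MSS/RTT) · (C/√loss): score each candidate path by its predicted throughput and route over the best one. This is an illustration of the selection principle under stated assumptions, not the paper's actual overlay algorithm; the path names and measurements are invented.

```python
import math

# Score each candidate path with the Mathis TCP throughput model and pick
# the best. Sensor readings (RTT seconds, loss rate) are illustrative.

def mathis_throughput(mss_bytes, rtt_s, loss_rate, c=math.sqrt(3.0 / 2.0)):
    """Approximate steady-state TCP throughput in bytes/sec."""
    return (mss_bytes / rtt_s) * (c / math.sqrt(loss_rate))

def best_path(paths, mss=1460):
    """paths: {name: (rtt_seconds, loss_rate)} as reported by network sensors."""
    return max(paths, key=lambda p: mathis_throughput(mss, *paths[p]))

sensors = {
    "direct":  (0.120, 0.08),   # 120 ms RTT, 8% loss: an impaired link
    "overlay": (0.180, 0.005),  # longer path, but only 0.5% loss
}
print(best_path(sensors))
```

Note how sharply loss dominates: a path with 50% more RTT but 16x less loss wins by a wide margin, which is why routing around lossy links pays off even for legacy TCP clients.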
Most switch vendors have launched "open" platform designs for routers and switches, allowing code from customers or third-party vendors to run on their proprietary hardware. An open platform needs a programming interface, to provide switchlets sufficient access to platform features without exposing too much detail. We discuss the design of an abstraction layer and API designed to support portability between vendor platforms, isolation between switchlets and both the platform and other switchlets, high performance, and programming simplicity. The API would also support resource-management abstractions; for example, to allow policy-based allocation of TCAM entries among multiple switchlets.
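Policy-based allocation of a fixed resource such as TCAM entries can be sketched as weighted proportional sharing: each switchlet declares a policy weight and the platform divides the table accordingly. This is a toy illustration of the kind of abstraction the API might expose, under assumed semantics; the switchlet names and policy are invented.

```python
# Sketch of a policy-based TCAM allocator: entries are divided among
# switchlets in proportion to declared weights, with integer remainders
# going to the highest-weight switchlets. Names and weights are illustrative.

def allocate_tcam(total_entries, weights):
    """weights: {switchlet: weight}. Returns {switchlet: entry_count}."""
    total_w = sum(weights.values())
    alloc = {s: (total_entries * w) // total_w for s, w in weights.items()}
    leftover = total_entries - sum(alloc.values())
    # Hand out remainder entries to the heaviest-weighted switchlets first.
    for s in sorted(weights, key=weights.get, reverse=True)[:leftover]:
        alloc[s] += 1
    return alloc

print(allocate_tcam(1024, {"firewall": 3, "lb": 2, "monitor": 1}))
```

A real platform would layer admission control and revocation on top, but the isolation property is already visible: no switchlet can exceed its policy share, regardless of what the others request.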
ArXiv, 2019
The third Global Experimentation for Future Internet (GEFI 2018) workshop was held October 25-26, 2018 in Tokyo, Japan, hosted by the University of Tokyo. A total of forty-four participants attended, representing Belgium, Brazil, China, Denmark, France, Ireland, Japan, the Republic of Korea, and the United States. The workshop employed a mixed format of presentations and open group discussions to advance multi-national coordination and interoperation of research infrastructure for advanced networking and computer science research. Major topic areas included: softwareization and virtualization of radios and networks; testbed support for networking experiments; EdgeNet; a federated testbed of elastic optical networks; and reproducibility in experimentation. Workshop goals included both the formulation of specific new research collaborations and strategies for coordination and interoperation of research testbeds. Workshop outcomes include a variety of new and ongoing collaborative effo...
2019 IEEE 27th International Conference on Network Protocols (ICNP), 2019
EdgeNet is the prototype of a scalable, sustainable general-purpose testbed for very wide area distributed systems and extremely low-latency distributed services. In this, it is aimed at the same experimenters and systems that formed the core usage of previous, highly-successful wide-area testbeds such as PlanetLab [1], G-Lab [2], V-Node [5], GENI [4], and SAVI [3], and it incorporates many of the features that characterized those previous testbeds. EdgeNet’s goal is to achieve the usability and research value of the previous generations of wide area testbed, whilst offering radical improvements in the scalability and sustainability of those systems. It achieves this scalability and sustainability through a strategy of using industry-standard open-source software as the basis of its software stack, and by a strategy of hardware-free, bottom-up, site-driven deployment. EdgeNet follows the Seattle [6] and PlanetIgnite [9] strategy of permitting sites to join the testbed with purely loc...
2016 28th International Teletraffic Congress (ITC 28), 2016
In this paper we describe LiveTalk, a framework for Collaborative Browser-based Replicated-Computation applications. LiveTalk permits multiple users separated across the wide area to interact with separate copies of a single application, sharing a single virtual workspace, using very little network bandwidth. LiveTalk features an integrated, browser-based programming environment with native graphics and live evaluation, an integrated, pluggable web server, and a simple messaging service that serves to coordinate activity on shared application sessions, and provides for multiple, mutually-isolated sessions. The first use case for LiveTalk is collaborative big-data visualization running on thin-client devices such as cellular phones, tablets, and netbooks. These applications form part of a new class of application where the distributed Cloud is leveraged to provide low latency, and high-bandwidth access to geographically disparate users while maintaining the feel of immediacy associated with local computation. The primary motivation of this work is to permit low latency, collaborative applications to be built quickly and easily, while requiring no setup for use by the end-user.
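The session-scoped messaging service described above can be sketched as a tiny broker: clients join a named session, and a message published to a session fans out only to that session's subscribers, which is what keeps concurrent sessions mutually isolated. The class and method names below are illustrative, not LiveTalk's actual API.

```python
from collections import defaultdict

# Toy session-isolated pub/sub broker: publishing to one session never
# reaches subscribers of another. API names are illustrative.

class SessionBroker:
    def __init__(self):
        self._sessions = defaultdict(list)   # session name -> subscriber callbacks

    def join(self, session, callback):
        self._sessions[session].append(callback)

    def publish(self, session, message):
        for cb in self._sessions[session]:   # fan-out within this session only
            cb(message)

broker = SessionBroker()
seen_a, seen_b = [], []
broker.join("viz-a", seen_a.append)
broker.join("viz-b", seen_b.append)
broker.publish("viz-a", {"op": "zoom", "level": 3})
print(seen_a, seen_b)
```

In a replicated-computation design, only such small coordination messages cross the network; each browser applies them to its own copy of the application, which is why the bandwidth cost stays low.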
ACM SIGMETRICS Performance Evaluation Review, 2015
Proceedings of the second ACM SIGCOMM workshop on Hot topics in software defined networking, 2013
In this paper, we describe a new protocol for the safe update of OpenFlow networks. This protocol meets the packet consistency and weak flow consistency conditions, requires neither on-switch resources nor the diversion of packets to refuges during updates, and falls into the family of Trace-based update protocols. The key feature of this protocol is a sequence of per-switch rule updates. We derive a logic circuit for the update sequence, such that there exists a consistency-preserving update for the switch network if and only if the circuit is satisfiable subject to unsatisfiability of invariant violations; further, each satisfying minterm of the circuit yields a consistency-preserving update sequence.
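The underlying search problem is: find an order of per-switch rule updates such that a consistency invariant holds after every intermediate network state. The paper encodes this as circuit satisfiability; as a sketch of the same condition, the toy code below brute-forces update orders on a two-switch example. The topology and invariant are invented for illustration.

```python
from itertools import permutations

# Find an ordering of per-switch updates whose every prefix satisfies the
# invariant, or report that none exists. Brute force stands in for the
# paper's satisfiability encoding; the example network is illustrative.

def safe_sequence(switches, invariant, initial):
    """initial: {switch: "old"}. Returns a safe update order, or None."""
    for order in permutations(switches):
        state = dict(initial)
        ok = True
        for sw in order:
            state[sw] = "new"          # apply this switch's rule update
            if not invariant(state):   # check the intermediate network state
                ok = False
                break
        if ok:
            return order
    return None  # no consistency-preserving sequence exists

# Toy invariant: "core" may only run the new rules once "edge" already does.
inv = lambda s: not (s["core"] == "new" and s["edge"] == "old")
print(safe_sequence(["core", "edge"], inv, {"core": "old", "edge": "old"}))
```

The brute force is exponential in the number of switches, which is exactly why the paper reduces the problem to a circuit and hands it to a SAT engine: each satisfying minterm then decodes to a safe sequence directly.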
Lecture Notes in Computer Science, 2012
Formal verification has seen much success in several domains of hardware and software design. For example, in hardware verification there has been much work in the verification of microprocessors (e.g. [1]) and memory systems (e.g. [2]). Similarly, software verification has seen success in device-drivers (e.g. [3]) and concurrent software (e.g. [4]). The area of network verification, which consists of both hardware and software components, has received relatively less attention. Traditionally, the focus in this domain has been on performance and security, with less emphasis on functional correctness. However, increasing complexity is resulting in increasing functional failures and thus prompting interest in verification of key correctness properties. This paper reviews the formal verification techniques that have been used here thus far, with the goal of understanding the characteristics of the problem domain that are helpful for each of the techniques, as well as those that pose specific challenges. Finally, it highlights some interesting research challenges that need to be addressed in this important emerging domain.
ACM SIGCOMM Computer Communication Review, 2015
A distributed cloud connecting multiple, geographically distributed and smaller datacenters, can be an attractive alternative to today's massive, centralized datacenters. A distributed cloud can reduce communication overheads, costs, and latencies by offering nearby computation and storage resources. Better data locality can also improve privacy. In this paper, we revisit the vision of distributed cloud computing, and identify different use cases as well as research challenges. This article is based on the Dagstuhl Seminar on Distributed Cloud Computing, which took place in February 2015 at Schloss Dagstuhl.
2014 26th International Teletraffic Congress (ITC), 2014
We describe the GENI Experiment Engine, a Distributed-Platform-as-a-Service facility designed to be implemented on a distributed testbed or infrastructure. The GEE is intended to provide rapid and convenient access to a distributed infrastructure for simple, easy-to-configure experiments and applications. Specifically, the design goal of the GEE is to permit experimenters and application writers to: (a) allocate a GEE slicelet; (b) deploy a simple experiment or application; (c) run the experiment; (d) collect the results; and (e) tear down the experiment, starting from scratch, within five minutes. The GEE consists of four cooperating services over the GENI infrastructure, which, together with pre-allocated slicelets and a pre-allocated network, offer a complete, ready-to-use, sliceable platform over the GENI infrastructure.
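The five-step lifecycle (a)-(e) above has a natural shape as a context manager, which guarantees that teardown runs even if the experiment fails midway. The `GEEClient` class below is a hypothetical stub used only to show that shape; it is not the GEE's actual client API.

```python
from contextlib import contextmanager

# Hypothetical stub client recording which lifecycle steps ran, in order.
class GEEClient:
    def __init__(self):
        self.log = []
    def allocate_slicelet(self):        self.log.append("allocate"); return "slicelet-1"
    def deploy(self, slicelet, bundle): self.log.append("deploy")
    def run(self, slicelet):            self.log.append("run")
    def collect(self, slicelet):        self.log.append("collect"); return {"ok": True}
    def teardown(self, slicelet):       self.log.append("teardown")

@contextmanager
def experiment(client, bundle):
    slicelet = client.allocate_slicelet()   # (a)
    try:
        client.deploy(slicelet, bundle)     # (b)
        client.run(slicelet)                # (c)
        yield client.collect(slicelet)      # (d)
    finally:
        client.teardown(slicelet)           # (e) always runs, even on failure

gee = GEEClient()
with experiment(gee, "hello-world.tar.gz") as results:
    pass
print(gee.log)
```

The five-minute design goal is precisely about making this loop cheap enough to repeat from scratch on every iteration of an experiment.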
ACM SIGOPS Operating Systems Review, 2009
TCP/IP is known to have poor performance under conditions of moderate to high packet loss (5%-20%) and end-to-end latency (20-200 ms). The CHART system, under development by HP and its partners under contract to the US Defense Advanced Research Projects Agency, is a careful re-engineering of Internet Layer 3 and Layer 4 protocols to improve TCP/IP performance in these cases. The CHART system has just completed the second phase of a three-phase, 42-month development cycle. The goal for the 42-month program was a 10x improvement in the performance of TCP/IP under conditions of loss and delay. In independent tests for DARPA at Science Applications International Corporation, the CHART System demonstrated a 20x performance improvement over TCP/IP, exceeding the goals for the program by a factor of two. Fairness to legacy TCP and UDP flows was further demonstrated in DARPA testing. We describe the CHART System as a set of five interacting services and protocol improvements which act togeth...
MILCOM 2006, 2006
CHART seeks to improve the performance of operational DoD internets through the introduction of an intelligent network overlay. TCP performance, particularly between CONUS and forward-deployed components located in combat theaters, can be severely degraded due to high loss rates and long latencies. The lack of current information about network conditions in the core NIPRnet/SIPRnet further compounds the problem, because end hosts lack the data required to make intelligent routing decisions. Deploying CHART's enhanced control plane improves measurement and monitoring of unreliable communication links to provide current network state information to routers implemented in both software and hardware, enabling intelligent routing around faulty links. We describe the design of software and hardware routers sharing a common network 'sensing' infrastructure, the implementation of end-to-end Quality of Service via flow state aware routers, and a new network-aware TCP/IP stack for Linux end systems. Performance test results demonstrate that bulk file transfer throughput can be increased by as much as an order of magnitude in networks with severely impaired communication links.
2008 5th International Conference on Broadband Communications, Networks and Systems, 2008
Proceedings of the 6th ACM/IEEE Symposium on Architectures for Networking and Communications Systems - ANCS '10, 2010
Keywords: FPGAs, modular programming, Click. Reconfigurable network hardware makes it easier to experiment with and prototype high-speed networking systems. However, these devices are still relatively hard to program; for example, the NetFPGA requires users to develop in Verilog. Further, these devices are commonly designed to work with software on a host computer, requiring the co-development of these hardware and software components. We address this situation with Chimpp, a development environment for reconfigurable network hardware, modeled on the popular Click software modular router system. Chimpp employs a modular approach to designing hardware-based packet-processing systems, featuring a simple configuration language similar to that of Click. We demonstrate this development environment by targeting the NetFPGA platform. Chimpp can be combined with Click itself at the software layer for a highly modular, mixed hardware and software design framework. We also enable the integrated simulation of the hardware and software components of a network device together with other network devices using the OMNeT++ network simulator. In contrast to some prior work, Chimpp focuses on making experimentation easy, rather than on optimizing hardware performance. Chimpp also avoids unnecessary restrictions on communication patterns and design styles such as were imposed by prior approaches. We describe our design and implementation of Chimpp, and provide initial evaluations showing how Chimpp makes it easy to implement, simulate, and modify a variety of packet-processing systems on the NetFPGA platform.
Fifth International Conference on Creating, Connecting and Collaborating through Computing (C5 '07), 2007
Controlling access to a large distributed service is a potentially error-prone process that may negatively impact request throughput and usability. Our Authorization-Based Access Control (or ABAC) URL rewriter solves this problem by providing locally verifiable authorizations and delegation tracking compatible with common web tools. Our access control mechanism is reusable, distributed and meets the scaling requirements of large distributed services. We demonstrate the successful operation of our proposed mechanism on HP's real-time network monitoring and measurement web service, S3.
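A locally verifiable authorization embedded in a URL can be sketched with an HMAC: the issuer appends a keyed tag over the path and the granted rights, and any service holding the shared key can check the grant without a round trip to the issuer. This is a generic illustration of the idea under assumed conventions, not the paper's exact ABAC encoding; the URL layout and key handling are invented.

```python
import hashlib
import hmac

# Locally verifiable URL authorization via HMAC. The shared key would be
# distributed to services out of band; the URL scheme is illustrative.
KEY = b"shared-secret"

def authorize(path, grant):
    """Issuer side: embed the grant and its HMAC tag in the URL."""
    tag = hmac.new(KEY, f"{path}|{grant}".encode(), hashlib.sha256).hexdigest()
    return f"{path}?grant={grant}&sig={tag}"

def verify(path, grant, sig):
    """Service side: recompute the tag locally; no call to the issuer needed."""
    expected = hmac.new(KEY, f"{path}|{grant}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)   # constant-time comparison

url = authorize("/measurements/latency", "read-only")
path, query = url.split("?")
params = dict(kv.split("=") for kv in query.split("&"))
print(verify(path, params["grant"], params["sig"]))
```

Because verification is a local computation, the access check adds no network latency to the request path, which is what preserves throughput at scale; delegation would add further signed links to the chain.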
Proceedings of the 2012 workshop on Cloud services, federation, and the 8th open cirrus summit, 2012
In this paper, we argue that federation of cloud systems requires a standard API for users to create, manage, and destroy virtual objects, and a standard naming scheme for virtual objects. We introduce an existing API for this purpose, the Slice-Based Federation Architecture, and demonstrate that it can be implemented on a number of existing cloud management systems. We introduce a simple naming scheme for virtual objects, and discuss its implementation.
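The argued-for federation surface reduces to two pieces: a uniform create/manage/destroy API over virtual objects, and a globally unique name for each object. As a minimal sketch, the code below pairs such an API with a hierarchical authority/slice/object name rendered urn-style; the class, method names, and name format are illustrative assumptions, not the SFA's actual specification.

```python
# Sketch of a federation aggregate: uniform lifecycle operations over
# virtual objects, each addressed by a hierarchical name. The API and the
# urn-style name format below are illustrative, not the SFA specification.

class Aggregate:
    def __init__(self, authority):
        self.authority = authority
        self._objects = {}

    def name(self, slice_id, obj_id):
        """Hierarchical, globally unique name: authority + slice + object."""
        return f"urn:{self.authority}+{slice_id}+{obj_id}"

    def create(self, slice_id, obj_id, spec):
        urn = self.name(slice_id, obj_id)
        self._objects[urn] = dict(spec, status="provisioned")
        return urn

    def status(self, urn):
        return self._objects[urn]["status"]

    def destroy(self, urn):
        del self._objects[urn]

agg = Aggregate("example-cloud")
urn = agg.create("demo-slice", "vm0", {"image": "debian"})
print(urn, agg.status(urn))
```

The point of the uniform surface is that a federation layer can drive many heterogeneous cloud managers through the same three verbs, dispatching on the authority component of the name.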