Eric Van Hensbergen - Academia.edu

Papers by Eric Van Hensbergen

Private delegated computations using strong isolation

IEEE Transactions on Emerging Topics in Computing

Brasil

Method and apparatus for scheduling in a non-uniform compute device

OSTI OAI (U.S. Department of Energy Office of Scientific and Technical Information), Feb 4, 2020

Cache-based communication between execution threads of a data processing system

OSTI OAI (U.S. Department of Energy Office of Scientific and Technical Information), Nov 12, 2019

IBM Research Report Dynamic Policy Disk Caching for Storage Networking

Distributed Seamless Desktop Extension

XCPU2 is the evolution of our XCPU process management system which allows the users to compose the environment of the remote cluster nodes to match that of their desktop workstation. This creates the illusion of cluster computation resources being a seamless extension of their desktop interface, facilitating cluster acceleration of workflows which can be composed and visualized on the end-user desktop. XCPU2 allows programs running on the cluster to use the same versions of the libraries and tools the user installed locally on their desktop, and access support files such as configuration and data in the familiar places in which they are located on the end-user's workstation. XCPU2 builds on our earlier work with the XCPU process management system. Like XCPU, XCPU2's interface is represented as a set of files exported by a 9P file server. It supports heterogeneous clusters and multiple head nodes. Unlike XCPU, it uses a pull instead of push model for distributed applications, binaries, and data. In this paper we describe the XCPU2 clustering model, its operation and how the per-job filesystem configuration can be used to solve some of the common problems when running a cluster.
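
As a rough sketch of the file-based control model described above (not the actual XCPU2 interface; the mount point /mnt/xcpu, the clone file, and the command verb are assumptions made for illustration), driving a 9P-exported process service reduces to ordinary file I/O once the remote server is mounted:

```c
/* Sketch: driving a hypothetical 9P-exported process-management
 * service through ordinary file I/O.  The mount point, file names,
 * and command verb below are illustrative assumptions, not the
 * real XCPU2 layout. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Assume the remote node's file server is already mounted
     * (e.g. via v9fs) at /mnt/xcpu. */
    int ctl = open("/mnt/xcpu/clone", O_RDWR);
    if (ctl < 0) {
        perror("open clone");
        return 1;
    }

    char session[32];
    ssize_t n = read(ctl, session, sizeof(session) - 1); /* new session id */
    if (n <= 0) {
        perror("read session id");
        close(ctl);
        return 1;
    }
    session[n] = '\0';
    printf("session %s\n", session);

    /* Request execution of a binary; the "exec" verb is made up. */
    const char *cmd = "exec /bin/date\n";
    if (write(ctl, cmd, strlen(cmd)) < 0)
        perror("write exec");

    close(ctl);
    return 0;
}
```

The appeal of this style is that any tool able to open, read, and write files can script the cluster, with no dedicated client library required.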

Cluster Computing with OpenHPC

OpenHPC is a newly formed, community-based project that is providing an integrated collection of HPC-centric software components that can be used to implement a full-featured reference HPC compute resource. Components span the entire HPC software ecosystem including provisioning and system administration tools, resource management, I/O services, development tools, numerical libraries, and performance analysis tools. Common clustering tools and scientific libraries are distributed as pre-built and validated binaries and are meant to seamlessly layer on top of existing Linux distributions. The architecture of OpenHPC is intentionally modular to allow end users to pick and choose from the provided components, as well as to foster a community of open contribution. This paper presents an overview of the underlying community vision, governance structure, packaging conventions, build and release infrastructure and validation methodologies.

IBM Research Report HARE: Final Report

IBM Research Report Revisiting Link-Layer Storage Networking

Empowering Users and Applications with Private Namespaces

Private name spaces were first introduced into Linux during the 2.5 kernel series. Their use has been limited due to name space manipulation being considered a privileged operation. Giving users and applications the ability to create private name spaces as well as the ability to mount and bind resources is the key to unlocking the full potential of this technology. There are serious performance, security and stability issues involved with user-controlled dynamic private name spaces in Linux. This paper proposes mechanisms and policies for maintaining system integrity while unlocking the power of dynamic name spaces for normal users. It discusses relevant potential applications of this technology including its use with Filesystem in Userspace [24], V9FS [8] (the Linux port of the Plan 9 resource sharing protocol) and Plan 9 from User Space [4] (the Plan 9 application suite including user space synthetic file servers ported to UNIX variants).
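
To make the mechanism concrete, here is a minimal sketch (not code from the paper) of a process building its own private name space on a current Linux kernel, where combining CLONE_NEWUSER with CLONE_NEWNS lets an unprivileged user perform the kind of bind mounts the paper argues for; the paths are illustrative:

```c
/* Minimal sketch: a process creating a private mount name space and
 * bind-mounting an alternate directory into it.  On current kernels
 * an unprivileged user needs CLONE_NEWUSER in addition to CLONE_NEWNS;
 * the paths used here are assumptions for illustration. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mount.h>
#include <unistd.h>

int main(void)
{
    /* New user + mount name space: mount changes made below are
     * invisible to other processes on the system. */
    if (unshare(CLONE_NEWUSER | CLONE_NEWNS) != 0) {
        perror("unshare");
        return EXIT_FAILURE;
    }

    /* Keep our mounts private so they do not propagate back. */
    if (mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL) != 0) {
        perror("mount MS_PRIVATE");
        return EXIT_FAILURE;
    }

    /* Bind an alternate tool tree over /usr/local, only for this
     * process and its children. */
    if (mount("/home/user/tools", "/usr/local", NULL, MS_BIND, NULL) != 0) {
        perror("bind mount");
        return EXIT_FAILURE;
    }

    /* Programs exec'd from here see the private view. */
    execlp("ls", "ls", "/usr/local", (char *)NULL);
    perror("execlp");
    return EXIT_FAILURE;
}
```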

SC12 Poster V5

Basic Resource Aggregation System Infrastructure Layer

Brasil is a self-contained service which can be deployed across a cluster to provide a dataflow workload distribution and communication aggregation mechanism. Together with our dataflow shell, named PUSH, it is intended to be used for the management of non-...

Service delivery using profile based management

Why Elasticity Matters

RC23694 (W0508-091) August 16, 2005 Computer Science IBM Research Report

This document re-evaluates the software stack in the light of para-virtualization technology and hypervisor support within next generation processors and operating systems. We describe an infrastructure enabling the use of logical partitions (LPARs) for the execution of stand-alone applications alongside traditional operating systems. The design goal is to provide an environment allowing normal users to execute, interact and manage these custom kernels in much the same way they would with typical applications. The development environment is a set of modular component libraries providing necessary system services, and a familiar debug environment provided by exposing partition memory and control interfaces to a "controller" partition. We describe the implementation of our prototype using the IBM research hypervisor along with the Linux kernel and explore potential applications which could benefit from this new environment.

Service Oriented File Systems

Service Oriented Architectures (SOAs) are a loose coupling of network services providing methods for systems development and integration. Interoperability between different systems and programming languages is provided via communication protocols and well defined messages. The recent development trend has been to favor RESTful approaches for these interfaces, which encode relevant context and semantic metadata into the URL of an HTTP GET or PUT operation. We observe that this approach is essentially a simplified web-instantiation of synthetic file system based service interfaces, such as those originally pioneered by UNIX and later the Plan 9 and Inferno operating systems. In this paper we advocate the collapse of the software stack by abstracting the underlying transport and naming details, and accessing RESTful services via standard file system interfaces. We explore the research challenges and opportunities presented by taking such an approach to building comprehensive dynamic di...
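
As a hypothetical illustration of the equivalence the abstract draws (the service, mount point, and resource path below are invented for the example), the same query can be expressed as a REST URL or as a read of a synthetic file once the service is mounted into the name space:

```c
/* Sketch contrasting the two access styles described above.
 * Everything here is illustrative: the host, mount point, and
 * resource path are assumptions, not a real service. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* RESTful style: the client encodes context in a URL and speaks
     * HTTP, e.g.  GET http://weather.example.com/forecast/boston
     * (normally issued through an HTTP library, omitted here). */

    /* File system style: the same service, exported as a synthetic
     * file system and mounted at /n/weather, is read like any file. */
    char buf[4096];
    int fd = open("/n/weather/forecast/boston", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        fputs(buf, stdout);
    }
    close(fd);
    return 0;
}
```

Once the service appears in the file name space, shells, scripts, and existing applications can consume it without carrying their own HTTP stack.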

SMARTER: Experiences with Cloud Native on the Edge

The decreasing cost and power consumption of intelligent, interconnected, and interactive devices at the edge of the internet are creating massive opportunities to instrument our cities, factories, farms, and environment to improve efficiency, safety and productivity. Developing, debugging, deploying and securing software for the estimated trillion connected devices present substantial challenges. As part of the SMARTER (Secure Municipal, Agricultural, Rural, and Telco Edge Research) project, Arm has been exploring the use of cloud-native technology and methodologies in edge environments to evaluate their effectiveness at addressing these problems at scale.

ARM HPC Ecosystem and the Reemergence of Vectors

Proceedings of the Computing Frontiers Conference, 2017

Providing a cloud network infrastructure on a supercomputer

Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing - HPDC '10, 2010

Statistically regulating program behavior via mainstream computing

Proceedings of the 8th Annual IEEE/ACM International Symposium on Code Generation and Optimization - CGO '10, 2010
