Alan Sussman - Academia.edu

Papers by Alan Sussman

Performance Portability for Advanced Architectures

Computing in Science & Engineering

24th AIPR Workshop on Tools and Techniques for Modeling and Simulation

24th AIPR Workshop on Tools and Techniques for Modeling and Simulation, Feb 1, 1996

High-performance image database system for remote sensing

We present the design of and performance results for an image database system for remotely sensed imagery. The system stores and serves level 1B remotely sensed data, providing users with a flexible and efficient means for specifying and obtaining image-like products on either a global or a local scale. We have developed both parallel and sequential versions of the system; the parallel version uses the CHAOS++ library, developed at the University of Maryland as part of an NSF Grand Challenge project, to support parallel object-oriented programming.

Exploration and Visualization of Very Large Datasets with the Active Data Repository

The analysis and exploration of scientific datasets is a challenging and increasingly important part of scientific research. Visualization is a powerful analysis tool that converts numerical values into an image, which can be more easily analyzed by a human. We have developed a framework, called the Active Data Repository, that is designed to provide support for applications that analyze, explore and visualize very large multi-dimensional datasets. ADR targets distributed memory parallel machines with one or more disks attached to each node. In this paper, we present implementations of ray-casting based volume rendering and iso-surface rendering methods using ADR for visualizing out-of-core datasets, and describe experimental performance results.
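
To make the chunk-at-a-time, out-of-core processing style concrete, here is a minimal sketch (not ADR's actual interface): a maximum-intensity projection computed over a disk-resident volume read one slab at a time through a NumPy memmap. The file name, shape, and slab size are assumptions for illustration.

```python
# Illustrative sketch only: out-of-core maximum-intensity projection over a
# disk-resident 3-D volume, processed one slab at a time. This is not ADR's
# API; the file name, dtype, and shape are hypothetical.
import numpy as np

def max_intensity_projection(path, shape, dtype=np.float32, slab=16):
    """Project along the z axis without loading the whole volume into memory."""
    vol = np.memmap(path, dtype=dtype, mode="r", shape=shape)  # lazy, on-disk
    nz, ny, nx = shape
    mip = np.full((ny, nx), -np.inf, dtype=dtype)
    for z0 in range(0, nz, slab):                 # iterate over z-slabs
        chunk = np.asarray(vol[z0:z0 + slab])     # read only this slab
        mip = np.maximum(mip, chunk.max(axis=0))  # reduce into the output image
    return mip

if __name__ == "__main__":
    # Create a small synthetic volume on disk so the sketch is runnable.
    shape = (64, 32, 32)
    np.random.rand(*shape).astype(np.float32).tofile("volume.raw")
    img = max_intensity_projection("volume.raw", shape)
    print(img.shape, float(img.max()))
```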

Model-driven mapping onto distributed memory parallel computers

Proceedings Supercomputing '92, 1992

Building complex coupled physical simulations on the grid with InterComm

Engineering with Computers, 2006

Active Proxy-G: Optimizing the Query Execution Process in the Grid

ACM/IEEE SC 2002 Conference (SC'02), 2002

The Grid environment facilitates collaborative work and allows many users to query and process data over geographically dispersed data repositories. Over the past several years, there has been a growing interest in developing applications that interactively analyze datasets, potentially in a collaborative setting. We describe an Active Proxy-G service that is able to cache query results, use those results for answering new incoming queries, generate subqueries for the parts of a query that cannot be produced from the cache, and submit the subqueries for final processing at application servers that store the raw datasets. We present an experimental evaluation to illustrate the effects of various design tradeoffs. We also show the benefits that two real applications gain from using the middleware.
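
The caching and subquery-generation idea can be sketched with a simplified 1-D range model; the class and method names below are invented for illustration and are not the Active Proxy-G interface.

```python
# Illustrative sketch: answer range queries from a result cache and generate
# subqueries for the uncovered parts. Names and the 1-D range model are
# assumptions for illustration, not the Active Proxy-G interface.
class RangeQueryCache:
    def __init__(self):
        self.cached = []   # list of (lo, hi, result) for half-open ranges [lo, hi)

    def insert(self, lo, hi, result):
        self.cached.append((lo, hi, result))

    def plan(self, lo, hi):
        """Return (hits, subqueries): cached pieces overlapping [lo, hi) and
        the uncovered subranges that must go to the application servers."""
        hits, uncovered = [], [(lo, hi)]
        for clo, chi, result in self.cached:
            next_uncovered = []
            for ulo, uhi in uncovered:
                olo, ohi = max(ulo, clo), min(uhi, chi)
                if olo < ohi:                       # overlap: serve from cache
                    hits.append(((olo, ohi), result))
                    if ulo < olo:
                        next_uncovered.append((ulo, olo))
                    if ohi < uhi:
                        next_uncovered.append((ohi, uhi))
                else:
                    next_uncovered.append((ulo, uhi))
            uncovered = next_uncovered
        return hits, uncovered                      # uncovered -> subqueries

if __name__ == "__main__":
    cache = RangeQueryCache()
    cache.insert(0, 50, "chunk-A")
    cache.insert(80, 120, "chunk-B")
    hits, subqueries = cache.plan(30, 100)
    print("served from cache:", hits)
    print("subqueries for servers:", subqueries)
```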

Runtime coupling of data-parallel programs

Proceedings of the 10th International Conference on Supercomputing - ICS '96, 1996

We consider the problem of efficiently coupling multiple data-parallel programs at runtime. We propose an approach that establishes a mapping between data structures in different data-parallel programs and implements a user-specified consistency model. Mappings are established at runtime, and new mappings between programs can be added and deleted while the programs are in execution. Neither the mappings nor the identity of the processors involved has to be known at compile time or even link time. Programs can be made to interact with different granularities of interaction without requiring any re-coding. A priori knowledge of data movement requirements allows for buffering of data and overlap of computations between coupled applications. Efficient data movement is achieved by pre-computing an optimized schedule. We describe our prototype implementation and evaluate its performance for a set of synthetic benchmarks that examine the variation of performance with coupling parameters. We demonstrate that the cost of the added flexibility gained by our coupling method is not prohibitive when compared with a monolithic code that does the same computation.
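
The precomputed-schedule idea can be sketched as follows, assuming simple block distributions of a shared 1-D array across two independent processor groups; this is only an illustrative sketch, not the paper's implementation.

```python
# Illustrative sketch: precompute a communication schedule between two block
# distributions of the same N-element array, one per coupled program. The
# block layout and the schedule format are assumptions for illustration.
def block_ranges(n, nprocs):
    """Half-open index range [lo, hi) owned by each process in a block layout."""
    base, extra = divmod(n, nprocs)
    ranges, lo = [], 0
    for p in range(nprocs):
        hi = lo + base + (1 if p < extra else 0)
        ranges.append((lo, hi))
        lo = hi
    return ranges

def build_schedule(n, src_procs, dst_procs):
    """List of (src_rank, dst_rank, lo, hi) messages covering the whole array."""
    schedule = []
    for s, (slo, shi) in enumerate(block_ranges(n, src_procs)):
        for d, (dlo, dhi) in enumerate(block_ranges(n, dst_procs)):
            lo, hi = max(slo, dlo), min(shi, dhi)
            if lo < hi:                    # overlapping ownership -> one message
                schedule.append((s, d, lo, hi))
    return schedule

if __name__ == "__main__":
    # 100 elements, producer runs on 4 processes, consumer on 3.
    for msg in build_schedule(100, 4, 3):
        print("src %d -> dst %d : indices [%d, %d)" % msg)
```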

Data parallel programming in an adaptive environment

Proceedings of 9th International Parallel Processing Symposium, 1995

For better utilization of computing resources, it is important to consider parallel programming environments in which the number of available processors varies at runtime. In this paper, we discuss runtime support for data parallel programming in such an adaptive environment. Executing data parallel programs in an adaptive environment requires redistributing data when the number of processors changes, and also requires determining new loop bounds and communication patterns for the new set of processors. We have developed a runtime library to provide this support. We discuss how the runtime library can be used by compilers to generate code for an adaptive environment. We also present performance results for a multiblock Navier-Stokes solver run on a network of workstations using PVM for message passing. Our experiments show that if the number of processors is not varied frequently, the cost of data redistribution is not significant compared to the time required for the actual computations.
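
As a rough illustration of the redistribution step, the sketch below assumes a block-distributed 1-D array and computes which index ranges must change owner when the processor count changes; the new loop bounds follow directly from the new block ranges. The layout is an assumption for illustration, not the library described in the paper.

```python
# Illustrative sketch: when the processor count changes from old_p to new_p,
# compute which index ranges each old owner must ship to each new owner.
# The block layout is an assumption, not the paper's runtime library.
def block_ranges(n, nprocs):
    base, extra = divmod(n, nprocs)
    out, lo = [], 0
    for p in range(nprocs):
        hi = lo + base + (1 if p < extra else 0)
        out.append((lo, hi))
        lo = hi
    return out

def redistribution_moves(n, old_p, new_p):
    """(old_rank, new_rank, lo, hi) for every block that must change owner."""
    moves = []
    for o, (olo, ohi) in enumerate(block_ranges(n, old_p)):
        for m, (nlo, nhi) in enumerate(block_ranges(n, new_p)):
            lo, hi = max(olo, nlo), min(ohi, nhi)
            if lo < hi and o != m:      # data already in place needs no message
                moves.append((o, m, lo, hi))
    return moves

if __name__ == "__main__":
    # Shrink from 6 workstations to 4: the new loop bounds are simply the new
    # block ranges, and only the listed ranges need to be communicated.
    print("new loop bounds:", block_ranges(120, 4))
    for mv in redistribution_moves(120, 6, 4):
        print("old %d -> new %d : [%d, %d)" % mv)
```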

Aarts

Proceedings of the 1st International Workshop on Adaptive Self-Tuning Computing Systems for the Exaflop Era - EXADAPT '11, 2011

We present an online lightweight auto-tuning system for shared-memory parallel programs. We employ an online adaptive tuning algorithm that is based on performance measurements, to adapt to performance variability that arises during program execution. We address the impact of synchronous vs. asynchronous interactions between the application and the tuning system, and describe an adaptive approach that benefits from the improvements
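
A minimal sketch of measurement-driven online tuning follows, assuming a thread-count knob adjusted by simple hill climbing over timed intervals; the workload and the knob are hypothetical and this is not the system described in the paper.

```python
# Illustrative sketch: online, measurement-driven tuning of a thread-count
# knob by simple hill climbing. The workload and the knob are assumptions
# for illustration, not the system described in the paper.
import time
from concurrent.futures import ThreadPoolExecutor

def work_item(n=20000):
    s = 0
    for i in range(n):
        s += i * i
    return s

def timed_interval(threads, items=64):
    """Run one interval of the parallel loop and return measured throughput."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=threads) as pool:
        list(pool.map(work_item, [20000] * items))
    return items / (time.perf_counter() - start)   # items per second

def tune(max_threads=8, intervals=10):
    threads, step = 2, 1
    best = timed_interval(threads)
    for _ in range(intervals):
        candidate = min(max(threads + step, 1), max_threads)
        rate = timed_interval(candidate)
        if rate > best:                 # keep moving in the profitable direction
            threads, best = candidate, rate
        else:                           # reverse direction when throughput drops
            step = -step
        print("threads=%d  throughput=%.1f items/s" % (threads, best))
    return threads

if __name__ == "__main__":
    print("selected thread count:", tune())
```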

Direct-dependency-based software compatibility testing

Proceedings of the Twenty-Second IEEE/ACM International Conference on Automated Software Engineering - ASE '07, 2007

Software compatibility testing is an important quality assurance task aimed at ensuring that component-based software systems build and/or execute properly across a broad range of user system configurations. Because each configuration can involve multiple components with different versions, and because there are complex and changing interdependencies between components and their versions, it is generally infeasible to test all potential configurations. Therefore, compatibility testing usually means examining only a handful of default or popular configurations to detect problems, and as a result costly errors can and do escape to the field. This paper presents an improved approach to compatibility testing called RACHET. We formally model the configuration space for component-based systems and use the model to generate test plans covering a user-specified portion of the space; the example in this paper covers all direct dependencies between components. The test plan is executed efficiently in parallel, by distributing work so as to best utilize test resources. We conducted experiments and simulation studies applying our approach to a large-scale data management middleware system. The results showed that for this system RACHET discovered incompatibilities between components at a small fraction of the cost of exhaustive testing, without compromising test quality.
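
The direct-dependency coverage criterion can be pictured as requiring every version pair of each directly dependent component pair to appear in some tested configuration. The sketch below simply enumerates those pairs; the component names, versions, and dependency edges are hypothetical.

```python
# Illustrative sketch: enumerate the (version, version) pairs that a test plan
# must cover under a direct-dependency coverage criterion. Component names,
# versions, and dependency edges are hypothetical.
from itertools import product

versions = {
    "compiler":   ["gcc-9", "gcc-12"],
    "mpi":        ["openmpi-4.0", "openmpi-5.0", "mpich-4.1"],
    "middleware": ["mw-1.0", "mw-2.0"],
}
# (A, B) means component A is built directly on top of component B.
direct_deps = [("mpi", "compiler"), ("middleware", "mpi")]

def direct_dependency_pairs(versions, direct_deps):
    """All version pairs that must each appear in at least one configuration."""
    required = []
    for a, b in direct_deps:
        required.extend(product(versions[a], versions[b]))
    return required

if __name__ == "__main__":
    pairs = direct_dependency_pairs(versions, direct_deps)
    print(len(pairs), "version pairs to cover, e.g.:")
    for p in pairs[:4]:
        print("  ", p)
```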

Interoperability of data parallel runtime libraries

Proceedings 11th International Parallel Processing Symposium, 1997

This paper describes a framework for providing the ability to use multiple specialized data parallel libraries and/or languages within a single application. The ability to use multiple libraries is required in many application areas, such as multidisciplinary complex physical simulations and remote sensing image database applications. An application can consist of one program or multiple programs that use different libraries to parallelize operations on distributed data structures. The framework is embodied in a runtime library called Meta-Chaos that has been used to exchange data between data parallel programs written using High Performance Fortran, the Chaos and Multiblock Parti libraries developed at Maryland for handling various types of unstructured problems, and the runtime library for pC++, a data parallel version of C++ from Indiana University. Experimental results show that Meta-Chaos is able to move data between libraries efficiently, and that Meta-Chaos provides effective support for complex applications.
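
One common way to picture moving data between structures distributed by different libraries is through a canonical linearization: enumerate the requested region in a library-independent order, then map each element from its owner under the source layout to its owner under the destination layout. The sketch below does this for a block and a cyclic distribution; the layouts and names are assumptions for illustration, not the Meta-Chaos API.

```python
# Illustrative sketch: copy a region between two differently distributed 1-D
# arrays via a canonical linearization. The two layouts and all names are
# assumptions for illustration, not the Meta-Chaos API.
def block_owner(i, n, nprocs):
    """Owner of global index i under a block distribution (last block may be short)."""
    chunk = -(-n // nprocs)            # ceiling division
    return i // chunk

def cyclic_owner(i, nprocs):
    """Owner of global index i under a cyclic (round-robin) distribution."""
    return i % nprocs

def copy_region(indices, n, src_procs, dst_procs):
    """For each global index, record which source process sends it and which
    destination process receives it, in canonical (linearized) order."""
    return [(i, block_owner(i, n, src_procs), cyclic_owner(i, dst_procs))
            for i in indices]

if __name__ == "__main__":
    n = 32
    region = range(10, 20)             # the array section being exchanged
    for i, src, dst in copy_region(region, n, src_procs=4, dst_procs=3):
        print("index %2d : block proc %d -> cyclic proc %d" % (i, src, dst))
```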

Comparing the Performance of High-Level Middleware Systems in Shared and Distributed Memory Parallel Environments

19th IEEE International Parallel and Distributed Processing Symposium, 2005

The use of toolkits for writing parallel and/or distributed applications has been shown to greatly enhance developers' productivity. Such an approach hides many of the complexities associated with writing these applications, compared with relying purely on programming language aids and parallel library support such as MPI or PVM. In this work, we evaluate three different middleware systems that have been used to implement a computation- and I/O-intensive data analysis application from the domain of computer vision. This study shows the benefits and overheads associated with each of the middleware systems, in different computational environments and with different workloads. Our results lead the way toward being able to make better decisions for tuning the application environment, for selecting the appropriate middleware, and also for designing more powerful middleware systems to efficiently build and run modern, highly complex applications in both parallel and distributed computing environments.

Decentralized, accurate, and low-cost network bandwidth prediction

2011 Proceedings IEEE INFOCOM, 2011

The distributed nature of modern computing makes end-to-end prediction of network bandwidth increasingly important. Our work is inspired by prior work that treats the Internet and bandwidth as an approximate tree metric space. This paper presents a decentralized, accurate, and low-cost system that predicts pairwise bandwidth between hosts. We describe an algorithm to construct a distributed tree that embeds bandwidth measurements. The correctness of the algorithm is provable when driven by precise measurements. We then describe three novel heuristics that achieve high accuracy for predicting bandwidth even with imprecise input data. Simulation experiments with a real-world dataset confirm that our approach shows high accuracy with low cost.
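
Under the tree-metric view of bandwidth, the predicted bandwidth between two hosts is the bottleneck (minimum) edge capacity on the tree path connecting them. The sketch below builds a small labeled tree and computes that path minimum; the topology and numbers are made up, and this is not the paper's distributed algorithm.

```python
# Illustrative sketch: predict pairwise bandwidth as the minimum edge capacity
# on the tree path between two hosts. The topology and capacities are made up
# for illustration; this is not the paper's distributed algorithm.
import math

# Undirected tree as an adjacency map: node -> {neighbor: edge bandwidth (Mbps)}.
tree = {
    "A":  {"r1": 900},
    "B":  {"r1": 450},
    "r1": {"A": 900, "B": 450, "r2": 100},
    "r2": {"r1": 100, "C": 800, "D": 300},
    "C":  {"r2": 800},
    "D":  {"r2": 300},
}

def predict_bandwidth(tree, src, dst, parent=None):
    """Depth-first search for dst; return the bottleneck bandwidth on the path,
    or None if dst is not reachable from this branch."""
    if src == dst:
        return math.inf
    for nbr, bw in tree[src].items():
        if nbr == parent:
            continue
        rest = predict_bandwidth(tree, nbr, dst, src)
        if rest is not None:
            return min(bw, rest)
    return None

if __name__ == "__main__":
    for a, b in [("A", "B"), ("A", "C"), ("C", "D")]:
        print("%s <-> %s : predicted %s Mbps" % (a, b, predict_bandwidth(tree, a, b)))
```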

Using participation factors to improve the consistency and accuracy of Prony analysis for voltage stability monitoring applications

The CISM code coupling strategy

Journal of Atmospheric and Solar-Terrestrial Physics, 2004

The success of the Center for Integrated Space Weather Modeling (CISM) depends on the production of an ever-improving series of comprehensive scientific models describing the solar-terrestrial environment from the solar surface to the upper atmosphere of Earth. We describe here our strategy for coupling the codes we have selected as the basis for these models, which include core global codes that address the corona, the heliosphere, Earth's magnetosphere, and the ionosphere, and codes that model important local processes such as magnetic reconnection. Coupling these codes requires four separate functions: efficient transmission of information among codes, interpolation of grid quantities, translation of physical variables between codes with differing physical models, and control mechanisms to synchronize the interaction of codes. The characteristics of these codes dictate an approach involving loosely coupled groups of independently running programs. We have selected two existing software packages, InterComm and Overture, to provide the basis of our coupling framework. By combining the strengths of these packages, we obtain the benefits of simplified coding of translation routines and inter-grid communication between distinct codes with minimal code modification. The NASA Living With a Star program shares both the scientific goals and code coupling challenges of CISM, and is equally involved in the coupling strategy and development we present.
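
Of the four coupling functions listed above, interpolation of grid quantities is the easiest to show in isolation. Below is a minimal 1-D linear-interpolation sketch for handing a boundary profile from one model's grid to another's; the grids and the variable are hypothetical, and this is not the InterComm/Overture machinery.

```python
# Illustrative sketch: hand a 1-D boundary profile from a coarse source grid to
# a finer destination grid by linear interpolation. The grids and the variable
# are hypothetical; this is not the InterComm/Overture coupling machinery.
import numpy as np

def regrid(src_x, src_values, dst_x):
    """Linearly interpolate src_values (defined at src_x) onto dst_x."""
    return np.interp(dst_x, src_x, src_values)

if __name__ == "__main__":
    # Source model: 9-point grid; destination model: 33-point grid on the same span.
    src_x = np.linspace(0.0, 1.0, 9)
    src_v = np.sin(2.0 * np.pi * src_x)          # e.g. a boundary density profile
    dst_x = np.linspace(0.0, 1.0, 33)
    dst_v = regrid(src_x, src_v, dst_x)
    print("max interpolation error vs. analytic:",
          float(np.max(np.abs(dst_v - np.sin(2.0 * np.pi * dst_x)))))
```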

Testing component compatibility in evolving configurations

Information and Software Technology, 2013

Software components are increasingly assembled from other components. Each component may further depend on others, and each may have multiple active versions. The total number of configurations (combinations of components and their versions) in use can be very large. Moreover, components are constantly being enhanced and new versions are being released. Component developers, therefore, spend considerable time and effort doing compatibility testing, that is, determining whether their components can be built correctly for all deployed configurations, both for existing active component versions and for new releases. In previous work we developed Rachet, a distributed, cache-aware mechanism to support large-scale compatibility testing of component-based software with a fixed set of component versions. In this paper, we observe that it is too expensive to perform compatibility testing from scratch each time a new version of a component is released. We thus add a new dimension to Rachet: incremental and prioritized compatibility testing. We describe algorithms to compute differences in component compatibilities between current and previous component builds, a formal test adequacy criterion based on covering the differences, and cache-aware configuration sampling and testing methods that attempt to reuse effort from previous testing sessions. Because testers are often interested in focusing test effort on newly released and modified components and their versions, we have developed a prioritization mechanism that enhances compatibility testing by examining first the configurations that test new or modified component versions, while also distributing the work over a cluster of machines. We evaluate our approach using the 5-year evolution history of a scientific middleware component. Our results show that our methods can increase performance significantly over Rachet's previous retest-all approach and also test important component compatibilities early in the overall testing process, making compatibility testing practical for evolving components.
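
The incremental step can be pictured as diffing the previously tested configurations against the configuration space implied by the new release, then prioritizing configurations that exercise changed or newly added versions. The sketch below does this with toy data; all component names, versions, and results are hypothetical.

```python
# Illustrative sketch: compute which configurations need (re)testing after a
# new component version is released, and test the ones that exercise changed
# versions first. All component names, versions, and results are hypothetical.
from itertools import product

def all_configs(versions):
    names = sorted(versions)
    for combo in product(*(versions[n] for n in names)):
        yield dict(zip(names, combo))

def incremental_plan(new_versions, old_results, changed):
    """Return configurations to test, prioritized: configurations that include
    a changed or new version come first, then any untested leftovers."""
    tested = {tuple(sorted(cfg.items())) for cfg in old_results}
    todo = [cfg for cfg in all_configs(new_versions)
            if tuple(sorted(cfg.items())) not in tested]
    todo.sort(key=lambda cfg: not any(cfg[c] in changed.get(c, ()) for c in cfg))
    return todo

if __name__ == "__main__":
    new_versions = {"compiler": ["gcc-9"], "mw": ["mw-1.0", "mw-1.1", "mw-2.0"]}
    old_results = [{"compiler": "gcc-9", "mw": "mw-1.0"},
                   {"compiler": "gcc-9", "mw": "mw-1.1"}]
    changed = {"mw": {"mw-2.0"}}                 # newly released version
    for cfg in incremental_plan(new_versions, old_results, changed):
        print(cfg)
```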

A simulation and data analysis system for large-scale, data-driven oil reservoir simulation studies

Concurrency and Computation: Practice and Experience, 2005

The main goal of oil reservoir management is to provide more efficient, cost-effective and environmentally safer production of oil from reservoirs. Numerical simulations can aid in the design and implementation of optimal production strategies. However, traditional simulation-based approaches to optimizing reservoir management are rapidly overwhelmed by data volume when large numbers of realizations are sought using detailed geologic descriptions. In this paper, we describe a software architecture to facilitate large-scale simulation studies, involving ensembles of long-running simulations and analysis of vast volumes of output data.

EOSDIS Project on High-Performance I/O Techniques

DataCutter and a client interface for the Storage Resource Broker with DataCutter services

The continuing increase in the capabilities of high performance computers and continued decreases in the cost of secondary and tertiary storage systems are making it increasingly feasible to generate and archive very large (e.g. petabyte and larger) datasets. Applications are also increasingly likely to make use of archived data obtained by different types of sensors. Such sensors include imaging devices deployed on satellites and
