Steve Fisher - Academia.edu
Papers by Steve Fisher
The value of metadata to the scientist is well known: with the right choice of metadata, data files can be selected very quickly without having to scan through huge volumes of data. The ICAT metadata catalog[1] (which is part of the ICAT project[2]) allows the scientist to store and query information about individual data files and sets of data files as well as storing provenance information. This paper explains how a generic job management system, exposed as a web portal, has been built on top of ICAT. This gives the scientist easy access to a high performance computing infrastructure without allowing the complexities of that infrastructure to impede progress. The aim was to build a job and data management portal capable of dealing with batch and interactive work that would be simple to use and that was based on tried and tested, scalable, and preferably open source technologies. For the team operating the portal, it needed to be generic and configurable enough so that they can, wi...
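To make the selection idea concrete, here is a minimal Python sketch of why metadata-based selection is fast: a query touches only small metadata records, never the data files themselves. The record layout and field names are invented for illustration; this is not the ICAT API or schema.

```python
# Minimal sketch of the idea behind a metadata catalogue such as ICAT:
# datafiles are selected by querying small metadata records rather than
# scanning the (potentially huge) files themselves. Field names here are
# illustrative assumptions, not the ICAT schema.

from dataclasses import dataclass, field

@dataclass
class Datafile:
    name: str
    location: str                      # where the actual data lives
    metadata: dict = field(default_factory=dict)

catalogue = [
    Datafile("run0421.raw", "/archive/2011/run0421.raw",
             {"instrument": "I22", "temperature_K": 295, "sample": "Fe2O3"}),
    Datafile("run0422.raw", "/archive/2011/run0422.raw",
             {"instrument": "I22", "temperature_K": 80, "sample": "Fe2O3"}),
]

def select(catalogue, **criteria):
    """Return datafiles whose metadata matches every criterion."""
    return [df for df in catalogue
            if all(df.metadata.get(k) == v for k, v in criteria.items())]

# Only the matching file locations are handed on to a processing job;
# no bulk data is read during the selection itself.
for df in select(catalogue, instrument="I22", temperature_K=80):
    print(df.location)
```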
The European Physical Journal C, 2011
A search is presented for a high mass neutral particle that decays directly to the e±µ∓ final state. The data sample was recorded by the ATLAS detector in √s = 7 TeV pp collisions at the LHC from March to June 2011 and corresponds to an integrated luminosity of 1.07 fb⁻¹. The data are found to be consistent with the Standard Model background. The high e±µ∓ mass region is used to set 95% confidence level upper limits on the production of two possible new physics processes: tau sneutrinos in an R-parity violating supersymmetric model and Z′-like vector bosons in a lepton flavor violating model.
European Physical Journal C, Dec 1, 2011
The cross section for the production of W bosons with subsequent decay W → τντ is measured with the ATLAS detector at the LHC. The analysis is based on a data sample that was recorded in 2010 at a proton–proton center-of-mass energy of √s = 7 TeV and corresponds to an integrated luminosity of 34 pb⁻¹. The cross section is measured in a region of high detector acceptance and then extrapolated to the full phase space. The product of the total W production cross section and the W → τντ branching ratio is ...
Resources available in Grid systems must be described in a precise and systematic manner if they are to be discoverable for subsequent management or use. This document presents an abstract model for Grid resources that can be used to define mappings to concrete schemas for Grid Information Services.
Proceedings of the Third ACM International Conference on Distributed Event-Based Systems - DEBS '09, 2009
We note that over the past decade publish-subscribe systems have improved the ability of users to exchange information, but we argue that these systems have been either restrictive or simplistic, relying on narrowly defined channels for data exchange. The Information Dissemination (INFOD) approach we present here introduces a flexible and dynamic framework for brokering information in publish-subscribe systems. INFOD enables communities of interest to constitute and use vocabularies to describe their interests as well as their capabilities (available information). Publishers, consumers and subscribers are real-world entities characterized in terms of vocabularies and constraints within an INFOD registry. Subscribers define subscriptions expressing events of interest at candidate publishers and constraints on the run-time event data. The registry matches entities based on subscriptions and constraints, and sets up event channels between them.
Physical Review Letters, 2005
The standard model predictions for Wγ and Zγ production are tested using an integrated luminosity of 200 pb⁻¹ of pp̄ collision data collected at the Collider Detector at Fermilab. The cross sections are measured by selecting leptonic decays of the W and Z bosons, and photons with transverse energy E_T > 7 GeV that are well separated from leptons. The production cross sections and kinematic distributions for the Wγ and Zγ data are compared to SM predictions.
The European Physical Journal C, 2011
Search for a heavy neutral particle decaying into an electron and a muon using 1 fb⁻¹ of ATLAS data
The European Physical Journal C, 2011
Measurements of underlying-event properties using neutral and charged particles in pp collisions at √s = 900 GeV and √s = 7 TeV with the ATLAS detector at the LHC (The ATLAS Collaboration). We present first measurements of charged and neutral particle-flow correlations in pp collisions using the ATLAS calorimeters. Data were collected in 2009 and 2010 at centre-of-mass energies of 900 GeV and 7 TeV. Events were selected using a minimum-bias trigger which required a charged particle in scintillation counters on either side of the interaction point. Particle flows, sensitive to the underlying event, are measured using clusters of energy in the ATLAS calorimeters, taking advantage of their fine granularity. No Monte Carlo generator used in this analysis can accurately describe the measurements. The results are independent of those based on charged particles measured by the ATLAS tracking systems and can be used to constrain the parameters of Monte Carlo generators.
In September 2003, the D0 experiment at the Tevatron launched a reprocessing effort. In total, 519,212,822 of the experiment's events were reprocessed to take advantage of an improved understanding of the detector's behavior. Out of these events 97,619,114 ...
scss.tcd.ie
Grids are complex distributed systems that provide access to computational resources in a transparent fashion. The task of collecting information about the state of the Grid itself and making it available to other Grid components for querying is called Grid ...
Lecture Notes in Computer Science, 2003
Physical Review Letters, Feb 2, 2005
The standard model predictions for Wγ and Zγ production are tested using an integrated luminosity of 200 pb⁻¹ of pp̄ collision data collected at the Collider Detector at Fermilab. The cross sections are measured by selecting leptonic decays of the W and Z bosons, and photons with transverse energy E_T > 7 GeV that are well separated from leptons. The production cross sections and kinematic distributions for the Wγ and Zγ data are compared to SM predictions.
Inclusive multi-jet production is studied using the ATLAS detector for proton-proton collisions with a center-of-mass energy of 7 TeV. The data sample corresponds to an integrated luminosity of 2.4 pb⁻¹, using the first proton-proton data collected by the ATLAS detector in 2010. Results on multi-jet cross sections are presented and compared to both leading-order plus parton-shower Monte Carlo predictions and next-to-leading-order QCD calculations.

High-energy particles produced in collisions initially pass through an inner tracking system embedded in a strong solenoidal magnetic field. The inner tracker covers a pseudorapidity |η| < 2.5 and has full coverage in azimuth. In the barrel region, it is made of three main components arranged in concentric layers, all of which are immersed in a 2 Tesla field provided by the inner solenoid magnet. Three layers of silicon pixel detectors provide a two-dimensional hit position very close to the interaction point. Silicon microstrip detectors are then used in the next four layers, providing excellent position resolution for charged particles. A transition-radiation detector is the outermost component of the tracker, with coarser position resolution than the silicon, but giving a large lever arm for track reconstruction in addition to particle identification capabilities.

The ATLAS calorimeter is also composed of many subdetectors. The electromagnetic calorimeter (|η| < 3.2) is a high-granularity sampling detector in which the sensitive medium is liquid argon (LAr). The hadronic calorimeters are divided into three sections: a tile scintillator/steel calorimeter used in both the barrel (|η| < 1.0) and extended barrel cylinders (0.8 < |η| < 1.7); a hadronic endcap covering the region 1.5 < |η| < 3.2 and consisting of LAr/copper calorimeter modules; and a forward calorimeter measuring both electromagnetic and hadronic energy in the range 3.2 < |η| < 4.9 using LAr/copper and LAr/tungsten modules. The total coverage of the ATLAS calorimeters is therefore |η| < 4.9.

A muon spectrometer is designed to identify muons and measure both their trajectories and momenta with high accuracy: the design momentum resolution is 10% at momenta transverse to the beam line (p_T) of 1 TeV. The muon spectrometer comprises three toroidal magnet systems consisting of eight coils each, with a bending power ∫B dl = 1–7.5 T·m.

The ATLAS trigger system employs three trigger levels, of which only the hardware-based first-level trigger is used in this analysis. Events are selected using the calorimeter-based jet trigger. The first-level jet trigger [4] uses coarse detector information to identify areas in the calorimeter where energy deposits above a certain threshold occur. A simplified jet-finding algorithm based on a sliding window of size ∆φ × ∆η = 0.8 × 0.8 is used to identify these areas. This algorithm uses coarse calorimeter towers with a granularity of ∆φ × ∆η = 0.2 × 0.2 as inputs.

3. Cross Section Definitions and Kinematics

The anti-kt algorithm [5, 6] with full four-momentum recombination is used to identify jets. For high-multiplicity studies, namely up to six jets in an event, the resolution parameter in the jet reconstruction is fixed to R = 0.4 to contend with the limited phase space and to reduce the impact of the underlying event [7] on the jet energy determination.
For testing NLO pQCD calculations, where the study focuses on three-jet events, a resolution parameter of R = 0.6 is preferred, since a larger value of R is less sensitive to theoretical scale uncertainties. The anti-kt algorithm was chosen because it can be implemented in the NLO pQCD calculation, is infra-red safe to all orders and produces jets with a simple geometrical shape. Jet measurements are corrected for all experimental effects and refer to the particle-level final state. At the particle level, jets are built using all final-state particles with a proper lifetime longer than 10 ps, including muons and neutrinos from hadronic decays.

Cross sections are calculated in bins of inclusive jet multiplicity, meaning that an event is recorded in a jet multiplicity bin if it contains a number of jets that is equal to or greater than that multiplicity. For example, an event with three reconstructed jets is counted in both the two-jet and three-jet multiplicity bins. Inclusive multiplicity bins are used because they are stable in the pQCD fixed-order calculation, unlike exclusive bins. Only jets with p_T ≥ 60 GeV and |y| ≤ 2.8 are counted in the measurement. These cuts are chosen to ensure that the jets are reconstructed with high efficiency. The leading jet is further required to have p_T ≥ 80 GeV to stabilize the NLO pQCD calculations [8].
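As an illustration of the inclusive counting rule and the kinematic cuts just described, the following sketch fills inclusive multiplicity bins from per-event jet lists. This is hypothetical code, not the ATLAS analysis software; the event representation is an assumption made for the example.

```python
# Illustrative sketch of inclusive jet-multiplicity counting. Each event
# is a list of jets given as (pT in GeV, rapidity y); an event with n
# selected jets contributes to every inclusive bin from 2 up to n.

from collections import Counter

def selected_jets(event, pt_min=60.0, y_max=2.8):
    """Jets passing the kinematic selection, sorted by descending pT."""
    jets = [(pt, y) for (pt, y) in event if pt >= pt_min and abs(y) <= y_max]
    return sorted(jets, reverse=True)

def fill_inclusive_bins(events, lead_pt_min=80.0, max_mult=6):
    counts = Counter()
    for event in events:
        jets = selected_jets(event)
        # the harder leading-jet cut stabilises the NLO pQCD comparison
        if len(jets) < 2 or jets[0][0] < lead_pt_min:
            continue
        for n in range(2, min(len(jets), max_mult) + 1):
            counts[n] += 1          # inclusive: counted for every bin <= n
    return counts

events = [
    [(120.0, 0.1), (75.0, -1.2), (62.0, 2.0)],   # 3 selected jets
    [(90.0, 0.5), (61.0, 1.0)],                  # 2 selected jets
    [(70.0, 0.3), (65.0, -0.4)],                 # fails leading-jet cut
]
print(fill_inclusive_bins(events))   # Counter({2: 2, 3: 1})
```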
Proceedings of the Third ACM International Conference on Distributed Event-Based Systems - DEBS '09, 2009
The increasing volume, diversity, and complexity of resources and data continue to raise challenges in the sharing and dissemination of information. Although publish-subscribe systems have improved the ability of users to exchange information, we argue that these systems have been either restrictive or simplistic, relying significantly on narrowly defined channels for data exchange. The Information Dissemination (INFOD) approach we present here introduces a flexible and dynamic framework for brokering information in publish-subscribe systems. INFOD enables communities of interest to constitute and use their own vocabularies for describing their interests as well as their capabilities (available information). Publishers, consumers and subscribers are real-world entities characterized in terms of vocabularies, and their interests as constraints, within an INFOD registry. Subscribers define subscriptions primarily as XQuery constraints expressing events of interest at candidate publishers. Subscriptions also specify constraints on the run-time data that must be disseminated to specific consumers. Entity descriptions, property constraints, and subscriptions comprise the metadata that INFOD uses to associate and link entities within a community. We refer to the process of associating entities in INFOD as mutual filtering, which we realize as a three-way join across publisher, consumer, and subscriber entities. We demonstrate and evaluate the INFOD approach in an emergency-response use case that employs INFOD to support changing event dynamics and varying publisher-consumer-subscriber requirements. We use industry-standard technologies and present system performance results for the mutual filtering steps for a variety of subscription constraints and broad classes of publisher, subscriber, and consumer property descriptions.
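A toy rendition of the mutual-filtering join may help: the registry matches publishers, consumers and subscriptions on their property constraints to decide which event channels to set up. All property names and predicates below are invented for illustration; the real system expresses these as XQuery constraints over vocabulary-based entity descriptions.

```python
# Mutual filtering as a three-way join: one channel is set up per
# (subscription, publisher, consumer) triple whose constraints all match.

publishers = [
    {"id": "pubA", "topic": "weather", "region": "north"},
    {"id": "pubB", "topic": "traffic", "region": "south"},
]
consumers = [
    {"id": "conX", "accepts": "weather"},
    {"id": "conY", "accepts": "traffic"},
]
subscriptions = [
    # each subscription constrains both the publisher and the consumer side
    {"id": "sub1",
     "pub_ok": lambda p: p["topic"] == "weather",
     "con_ok": lambda c: c["accepts"] == "weather"},
]

def mutual_filter(publishers, consumers, subscriptions):
    """Three-way join across subscriptions, publishers and consumers."""
    return [(s["id"], p["id"], c["id"])
            for s in subscriptions
            for p in publishers if s["pub_ok"](p)
            for c in consumers if s["con_ok"](c)]

print(mutual_filter(publishers, consumers, subscriptions))
# [('sub1', 'pubA', 'conX')]
```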
AHM2004, Nottingham, UK, 2004
R-GMA is a realization of the Grid Monitoring Architecture (GMA) that also exploits the power of the relational data model and the SQL query language. The biggest challenge during the development of R-GMA was to ensure that it could scale to operate reliably in a large grid. The system is being used in areas as diverse as resource discovery, job logging and bookkeeping, network monitoring and accounting. A secure, reliable and scalable web services version of R-GMA is being developed within the follow-on ...
UK e-Science All Hands …, 2005
This article describes the implementation of an accounting tool in the LHC Computing Grid (LCG): a distributed computing grid project consisting of over 100 resource centres and more than 10,000 CPUs. APEL (Accounting Processor for Event Logs) parses batch, system and gatekeeper logs generated by a site and builds accounting records, which provide a summary of the resources consumed based on attributes such as CPU time, Wall Clock Time, Memory and grid user DN. The accounting data is published into the R-GMA information and monitoring system, and archived for processing by a graphical front-end utilised by the accounting web tool.
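The core of such an accounting processor can be sketched as follows. The log format and field names are made up for the illustration and are not APEL's actual formats; real batch and gatekeeper logs are considerably messier.

```python
# Schematic of what an accounting processor does: parse job records out of
# site logs and aggregate resource usage per grid user DN.

from collections import defaultdict

log_lines = [
    "job=1 dn=/C=UK/O=eScience/CN=alice cpu_s=3600 wall_s=4000 mem_mb=512",
    "job=2 dn=/C=UK/O=eScience/CN=bob cpu_s=1800 wall_s=2100 mem_mb=256",
    "job=3 dn=/C=UK/O=eScience/CN=alice cpu_s=7200 wall_s=7500 mem_mb=1024",
]

def parse(line):
    fields = dict(kv.split("=", 1) for kv in line.split())
    return fields["dn"], int(fields["cpu_s"]), int(fields["wall_s"])

def summarise(lines):
    totals = defaultdict(lambda: {"jobs": 0, "cpu_s": 0, "wall_s": 0})
    for line in lines:
        dn, cpu, wall = parse(line)
        rec = totals[dn]
        rec["jobs"] += 1
        rec["cpu_s"] += cpu
        rec["wall_s"] += wall
    return dict(totals)

# Per-DN summary records like these are what would be published into an
# information system (R-GMA in the paper) for the accounting front-end.
for dn, rec in summarise(log_lines).items():
    print(dn, rec)
```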
We describe R-GMA (Relational Grid Monitoring Architecture), which has been developed within the European DataGrid Project as a Grid Information and Monitoring System. It is based on the GMA from GGF, which is a simple Consumer-Producer model. The special strength of this implementation comes from the power of the relational model. We offer a global view of the information as if each Virtual Organisation had one large relational database. We provide a number of different Producer types with different characteristics; for example, some support streaming of information. We also provide combined Consumer/Producers, which are able to combine information and republish it. At the heart of the system is the mediator, which for any query is able to find and connect to the best Producers for the job. We have developed components to allow a measure of inter-working between MDS and R-GMA. We have used it both for information about the grid (primarily to find out what services are available at any one time) and for application monitoring. R-GMA has been deployed in various testbeds; we describe some preliminary results and experiences of this deployment.
We describe R-GMA (Relational Grid Monitoring Architecture), which is being developed within the European DataGrid Project as a Grid Information and Monitoring System for both the grid itself and for use by applications. It is based on the GMA from GGF, which is a simple Consumer-Producer model. The special strength of this implementation comes from the power of the relational model. We offer a global view of the information as if each VO had one large relational database. We provide a number of different Producer types with different characteristics; for example, some support streaming of information. We also provide combined Consumer/Producers, which are able to combine information and republish it. At the heart of the system is the mediator, which for any query is able to find and connect to the best Producers to do the job. In addition to having some of our own sensors able to publish information, we are able to invoke MDS info-provider scripts and publish the resulting information via R-GMA.
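The mediator's role of routing a query to suitable Producers can be illustrated with a toy sketch. The registration fields and the matching rule below are simplified assumptions for the example, not the actual R-GMA design.

```python
# Toy mediator in the spirit of R-GMA: producers register the table they
# publish plus a predicate describing which rows they hold, and the
# mediator routes a consumer's query to every producer that can contribute.

producers = [
    {"id": "siteA", "table": "ServiceStatus", "predicate": {"site": "A"}},
    {"id": "siteB", "table": "ServiceStatus", "predicate": {"site": "B"}},
    {"id": "netmon", "table": "NetworkMetric", "predicate": {}},
]

def mediate(query_table, query_constraints, producers):
    """Find producers publishing the right table whose advertised rows
    are not excluded by the query's constraints."""
    matches = []
    for p in producers:
        if p["table"] != query_table:
            continue
        # a producer is skipped only if its predicate contradicts the query
        if any(p["predicate"].get(k, v) != v
               for k, v in query_constraints.items()):
            continue
        matches.append(p["id"])
    return matches

# A query like "SELECT * FROM ServiceStatus WHERE site='A'" goes to siteA
# only; an unconstrained query is answered by merging both sites.
print(mediate("ServiceStatus", {"site": "A"}, producers))   # ['siteA']
print(mediate("ServiceStatus", {}, producers))              # ['siteA', 'siteB']
```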
Lecture Notes in Computer Science, 2005
R-GMA (Relational Grid Monitoring Architecture) [1] is a grid monitoring and information system that provides a global view of data distributed across a grid system. R-GMA creates the impression of a single centralised repository of information, but in reality the information can be stored at many different locations on the grid. The Registry and Schema are key components of R-GMA. The Registry matches queries for information to data sources that provide the appropriate information. The Schema defines the tables that can be queried. Without the combined availability of these components, R-GMA ceases to operate as a useful service. This paper presents an overview of R-GMA and describes the Registry replication design and implementation. A replication algorithm for the Schema is also described.
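A much-simplified picture of Registry replication follows: every replica keeps a full copy of the registrations, and an update applied at one replica is propagated to its peers, so lookups keep working if any single replica is lost. This sketch ignores the ordering, conflict and failure handling that the real design must address, and its class and method names are invented.

```python
# Sketch of full-copy registry replication with naive update propagation.

class RegistryReplica:
    def __init__(self, name):
        self.name = name
        self.entries = {}            # table name -> set of producer ids
        self.peers = []

    def register(self, table, producer, propagate=True):
        self.entries.setdefault(table, set()).add(producer)
        if propagate:
            # push the update to every peer, without re-propagation
            for peer in self.peers:
                peer.register(table, producer, propagate=False)

    def lookup(self, table):
        return sorted(self.entries.get(table, set()))

r1, r2 = RegistryReplica("r1"), RegistryReplica("r2")
r1.peers, r2.peers = [r2], [r1]

r1.register("ServiceStatus", "siteA")    # applied at r1, pushed to r2
print(r2.lookup("ServiceStatus"))        # ['siteA'] -- survives loss of r1
```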