Gabriele Pierantoni | Trinity College Dublin
Papers by Gabriele Pierantoni
Heliophysics is the study of highly energetic events that originate on the Sun and propagate through the solar system. Such events can cause critical, and possibly fatal, disruption of the electromagnetic systems on spacecraft and on ground-based structures such as electric power grids, so there is a clear need to understand the events in their totality as they propagate through space and time. This poses a fascinating eScience challenge, since the data is gathered by many observatories and communities that have hitherto not needed to work together. We describe how we are developing an eScience infrastructure to make the discovery and analysis of such complex events possible for the communities of heliophysics. The new systematic and data-centric science that will develop from this will be a child of both the space and information ages.
Future Generation Computer Systems, 2013
Science Gateways for Distributed Computing Infrastructures, 2014
2014 6th International Workshop on Science Gateways, 2014
ABSTRACT Heliophysics is a relatively new branch of physics that investigates the relationship between the Sun and the other bodies of the solar system. To investigate such relationships, heliophysicists can rely on various tools developed by the community. Some of these tools are on-line catalogues that list events (such as Coronal Mass Ejections, CMEs) and their characteristics as they were observed on the surface of the Sun or on the other bodies of the solar system. Other tools offer on-line data analysis and access to images and data catalogues. During their research, heliophysicists often perform investigations that need to coordinate several of these services and to repeat these complex operations until the phenomena under investigation are fully analyzed. Heliophysicists combine the results of these services; such service orchestration is well suited to workflows. This approach has been investigated in the HELIO project, which developed the infrastructure for a Virtual Observatory for Heliophysics and implemented service orchestration using TAVERNA workflows. HELIO developed a set of workflows that proved useful but lacked flexibility and re-usability. The TAVERNA workflows also needed to be executed directly in the TAVERNA workbench, which forced all users to learn how to use the workbench. Within the SCI-BUS and ER-FLOW projects, we have started an effort to re-think and re-design the heliophysics workflows with the aim of fostering re-usability and ease of use. We base our approach on two key concepts: meta-workflows and workflow interoperability. We have divided the produced workflows into three layers. The first layer, Basic Workflows, is developed in both the TAVERNA and WS-PGRADE languages. These are building blocks that users compose to address their scientific challenges; they implement well-defined Use Cases that usually involve only one service. The second layer, Science Workflows, is usually developed in TAVERNA. They implement Science Cases (the definition of a scientific challenge) by composing different Basic Workflows. The third and last layer, Iterative Science Workflows, is developed in WS-PGRADE. It executes sub-workflows (either Basic or Science Workflows) as parameter sweep jobs to investigate Science Cases on large multiple data sets. So far, this approach has proven fruitful for three Science Cases, of which one has been completed and two are still being tested.
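The layering described in the abstract can be sketched in a few lines. This is a minimal illustrative sketch, not the HELIO or WS-PGRADE implementation: the function names, the event-record fields, and the catalogue/image services are all invented, and the "parameter sweep" is simplified to a plain loop over event identifiers.

```python
# Hypothetical sketch of the three workflow layers. Basic Workflows wrap
# single services; a Science Workflow composes them; an Iterative Science
# Workflow sweeps the Science Workflow over many events. All names and
# data shapes are illustrative assumptions, not the HELIO API.

def query_catalogue(event_id):
    # Basic Workflow: one call to an (imaginary) event catalogue service.
    return {"event": event_id, "speed": 400 + 10 * event_id}

def fetch_images(event_id):
    # Basic Workflow: one call to an (imaginary) image archive service.
    return [f"image_{event_id}_{i}.fits" for i in range(2)]

def science_workflow(event_id):
    # Science Workflow: composes Basic Workflows for one Science Case.
    record = query_catalogue(event_id)
    record["images"] = fetch_images(event_id)
    return record

def iterative_science_workflow(event_ids):
    # Iterative Science Workflow: runs the Science Workflow as a
    # parameter sweep over a whole list of events.
    return [science_workflow(e) for e in event_ids]

results = iterative_science_workflow([1, 2, 3])
print(len(results), results[0]["speed"])  # 3 410
```

In the real systems the sweep would be dispatched by the WS-PGRADE engine to distributed resources rather than run in a local loop, but the composition pattern is the same.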
2014 6th International Workshop on Science Gateways, 2014
ABSTRACT With IWSG 2014 we featured the sixth edition of the workshop series IWSG (International Workshop on Science Gateways). IWSG 2014 attracted over 30 participants from nine different countries and 25 submissions, consisting of 17 full papers and eight abstracts. Accepted full papers were given 25-minute presentations and abstracts 15-minute lightning talks. While the 15 accepted full papers are published in these proceedings, the abstracts will be published on the website. Additionally, the workshop featured two high-profile keynotes and an outstanding panel discussion on "Science gateways in the cloud era" led by research leaders from academia and industry. The topics of the proceedings include fundamental enhancements to general features of science gateways, such as extended development of science gateway security, novel developments to support science gateway developers, new approaches in workflow management, and use cases from diverse research communities.
Lecture Notes in Computer Science, 2007
Sixth International Symposium on Parallel and Distributed Computing (ISPDC'07), 2007
International Symposium on Parallel and Distributed Computing, 2007
Workshop on Applied Parallel Computing, 2006
One of the definitions of economy, "the administration of the concerns and resources of any community or establishment with a view to orderly conduct and productiveness" [2], appears almost identical to the definition of the resource-allocation problem in grid computing. From an economic perspective, the grid created an economy in the very moment that it enabled the exchange or sharing of resources between different owners. This consideration led us to envisage a very high-level resource brokerage architecture based on grid agents capable of implementing different social and economic interactions in grids.
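One economic interaction such grid agents could implement is a simple price-based allocation. The toy sketch below is an assumption-laden illustration, not the paper's architecture: provider agents, per-CPU prices, and a broker that awards a job to the cheapest sufficient bid are all invented here to make the idea concrete.

```python
# Toy illustration of one possible economic interaction among grid
# agents: providers bid a price for a resource request and a broker
# picks the cheapest feasible bid. All names and the pricing model
# are hypothetical.

class ProviderAgent:
    def __init__(self, name, cpus, price_per_cpu):
        self.name = name
        self.cpus = cpus
        self.price = price_per_cpu

    def bid(self, cpus_needed):
        # An agent bids only when it can satisfy the whole request.
        if cpus_needed <= self.cpus:
            return (cpus_needed * self.price, self.name)
        return None

def broker(agents, cpus_needed):
    # Collect feasible bids and award the job to the cheapest one.
    bids = [b for a in agents if (b := a.bid(cpus_needed)) is not None]
    return min(bids) if bids else None

agents = [ProviderAgent("siteA", 8, 2.0),
          ProviderAgent("siteB", 16, 1.5),
          ProviderAgent("siteC", 4, 0.5)]
print(broker(agents, 8))  # (12.0, 'siteB')
```

Other interaction styles (auctions, bartering, social trust) would replace the `broker` policy while keeping the same agent interface; that separation is what makes an agent-based brokerage architecture flexible.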
Computer Communications and Networks, 2011
Workshop on Applied Parallel Computing, 2006
Existing data management solutions fail to adequately support data management needs at the inter-grid (interoperability) level. We describe a possible solution, a transparent grid filesystem, and consider in detail a challenging use case.
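The core idea of a transparent grid filesystem can be sketched as a thin facade that maps one logical namespace onto per-grid storage back-ends, so a user never needs to know which grid holds a file. The sketch below is a minimal assumption: the mount prefixes, the in-memory back-end, and the longest-prefix resolution rule are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch of a transparent grid filesystem facade: one
# logical path namespace, resolved by longest mount-prefix match onto
# whichever grid's storage back-end owns that subtree. The back-end
# here is a plain dict standing in for a real grid storage service.

class LocalBackend:
    def __init__(self):
        self.store = {}

    def read(self, path):
        return self.store[path]

    def write(self, path, data):
        self.store[path] = data

class TransparentFS:
    def __init__(self):
        self.mounts = {}  # mount prefix -> back-end for that grid

    def mount(self, prefix, backend):
        self.mounts[prefix] = backend

    def _resolve(self, path):
        # Longest-prefix match selects the owning grid's back-end.
        for prefix in sorted(self.mounts, key=len, reverse=True):
            if path.startswith(prefix):
                return self.mounts[prefix], path[len(prefix):]
        raise FileNotFoundError(path)

    def write(self, path, data):
        backend, rel = self._resolve(path)
        backend.write(rel, data)

    def read(self, path):
        backend, rel = self._resolve(path)
        return backend.read(rel)

fs = TransparentFS()
fs.mount("/gridA/", LocalBackend())
fs.mount("/gridB/", LocalBackend())
fs.write("/gridA/data/run1.dat", b"payload")
print(fs.read("/gridA/data/run1.dat"))  # b'payload'
```

A real inter-grid filesystem would also have to handle authentication, replication, and failure of remote back-ends; the facade pattern above only captures the namespace-transparency aspect.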
Journal of Grid Computing, 2010
This paper proposes an architecture for the back-end of a federated national datastore for use by academic research communities, developed by the e-INIS (Irish National e-InfraStructure) project, and describes in detail one member of the federation, the regional datastore at Trinity College Dublin. It builds upon existing infrastructure and services, including Grid-Ireland, the National Grid Initiative and EGEE, Europe's leading