The Array Control and Data Acquisition System of the Cherenkov Telescope Array
Related papers
The Monitoring, Logging, and Alarm system for the Cherenkov Telescope Array
Proceedings of 37th International Cosmic Ray Conference — PoS(ICRC2021), 2021
We present the current development of the Monitoring, Logging and Alarm subsystems in the framework of the Array Control and Data Acquisition System (ACADA) for the Cherenkov Telescope Array (CTA). The Monitoring System (MON) is the subsystem responsible for monitoring and logging the overall array (at each of the CTA sites) by acquiring monitoring and logging information from the array elements. MON enables a systematic approach to fault detection and diagnosis, supporting corrective and predictive maintenance to minimize system downtime. We present a unified tool for monitoring data items from the telescopes and other devices deployed at the CTA array sites. Data are immediately available to the operator interface for quick-look quality checks and are stored for later detailed inspection. The Array Alarm System (AAS) is the subsystem that gathers, filters, exposes, and persists alarms raised both by the ACADA processes and by the array elements supervised by the ACADA system. It collects alarms from the telescopes, the array calibration, the environmental monitoring instruments, and the ACADA systems. The AAS also creates new alarms based on the analysis and correlation of system software logs and the status of the system hardware, and provides the filtering mechanisms for all alarms. Data from the alarm system are then sent to the operator via the human-machine interface.
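The gather/filter/persist flow described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual ACADA interface: the `Alarm` and `AlarmService` names and the integer severity scale are assumptions made for the example.

```python
from dataclasses import dataclass, field
from time import time

@dataclass
class Alarm:
    source: str      # e.g. "telescope-03" or an ACADA process name (hypothetical)
    severity: int    # assumed scale: 0 = info ... 3 = critical
    message: str
    timestamp: float = field(default_factory=time)

class AlarmService:
    """Gathers alarms from many sources, filters them, and keeps a full log."""
    def __init__(self, min_severity: int = 1):
        self.min_severity = min_severity
        self.log: list[Alarm] = []       # persisted history (every alarm)
        self.active: list[Alarm] = []    # alarms exposed to the operator HMI

    def raise_alarm(self, alarm: Alarm) -> None:
        self.log.append(alarm)                   # persist everything
        if alarm.severity >= self.min_severity:  # filter low-severity noise
            self.active.append(alarm)

svc = AlarmService(min_severity=2)
svc.raise_alarm(Alarm("telescope-03", 1, "chiller temperature slightly high"))
svc.raise_alarm(Alarm("acada.mon", 3, "monitoring database unreachable"))
assert len(svc.log) == 2     # both alarms persisted
assert len(svc.active) == 1  # only the critical one reaches the operator
```

The key design point mirrored here is the separation between persistence (all alarms are logged) and exposure (only filtered alarms reach the operator interface).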
A prototype for the real-time analysis of the Cherenkov Telescope Array
SPIE Proceedings, 2014
The Cherenkov Telescope Array (CTA) observatory will be one of the biggest ground-based very-high-energy (VHE) γ-ray observatories. CTA will achieve a factor of 10 improvement in sensitivity, from some tens of GeV to beyond 100 TeV, with respect to existing telescopes. The CTA observatory will be capable of issuing alerts on variable and transient sources to maximize the scientific return. To capture these phenomena during their evolution and to communicate them effectively to the astrophysical community, speed is crucial. This requires a system with a reliable automated trigger that can issue alerts immediately upon detection of γ-ray flares. This will be accomplished by means of a Real-Time Analysis (RTA) pipeline, a key system of the CTA observatory. The latency and sensitivity requirements of the alarm system pose a challenge because of the anticipated large data rate, between 0.5 and 8 GB/s. As a consequence, substantial effort toward the optimization of high-throughput computing services is envisioned. For these reasons our working group has started the development of a prototype of the Real-Time Analysis pipeline. The main goals of this prototype are to test: (i) a set of frameworks and design patterns useful for in-memory inter-process communication between software processes; (ii) the sustainability of the foreseen CTA data rate in terms of data throughput with different hardware (e.g. accelerators) and software configurations; (iii) the reuse of non-real-time algorithms, or how much algorithms need to be simplified to comply with CTA requirements; (iv) interface issues between the different CTA systems. In this work we focus on goals (i) and (ii).
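Goal (ii), sustaining the foreseen data rate, is usually approached first with memory-bandwidth microbenchmarks. The sketch below is a hypothetical first-order check, not the prototype's actual benchmark: it times bulk in-memory copies and reports the achieved rate for comparison against the 0.5-8 GB/s target.

```python
import time

def copy_throughput_gb_s(block_mb: int = 64, repeats: int = 8) -> float:
    """Rough upper bound on single-consumer in-memory throughput:
    time repeated bulk copies of a large buffer. Any IPC scheme that
    copies data cannot exceed this rate on the same machine."""
    src = bytearray(block_mb * 1024 * 1024)
    dst = bytearray(len(src))
    t0 = time.perf_counter()
    for _ in range(repeats):
        dst[:] = src  # memcpy-like bulk copy
    elapsed = time.perf_counter() - t0
    total_gb = block_mb * repeats / 1024
    return total_gb / elapsed

rate = copy_throughput_gb_s()
print(f"single-copy memory throughput: {rate:.1f} GB/s")
```

If even this idealized copy rate falls near the target band, a real pipeline would need zero-copy or shared-memory designs rather than per-message copies.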
EPJ Web of Conferences, 2021
The Cherenkov Telescope Array (CTA) is the next-generation instrument in the very-high-energy gamma-ray astronomy domain. It will consist of tens of Cherenkov telescopes deployed in two arrays, at La Palma (Spain) and Paranal (ESO, Chile) respectively. Currently under construction, CTA will start operations around 2023 for a duration of about 30 years. During operations CTA is expected to produce about 2 PB of raw data per year, plus 5-20 PB of Monte Carlo data. The global data volume to be managed by the CTA archive, including all versions and copies, is of the order of 100 PB, with a smoothly growing profile. The associated processing needs are also very high, of the order of hundreds of millions of CPU HS06 hours per year. In order to optimize the instrument design and study its performance, during the preparatory phase (2010-2017) and the current construction phase the CTA consortium has run massive Monte Carlo productions on the EGI grid infrastructure. In order to handle these prod...
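The storage figures quoted above admit a quick back-of-envelope consistency check, using only the numbers stated in the abstract:

```python
# Figures taken from the abstract above.
raw_pb_per_year = 2
lifetime_years = 30

raw_total_pb = raw_pb_per_year * lifetime_years  # raw data alone over the lifetime
assert raw_total_pb == 60

# Adding 5-20 PB of Monte Carlo data plus versions and copies pushes the
# managed total toward the ~100 PB order of magnitude quoted for the archive.
```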
Big Data for the Real-Time Analysis of the Cherenkov Telescope Array Observatory
2019
The aim of this thesis work is to design and develop a framework supporting real-time analysis in the context of the Cherenkov Telescope Array (CTA). CTA is an international consortium comprising 1420 members from over 200 institutes in 31 countries. CTA aims to be the largest and most sensitive next-generation ground-based gamma-ray observatory, able to handle a large data volume at a high transmission rate, between 0.5 and 10 GB/s, with a nominal acquisition rate of 6 kHz. To this end, RTAlib was developed to provide a simple, high-performance API for storing or caching the data generated during the reconstruction and analysis phase. To cope with CTA's high transmission rates, RTAlib exploits multi-processing, multi-threading, transactions, and transparent access to MySQL or Redis to address different use cases. All of these fu...
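The transparent-backend idea can be sketched as follows. The class names and the in-memory stand-in are hypothetical (the real RTAlib wraps MySQL and Redis), but the abstract interface plus write batching illustrates how per-insert overhead is amortized at high rates:

```python
from abc import ABC, abstractmethod

class Backend(ABC):
    """Minimal storage interface; real backends would wrap MySQL or Redis."""
    @abstractmethod
    def insert(self, table: str, row: dict) -> None: ...
    @abstractmethod
    def count(self, table: str) -> int: ...

class InMemoryBackend(Backend):
    """Stand-in used here so the sketch runs without a database server."""
    def __init__(self):
        self.tables: dict[str, list[dict]] = {}
    def insert(self, table: str, row: dict) -> None:
        self.tables.setdefault(table, []).append(row)
    def count(self, table: str) -> int:
        return len(self.tables.get(table, []))

class RTAWriter:
    """Buffers rows and flushes them in one batch (transaction-like),
    amortizing per-insert overhead at high event rates."""
    def __init__(self, backend: Backend, batch_size: int = 100):
        self.backend, self.batch_size = backend, batch_size
        self.buffer: list[tuple[str, dict]] = []
    def write(self, table: str, row: dict) -> None:
        self.buffer.append((table, row))
        if len(self.buffer) >= self.batch_size:
            self.flush()
    def flush(self) -> None:
        for table, row in self.buffer:
            self.backend.insert(table, row)
        self.buffer.clear()

db = InMemoryBackend()
writer = RTAWriter(db, batch_size=2)
writer.write("events", {"id": 1})
writer.write("events", {"id": 2})  # second write triggers a flush
assert db.count("events") == 2
```

Because callers only see the `Backend` interface, switching between MySQL, Redis, or a cache is a configuration choice rather than a code change, which is the sense in which the access is "transparent".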
An Innovative Science Gateway for the Cherenkov Telescope Array
Journal of Grid Computing, 2015
The Cherenkov Telescope Array (CTA) is currently building the next-generation, ground-based, very-high-energy gamma-ray instrumentation. CTA is expected to collect very large datasets (on the order of petabytes) that will have to be stored, managed, and processed. This paper presents a graphical user interface, built inside a science gateway, aimed at providing CTA users with a common working framework.
The Cherenkov Telescope Array Observatory: top level use cases
Software and Cyberinfrastructure for Astronomy IV, 2016
Today the scientific community is facing increasing complexity in its scientific projects, from both a technological and a management point of view. The reason for this lies in the advance of science itself, as new experiments with unprecedented levels of accuracy, precision and coverage (in time and space) are realised. Astronomy is one of the fields of the physical sciences where strong interaction between the scientists, the instrument and the software developers is necessary to achieve the goals of any Big Science project. The Cherenkov Telescope Array (CTA) will be the largest ground-based very-high-energy gamma-ray observatory of the next decades. To achieve the full potential of the CTA Observatory, a system must be put in place that enables users to operate the telescopes productively. The software will cover all stages of the CTA system, from the preparation of observing proposals to the final data reduction, and must also fit into the overall system. Scientists, engineers, operators and others will use the system to operate the Observatory, hence they should be involved in the design process from the beginning. We have organised a workgroup and a workflow for the definition of the CTA Top Level Use Cases in the context of the Requirement Management activities of the CTA Observatory. Scientists, instrument developers and software developers are collaborating and sharing information to provide a common and general understanding of the Observatory from a functional point of view. Scientists who will use the CTA Observatory will mainly provide Science-Driven Use Cases, whereas software engineers will subsequently provide more detailed Use Cases, comments and feedback. The main purposes are to define observing modes and strategies, and to provide a framework for flowing the Use Cases and requirements down to the CTA subsystem level, checking for missing requirements against the Use Case models already developed there.
Use Cases will also provide the basis for the definition of the Acceptance Test Plan for the validation of the overall CTA system. In this contribution we present the organisation and the workflow of the Top Level Use Cases workgroup.
Quality Assurance Plan for the SCADA System of the Cherenkov Telescope Array Observatory
2020
The Cherenkov Telescope Array is the future ground-based facility for gamma-ray astronomy at very high energies. The CTA Observatory will comprise more than 100 telescopes and calibration devices that need to be centrally managed and synchronized to perform the required scientific and technical activities. The operation of the array requires a complex Supervisory Control and Data Acquisition (SCADA) system, named Array Control and Data Acquisition (ACADA), whose quality level is crucial for maximizing the efficiency of CTA operations. In this contribution we present the Quality Assurance (QA) strategy adopted by the ACADA team to fulfill the quality standards required for the creation and use of the ACADA software. We describe the QA organization and planned activities, together with the quality models and the related metrics defined to comply with the required quality standards. We describe the procedures, methods and tools which will be applied in order to guara...
Control Software for the SST-1M Small-Size Telescope prototype for the Cherenkov Telescope Array
Proceedings of 35th International Cosmic Ray Conference — PoS(ICRC2017)
The SST-1M is a 4-m Davies-Cotton atmospheric Cherenkov telescope optimized to provide gamma-ray sensitivity above a few TeV. The SST-1M is proposed as part of the Small-Size Telescope array for the Cherenkov Telescope Array (CTA); the first prototype has already been deployed. The SST-1M control software for all subsystems (active mirror control, drive system, safety system, photo-detection plane, DigiCam, CCD cameras) and for the telescope as a whole (master controller) uses the standard software design proposed for all CTA telescopes, based on the ALMA Common Software (ACS) developed to control the Atacama Large Millimeter Array (ALMA). Each subsystem is represented by a separate ACS component, which handles the communication to and the operation of the subsystem. Interfacing with the actual hardware is performed via the OPC UA communication protocol, supported either natively by dedicated industrial-standard servers (PLCs) or by separate service applications developed to wrap lower-level protocols (e.g. CAN bus, camera slow control) into OPC UA. Early operations of the telescope without the camera have already been carried out. The camera is fully assembled and is capable of performing data acquisition using an artificial light source.
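The component pattern described above (one controller per subsystem, coordinated by a master controller) can be sketched in plain Python. The classes below are hypothetical stand-ins for ACS components, with the OPC UA interactions reduced to comments:

```python
class Subsystem:
    """Stand-in for an ACS component wrapping one hardware subsystem
    (drive system, safety system, DigiCam, ...). A real component would
    open an OPC UA session to the PLC or service application here."""
    def __init__(self, name: str):
        self.name = name
        self.online = False
    def start(self) -> None:
        # real code: connect to the OPC UA server and initialize hardware
        self.online = True
    def status(self) -> str:
        return "online" if self.online else "offline"

class MasterController:
    """Top-level component that sequences the per-subsystem components,
    mirroring the master-controller role in the SST-1M design."""
    def __init__(self, subsystems):
        self.subsystems = {s.name: s for s in subsystems}
    def start_all(self) -> None:
        for s in self.subsystems.values():
            s.start()
    def report(self) -> dict:
        return {name: s.status() for name, s in self.subsystems.items()}

ctl = MasterController([Subsystem("drive"), Subsystem("safety"), Subsystem("digicam")])
ctl.start_all()
assert ctl.report() == {"drive": "online", "safety": "online", "digicam": "online"}
```

The benefit of this decomposition is that each subsystem can be developed, tested, and replaced independently, while the master controller only depends on the common component interface.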