Integration of the Trigger and Data Acquisition systems in ATLAS

The ATLAS Data Acquisition and High Level Trigger system

This paper describes the data acquisition and high level trigger system of the ATLAS experiment at the Large Hadron Collider at CERN, as deployed during Run 1. Data flow as well as control, configuration and monitoring aspects are addressed. An overview of the functionality of the system and of its performance is presented and design choices are discussed.

Recent updates of the Control and Configuration of the ATLAS Trigger and Data Acquisition System

The ATLAS experiment at the Large Hadron Collider at CERN relies on a complex and highly distributed Trigger and Data Acquisition (TDAQ) system [3] to gather and select particle collision data at unprecedented energies and rates. The Control and Configuration (CC) system is responsible for all the software required to configure and control ATLAS data taking. This ranges from high-level applications, such as the graphical user interfaces and the desktops used within the ATLAS control room, to low-level packages, such as access, process and resource management. Currently the CC system is required to supervise more than 30000 processes running on more than 2000 computers. At these scales, issues such as access, process and resource management, distribution of and access to configuration data, run control, diagnostics and especially error recovery become critical to guaranteeing high availability of the TDAQ system and minimizing the dead time of the experiment. It is during data-taking activities that the CC system has shown its strength and maturity, scaling well with the ever-increasing number of software processes in the TDAQ system and implementing several automatic error recovery procedures for complex and sophisticated scenarios. This paper gives an overview of the new functionalities and recent upgrades of several CC system components, with special emphasis on speed and reliability improvements and on optimization of the user experience during operations.
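
The supervision and recovery behaviour described above can be illustrated with a minimal sketch: a run controller drives many managed processes through a shared state machine and retries failed transitions. All class names, states and the retry policy below are hypothetical stand-ins for illustration, not the actual TDAQ Run Control software.

    # Hypothetical sketch of a supervision loop: a controller drives many
    # managed processes through a common state machine and retries any
    # transition that fails, mimicking automatic error recovery.
    import enum
    import random


    class State(enum.Enum):
        NONE = 0
        CONFIGURED = 1
        RUNNING = 2


    class ManagedProcess:
        """One supervised application; transitions may fail and need recovery."""

        def __init__(self, name):
            self.name = name
            self.state = State.NONE

        def configure(self):
            self.state = State.CONFIGURED

        def start(self):
            # Simulate an occasional failure the controller must recover from.
            if random.random() < 0.05:
                raise RuntimeError(f"{self.name} failed to start")
            self.state = State.RUNNING


    class RunController:
        """Drives all processes through the same transition, retrying on error."""

        def __init__(self, processes, max_retries=3):
            self.processes = processes
            self.max_retries = max_retries

        def transition(self, action):
            for proc in self.processes:
                for attempt in range(1, self.max_retries + 1):
                    try:
                        getattr(proc, action)()
                        break
                    except RuntimeError as err:
                        print(f"recovery: {err} (attempt {attempt})")
                else:
                    raise RuntimeError(f"{proc.name}: giving up after retries")


    if __name__ == "__main__":
        farm = [ManagedProcess(f"app-{i:05d}") for i in range(100)]
        rc = RunController(farm)
        rc.transition("configure")
        rc.transition("start")
        print(sum(p.state is State.RUNNING for p in farm), "processes running")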

Configuration and control of the ATLAS trigger and data acquisition

Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 2010

ATLAS is the largest of the experiments studying high-energy particle interactions at the Large Hadron Collider (LHC). This paper describes the evolution of the Controls and Configuration system of the ATLAS Trigger and Data Acquisition (TDAQ) from the Technical Design Report (TDR) in 2003 to the first events taken at CERN with circulating beams in autumn 2008. The present functionality and performance, and the lessons learned during the development, are outlined. We also highlight some of the challenges that still have to be met by 2010, when the full-scale trigger farm will be deployed.

Performance of the ATLAS Trigger System in 2010

2012

Proton–proton collisions at √s = 7 TeV and heavy-ion collisions at √s_NN = 2.76 TeV were produced by the LHC and recorded using the ATLAS experiment's trigger system in 2010. The LHC is designed with a maximum bunch-crossing rate of 40 MHz, and the ATLAS trigger system is designed to record approximately 200 of these per second.
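
The two rates quoted above fix the overall rejection factor the trigger must deliver; a one-line check:

    # Back-of-the-envelope check of the rate reduction quoted above: from the
    # 40 MHz design bunch-crossing rate down to roughly 200 recorded events
    # per second, an overall rejection factor of about 2e5 is required.
    bunch_crossing_rate_hz = 40e6   # LHC design bunch-crossing rate
    recorded_rate_hz = 200.0        # approximate ATLAS output rate in 2010

    rejection_factor = bunch_crossing_rate_hz / recorded_rate_hz
    print(f"required overall rejection factor: {rejection_factor:.0f}")  # 200000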

An overview of the ATLAS high-level trigger dataflow and supervision

IEEE Transactions on Nuclear Science, 2000

The ATLAS high-level trigger (HLT) system provides software-based event selection after the initial LVL1 hardware trigger. It is composed of two stages, the LVL2 trigger and the event filter (EF). The LVL2 trigger performs event selection with optimized algorithms using selected data guided by Region of Interest pointers provided by the LVL1 trigger. Events selected by LVL2 are built into complete events, which are passed to the EF for a further stage of event selection and classification using off-line algorithms. Events surviving the EF selection are passed to off-line storage. The two stages of the HLT are implemented on processor farms. The concept of distributing the selection process between LVL2 and the EF is a key element of the architecture, which allows it to adapt to changes (luminosity, detector knowledge, background conditions, etc.). Although there are some differences in the requirements between these subsystems, there are many commonalities. An overview of the dataflow (event selection) and supervision (control, configuration, monitoring) activities in the HLT is given, highlighting where commonalities between the two subsystems can be exploited and indicating where requirements dictate that implementations differ. An HLT prototype system has been built at CERN, and functional testing is being carried out to validate the HLT architecture.
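
A minimal sketch of the two-stage selection described above, assuming hypothetical event and RoI types and invented thresholds (this is not the ATLAS HLT software): LVL2 decides from RoI data only, and the Event Filter runs a fuller selection on the built event.

    # Illustrative two-stage selection: a fast LVL2 decision restricted to
    # Region-of-Interest data, followed by an offline-style Event Filter
    # decision on the fully built event. Types and cuts are hypothetical.
    from dataclasses import dataclass, field


    @dataclass
    class RegionOfInterest:
        eta: float
        phi: float
        et_gev: float          # transverse-energy estimate from LVL1


    @dataclass
    class Event:
        rois: list = field(default_factory=list)
        full_data: dict = field(default_factory=dict)


    def lvl2_accept(event, et_cut=20.0):
        """Fast decision using only RoI data (no full-event access)."""
        return any(roi.et_gev > et_cut for roi in event.rois)


    def event_filter_accept(event, refined_cut=25.0):
        """Slower, offline-style decision on the fully built event."""
        return event.full_data.get("refined_et_gev", 0.0) > refined_cut


    def hlt_decision(event):
        if not lvl2_accept(event):
            return False            # rejected cheaply, event never fully built
        # ... event building would happen here before the EF stage ...
        return event_filter_accept(event)


    if __name__ == "__main__":
        evt = Event(rois=[RegionOfInterest(0.4, 1.2, 32.0)],
                    full_data={"refined_et_gev": 28.5})
        print("accepted" if hlt_decision(evt) else "rejected")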

The baseline dataflow system of the ATLAS trigger and DAQ

2003

In this paper the baseline design of the ATLAS High Level Trigger and Data Acquisition system is reviewed with respect to its DataFlow aspects, as presented in the recently submitted ATLAS Trigger/DAQ/Controls Technical Design Report [1], and recent results from testbed measurements and from modelling are discussed.

The electron/photon and tau/hadron Cluster Processor for the ATLAS First-Level Trigger - a Flexible Test System

1999

The electron/photon and tau/hadron first-level trigger system for ATLAS will receive digitised data from approximately 6400 calorimeter trigger towers, covering a pseudo-rapidity region of ± 2.5. The system will provide electron/photon and tau/hadron trigger multiplicity information to the Central Trigger Processor, and Region of Interest information for the second-level trigger. The system will also provide intermediate results to the DAQ system for monitoring and diagnostic purposes. The system consists of four different 9U-module designs and two different chip (ASIC/FPGA) designs. This paper will outline a flexible test system for evaluating various elements of the system, including data links, ASICs/FPGAs and individual modules.
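
The cluster-finding task such a processor performs can be sketched in software, assuming a toy grid of tower transverse energies and an invented threshold; the real system implements this logic (including declustering of overlapping windows, omitted here) in ASICs/FPGAs.

    # Simplified stand-in for hardware cluster finding: slide a 2x2 window
    # over a grid of trigger-tower transverse energies and count windows
    # whose summed E_T exceeds a threshold.
    def count_em_clusters(tower_et, threshold_gev=10.0):
        """tower_et: 2D list of tower E_T values indexed [eta][phi]."""
        n_eta, n_phi = len(tower_et), len(tower_et[0])
        clusters = 0
        for i in range(n_eta - 1):
            for j in range(n_phi - 1):
                window_sum = (tower_et[i][j] + tower_et[i][j + 1]
                              + tower_et[i + 1][j] + tower_et[i + 1][j + 1])
                if window_sum > threshold_gev:
                    clusters += 1
        return clusters


    if __name__ == "__main__":
        # Tiny 4x4 toy grid of tower E_T values in GeV.
        grid = [[0.2, 0.1, 0.3, 0.0],
                [0.1, 6.5, 5.2, 0.2],
                [0.0, 0.3, 0.1, 0.1],
                [0.2, 0.0, 0.4, 0.3]]
        print(count_em_clusters(grid), "window(s) above threshold")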

An Overview of Algorithms for the ATLAS High Level Trigger

2003

Following rigorous software design and analysis methods, an object-based architecture has been developed to derive the second- and third-level trigger decisions for the future ATLAS detector at the LHC. The functional components within this system responsible for generating elements of the trigger decisions are algorithms running within the software architecture. Relevant aspects of the architecture are reviewed along with concrete examples of specific algorithms. †Presented by S. Armstrong on behalf of the ATLAS High Level Trigger Group [1] at
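
The object-based pattern the abstract refers to can be sketched as a common algorithm interface plus a steering layer that chains algorithm decisions; the interfaces, hypothesis algorithms and cuts below are hypothetical illustrations, not the ATLAS HLT framework.

    # Hypothetical object-based selection: algorithms share one interface and
    # a steering layer combines their results into a trigger decision,
    # stopping at the first rejection.
    from abc import ABC, abstractmethod


    class HLTAlgorithm(ABC):
        """Common interface every selection algorithm implements."""

        @abstractmethod
        def execute(self, event):
            """Return True if this algorithm's hypothesis is satisfied."""


    class CaloClusterHypo(HLTAlgorithm):
        def __init__(self, et_cut_gev):
            self.et_cut_gev = et_cut_gev

        def execute(self, event):
            return event.get("cluster_et_gev", 0.0) > self.et_cut_gev


    class TrackMatchHypo(HLTAlgorithm):
        def execute(self, event):
            return event.get("has_matched_track", False)


    class Steering:
        """Runs algorithms in sequence; all must accept for a positive decision."""

        def __init__(self, algorithms):
            self.algorithms = algorithms

        def decide(self, event):
            return all(alg.execute(event) for alg in self.algorithms)


    if __name__ == "__main__":
        chain = Steering([CaloClusterHypo(et_cut_gev=22.0), TrackMatchHypo()])
        print(chain.decide({"cluster_et_gev": 30.0, "has_matched_track": True}))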

The ATLAS Data Acquisition and Trigger: concept, design and status

Nuclear Physics B - Proceedings Supplements, 2007

This article presents the base-line design and implementation of the ATLAS Trigger and Data Acquisition system, in particular the Data Flow and High Level Trigger components. The status of the installation and commissioning of the system is also presented.