The baseline dataflow system of the ATLAS trigger and DAQ
Related papers
The ATLAS Data Acquisition and High Level Trigger system
This paper describes the data acquisition and high level trigger system of the ATLAS experiment at the Large Hadron Collider at CERN, as deployed during Run 1. Data flow as well as control, configuration and monitoring aspects are addressed. An overview of the functionality of the system and of its performance is presented and design choices are discussed.
Integration of the Trigger and Data Acquisition systems in ATLAS
2007
During 2006 and spring 2007, integration and commissioning of trigger and data acquisition (TDAQ) equipment in the ATLAS experimental area progressed. Much of the work focused on a final prototype setup consisting of around eighty computers representing a subset of the full TDAQ system. A series of technical runs was performed using this setup, including tests in which around 6000 Level-1 pre-selected simulated proton-proton events were processed in loop mode through the trigger and dataflow chains. The system included the readout buffers containing the events, event building, and the second- and third-level trigger algorithms. Quantities critical for the final system, such as event processing times, were studied using different trigger algorithms as well as different dataflow components.
Configuration and control of the ATLAS trigger and data acquisition
Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 2010
ATLAS is the biggest of the experiments aimed at studying high-energy particle interactions at the Large Hadron Collider (LHC). This paper describes the evolution of the Controls and Configuration system of the ATLAS Trigger and Data Acquisition (TDAQ) from the Technical Design Report (TDR) in 2003 to the first events taken at CERN with circulating beams in autumn 2008. The present functionality and performance are outlined, together with the lessons learned during the development. Finally, we highlight some of the challenges that still have to be met by 2010, when the trigger farm will be deployed at full scale.
The Dataflow System of the ATLAS DAQ/EF "-1" Prototype Project
2000
In 1996 the Data Acquisition (DAQ) group of the ATLAS Collaboration started a project for the design and implementation of a full DAQ/Event Filter (EF) prototype, based on the Trigger/DAQ architecture described in the ATLAS Technical Proposal. The aim of this prototype was to allow hardware and software technologies, as well as their integration, to be investigated in order to reach maturity for the final ATLAS DAQ system design. Being a pre-design prototype, it is referred to as ATLAS DAQ Prototype "-1".
Recent updates of the Control and Configuration of the ATLAS Trigger and Data Acquisition System
The ATLAS experiment at the Large Hadron Collider at CERN relies on a complex and highly distributed Trigger and Data Acquisition (TDAQ) system [3] to gather and select particle collision data at unprecedented energies and rates. The Control and Configuration (CC) system is responsible for all the software required to configure and control ATLAS data taking. This ranges from high-level applications, such as the graphical user interfaces and the desktops used within the ATLAS control room, to low-level packages for access, process and resource management. Currently the CC system is required to supervise more than 30000 processes running on more than 2000 computers. At these scales, issues such as access, process and resource management, distribution of and access to configuration data, run control, diagnostics and especially error recovery become critical to guaranteeing high availability of the TDAQ system and minimizing the dead time of the experiment. It is indeed during data-taking activities that the CC system has shown its strength and maturity, scaling well with the ever-increasing number of software processes in the TDAQ system and implementing several automatic error-recovery procedures for complex and sophisticated scenarios. This paper gives an overview of the new functionalities and recent upgrades of several CC system components, with special emphasis on speed and reliability improvements and on optimization of the user experience during operations.
Performance of the ATLAS DAQ DataFlow system
2004
The baseline DAQ architecture of the ATLAS experiment at the LHC is introduced, and its present implementation and the performance of the DAQ components as measured in a laboratory environment are summarized. It is shown that the discrete event simulation model of the DAQ system, tuned using these measurements, predicts the behaviour of the prototype configurations well; predictions for the final ATLAS system are then presented. With the currently available hardware and software, a system using ~140 ROSs with single 3 GHz CPUs, ~100 SFIs with dual 2.4 GHz CPUs and ~500 L2PUs with dual 3.06 GHz CPUs can achieve the dataflow for a 100 kHz Level-1 rate, with 97% reduction at Level 2 and a 3 kHz event-building rate. ATLAS DATAFLOW SYSTEM: The 40 MHz collision rate at the LHC produces about 25 interactions per bunch crossing, resulting in terabytes of data per second, which have to be handled by the detector electronics and the trigger and DAQ system [1]. A Level-1 (L1) trigger system b...
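The rate budget quoted in this abstract can be sanity-checked with a few lines. The numbers are taken directly from the abstract; the variable names are illustrative, not part of the paper:

```python
# Back-of-the-envelope check of the quoted ATLAS trigger rate budget.
# Figures from the abstract; names below are illustrative only.

L1_RATE_HZ = 100_000     # Level-1 accept rate: 100 kHz
L2_REJECTION = 0.97      # 97% of L1-accepted events rejected at Level 2

# The Level-2 output rate is what the event-building farm must sustain.
eb_rate_hz = L1_RATE_HZ * (1.0 - L2_REJECTION)

print(f"Event-building rate: {eb_rate_hz / 1000:.0f} kHz")  # 3 kHz, as quoted
```

The 3 kHz event-building rate in the abstract is thus simply the 100 kHz Level-1 rate after the 97% Level-2 rejection.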
The COTS Approach to the Read-Out Crate in ATLAS DAQ Prototype -1
A prototyping project has been undertaken by the ATLAS DAQ and Event Filter group. The aim is to design and implement a fully functional vertical slice of the ATLAS DAQ and Event Filter with maximum use of Commercial Off The Shelf (COTS) components. The Read-Out Crate is a modular component within the vertical slice whose principal functionality is to receive, buffer and forward detector data to the Event Filter systems via an event-building network and to the Level 2 Trigger. As required by the project, the initial implementation is based on commercial components, namely VMEbus, PowerPC-based single-board computers and the LynxOS real-time operating system. The measured performance is compared to the results of a discrete event simulation of the Read-Out Crate using the PTOLEMY modelling tool, which has allowed us to model and study the Read-Out Crate performance based on a mixture of existing and forthcoming technologies, an example of the latter being VMEbus 2eSST, and different ar...
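The modelling idea behind the simulation mentioned above can be illustrated with a deliberately minimal sketch: a single FIFO read-out buffer forwarding fixed-size fragments over one shared bus. This is a hypothetical stand-in, far simpler than the PTOLEMY model of the paper, and all parameters below are invented for illustration:

```python
# Toy stand-in for a discrete event simulation of a read-out buffer:
# fragments arrive at a fixed rate and are forwarded over a single bus
# with a fixed transfer time. Purely illustrative; not the PTOLEMY model.

def simulate_readout(n_fragments: int, inter_arrival_s: float, transfer_s: float):
    """Return (mean queueing delay, makespan) for a deterministic arrival
    stream served FIFO by one bus with a fixed per-fragment transfer time."""
    bus_free_at = 0.0   # time at which the bus next becomes idle
    total_wait = 0.0
    for i in range(n_fragments):
        arrival = i * inter_arrival_s       # fragment arrives from the detector link
        start = max(arrival, bus_free_at)   # queue if the bus is still busy
        total_wait += start - arrival
        bus_free_at = start + transfer_s    # bus occupied for this transfer
    return total_wait / n_fragments, bus_free_at

# If fragments arrive every 10 us but each bus transfer takes 12 us,
# the queueing delay grows without bound: the crate saturates.
mean_wait, makespan = simulate_readout(1000, 10e-6, 12e-6)
print(f"mean queueing delay: {mean_wait * 1e6:.1f} us")
```

Even a model this crude reproduces the qualitative behaviour such studies look for: the crate is stable only while the bus transfer time stays below the fragment inter-arrival time.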
Computer Modeling the ATLAS Trigger/DAQ System Performance
2016