Real-time data processing in the ALICE High Level Trigger at the LHC

Hadron collider triggers with offline-quality tracking at very high event rates

13th IEEE-NPSS Real Time Conference, 2003

We propose precise and fast track reconstruction at hadron collider experiments, for use in online trigger decisions. We describe the features of the Fast Tracker (FTK), a highly parallel processor dedicated to the efficient execution of a fast tracking algorithm. The hardware-dedicated structure optimizes speed and size; these parameters are evaluated for the ATLAS experiment. We discuss some applications of high-quality tracks available to the trigger logic at an early stage, using the LHC environment as a benchmark. The most interesting application is the online selection of b-quarks down to very low transverse momentum, providing interesting hadronic samples: examples are Z → bb̄, potentially useful for jet calibration, and multi-b final states for supersymmetric Higgs searches. This paper originates from outside the ATLAS experiment and has not been discussed by the ATLAS collaboration.

Hadron collider triggers with high-quality tracking at very high event rates

IEEE Transactions on Nuclear Science, 2004

We propose precise and fast track reconstruction at hadron collider experiments, for use in online trigger decisions. We describe the features of the Fast Tracker (FTK), a highly parallel processor dedicated to the efficient execution of a fast tracking algorithm. The hardware-dedicated structure optimizes speed and size; these parameters are evaluated for the ATLAS experiment. We discuss some applications of high-quality tracks available to the trigger logic at an early stage, using the LHC environment as a benchmark. The most interesting application is the online selection of b-quarks down to very low transverse momentum, providing interesting hadronic samples: examples are Z → bb̄, potentially useful for jet calibration, and multi-b final states for supersymmetric Higgs searches. This paper originates from outside the ATLAS experiment and has not been discussed by the ATLAS collaboration.
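
The two FTK entries above describe the approach only at a high level; the sketch below is a minimal, hypothetical C++ illustration of the coarse pattern-matching idea behind associative-memory track finders such as FTK: a bank of precomputed per-layer hit patterns ("roads") is matched against the fired coarse bins of an event. The layer count, bin layout, structure names and matching threshold are illustrative assumptions, not the actual FTK design.

```cpp
// Minimal sketch of coarse pattern matching as used conceptually by
// associative-memory track finders such as FTK. All structures and
// thresholds here are illustrative, not the actual FTK design.
#include <array>
#include <cstdint>
#include <iostream>
#include <unordered_set>
#include <vector>

constexpr int kLayers = 6;                       // hypothetical silicon layers
using Pattern = std::array<uint16_t, kLayers>;   // one coarse bin ("superstrip") per layer

// An event is represented by the set of fired coarse bins in each layer.
using EventBins = std::array<std::unordered_set<uint16_t>, kLayers>;

// A pattern matches if at least (kLayers - 1) of its layer bins were hit,
// which tolerates one missing hit per road.
bool matches(const Pattern& p, const EventBins& ev) {
  int hit_layers = 0;
  for (int l = 0; l < kLayers; ++l)
    if (ev[l].count(p[l])) ++hit_layers;
  return hit_layers >= kLayers - 1;
}

int main() {
  std::vector<Pattern> bank = {
      {10, 11, 12, 13, 14, 15},   // patterns precomputed from simulated tracks
      {20, 21, 22, 23, 24, 25},
  };
  EventBins ev;
  for (int l = 0; l < kLayers; ++l) ev[l].insert(static_cast<uint16_t>(10 + l));
  ev[3].erase(13);                // one missing hit is still accepted

  for (std::size_t i = 0; i < bank.size(); ++i)
    if (matches(bank[i], ev))
      std::cout << "pattern " << i << " defines a road for track fitting\n";
}
```

In the hardware, all patterns in the bank are compared in parallel as the hits arrive; the sequential loop above only mimics the logic of that comparison.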

An FPGA-based track finder for the L1 trigger of the CMS experiment at the high luminosity LHC

2016 IEEE-NPSS Real Time Conference (RT), 2016

A new tracking detector is under development for use by the CMS experiment at the High-Luminosity LHC (HL-LHC). A crucial requirement of this upgrade is to provide the ability to reconstruct all charged particle tracks with transverse momentum above 2-3 GeV within 4 µs so they can be used in the Level-1 trigger decision. A concept for an FPGA-based track finder using a fully time-multiplexed architecture is presented, where track candidates are reconstructed using a projective binning algorithm based on the Hough Transform, followed by a combinatorial Kalman Filter. A hardware demonstrator using MP7 processing boards has been assembled to prove the entire system functionality, from the output of the tracker readout boards to the reconstruction of tracks with fitted helix parameters. It successfully operates on one eighth of the tracker solid-angle acceptance at a time, processing events taken at 40 MHz, each with an average of up to 200 superimposed proton-proton interactions, whilst satisfying the latency requirement. The demonstrated track-reconstruction system, the chosen architecture, the achievements to date and future options for such a system will be discussed. Keywords: Data reduction methods; Digital electronic circuits; Particle tracking detectors; Pattern recognition, cluster finding, calibration and fitting methods.
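
As a rough illustration of the projective binning mentioned in the abstract above, the C++ sketch below fills a Hough accumulator in the (phi0, q/pT) plane from stub coordinates (r, phi), using the approximately linear relation phi0 ≈ phi + C·r·(q/pT) for a track in a uniform solenoid field. The bin counts, phi sector, field value and vote threshold are placeholder assumptions, not CMS parameters.

```cpp
// Illustrative Hough-transform accumulator in the (phi0, q/pT) plane,
// sketching the projective binning idea described in the abstract above.
// Bin counts, ranges and the field constant are placeholders.
#include <array>
#include <iostream>
#include <vector>

struct Stub { double r; double phi; };           // radius [m], azimuth [rad]

constexpr int    kPhiBins = 32;
constexpr int    kQPtBins = 16;
constexpr double kQPtMax  = 0.5;                 // |q/pT| up to 1/(2 GeV)
constexpr double kB       = 3.8;                 // assumed solenoid field [T]
constexpr double kC       = 0.5 * 0.003 * kB;    // phi0 ~ phi + kC * r * (q/pT)
constexpr double kPhiLo   = 0.28, kPhiHi = 0.32; // small phi sector [rad]

int main() {
  // Stubs roughly compatible with one track (phi0 ~ 0.30, q/pT ~ 0.28 /GeV):
  std::vector<Stub> stubs = {
      {0.30, 0.29952}, {0.60, 0.29904}, {0.90, 0.29856}, {1.10, 0.29824}};

  std::array<std::array<int, kPhiBins>, kQPtBins> acc{};  // Hough accumulator

  for (const Stub& s : stubs) {
    // For each q/pT bin, compute the phi0 this stub would imply and vote.
    for (int iq = 0; iq < kQPtBins; ++iq) {
      double qpt  = -kQPtMax + (iq + 0.5) * (2.0 * kQPtMax / kQPtBins);
      double phi0 = s.phi + kC * s.r * qpt;
      int    ip   = static_cast<int>((phi0 - kPhiLo) / ((kPhiHi - kPhiLo) / kPhiBins));
      if (ip >= 0 && ip < kPhiBins) ++acc[iq][ip];
    }
  }

  // Cells collecting votes from all four stubs become track candidates that
  // would be passed on to the Kalman filter stage; neighbouring cells may
  // also fire, and duplicate removal is left to later stages.
  for (int iq = 0; iq < kQPtBins; ++iq)
    for (int ip = 0; ip < kPhiBins; ++ip)
      if (acc[iq][ip] >= 4)
        std::cout << "candidate: q/pT bin " << iq << ", phi0 bin " << ip << "\n";
}
```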

Online data compression in the ALICE O2 facility

Journal of Physics: Conference Series, 2017

The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. The main aspects of the data handling concept are partial reconstruction of raw data organized in so-called time frames and, based on that information, reduction of the data rate without significant loss of physics information. A production solution for data compression has been in operation for the ALICE Time Projection Chamber (TPC) in the ALICE High Level Trigger online system since 2011. The solution is based on the reconstruction of space points from raw data. These so-called clusters are the input for the reconstruction of particle trajectories. Clusters are stored instead of raw data, after a transformation of the required parameters into an optimized format and subsequent lossless data compression. With this approach, an average data reduction factor of 4.4 has been achieved. For Run 3, not only a significantly higher reduction is required but also improvements in the implementation of the actual algorithms. The innermost operations of the processing loop effectively need to be called up to O(10¹¹) times per second to cope with the data rate. This can only be achieved in a parallel scheme and makes these operations candidates for optimization. The potential of template programming and static dispatch in a polymorphic implementation has been studied as an alternative to the commonly used dynamic dispatch at runtime. In this contribution we report on the development of a specific programming technique to efficiently combine the compile-time and runtime domains and present results for the speedup of the algorithm.
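
To make the dispatch discussion concrete, here is a minimal C++ sketch, not ALICE O2 code, contrasting runtime (virtual) dispatch with template-based static dispatch for a hot per-cluster operation; with static dispatch the compiler can inline and vectorize the call, which is what matters when the operation runs O(10¹¹) times per second. The policy and function names are invented for illustration.

```cpp
// Minimal sketch (not ALICE O2 code) contrasting dynamic and static dispatch
// for a hot per-cluster operation, as discussed in the abstract above.
#include <iostream>
#include <vector>

// --- Dynamic dispatch: one virtual call per cluster value -------------------
struct TransformBase {
  virtual ~TransformBase() = default;
  virtual float apply(float x) const = 0;
};
struct ScaleShift : TransformBase {
  float apply(float x) const override { return 0.5f * x + 1.0f; }
};

float sumDynamic(const TransformBase& t, const std::vector<float>& v) {
  float s = 0.f;
  for (float x : v) s += t.apply(x);        // virtual call in the inner loop
  return s;
}

// --- Static dispatch: the operation is a template parameter -----------------
struct ScaleShiftPolicy {
  static float apply(float x) { return 0.5f * x + 1.0f; }
};

template <typename Policy>
float sumStatic(const std::vector<float>& v) {
  float s = 0.f;
  for (float x : v) s += Policy::apply(x);  // inlinable, vectorizable
  return s;
}

int main() {
  std::vector<float> clusters(1 << 20, 2.0f);
  ScaleShift dyn;
  std::cout << sumDynamic(dyn, clusters) << "\n";
  std::cout << sumStatic<ScaleShiftPolicy>(clusters) << "\n";
}
```

The trade-off described in the paper is exactly this one: static dispatch fixes the operation at compile time, so combining it with a runtime choice of algorithm requires a technique that bridges the two domains.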

GPUs for fast triggering and pattern matching at the CERN experiment NA62

2009 IEEE Nuclear Science Symposium Conference Record (NSS/MIC), 2009

In high energy physics experiments the trigger system is crucial to reduce both the quantity of data recorded on tape and the acquisition bandwidth requirements. This is particularly true in rare-decay experiments. The NA62 experiment aims at measuring the branching ratio of K+ → π+νν̄, predicted in the Standard Model (SM) at the level of ∼10⁻¹⁰. In this paper we describe the idea of using commercial video card processors (GPUs) to construct a fast and effective trigger system, at both the hardware and software level. Due to the use of off-the-shelf technology, in continuous development for other purposes, the architecture described could easily be exported to other experiments to build a versatile and fully customizable trigger system.

ALICE HLT High Speed Tracking on GPU

IEEE Transactions on Nuclear Science, 2011

The online event reconstruction in ALICE is performed by the High Level Trigger, which should process up to 2000 events per second in proton-proton collisions and up to 300 central events per second in heavy-ion collisions, corresponding to an input data stream of 30 GB/s. In order to fulfill the time requirements, a fast online tracker has been developed. The algorithm combines a Cellular Automaton method, used for fast pattern recognition, with the Kalman Filter method for fitting the found trajectories and for the final track selection.
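
The Kalman Filter fitting step mentioned above can be illustrated with a one-dimensional toy update, where the state is a single track parameter with its variance; this is a didactic C++ sketch under simplified assumptions, not the ALICE HLT implementation, which propagates a five-parameter track state with a full covariance matrix.

```cpp
// Toy one-dimensional Kalman filter update, illustrating the fit step that
// follows the Cellular Automaton pattern recognition. The real tracker
// propagates a 5-parameter helix state with a full covariance matrix.
#include <iostream>
#include <vector>

struct State {
  double x;   // estimated parameter (e.g. a local track position)
  double C;   // variance of the estimate
};

// Combine the current estimate with one measurement m of variance V.
State update(State s, double m, double V) {
  double K = s.C / (s.C + V);       // Kalman gain
  s.x += K * (m - s.x);             // weighted move toward the measurement
  s.C *= (1.0 - K);                 // the estimate becomes more precise
  return s;
}

int main() {
  State s{0.0, 1e6};                                    // vague initial guess
  std::vector<double> hits = {1.02, 0.98, 1.01, 0.99};  // noisy measurements
  for (double m : hits) s = update(s, m, 0.01);         // hit variance 0.01
  std::cout << "fitted value " << s.x << " with variance " << s.C << "\n";
}
```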

The ALICE data acquisition system

Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 2014

In this paper we describe the design, construction, commissioning and operation of the Data Acquisition (DAQ) and Experiment Control System (ECS) of the ALICE experiment at the CERN Large Hadron Collider (LHC). The DAQ and the ECS are the systems used, respectively, for the acquisition of all physics data and for the overall control of the experiment. They are two computing systems made of hundreds of PCs and data storage units interconnected via two networks, and the collection of experimental data from the detectors is performed by several hundred high-speed optical links. We describe in detail the design considerations for these systems, which handle the extreme data throughput resulting from central lead-ion collisions at LHC energy. The implementation of the resulting requirements into hardware (custom optical links and commercial computing equipment), infrastructure (racks, cooling, power distribution, control room) and software led to many innovative solutions, which are described together with a presentation of all the major components of the systems as currently realized. We also report on the performance achieved during the first period of data taking (from 2009 to 2013), often exceeding the values specified in the DAQ Technical Design Report. The quark-gluon plasma (QGP) studied by ALICE is, in the standard Big Bang model, the state of matter which existed in the early universe from picoseconds to about 10 microseconds after the Big Bang, and a precise determination of its properties would be a major achievement. The study of the QGP is performed by investigating heavy-ion collisions at a center-of-mass energy of 5.5 TeV per nucleon pair.

Fast TPC Online Tracking on GPUs and Asynchronous Data Processing in the ALICE HLT to facilitate Online Calibration

Journal of Physics: Conference Series, 2015

ALICE (A Large Ion Collider Experiment) is one of the four major experiments at the Large Hadron Collider (LHC) at CERN, which is today the most powerful particle accelerator worldwide. The High Level Trigger (HLT) is an online compute farm of about 200 nodes, which reconstructs events measured by the ALICE detector in real time. The HLT uses a custom online data-transport framework to distribute data and workload among the compute nodes. ALICE employs several calibration-sensitive subdetectors, e.g. the TPC (Time Projection Chamber). For a precise reconstruction, the HLT has to perform the calibration online. Online calibration can make certain offline calibration steps obsolete and can thus speed up offline analysis. Looking forward to ALICE Run III starting in 2020, online calibration becomes a necessity. The main detector used for track reconstruction is the TPC. Reconstructing the trajectories in the TPC is the most compute-intensive step during event reconstruction; therefore, a fast tracking implementation is of great importance. Reconstructed TPC tracks build the basis for the calibration, making fast online tracking mandatory. We present several components developed for the ALICE High Level Trigger to perform fast event reconstruction and to provide the features required for online calibration. As the first topic, we present our TPC tracker, which employs GPUs to speed up the processing and is based on a Cellular Automaton and on the Kalman filter. Our TPC tracking algorithm was used successfully in 2011 and 2012 in the lead-lead and proton-lead runs. We have improved it to leverage features of newer GPUs and have ported it to support OpenCL, CUDA and CPUs with a single common source code, which makes us vendor independent. As the second topic, we present framework extensions required for online calibration. The extensions, however, are generic and can be used for other purposes as well. We have extended the framework to support asynchronous compute chains, which are needed for long-running tasks such as online calibration, and we describe our method for feeding custom data sources into the data flow. These can be external parameters like the environmental temperature required for calibration, and they can also be used to feed calibration results back into the processing chain. Overall, the work presented in this contribution makes the ALICE HLT ready for online reconstruction and calibration for LHC Run II starting in 2015.
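
One common way to keep a single source for CUDA, OpenCL and CPU builds, as described above, is to hide the backend-specific qualifiers behind macros; the sketch below illustrates that general technique for a CUDA versus plain CPU build and is not the actual ALICE HLT tracker code (the macro name, kernel and function are hypothetical).

```cpp
// Sketch of a single-source routine compiled either as CUDA or as plain C++,
// illustrating the "one common source code" approach mentioned above
// (not the actual ALICE HLT tracker sources).
#include <cstdio>
#include <vector>

#ifdef __CUDACC__
  #define GPUd() __device__ __host__   // callable on device and host
#else
  #define GPUd()                       // plain host build: qualifiers vanish
#endif

// The per-hit operation is written once and compiled for every backend.
GPUd() float transportHit(float y, float slope, float dx) {
  return y + slope * dx;               // trivial straight-line propagation step
}

#ifdef __CUDACC__
__global__ void transportKernel(float* y, float slope, float dx, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) y[i] = transportHit(y[i], slope, dx);
}
#endif

int main() {
  std::vector<float> y(8, 1.0f);
#ifdef __CUDACC__
  float* d = nullptr;
  cudaMalloc(&d, y.size() * sizeof(float));
  cudaMemcpy(d, y.data(), y.size() * sizeof(float), cudaMemcpyHostToDevice);
  transportKernel<<<1, 32>>>(d, 0.1f, 2.0f, static_cast<int>(y.size()));
  cudaMemcpy(y.data(), d, y.size() * sizeof(float), cudaMemcpyDeviceToHost);
  cudaFree(d);
#else
  for (float& v : y) v = transportHit(v, 0.1f, 2.0f);  // same code path on CPU
#endif
  std::printf("y[0] = %.2f\n", y[0]);
}
```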