ATLAS Trigger and Data Acquisition: Capabilities and commissioning

https://doi.org/10.1016/j.nima.2009.06.114

Abstract

The ATLAS trigger system is based on three levels of event selection that select the physics of interest, reducing an initial bunch crossing rate of 40 MHz to an output rate of ~200 Hz, compatible with the offline computing power and storage capacity. During nominal LHC operation at a luminosity of 10³⁴ cm⁻² s⁻¹, decisions must be taken every 25 ns.

The LHC is expected to begin operation with a peak luminosity of 10³¹ cm⁻² s⁻¹ and far fewer bunches, but to ramp up quickly to higher luminosities. Hence, the ATLAS Trigger and Data Acquisition system must adapt to the changing beam conditions while preserving the interesting physics and satisfying detector requirements that may vary with these conditions.

Introduction

The ATLAS experiment is one of the four major experiments aimed at studying high-energy proton–proton collisions at the Large Hadron Collider (LHC) at CERN.

The Trigger and Data Acquisition system (TDAQ) is organized as a three-level selection scheme: the Level 1 (LVL1), which is based on custom electronics, and the Level 2 (LVL2) and the Event Filter (EF), jointly referred to as the High Level Trigger (HLT), which are software-based. A distinguishing feature of the LVL2 is that its selection is based on data from specific regions of the detector defined per event by the LVL1 trigger, the so-called Regions of Interest (RoI). This minimizes the amount of data needed to calculate the trigger decisions, thus considerably reducing the overall network data traffic. The initial LHC bunch crossing rate of 40 MHz must be reduced by the TDAQ to ~200 Hz (~300 MB/s), compatible with the offline computing power and storage capacity. The LHC is expected to begin operation with a peak luminosity of 10³¹ cm⁻² s⁻¹ and a relatively small number of bunches, but to ramp up quickly to higher luminosities. The deployed trigger selection has to adapt to the changing beam conditions while preserving the interesting physics and satisfying varying detector commissioning requirements.
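The traffic-saving effect of the RoI mechanism can be pictured with a minimal sketch; the data structures, fragment counts and sizes below are invented for illustration and are not the actual ATLAS DataFlow software:

```python
# Minimal sketch of RoI-guided data requests at LVL2 (illustrative only;
# the fragment layout and sizes are assumptions, not ATLAS software).

def lvl2_request(fragments, roi_ids):
    """Fetch only the readout fragments overlapping LVL1 Regions of Interest."""
    return {fid: data for fid, data in fragments.items() if fid in roi_ids}

# A toy event: 100 readout fragments of 16 kB each (~1.6 MB in total).
event = {fid: b"\x00" * 16_384 for fid in range(100)}
rois = {7, 42}  # LVL1 flagged two Regions of Interest for this event

requested = lvl2_request(event, rois)
fraction = sum(len(d) for d in requested.values()) / sum(len(d) for d in event.values())
print(f"LVL2 reads {fraction:.0%} of the event data")  # only the RoI fragments
```

In this toy event only 2 of 100 fragments cross the network at LVL2; the full event is assembled only for events that LVL2 accepts.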

During 2008, a few months of ATLAS cosmic data-taking and the first experience with the LHC circulating beams provided an unprecedented testbed for the evaluation of the performance of the ATLAS DataFlow, in terms of functionality, robustness and stability, as well as its integration with the offline data processing and management.

This paper presents an overview of the TDAQ system and the status of the preparation of the trigger menu for the early data-taking and reports on the usage of the DataFlow infrastructure during the ATLAS cosmic and single LHC beam data-taking.

Section snippets

DataFlow

The DataFlow system [1], [2] is responsible for the collection and conveyance of event data from the detector electronics to mass storage, while serving the HLT processing farms, and is based on a push–pull architecture.

The principal components of the TDAQ system are shown in Fig. 1. The movement of event data from detector to mass storage commences with the selection of events by the LVL1 trigger. For each accepted event the LVL1 trigger, via a dedicated data path, sends to the

Trigger

The ATLAS trigger [1], [2] is composed of three levels of event selection that must reduce the output event storage rate to ~200 Hz (about 300 MB/s) from an initial bunch crossing rate of 40 MHz. A large rejection against QCD processes is needed while maintaining high efficiency for low cross-section physics processes, including searches for new physics, as shown in Fig. 3. The rate estimates discussed in the following are based on simulations and are subject to several sources of uncertainty which
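The figures quoted above imply an overall rejection factor of roughly 2×10⁵ and an average accepted-event size of about 1.5 MB; a quick back-of-the-envelope check using only the numbers in the text:

```python
# Arithmetic check of the trigger rates quoted in the text.
bunch_crossing_rate = 40e6   # Hz, LHC bunch crossing rate
output_rate = 200.0          # Hz, rate to mass storage
output_bandwidth = 300e6     # B/s, ~300 MB/s to mass storage

rejection = bunch_crossing_rate / output_rate   # overall trigger rejection
event_size = output_bandwidth / output_rate     # implied average event size

print(f"rejection factor ~ {rejection:.0e}")        # ~2e5
print(f"event size ~ {event_size / 1e6:.1f} MB")    # ~1.5 MB
```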

Trigger menu

A recipe for triggering on various physics processes is provided by trigger menus: tables of signatures that are fully specified by thresholds and selection criteria for various physics objects at each of the three trigger levels.

A new signature, before being included in a trigger menu, is carefully evaluated: its goals (physics, commissioning or calibration), the efficiency and background rejection it provides toward those goals, and the bandwidth it consumes are all taken into account.
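As a purely illustrative picture of such a table, each signature can be thought of as a chain of per-level selections; the signature names, object types and thresholds below are invented for illustration and are not taken from any actual ATLAS menu:

```python
# Illustrative trigger-menu entries: each signature specifies thresholds and
# selection criteria at the three trigger levels (all names/values invented).
menu = {
    "e10": {   # hypothetical single-electron signature, 10 GeV threshold
        "LVL1": {"object": "EM cluster", "et_gev": 7},
        "LVL2": {"object": "electron", "et_gev": 10, "track_match": True},
        "EF":   {"object": "electron", "et_gev": 10, "isolation": True},
    },
    "mu6": {   # hypothetical single-muon signature, 6 GeV threshold
        "LVL1": {"object": "muon", "pt_gev": 6},
        "LVL2": {"object": "muon", "pt_gev": 6, "combined": True},
        "EF":   {"object": "muon", "pt_gev": 6, "combined": True},
    },
}

def passes(signature, level, candidate):
    """Check a candidate trigger object against one level of a signature."""
    cuts = menu[signature][level]
    key = "et_gev" if "et_gev" in cuts else "pt_gev"
    return candidate.get(key, 0.0) >= cuts[key]

print(passes("e10", "LVL2", {"et_gev": 12.0}))  # True: above the 10 GeV cut
print(passes("mu6", "EF", {"pt_gev": 4.0}))     # False: below the 6 GeV cut
```

The per-level thresholds are typically loosest at LVL1 and tighten through the HLT, so that each level confirms and refines the decision of the previous one.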

The

Conclusions

The ATLAS TDAQ system has been successfully used in cosmic and single LHC beam data-taking. The results obtained, backed up by complementary performance tests, have validated the architecture of the ATLAS DataFlow and demonstrated that the system is robust, flexible and scalable enough to cope with the requirements of the ATLAS experiment.

The trigger menu for the LHC start-up phase is designed to enable the rapid commissioning and preparation for the high regime using low pT

References (5)

  • ATLAS Collaboration, Detector and Physics Performance Technical Design Report, CERN/LHCC/99-14/15,...
  • JINST (2008)
¹ On behalf of the ATLAS TDAQ Collaboration [5].
