Modernising code plays a vital role in preparing for future upgrades to the LHC and the experiments. It is essential that software performance is continually improved by making use of modern coding techniques and tools, such as parallel programming languages and portability libraries. It is also important to ensure that software fully exploits the features offered by modern hardware architectures, such as many-core platforms, acceleration coprocessors, and innovative heterogeneous combinations of CPUs, GPUs, FPGAs, or dedicated deep-learning architectures. At the same time, it is of paramount importance that physics performance is not compromised in the drive to ensure maximum efficiency.

 

Testbed for GPU-accelerated applications 

Project goal

The goal of this project is to adapt computing models and software to exploit fully the potential of GPUs. The project, which began in late 2018, consists of ten individual use cases.

The technical coordinators are as follows:

Andrea Bocci, Felice Pantaleo, Maurizio Pierini, Federico Carminati, Vincenzo Innocente, Marco Rovere, Jean-Roch Vlimant, Vladimir Gligorov, Daniel Campora, Riccardo De Maria, Adrian Oeftiger, Lotta Mether, Ian Fisk, Lorenzo Moneta, Jan Kieseler, and Sofia Vallecorsa.

R&D topic
Computing performance and software
Project coordinator(s)
Maria Girone
Team members
Mary Touranakou, Thong Nguyen, Javier Duarte, Olmo Cerri, Roel Aaij, Dorothea Vom Bruch, Blaise Raheem Delaney, Ifan Williams, Niko Neufeld, Viktor Khristenko, Florian Reiss, Guillermo Izquierdo Moreno, Luca Atzori, Miguel Fontes Medeiros
Collaborator liaison(s)
Cosimo Gianfreda (E4), Daniele Gregori (E4), Agnese Reina (E4), Piero Altoé (NVIDIA), Andreas Hehn (NVIDIA), Tom Gibbs (NVIDIA)

Collaborators
E4, NVIDIA

Project background

Heterogeneous computing architectures will play an important role in helping CERN address the computing demands of the HL-LHC.

Recent progress

This CERN openlab project supports several use cases at CERN. This section outlines the progress made in the two main use cases that were worked on in 2020.

Allen: a high-level trigger on GPUs for LHCb

‘Allen’ is an initiative to develop a complete high-level trigger (the first step of the data-filtering process following particle collisions) on GPUs for the LHCb experiment. It has benefitted from support through CERN openlab, including consultation from engineers at NVIDIA.

The new system processes 40 Tb/s, using around 350 of the latest-generation NVIDIA GPU cards. From a physics point of view, Allen matches the charged-particle reconstruction performance achieved on traditional CPUs. It has also been shown that Allen will not be I/O- or memory-limited. Moreover, not only can it be used to perform reconstruction, but it can also make decisions about whether to keep or reject events.
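
As a rough, back-of-the-envelope illustration of what these figures imply per card, the short Python sketch below divides the quoted 40 Tb/s across the roughly 350 GPUs. The two input numbers come from the text above; everything else is simple illustrative arithmetic, not a description of Allen's actual data flow.

```python
# Back-of-the-envelope throughput per GPU for the Allen first-level trigger.
# Input figures (40 Tb/s total, ~350 GPUs) are taken from the text above;
# the rest is purely illustrative arithmetic.

TOTAL_RATE_TBIT_S = 40   # total detector output handled by Allen, in terabits per second
NUM_GPUS = 350           # approximate number of GPU cards in the farm

total_gbit_s = TOTAL_RATE_TBIT_S * 1000      # convert Tb/s to Gb/s
per_gpu_gbit_s = total_gbit_s / NUM_GPUS     # average load per card
per_gpu_gbyte_s = per_gpu_gbit_s / 8         # express in gigabytes per second

print(f"Average input rate per GPU: {per_gpu_gbit_s:.0f} Gb/s "
      f"(~{per_gpu_gbyte_s:.1f} GB/s)")
# -> roughly 114 Gb/s, i.e. about 14 GB/s per card
```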

A diverse range of algorithms has been implemented efficiently in Allen. This demonstrates the potential for GPUs to be used not only as accelerators, but also as complete and standalone data-processing solutions.

In May 2020, Allen was adopted by the LHCb collaboration as the new baseline first-level trigger for Run 3, and the Technical Design Report for the system was approved in June. From the start, Allen has been designed as a framework that can be used in a general manner for high-throughput GPU computing. A workshop was held with core Gaudi developers and members of the CMS and ALICE experiments to discuss how best to integrate Allen into the wider software ecosystem beyond the LHCb experiment. The LHCb team working on Allen is currently focusing on commissioning the system for data-taking in 2022 (delayed from the original 2021 start date due to the COVID-19 pandemic). A readiness review is taking place in the first half of 2021.

End-to-end multi-particle reconstruction for the HGCal based on machine learning

The CMS High-Granularity Calorimeter (HGCal) will replace the end-cap calorimeters of the CMS detector for the operation of the High-Luminosity LHC. With about 2 million sensors and high transverse and longitudinal granularity, it offers huge potential for new physics discoveries. We aim to exploit this using end-to-end optimisable graph neural networks.

Profiting from new machine-learning concepts developed within the group and through the CERN openlab collaboration with the Flatiron Institute in New York, US, we were able to develop and train a first prototype for directly reconstructing incident-particle properties from raw detector hits. Through our direct contact with NVIDIA, we were able to implement custom TensorFlow GPU kernels. Together with the dedicated neural-network structure, these enabled us to process the hits of an entire particle-collision event in one go on the GPU.
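
For illustration only, the sketch below shows what such a single-pass, per-hit network might look like when written with standard TensorFlow operations. It is not the project's actual code: the real prototype relies on a dedicated network architecture and the custom GPU kernels mentioned above, whereas here the layer sizes, the k-nearest-neighbour aggregation and all names (SimpleHitAggregation, build_model, and so on) are assumptions chosen purely to convey the idea of regressing particle properties from all hits of an event at once.

```python
# Minimal sketch (not the project's actual code) of a graph-style network that
# processes all hits of one collision event in a single pass on the GPU.
# Only standard TensorFlow ops are used; layer sizes, names and the
# k-nearest-neighbour aggregation are illustrative assumptions.

import tensorflow as tf


class SimpleHitAggregation(tf.keras.layers.Layer):
    """Aggregate features from the k nearest hits in a learned coordinate space."""

    def __init__(self, k=8, units=32, **kwargs):
        super().__init__(**kwargs)
        self.k = k
        self.coord_proj = tf.keras.layers.Dense(3)   # learned 3-D space for neighbour search
        self.feat_proj = tf.keras.layers.Dense(units, activation="elu")

    def call(self, hits):
        # hits: (batch, n_hits, n_features) -- all hits of an event at once
        coords = self.coord_proj(hits)                # (b, n, 3)
        feats = self.feat_proj(hits)                  # (b, n, units)

        # Pairwise squared distances between hits in the learned space.
        d2 = tf.reduce_sum(
            tf.square(coords[:, :, None, :] - coords[:, None, :, :]), axis=-1
        )                                             # (b, n, n)
        # Indices of the k nearest hits (including the hit itself).
        _, knn_idx = tf.math.top_k(-d2, k=self.k)     # (b, n, k)

        neighbour_feats = tf.gather(feats, knn_idx, axis=1, batch_dims=1)  # (b, n, k, units)
        # Mean over neighbours, concatenated with the hit's own features.
        return tf.concat([feats, tf.reduce_mean(neighbour_feats, axis=2)], axis=-1)


def build_model(n_hit_features=5, n_targets=4):
    """Per-hit regression of particle properties from raw detector hits (illustrative)."""
    hits_in = tf.keras.Input(shape=(None, n_hit_features))  # variable number of hits per event
    x = SimpleHitAggregation()(hits_in)
    x = SimpleHitAggregation()(x)
    out = tf.keras.layers.Dense(n_targets)(x)               # e.g. energy and position per hit
    return tf.keras.Model(hits_in, out)


model = build_model()
model.compile(optimizer="adam", loss="mse")
```

Calling the model on a tensor of shape (1, n_hits, 5) then produces one set of regressed properties per hit in a single forward pass, which is the "whole event in one go" pattern described above.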

Next steps

Work related to each of the project’s use cases will continue in 2021.


Presentations

    A. Bocci, Towards a heterogeneous High Level Trigger farm for CMS (13 March). Presented at ACAT2019, Saas Fee, 2019. cern.ch/go/D9SF
    F. Pantaleo, Patatrack: accelerated Pixel Track reconstruction in CMS (2 April). Presented at Connecting the Dots 2019, Valencia, 2019. cern.ch/go/7D8W
    R. Kansal, Deep Graph Neural Networks for Fast HGCAL Simulation (13 August). Presented at CERN openlab summer-student lightning talk session, Geneva, 2019. cern.ch/go/qh6G
    A. Bocci, Heterogeneous reconstruction: combining an ARM processor with a GPU (4 November). Presented at CHEP2019, Adelaide, 2019. cern.ch/go/7bmH
    A. Bocci, Heterogeneous online reconstruction at CMS (7 November). Presented at 24th International Conference on Computing in High-Energy and Nuclear Physics (CHEP) 2019, Adelaide, 2019. cern.ch/go/l9JN

High-performance distributed caching technologies

Project goal

We’re exploring the suitability of a new infrastructure for key-value storage in the data-acquisition systems of particle-physics experiments. DAQDB (Data Acquisition Database) is a scalable and distributed key-value store that provides low-latency queries. It exploits Intel® Optane™ DC persistent memory, a cutting-edge non-volatile memory technology that could make it possible to decouple real-time data acquisition from asynchronous event selection.

R&D topic
Computing performance and software
Project coordinator(s)
Giovanna Lehmann Miotto
Team members
Adam Abed Abud, Danilo Cicalese, Fabrice Le Goff, Remigius K Mommsen
Collaborator liaison(s)
Claudio Bellini, Aleksandra Jereczek, Grzegorz Jereczek, Jan Lisowiec, Maciej Maciejewski, Adrian Pielech, Jakub Radtke, Jakub Schmiegel, Malgorzata Szychowska, Norbert Szulc, Andrea Luiselli

Collaborators
Intel

Project background

Upgrades to the LHC mean that the data rates coming from the detectors will dramatically increase. Data will need to be buffered while waiting for systems to select interesting collision events for analysis. However, the current buffers at the readout nodes can only store a few seconds of data due to capacity constraints and the high cost of DRAM. It is therefore important to explore new, cost-effective solutions — capable of handling large amounts of data — that capitalise on emerging technologies.
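
To make the decoupling idea concrete, the toy Python sketch below separates a "readout" thread, which keeps inserting event fragments into a key-value buffer in real time, from a "selection" thread that queries and filters those fragments asynchronously. This is purely conceptual: it does not use DAQDB's actual API, and the in-memory dictionary, key scheme and placeholder trigger decision are all illustrative stand-ins for the persistent-memory store described above.

```python
# Conceptual sketch of the decoupling idea only -- this is NOT the DAQDB API.
# An ordinary Python dictionary stands in for the distributed key-value store:
# a readout thread keeps inserting event fragments in real time, while an
# independent selection thread queries and filters them asynchronously.

import queue
import threading

buffer = {}            # stand-in for the persistent-memory key-value store
ready = queue.Queue()  # event IDs handed over for asynchronous selection


def readout(n_events=1000):
    """Real-time path: store each raw fragment under a unique key and move on."""
    for event_id in range(n_events):
        buffer[event_id] = b"\x00" * 1024   # fake detector payload
        ready.put(event_id)
    ready.put(None)                         # signal end of run


def selection():
    """Asynchronous path: fetch buffered fragments later and keep only a fraction."""
    accepted = 0
    while (event_id := ready.get()) is not None:
        payload = buffer.pop(event_id)      # low-latency query by key
        # A real trigger would inspect the payload; here a placeholder decision.
        if event_id % 100 == 0:
            accepted += 1
    print(f"accepted {accepted} events")


producer = threading.Thread(target=readout)
consumer = threading.Thread(target=selection)
producer.start()
consumer.start()
producer.join()
consumer.join()
```

In the real system, the buffer would be the distributed DAQDB store backed by persistent memory, so the few seconds of DRAM buffering currently available at the readout nodes could be extended to much longer windows.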

Recent progress

We were able to test the first Intel Optane persistent-memory devices, enabling us to benchmark the behaviour of DAQDB on this new type of hardware. A testbed with four very powerful machines was set up, hosting Optane persistent memory and SSDs and interconnected via a 100 Gbps network. The results are encouraging, but more work is needed to reach the performance and scalability goals required by the next generation of experiments at the High-Luminosity LHC (in particular ATLAS and CMS), as well as by the DUNE experiment.

Next steps

The project formally came to a close in 2019, but several developments and tests will continue in 2020. This will enable us to continue exploring how these new storage technologies, and DAQDB, can be used effectively in data-acquisition systems.

Publications

    D. Cicalese et al., The design of a distributed key-value store for petascale hot storage in data acquisition systems. Published in EPJ Web Conf. 214, 2019. cern.ch/go/xf9H

Presentations

    M. Maciejewski, Persistent Memory based Key-Value Store for Data Acquisition Systems (25 September). Presented at IXPUG 2019 Annual Conference, Geneva, 2019. cern.ch/go/9cFB
    G. Jereczek, Let's get our hands dirty: a comprehensive evaluation of DAQDB, key-value store for petascale hot storage (5 November). Presented at the 24th International Conference on Computing in High-Energy and Nuclear Physics (CHEP), Adelaide, 2019. cern.ch/go/9cpL8
    J. Radtke, A Key-Value Store for Data Acquisition Systems (April). Presented at SPDK, PMDK and VTune(tm) Summit 04'19, Santa Clara, 2019. cern.ch/go/H6Rl
    G. Jereczek, The design of a distributed key-value store for petascale hot storage in data acquisition systems (12 July). Presented at 23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP), Sofia, 2018. cern.ch/go/6hcX
    J. M. Maciejewski, A key-value store for Data Acquisition Systems (12 September). Presented at ATLAS TDAQ week, Cracow, 2018.
    G. Jereczek, M. Maciejewski, Data Acquisition Database (12 November). Presented at The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC18), Dallas, 2018.
    M. Maciejewski, J. Radtke, The Design of Key-Value Store for Data Acquisition Systems (5 December). Presented at NVMe Developer Days, San Diego, 2018.