Modernising code plays a vital role in preparing for future upgrades to the LHC and the experiments. It is essential that software performance is continually improved by making use of modern coding techniques and tools, such as parallel programming languages and portability libraries. It is also important to ensure that software fully exploits the features offered by modern hardware architectures, such as many-core platforms, acceleration coprocessors, and innovative heterogeneous combinations of CPUs, GPUs, FPGAs, or dedicated deep-learning architectures. At the same time, it is of paramount importance that physics performance is not compromised in the drive for maximum computational efficiency.

 

High-performance distributed caching technologies

Project goal

We are exploring the suitability of a new infrastructure for key-value storage in the data-acquisition systems of particle-physics experiments. DAQDB (Data Acquisition Database) is a scalable, distributed key-value store that provides low-latency queries. It exploits Intel® Optane™ DC persistent memory, a cutting-edge non-volatile memory technology that could make it possible to decouple real-time data acquisition from asynchronous event selection.
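
The intended usage pattern can be illustrated with a short, purely hypothetical Python sketch; DAQDB itself is a C++ library, and the EventStore class and put/pop names below are illustrative stand-ins rather than the actual DAQDB API. The point is the decoupling: readout buffers fragments keyed by event ID, while event selection drains the store asynchronously.

```python
# Purely illustrative sketch of the usage pattern a persistent key-value
# store enables in a DAQ system; this is NOT the DAQDB API.

class EventStore:
    """Stand-in for a distributed key-value store backed by persistent memory."""
    def __init__(self):
        self._kv = {}                      # key: (event_id, fragment_id) -> bytes

    def put(self, key, value):
        self._kv[key] = value              # real store: write to Optane DC / SSD

    def pop(self, key):
        return self._kv.pop(key, None)     # real store: read + delete, possibly remote


def readout(store, n_events):
    """Real-time path: buffer detector fragments as soon as they arrive."""
    for event_id in range(n_events):
        store.put((event_id, 0), b"\x00" * 1024)   # fake 1 kB fragment


def selection(store, n_events):
    """Asynchronous path: fetch buffered events later and decide keep/reject."""
    kept = 0
    for event_id in range(n_events):
        fragment = store.pop((event_id, 0))
        if fragment is not None and event_id % 100 == 0:   # toy trigger decision
            kept += 1
    return kept


if __name__ == "__main__":
    store = EventStore()
    readout(store, 10_000)                             # data taking fills the buffer...
    print("events kept:", selection(store, 10_000))    # ...selection drains it later
```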

R&D topic
Computing performance and software
Project coordinator(s)
Giovanna Lehmann Miotto
Team members
Adam Abed Abud, Danilo Cicalese, Fabrice Le Goff, Remigius K Mommsen
Collaborator liaison(s)
Claudio Bellini, Aleksandra Jereczek, Grzegorz Jereczek, Jan Lisowiec, Maciej Maciejewski, Adrian Pielech, Jakub Radtke, Jakub Schmiegel, Malgorzata Szychowska, Norbert Szulc, Andrea Luiselli

Collaborators

Project background

Upgrades to the LHC mean that the data rates coming from the detectors will dramatically increase. Data will need to be buffered while waiting for systems to select interesting collision events for analysis. However, the current buffers at the readout nodes can only store a few seconds of data due to capacity constraints and the high cost of DRAM. It is therefore important to explore new, cost-effective solutions — capable of handling large amounts of data — that capitalise on emerging technologies.

Recent progress

We were able to test the first Intel Optane persistent-memory devices, enabling us to benchmark the behaviour of DAQDB on this new type of hardware. A testbed of four high-end servers was set up, hosting Optane persistent memory and SSDs and interconnected via a 100 Gbps network. The results are encouraging, but more work is needed to reach the performance and scalability goals set by the next generation of experiments at the High-Luminosity LHC (in particular ATLAS and CMS), as well as by the DUNE experiment.

Next steps

The project formally came to a close in 2019, but several developments and tests will continue in 2020. This will enable us to continue exploring how new storage technologies, and DAQDB in particular, can be used effectively in data-acquisition systems.

Publications

    D. Cicalese et al., The design of a distributed key-value store for petascale hot storage in data acquisition systems. Published in EPJ Web Conf. 214, 2019. cern.ch/go/xf9H

Presentations

    M. Maciejewski, Persistent Memory based Key-Value Store for Data Acquisition Systems (25 September). Presented at IXPUG 2019 Annual Conference, Geneva, 2019. cern.ch/go/9cFB
    G. Jereczek, Let's get our hands dirty: a comprehensive evaluation of DAQDB, key-value store for petascale hot storage (5 November). Presented at the 24th International Conference on Computing in High-Energy and Nuclear Physics (CHEP), Adelaide, 2019. cern.ch/go/9cpL8
    J. Radtke, A Key-Value Store for Data Acquisition Systems (April). Presented at SPDK, PMDK and VTune™ Summit 04'19, Santa Clara, 2019. cern.ch/go/H6Rl
    G. Jereczek, The design of a distributed key-value store for petascale hot storage in data acquisition systems (12 July). Presented at 23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP), Sofia, 2018. cern.ch/go/6hcX
    J. M. Maciejewski, A key-value store for Data Acquisition Systems (12 September). Presented at ATLAS TDAQ week, Cracow, 2018.
    G. Jereczek, M. Maciejewski, Data Acquisition Database (12 November). Presented at The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC18), Dallas, 2018.
    M. Maciejewski, J. Radtke, The Design of Key-Value Store for Data Acquisition Systems (5 December). Presented at NVMe Developer Days, San Diego, 2018.

Testbed for GPU-accelerated applications 

Project goal

The goal of this project is to adapt computing models and software to fully exploit the potential of GPUs. The project, which began in late 2018, consists of ten individual use cases.

The technical coordinators are as follows:

Andrea Bocci, Felice Pantaleo, Maurizio Pierini, Federico Carminati, Vincenzo Innocente, Marco Rovere, Jean-Roch Vlimant, Vladimir Gligorov, Daniel Campora, Riccardo De Maria, Adrian Oeftiger, Lotta Mether, Ian Fisk, Lorenzo Moneta, Sofia Vallecorsa.

R&D topic
Computing performance and software
Project coordinator(s)
Maria Girone
Team members
Mary Touranakou, Thong Nguyen, Javier Duarte, Olmo Cerri, Jan Kieseler, Roel Aaij, Dorothea Vom Bruch, Blaise Raheem Delaney, Ifan Williams, Niko Neufeld, Viktor Khristenko, Florian Reiss, Guillermo Izquierdo Moreno, Luca Atzori, Miguel Fontes Medeiros
Collaborator liaison(s)
Cosimo Gianfreda (E4), Daniele Gregori (E4), Agnese Reina (E4), Piero Altoé (NVIDIA), Andreas Hehn (NVIDIA), Tom Gibbs (NVIDIA).

Collaborators

Project background

Heterogeneous computing architectures will play an important role in helping CERN address the computing demands of the HL-LHC.

Recent progress

This section outlines the progress made in 2019 on the six main use cases worked on during the year, covering computing performance and software, as well as machine learning and data analytics.

1. Simulation of sparse data sets for realistic detector geometries

We are working to simulate sparse data sets for realistic detector geometries using deep generative models, such as adversarially trained networks or variational autoencoders. To this end, we are investigating custom loss functions capable of dealing with the specific characteristics of LHC data. We plan to optimise the model inference on GPUs, with a view to delivering a production-ready version of the model to the LHC experiments.

Work got underway in late 2019. We began designing and training the model on two benchmark data sets (the Modified National Institute of Standards and Technology (MNIST) data set and the LHC Jet data set) before moving to the actual problem: generation of detector hits in a realistic set-up. Once we have converged on the full model design and the custom set-up (loss function, etc.), we plan to move to a realistic data set and scale up the problem.
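
As a rough illustration of the approach, the sketch below trains a small variational autoencoder in TensorFlow/Keras with a custom loss whose reconstruction term is the natural place to substitute a physics-motivated metric. The architecture, the loss weighting and the use of MNIST as a stand-in for detector hits are placeholders, not the models actually under study.

```python
import tensorflow as tf
from tensorflow.keras import layers

latent_dim = 8

# Encoder: image -> concatenated mean and log-variance of the latent code.
encoder = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(2 * latent_dim),
])

# Decoder: latent code -> reconstructed image.
decoder = tf.keras.Sequential([
    tf.keras.Input(shape=(latent_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(28 * 28, activation="sigmoid"),
    layers.Reshape((28, 28, 1)),
])

optimizer = tf.keras.optimizers.Adam(1e-3)

@tf.function
def train_step(batch):
    with tf.GradientTape() as tape:
        mean, log_var = tf.split(encoder(batch, training=True), 2, axis=-1)
        # Reparameterisation trick: sample a latent code from N(mean, exp(log_var)).
        z = mean + tf.exp(0.5 * log_var) * tf.random.normal(tf.shape(mean))
        recon = decoder(z, training=True)
        # Custom loss: the reconstruction term is where a physics-motivated
        # metric for sparse detector hits would be plugged in.
        recon_loss = tf.reduce_mean(
            tf.reduce_sum(tf.square(batch - recon), axis=[1, 2, 3]))
        kl_loss = -0.5 * tf.reduce_mean(
            tf.reduce_sum(1.0 + log_var - tf.square(mean) - tf.exp(log_var), axis=-1))
        loss = recon_loss + kl_loss
    variables = encoder.trainable_variables + decoder.trainable_variables
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss

# MNIST as a stand-in for the detector-hit data set.
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = (x_train.astype("float32") / 255.0)[..., None]
dataset = tf.data.Dataset.from_tensor_slices(x_train).shuffle(10_000).batch(256)

for batch in dataset.take(100):
    loss = train_step(batch)
print("final training loss:", float(loss))
```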

2. ‘Patatrack’ software R&D incubator

The Patatrack initiative is focused on exploiting new hardware and software technologies for sustainable computing at the CMS experiment. During 2019, the Patatrack team demonstrated that it is possible to run some of its particle-collision reconstruction algorithms on NVIDIA GPUs. Doing so led to a computing performance increase of an order of magnitude.

The algorithms were initially developed using NVIDIA’s CUDA platform. They were then ported, on an ad hoc basis, to run on conventional CPUs, producing identical results with close-to-native performance. When run on an NVIDIA Tesla T4 GPU, the reconstruction achieves twice the performance of a full dual-socket Intel Xeon Skylake Gold node.

Performance portability will be explored during 2020. A comparison of the tested solutions — in terms of supported features, performance, ease of use, integration in wider frameworks, and future prospects — will also be carried out.
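
Patatrack itself is written in CUDA C++; purely as an illustration of the single-source idea behind performance portability, the Python sketch below expresses one toy "kernel" against a generic array module and runs it unchanged on the CPU (NumPy) or on a GPU (CuPy, if installed). The ranking criterion is invented for the example and has nothing to do with the actual reconstruction algorithms.

```python
import numpy as np
try:
    import cupy as cp          # optional GPU backend
except ImportError:
    cp = None

def rank_hits(xp, x, y, energy):
    """Toy 'kernel': identical source runs on the CPU (xp=numpy) or a GPU (xp=cupy)."""
    r = xp.sqrt(x * x + y * y)              # radial distance of each hit
    weight = energy / (1.0 + r)             # arbitrary toy ranking criterion
    return xp.argsort(weight)[::-1]         # hit indices, best first

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
e = np.random.rand(n).astype(np.float32)

print("CPU top hits:", rank_hits(np, x, y, e)[:5])
if cp is not None:
    gpu = rank_hits(cp, cp.asarray(x), cp.asarray(y), cp.asarray(e))
    print("GPU top hits:", cp.asnumpy(gpu)[:5])   # expected to match, barring ties
```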

3. Benchmarking and optimisation of TMVA deep learning

ROOT is an important data-analysis framework used at CERN. This use case focuses on optimising the training and evaluation performance of ROOT’s TMVA (Toolkit for Multivariate Data Analysis) on NVIDIA GPUs.

During 2019, thanks to the contribution of Joanna Niermann (a student supported by CERN openlab), a new implementation of TMVA’s convolution operators was developed. This made use of NVIDIA’s cuDNN library and led to a significant boost in performance when training or evaluating deep-learning models.

We also performed a comparison study with Keras and TensorFlow. This showed better computational performance for the TMVA implementation, especially for smaller models. The new implementation was released in ROOT version 6.20.

In addition, we developed new cuDNN-based GPU implementations for recurrent neural networks. These also showed very good computational performance and will be integrated into the upcoming ROOT version 6.22.
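
For orientation, the PyROOT sketch below shows how TMVA’s deep-learning method is booked; switching the Architecture option between GPU and CPU is what moves the training onto the CUDA/cuDNN backend. The toy data, network layout and option string are illustrative and should not be read as the exact configuration used in the benchmarks above.

```python
import ROOT
from ROOT import TMVA

# Toy signal and background samples written to ROOT files with RDataFrame.
def make_sample(name, shift):
    ROOT.RDataFrame(20000) \
        .Define("x1", f"gRandom->Gaus({shift}, 1.0)") \
        .Define("x2", f"gRandom->Gaus({shift}, 1.0)") \
        .Snapshot("tree", f"{name}.root")

make_sample("sig", 1.0)
make_sample("bkg", -1.0)

TMVA.Tools.Instance()
out = ROOT.TFile.Open("tmva_output.root", "RECREATE")
factory = TMVA.Factory("bench", out, "!V:!Silent:AnalysisType=Classification")

loader = TMVA.DataLoader("dataset")
loader.AddVariable("x1", "F")
loader.AddVariable("x2", "F")

sig_file = ROOT.TFile.Open("sig.root")
bkg_file = ROOT.TFile.Open("bkg.root")
loader.AddSignalTree(sig_file.Get("tree"), 1.0)
loader.AddBackgroundTree(bkg_file.Get("tree"), 1.0)
loader.PrepareTrainingAndTestTree(ROOT.TCut(""), "SplitMode=Random:!V")

# Book the TMVA deep-learning method (kDL); the option string is schematic.
# Architecture=GPU selects the CUDA/cuDNN backend when ROOT is built with it,
# Architecture=CPU falls back to the multi-threaded CPU implementation.
factory.BookMethod(loader, TMVA.Types.kDL, "DL",
                   "!H:!V:ErrorStrategy=CROSSENTROPY:"
                   "Layout=DENSE|64|RELU,DENSE|64|RELU,DENSE|1|LINEAR:"
                   "TrainingStrategy=LearningRate=1e-3,BatchSize=256,MaxEpochs=10:"
                   "Architecture=GPU")

factory.TrainAllMethods()
factory.TestAllMethods()
factory.EvaluateAllMethods()
out.Close()
```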

4. Distributed training

There is a growing need within the high-energy physics community for an HPC-ready, turnkey solution for the distributed training and optimisation of neural networks. We aim to build software with a streamlined user interface that federates the various available frameworks for distributed training. The goal is to deliver the best possible performance to the end user, possibly through a ‘training-as-a-service’ system for HPC.

In 2019, we prepared our software for neural-network training and optimisation. User-friendly and HPC-ready, it has been trialled at multiple institutions, although some outstanding performance issues still need to be ironed out. We will also work in 2020 to build a community around our final product.
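
The project’s software is its own development, but the data-parallel pattern that such a service federates can be illustrated with a short Horovod/Keras sketch (assuming Horovod and TensorFlow are installed): each worker trains on its own shard of the data and gradients are averaged across workers at every step.

```python
# Illustrative data-parallel training with Horovod + Keras (not the project's
# own framework). Launch with, e.g.:  horovodrun -np 4 python train.py
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()

# Pin each worker to one local GPU, if GPUs are present.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

# Toy data set, sharded across workers by rank.
(x, y), _ = tf.keras.datasets.mnist.load_data()
x = x.astype("float32") / 255.0
x, y = x[hvd.rank()::hvd.size()], y[hvd.rank()::hvd.size()]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Scale the learning rate with the number of workers and wrap the optimizer
# so that gradients are averaged across workers at every step.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.Adam(1e-3 * hvd.size()))
model.compile(optimizer=opt, loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Broadcast the initial weights from rank 0 so all workers start identically.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]
model.fit(x, y, batch_size=128, epochs=2, callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)
```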

5. Integration of SixTrackLib and PyHEADTAIL

For optimal performance and hardware utilisation, it is crucial to share the particle state in place between SixTrackLib (a library for tracking single particles) and PyHEADTAIL (used for simulating macro-particle beam dynamics). This avoids the memory and run-time costs of maintaining two copies of the state on the GPU. The current implementation achieves this through implicit context sharing, enabling seamless hand-off of control over the shared state between the two code bases. After a first proof-of-concept implementation was created at an E4-NVIDIA hackathon in April 2019, the solution was refined and the APIs of the libraries were adapted to support this mode of operation.

Work carried out within this project made it possible for PyHEADTAIL to rely on SixTrackLib for high-performance tracking on GPUs, resulting in performance improvements of up to two to three orders of magnitude compared with state-of-the-art single-threaded CPU-based code. We also exposed SixTrackLib to new applications and use cases for particle tracking, which led to several improvements and bug fixes.

We are now working on further optimisation, as well as on extending the integration to new applications. We will also work to expand the user community in 2020.
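
As a rough analogy for the in-place sharing described above (the real integration exchanges device buffers between the two libraries’ own GPU contexts; the routines below are invented for the example), the CuPy sketch keeps a single particle-state array resident on the GPU and lets two separate routines update it in turn, without any host-device copies.

```python
import cupy as cp

# One particle-state buffer living on the GPU; both "codes" operate on it in place.
n_particles = 1_000_000
state = {
    "x":  cp.random.standard_normal(n_particles, dtype=cp.float64),
    "px": cp.zeros(n_particles, dtype=cp.float64),
}

def track_turn(state, length=1.0):
    """Stand-in for single-particle tracking (e.g. a drift): updates x in place."""
    state["x"] += length * state["px"]

def apply_collective_kick(state, strength=1e-3):
    """Stand-in for a macro-particle collective effect: updates px in place."""
    state["px"] -= strength * (state["x"] - state["x"].mean())

for turn in range(100):
    track_turn(state)              # the 'tracking code' advances the particles
    apply_collective_kick(state)   # the 'beam-dynamics code' kicks the same buffer

# Only the small summary leaves the GPU; the particle state itself never did.
print("rms x after 100 turns:", float(cp.sqrt(cp.mean(state["x"] ** 2))))
```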

6. Allen: a high-level trigger on GPUs for LHCb

‘Allen’ is an initiative to develop a complete high-level trigger (the first step of the data-filtering process following particle collisions) on GPUs for the LHCb experiment. It has benefitted from support through CERN openlab, including consultation from engineers at NVIDIA.

The new system processes 40 Tb/s, using around 500 of the latest-generation NVIDIA GPU cards. From a physics point of view, Allen matches the reconstruction performance for charged particles achieved on traditional CPUs. It has also been shown that Allen will not be limited by I/O or memory. Moreover, it can be used not only to perform reconstruction, but also to take decisions about whether to keep or reject events.
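
For scale, the per-card data rate implied by these figures can be worked out directly; the 40 Tb/s and the ~500 cards come from the text above, and only the unit conversion is added here.

```python
# Rough per-GPU throughput implied by the figures quoted above.
total_tbps = 40          # aggregate input rate, Tb/s
n_gpus = 500             # approximate number of GPU cards
per_gpu_gbps = total_tbps * 1000 / n_gpus
print(f"~{per_gpu_gbps:.0f} Gb/s, i.e. about {per_gpu_gbps / 8:.0f} GB/s per GPU")
# -> roughly 80 Gb/s (about 10 GB/s) per card, consistent with the statement
#    that Allen is not I/O or memory limited.
```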

A diverse range of algorithms has been implemented efficiently in Allen. This demonstrates the potential of GPUs to be used in high-energy physics not only as ‘accelerators’, but also as complete and standalone data-processing solutions.

Allen is now in the final stages of an LHCb collaboration review to decide whether it will be used as the new baseline solution for the next run of the LHC.

Next steps

As outlined above, work related to each of these use cases will continue in 2020. Work will also begin on a number of new use cases.


Presentations

    A. Bocci, Towards a heterogeneous High Level Trigger farm for CMS (13 March). Presented at ACAT 2019, Saas-Fee, 2019. cern.ch/go/D9SF
    F. Pantaleo, Patatrack: accelerated Pixel Track reconstruction in CMS (2 April). Presented at Connecting the Dots 2019, Valencia, 2019. cern.ch/go/7D8W
    R. Kansal, Deep Graph Neural Networks for Fast HGCAL Simulation (13 August). Presented at CERN openlab summer-student lightning talk session, Geneva, 2019. cern.ch/go/qh6G
    A. Bocci, Heterogeneous reconstruction: combining an ARM processor with a GPU (4 November). Presented at CHEP 2019, Adelaide, 2019. cern.ch/go/7bmH
    A. Bocci, Heterogeneous online reconstruction at CMS (7 November). Presented at 24th International Conference on Computing in High-Energy and Nuclear Physics (CHEP) 2019, Adelaide, 2019. cern.ch/go/l9JN