Testbed for GPU-accelerated applications 

Project goal

The goal of this project is to adapt computing models and software to exploit fully the potential of GPUs. The project, which began in late 2018, consists of ten individual use cases. 

The technical coordinators are as follows:

Andrea Bocci, Felice Pantaleo, Maurizio Pierini, Federico Carminati, Vincenzo Innocente, Marco Rovere, Jean-Roch Vlimant, Vladimir Gligorov, Daniel Campora, Riccardo De Maria, Adrian Oeftiger, Lotta Mether, Ian Fisk, Lorenzo Moneta, Sofia Vallecorsa.

R&D topic: Computing performance and software
Project coordinator(s): Maria Girone
Team members: Mary Touranakou, Thong Nguyen, Javier Duarte, Olmo Cerri, Jan Kieseler, Roel Aaij, Dorothea Vom Bruch, Blaise Raheem Delaney, Ifan Williams, Niko Neufeld, Viktor Khristenko, Florian Reiss, Guillermo Izquierdo Moreno, Luca Atzori, Miguel Fontes Medeiros
Collaborator liaison(s): Cosimo Gianfreda (E4), Daniele Gregori (E4), Agnese Reina (E4), Piero Altoé (NVIDIA), Andreas Hehn (NVIDIA), Tom Gibbs (NVIDIA)

Collaborators: E4, NVIDIA

Project background

Heterogeneous computing architectures will play an important role in helping CERN address the computing demands of the HL-LHC.

Recent progress

This section outlines the progress made on the six main use cases worked on during 2019; these relate to computing performance and software, as well as to machine learning and data analytics.

1. Simulation of sparse data sets for realistic detector geometries

We are working to simulate sparse data sets for realistic detector geometries using deep generative models, such as adversarially trained networks or variational autoencoders. To this end, we are investigating custom loss functions that can handle the specific characteristics of LHC data. We plan to optimise the model inference on GPUs, with a view to delivering a production-ready version of the model to the LHC experiments.

Work got underway in late 2019. We began by designing and training the model on two benchmark data sets (the MNIST (Modified National Institute of Standards and Technology) data set and the LHC Jet data set) before moving to the actual problem: generating detector hits in a realistic set-up. Once we have converged on the full model design and the custom set-up (loss function, etc.), we plan to move to a realistic data set and scale up the problem.
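As a minimal illustration of what such a custom loss could look like, the Python sketch below adds a hypothetical energy-conservation penalty to a standard reconstruction loss for a simple autoencoder. The penalty term, its weight and the model architecture are illustrative assumptions, not the project’s actual design.

    import tensorflow as tf

    # Hypothetical physics-motivated loss: pixel-wise reconstruction error plus a
    # penalty on the mismatch of the total deposited energy per generated shower.
    def shower_loss(y_true, y_pred):
        mse = tf.reduce_mean(tf.square(y_true - y_pred), axis=-1)
        energy_gap = tf.reduce_sum(y_true, axis=-1) - tf.reduce_sum(y_pred, axis=-1)
        return mse + 0.1 * tf.square(energy_gap)  # 0.1 is an arbitrary weight

    # Minimal autoencoder on flattened 28x28 inputs (e.g. the MNIST benchmark).
    inputs = tf.keras.Input(shape=(784,))
    latent = tf.keras.layers.Dense(32, activation="relu")(inputs)
    outputs = tf.keras.layers.Dense(784, activation="sigmoid")(latent)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss=shower_loss)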

2. ‘Patatrack’ software R&D incubator

The Patatrack initiative is focused on exploiting new hardware and software technologies for sustainable computing at the CMS experiment. During 2019, the Patatrack team demonstrated that it is possible to run some of its particle-collision reconstruction algorithms on NVIDIA GPUs. Doing so led to a computing performance increase of an order of magnitude.

The algorithms were initially developed using NVIDIA’s CUDA platform. They were then ported on an ad hoc basis to run on conventional CPUs, producing identical results with close to native performance. When run on an NVIDIA Tesla T4 GPU, the algorithms achieve twice the performance of a full dual-socket Intel Xeon Skylake Gold node.
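Patatrack itself is written in C++ and CUDA, so the following is only a language-agnostic sketch of the underlying single-source idea: the same toy “reconstruction” function runs unchanged on NumPy (CPU) and CuPy (GPU) arrays and produces identical results. The pair-counting kernel is purely illustrative and is not a Patatrack algorithm.

    import numpy as np
    try:
        import cupy as cp  # GPU back-end, used only if available
    except ImportError:
        cp = None

    def count_close_hit_pairs(x, y, max_dr):
        # Toy kernel: count hit pairs closer than max_dr, on CPU or GPU arrays.
        xp = cp.get_array_module(x) if cp is not None else np
        dx = x[:, None] - x[None, :]
        dy = y[:, None] - y[None, :]
        return int(xp.count_nonzero(dx * dx + dy * dy < max_dr * max_dr))

    x = np.random.rand(1000).astype(np.float32)
    y = np.random.rand(1000).astype(np.float32)
    n_cpu = count_close_hit_pairs(x, y, 0.05)
    if cp is not None:
        n_gpu = count_close_hit_pairs(cp.asarray(x), cp.asarray(y), 0.05)
        assert n_cpu == n_gpu  # identical results on both back-ends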

Performance portability will be explored during 2020. A comparison of the tested solutions — in terms of supported features, performance, ease of use, integration in wider frameworks, and future prospects — will also be carried out.

3. Benchmarking and optimisation of TMVA deep learning

ROOT is an important data-analysis framework used at CERN. This use case focuses on optimising the training and evaluation performance of ROOT’s TMVA (Toolkit for Multivariate Data Analysis) on NVIDIA GPUs.

During 2019, thanks to the contribution of Joanna Niermann (a student supported by CERN openlab), a new implementation of TMVA’s convolution operators was developed. This implementation uses NVIDIA’s cuDNN library and delivers a significant boost in performance when training or evaluating deep-learning models.

We also performed a comparison study with Keras and TensorFlow. This showed better computational performance for the TMVA implementation, especially for smaller models. The new implementation was released in ROOT version 6.20.

In addition, we developed new GPU implementations using cuDNN for recurrent neural networks. These also showed very good computational performance and will be integrated into the upcoming ROOT version 6.22.   
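Below is a minimal PyROOT sketch of how a GPU-trained deep-learning method can be booked in TMVA. The layout and training-strategy strings are illustrative and the exact options depend on the ROOT version, but setting Architecture=GPU is what selects the CUDA/cuDNN-backed implementation in recent releases.

    import ROOT

    output = ROOT.TFile("tmva_output.root", "RECREATE")
    factory = ROOT.TMVA.Factory("TMVAClassification", output,
                                "!V:AnalysisType=Classification")
    loader = ROOT.TMVA.DataLoader("dataset")
    # ... add input variables and signal/background trees to 'loader' here ...

    # Book a deep-learning method; Architecture=GPU requests the GPU back-end.
    factory.BookMethod(loader, ROOT.TMVA.Types.kDL, "DL_GPU",
                       "!H:!V:ErrorStrategy=CROSSENTROPY:WeightInitialization=XAVIER:"
                       "Layout=DENSE|128|RELU,DENSE|64|RELU,DENSE|1|LINEAR:"
                       "TrainingStrategy=LearningRate=1e-3,BatchSize=256,MaxEpochs=20:"
                       "Architecture=GPU")

    # factory.TrainAllMethods()     # requires the data loader to be filled first
    # factory.TestAllMethods()
    # factory.EvaluateAllMethods()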

4. Distributed training

There is a growing need within the high-energy physics community for an HPC-ready turnkey solution for the distributed training and optimisation of neural networks. We aim to build software with a streamlined user interface that federates the various available frameworks for distributed training. The goal is to deliver the best possible performance to the end user, possibly through a ‘training-as-a-service’ system for HPC.

In 2019, we prepared our software for neural-network training and optimisation. User-friendly and HPC-ready, it has been trialled at multiple institutions. There are, however, outstanding performance issues to be ironed out. We will also work in 2020 to build a community around our final product.
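The report does not name the framework used internally, so purely as an illustration of the kind of data-parallel building block such a turnkey tool federates, here is a minimal Horovod/Keras sketch; the model, data and hyper-parameters are placeholders.

    import tensorflow as tf
    import horovod.tensorflow.keras as hvd

    hvd.init()  # one process per GPU, e.g. launched with 'horovodrun -np 4 python train.py'

    # Pin each worker to a single GPU.
    gpus = tf.config.list_physical_devices("GPU")
    if gpus:
        tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    # Scale the learning rate with the number of workers and wrap the optimiser.
    optimizer = hvd.DistributedOptimizer(tf.keras.optimizers.Adam(1e-3 * hvd.size()))
    model.compile(optimizer=optimizer, loss="binary_crossentropy")

    # Keep workers consistent by broadcasting the initial weights from rank 0.
    callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]
    # model.fit(x_train, y_train, epochs=5, callbacks=callbacks, verbose=hvd.rank() == 0)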

5. Integration of SixTrackLib and PyHEADTAIL

For optimal performance and hardware utilisation, it is crucial to share the particle state in place between the codes SixTrackLib (used for tracking single particles through the accelerator lattice) and PyHEADTAIL (used for simulating macro-particle beam dynamics). This avoids the memory and run-time costs of maintaining two copies on the GPU. The current implementation achieves this through implicit context sharing, enabling seamless hand-off of control over the shared state between the two code bases. After a first proof-of-concept implementation was created at an E4-NVIDIA hackathon in April 2019, the solution was refined and the APIs of the libraries were adapted to support this mode of operation.
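The sketch below is purely conceptual: track_one_turn() and apply_collective_kick() are hypothetical stand-ins for SixTrackLib and PyHEADTAIL routines, not their real APIs. The point it illustrates is that both steps operate in place on the same device arrays, so the particle state never has to be copied between host and device, or between the two libraries.

    import cupy as cp  # device arrays; the real codes manage their own GPU contexts

    n_particles = 10_000
    x = cp.zeros(n_particles)   # shared transverse position, resident on the GPU
    px = cp.zeros(n_particles)  # shared transverse momentum, resident on the GPU

    def track_one_turn(x, px):
        # hypothetical stand-in for single-particle tracking (SixTrackLib's role)
        x += px

    def apply_collective_kick(x, px):
        # hypothetical stand-in for a collective-effects kick (PyHEADTAIL's role)
        px -= 1e-6 * (x - x.mean())

    for turn in range(1000):
        track_one_turn(x, px)
        apply_collective_kick(x, px)  # no host<->device copies between the two steps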

Work carried out within this project made it possible for PyHEADTAIL to rely on SixTrackLib for high-performance tracking on GPUs, resulting in performance improvements of up to two-to-three orders of magnitude compared to state-of-the-art single-threaded CPU-based code. We also exposed SixTrackLib to new applications and use cases for particle tracking, which led to several improvements and bug fixes.

We are now working on further optimisation, as well as on extending the integrated system to new applications. We will also work to expand the user community in 2020.

6. Allen: a high-level trigger on GPUs for LHCb

‘Allen’ is an initiative to develop a complete high-level trigger (the first step of the data-filtering process following particle collisions) on GPUs for the LHCb experiment. It has benefitted from support through CERN openlab, including consultation from engineers at NVIDIA.

The new system processes 40 Tb/s, using around 500 of the latest-generation NVIDIA GPU cards. From a physics point of view, Allen matches the reconstruction performance for charged particles achieved on traditional CPUs. It has also been shown that Allen will not be limited by I/O or memory. Moreover, it can be used not only to perform reconstruction, but also to take decisions about whether to keep or reject events.
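As a rough back-of-envelope check, assuming the load is spread evenly, 40 Tb/s over roughly 500 cards corresponds to about 80 Gb/s, i.e. around 10 GB/s of input per GPU.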

A diverse range of algorithms has been implemented efficiently within Allen. This demonstrates the potential for GPUs to be used in high-energy physics not only as ‘accelerators’, but also as complete and standalone data-processing solutions.

Allen is now in the final stages of an LHCb collaboration review to decide whether it will be used as the new baseline solution for the next run of the LHC.

Next steps

As outlined above, work related to each of these use cases will continue in 2020. Work will also begin on a number of new use cases.


Presentations

    A. Bocci, Towards a heterogeneous High Level Trigger farm for CMS (13 March). Presented at ACAT2019, Saas Fee, 2019. cern.ch/go/D9SF
    F. Pantaleo, Patatrack: accelerated Pixel Track reconstruction in CMS (2 April). Presented at Connecting the Dots 2019, Valencia, 2019. cern.ch/go/7D8W
    R. Kansal, Deep Graph Neural Networks for Fast HGCAL Simulation (13 August). Presented at CERN openlab summer-student lightning talk session, Geneva, 2019. cern.ch/go/qh6G
    A. Bocci, Heterogeneous reconstruction: combining an ARM processor with a GPU (4 November). Presented at CHEP2019, Adelaide, 2019. cern.ch/go/7bmH
    A. Bocci, Heterogeneous online reconstruction at CMS (7 November). Presented at CHEP2019, Adelaide, 2019. cern.ch/go/l9JN