Project goal

We are investigating the performance of distributed learning and low-latency inference of generative adversarial networks (GANs) for simulating particle collision events. The deep neural network is being evaluated on a cluster of IBM Power CPUs (with GPUs) installed at CERN.

R&D topic
R&D Topic 3: Machine learning and data analytics
Project coordinator(s)
Maria Girone and Federico Carminati
Technical team members
Sofia Vallecorsa, Daniel Hugo Cámpora Pérez, Niko Neufeld
Collaborator liaison(s)
Eric Aquaronne, Lionel Clavien


Project background

GANs offer a potential way of eliminating the need for classical Monte Carlo (MC) simulations when generating particle showers. Classical MC is computationally expensive, so replacing it could significantly improve the overall performance of simulation in high-energy physics.

Using the large data sets obtained from MC-simulated physics events, the GAN learns to generate events that mimic the simulated ones. Once an acceptable accuracy is achieved, the trained GAN can replace the classical MC simulation code: generating an event then requires only a single inference invocation of the network.
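The substitution can be pictured as swapping a compute-heavy sampler for a single forward pass of the trained generator. Below is a purely illustrative, pure-Python sketch; the function names `mc_simulate_shower` and `gan_generate_shower` are hypothetical stand-ins, not part of the actual project code.

```python
import random

def mc_simulate_shower(n_steps=10_000):
    """Hypothetical classical-MC stand-in: many random draws per event."""
    energy = 0.0
    for _ in range(n_steps):
        energy += random.random()  # stand-in for expensive physics sampling
    return energy / n_steps

def gan_generate_shower(latent_dim=8):
    """Hypothetical trained-GAN stand-in: one cheap 'forward pass' per event."""
    z = [random.gauss(0.0, 1.0) for _ in range(latent_dim)]
    # A real generator would map the latent vector z through the trained
    # network; here we simply combine it into a single value.
    return sum(z) / latent_dim

# Once the GAN reaches acceptable accuracy, producing an event becomes
# a single inference call instead of a full MC simulation:
event = gan_generate_shower()
```

The speed-up comes from replacing the per-event sampling loop with one network evaluation, which can also be batched on a GPU.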

Recent progress

In accordance with the concept of data-parallel distributed learning, we trained the GAN on a total of twelve GPUs, distributed over the three nodes of the test Power cluster. Each GPU ingests a unique part of the physics data set for training the model. The neural network was implemented with a combination of software frameworks optimised for Power architectures: Keras, TensorFlow, and Horovod. We used MPI to distribute the workloads over the GPUs. The training scaled well across the cluster, and we were able to reduce the training time by an order of magnitude. With the trained model, we achieved a speed-up of four orders of magnitude compared to using classical MC simulation.
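Conceptually, data-parallel training means each worker computes gradients on its own disjoint shard of the data, and the gradients are averaged across workers before every update (in our setup, Horovod performs this averaging over MPI). The following toy pure-Python sketch, with a hypothetical one-parameter model y = w·x and four simulated workers, shows only the idea, not the actual training code:

```python
def shard(dataset, rank, num_workers):
    """Give each worker a unique, disjoint part of the data set."""
    return dataset[rank::num_workers]

def local_gradient(w, data):
    """Gradient of the mean squared error for the toy model y = w * x."""
    return sum(2 * x * (w * x - y) for x, y in data) / len(data)

def allreduce_mean(grads):
    """Stand-in for Horovod's MPI allreduce: average gradients across workers."""
    return sum(grads) / len(grads)

# Toy data for y = 3x, split across 4 simulated workers
# (the real setup used 12 GPUs over 3 nodes).
dataset = [(x, 3.0 * x) for x in range(1, 9)]
num_workers, w, lr = 4, 0.0, 0.01

for _ in range(200):
    grads = [local_gradient(w, shard(dataset, r, num_workers))
             for r in range(num_workers)]
    w -= lr * allreduce_mean(grads)  # every worker applies the same update
```

Because all workers apply the identical averaged gradient, the model stays synchronised while each epoch touches the full data set in parallel.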

At the LHCb experiment, a convolutional neural network was also tested as a way of identifying particles from the ring-shaped Cherenkov radiation patterns observed in the RICH sub-detectors. We fed a data set of more than 5 million MC-generated particles through a deep neural network with 4 million parameters. We used the same cluster to accelerate our exploration of this approach, achieving promising results.

Next steps

At the LHCb experiment, work will continue to improve the particle-identification performance of our new approach by incorporating new parameters into the model. We are seeking to identify the topology of the neural network that will best suit our problem; collaborating closely with IBM is key to achieving this.

We will also prototype a deep-learning approach for the offline reconstruction of events at DUNE, a new neutrino experiment that will be built in the United States of America. We believe that IBM’s Power architecture could be well suited to handling the large amounts of raw data that will be generated by this experiment.


Presentations

    A. Hesam, Evaluating IBM POWER Architecture for Deep Learning in High-Energy Physics (23 January). Presented at CERN openlab Technical Workshop, Geneva, 2018. http://cern.ch/go/7BsK
    D. H. Cámpora Pérez, ML based RICH reconstruction (8 May). Presented at Computing Challenges meeting, Geneva, 2018. http://cern.ch/go/xwr7
    D. H. Cámpora Pérez, Millions of circles per second. RICH at LHCb at CERN (7 June). Presented as a seminar at the University of Seville, Seville, 2018.