Exploring accelerated machine learning for experiment data analytics

Project goal

The project has two threads, each investigating a unique use case for the Micron Deep Learning Accelerator (a modular FPGA-based architecture). The first thread relates to the development of a real-time streaming machine-learning inference-engine prototype for the level-1 trigger of the CMS experiment.

The second thread focuses on prototyping a particle-identification system based on deep learning for the DUNE experiment. DUNE is a leading-edge, international experiment for neutrino science and proton-decay studies. It will be built in the US and is scheduled to begin operation in the mid-2020s.

R&D topic
Machine learning and data analytics
Project coordinator(s)
Emilio Meschi, Paola Sala, Maria Girone
Team members
Thomas Owen James, Dejan Golubovic, Maurizio Pierini, Manuel Jesus Rodriguez, Anwesha Bhattacharya, Saul Alonso-Monsalve, Debdeep Paul, Niklas Böhm, Ema Puljak
Collaborator liaison(s)
Mark Hur, Stuart Grime, Michael Glapa, Eugenio Culurciello, Andre Chang, Marko Vitez, Dustin Werran, Aliasger Zaidy, Abhishek Chaurasia, Patrick Estep, Jason Adlard, Steve Pawlowski


Project background

The level-1 trigger of the CMS experiment selects relevant particle-collision events for further study, while rejecting 99.75% of collisions. This decision must be made with a fixed latency of a few microseconds. Machine-learning inference in FPGAs may be used to improve the capabilities of this system.

The DUNE experiment will consist of large arrays of sensors exposed to high-intensity neutrino beams. The use of convolutional neural networks has been shown to substantially boost particle-identification performance for such detectors. For DUNE, an FPGA solution is advantageous for processing ~5 TB/s of data.
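The building block of such convolutional networks is a learned kernel slid across a detector "image" (e.g. a map of charge depositions). A minimal sketch of that operation, with purely illustrative shapes and values:

```python
import numpy as np

# Minimal sketch of the core CNN operation: a 2D convolution sliding
# a kernel over a detector image. Shapes, values, and the averaging
# kernel are illustrative only, not the experiment's actual network.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)      # toy charge-deposition map
kernel = np.ones((3, 3)) / 9.0    # 3x3 averaging filter
features = conv2d(image, kernel)
print(features.shape)  # (6, 6)
```

In a real network many such kernels are learned from data and stacked in layers; an FPGA implementation pipelines these multiply-accumulate operations so the detector stream can be processed at line rate.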

Recent progress

For the CMS experiment, we studied in detail two potential use cases for a machine-learning approach using FPGAs. First, data from Run 2 of the LHC was used to train a neural network aimed at improving the analysis potential of muon tracks from the level-1 trigger, as part of a 40 MHz ‘level-1 scouting’ data path. Second, a convolutional neural network was developed for classifying and measuring energy showers for the planned high-granularity calorimeter upgrade of the CMS experiment. These networks were tested on the Micron FPGA hardware and optimised for latency and precision.

For the DUNE part of the project, we tested the Micron inference engine and characterised its performance on existing software. Specifically, we tested it for running a neural network that can identify neutrino interactions in the DUNE detectors, based on simulated data. This enabled us to gain expertise with the board and fully understand its potential. The results of this benchmarking were presented at the 24th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2019).

Next steps

The CMS team will focus on preparing a full scouting system for Run 3 of the LHC. This will comprise a system of around five Micron co-processors, receiving data on high-speed optical links.

The DUNE team plans to set up the inference engine as a demonstrator within the data-acquisition system of the ProtoDUNE experiment (a prototype of DUNE that has been built at CERN). This will work to find regions of interest (i.e. high activity) within the detector, decreasing the amount of data that needs to be sent to permanent storage.
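The region-of-interest idea can be sketched as tiling the detector readout and keeping only tiles whose summed activity crosses a threshold. Everything here (grid size, block size, threshold) is illustrative, not the demonstrator's actual selection logic:

```python
import numpy as np

# Sketch of region-of-interest selection: keep only detector tiles
# whose summed activity exceeds a threshold, so far less data needs
# to be sent to permanent storage. All parameters are illustrative.
def select_rois(activity, block=4, threshold=10.0):
    h, w = activity.shape
    rois = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = activity[i:i + block, j:j + block]
            if tile.sum() > threshold:
                rois.append((i, j))
    return rois

rng = np.random.default_rng(0)
activity = rng.random((16, 16)) * 0.1   # mostly quiet detector
activity[4:8, 8:12] += 5.0              # one high-activity region
print(select_rois(activity))  # [(4, 8)]
```

In the demonstrator the selection would run inline in the data-acquisition path, so only the flagged regions (plus their coordinates) are forwarded downstream.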


Presentations

    M. J. R. Alonso, Fast inference using FPGAs for DUNE data reconstruction (7 November). Presented at 24th International Conference on Computing in High Energy and Nuclear Physics, Adelaide, 2019. cern.ch/go/bl7n
    M. J. R. Alonso, Prototyping of a DL-based particle-identification system for the DUNE neutrino detector (22 January). Presented at CERN openlab Technical Workshop, Geneva, 2020. cern.ch/go/zH8W
    T. O. James, FPGA-based machine-learning inference for CMS with the Micron Deep Learning Accelerator (22 January). Presented at CERN openlab Technical Workshop, Geneva, 2020. cern.ch/go/pM7P