The High-Luminosity LHC (HL-LHC) will increase the amount of data taken by a factor of ten or more, exceeding by several times the storage and computing capacity expected to be achievable with a constant investment budget. Even after accounting for the evolution of technology, this will leave a substantial shortfall in computing resources.

In keeping with its mandate, CERN openlab is exploring new and innovative solutions to help physicists bridge this resource gap, which may otherwise impact the HL-LHC experimental programme.

Following a successful workshop on quantum computing held at CERN in 2018, CERN openlab has started a number of projects in quantum computing that are at different stages of realisation.

 

Possible projects in quantum computing

Project goal

We present here a number of interesting projects in quantum computing that we have discussed with different partners, but for which there is still no formal collaboration. The objective is to attract interest from the user community and create a critical mass that could allow us to establish collaborations with users and vendors.

R&D topic
Quantum technologies
Project coordinator(s)
Sofia Vallecorsa
Team members
CERN openlab: Federico Carminati, Fabio Fracas | EPFL: Su Yeon Chang

Collaborators

Project background

OpenQKD: Testbed for quantum key distribution

OpenQKD is a European Horizon 2020 project to test quantum key distribution (QKD) technology and to prepare a pan-European QKD deployment to protect European citizens and economies against the potential security threat posed by a quantum computer. The three-year project started in September 2019. With a budget of €18 million, 38 European partners are developing fibre-based and free-space QKD and deploying over 30 use cases at 16 sites across Europe. Among these, several use cases are planned in Geneva. One of them is the so-called Quantum Vault, which aims to protect digital assets against failures and attacks. As a proof of concept, the Quantum Vault is being realised in six Genevan data centres and telecom nodes. CERN openlab and the Poznan Supercomputing and Networking Center in Poland are supporting the Quantum Vault by hosting one node at the CERN Data Centre. The plan is to involve CERN openlab more actively by running a dedicated use case that takes advantage of the Quantum Vault infrastructure.

 

Quantum generative adversarial networks

Generative adversarial networks (GANs) are among the most interesting models in classical machine learning. GANs are an example of generative models, i.e. models that learn a hidden distribution from the training data set and can sample new synthetic data. At CERN openlab, we have been investigating their use as an alternative to Monte Carlo simulation, obtaining remarkable results. Much faster than standard Monte Carlo algorithms, GANs can generate realistic synthetic data, while retaining a high level of accuracy (see our fast simulation project). Quantum GANs could have more representational power than classical GANs, making them better able to learn more complex distributions from smaller training data sets. We are now training a quantum GAN to generate images of a few pixels, and we are investigating two possible approaches: a hybrid scheme in which a quantum generator learns the target PDF, using either a classical network or a variational quantum circuit as the discriminator (variational quantum generator); and a fully quantum adversarial implementation (quGAN).
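As a rough illustration of the hybrid approach, the sketch below uses PennyLane (an assumption; the project does not prescribe a specific framework) to build a small variational quantum generator whose measured basis-state probabilities play the role of a generated low-resolution image. The target histogram, circuit depth and simple distance loss are hypothetical; in the real quantum GAN the loss would come from a discriminator.

```python
# Minimal sketch (not the project's code): a variational quantum generator in
# PennyLane. The probabilities over the 2**n basis states form the "image".
import numpy as onp                      # plain NumPy, only for initialisation
import pennylane as qml
from pennylane import numpy as np        # differentiable NumPy

n_qubits = 3                             # 2**3 = 8 "pixels"
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def generator(params):
    for layer in range(params.shape[0]):
        for w in range(n_qubits):
            qml.RY(params[layer, w], wires=w)     # trainable rotations
        for w in range(n_qubits - 1):
            qml.CNOT(wires=[w, w + 1])            # entangling layer
    return qml.probs(wires=range(n_qubits))

target = np.array([0.05, 0.10, 0.20, 0.25, 0.20, 0.10, 0.05, 0.05])  # hypothetical

def cost(params):
    # stand-in for the adversarial loss: distance to the target distribution
    return ((generator(params) - target) ** 2).sum()

params = np.array(onp.random.uniform(0, onp.pi, size=(2, n_qubits)), requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.3)
for _ in range(200):
    params = opt.step(cost, params)
print(generator(params))                 # should now approximate the target histogram
```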

 

Track seeding optimisation

The Kalman filter is widely used in high-energy physics for track fitting of particle trajectories. It runs after an initial pattern-recognition step where detector ‘hits’ are clustered into subsets belonging to the same particle. Currently, several pattern-recognition approaches exist. Quantum computing can be used to reduce the initial combinatorial search using the ‘quantum approximate optimisation algorithm’ developed by researchers at the Massachusetts Institute of Technology (MIT) in Cambridge, US. We are now studying the application of what is known as the ‘variational-quantum-eigensolver algorithm’ and implementing it on Intel’s quantum simulator.
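As a toy illustration of the combinatorial formulation behind such approaches, the sketch below encodes a hypothetical track-seeding problem as a small QUBO, with binary variables selecting candidate hit triplets; exhaustive minimisation stands in for the QAOA or VQE step that would run on quantum hardware or a simulator.

```python
# Toy QUBO for track seeding (hypothetical numbers, not the project's code):
# each binary variable x_i decides whether candidate hit-triplet i is kept.
# Diagonal terms reward geometrically smooth triplets (more negative = better);
# off-diagonal penalties discourage keeping two triplets that share a hit.
import itertools
import numpy as np

Q = np.array([
    [-2.0,  3.0,  0.0,  0.0],
    [ 0.0, -1.5,  3.0,  0.0],
    [ 0.0,  0.0, -2.5,  0.0],
    [ 0.0,  0.0,  0.0, -0.5],
])

def qubo_energy(x, Q):
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

# Exhaustive search over the 2**4 assignments stands in for QAOA/VQE.
best = min(itertools.product([0, 1], repeat=Q.shape[0]),
           key=lambda x: qubo_energy(x, Q))
print("kept triplets:", best, "energy:", qubo_energy(best, Q))
```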

 

Quantum homomorphic encryption

The latest advances in machine learning and data analytics offer great potential for gaining new insights from medical data. However, data privacy is of paramount concern. Anonymisation via the removal of personal information is not an option, since medical records carry information that may allow the identification of the owner far more reliably than a name or a date of birth. One possibility being studied is to encrypt sensitive information in such a way that data analytics remains possible without decryption. This is called homomorphic encryption. It is important to find an encryption strategy that is secure, while also ensuring that a large family of analytic algorithms can still be applied to the data. While such encryption algorithms do exist, they require high-quality random numbers and tend to be very demanding in terms of computing resources. Thus, this is a promising field of investigation for the utilisation of quantum computing.
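The core idea of computing on encrypted data can be illustrated with a deliberately insecure toy: textbook RSA is multiplicatively homomorphic, so a product computed on ciphertexts decrypts to the product of the plaintexts. This is not the QHE scheme under study, only a minimal sketch of the concept.

```python
# Toy demonstration of the homomorphic idea with textbook RSA (multiplicatively
# homomorphic). Tiny hand-picked primes for illustration only; NOT secure and
# NOT the quantum homomorphic encryption scheme discussed in the project.
p, q = 61, 53
n = p * q                      # 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent (2753), Python 3.8+

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

m1, m2 = 7, 12
c_product = (encrypt(m1) * encrypt(m2)) % n     # computation on encrypted data
assert decrypt(c_product) == (m1 * m2) % n      # equals the product of plaintexts
print(decrypt(c_product))                       # 84
```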

The project has multiple aims: to transfer anonymised medical records protected by quantum keys, to develop a quantum homomorphic encryption (QHE) algorithm to apply to them, and to analyse the data with QHE-friendly analysis tools (techniques based on machine learning or deep learning). The main project consists of four parts, each realised in collaboration with different partners, both European and Korean: ID Quantique, Innosuisse (the Swiss Innovation Agency), the Korea Institute of Science and Technology Information (KISTI), and the Seoul National University Bundang Hospital (SNUBH).

 

Quantum random number generator

We have recently established a collaboration with Cambridge Quantum Computing to test the performance of a new quantum random-number generator and to study its integration within the simulation software used in high-energy physics.

 

RandomPower: evaluating the impact of a low-cost, robust ‘true random power generator’

Researchers at the University of Insubria in Italy have invented a true random number generator (TRNG). This is based on the local analysis of the time series of endogenous self-amplified pulses in a specific silicon device. The principle has been validated with lab equipment, and a low-cost, small-form-factor board has been developed and commissioned with the support of an ATTRACT project. The board can deliver a stream of unpredictable bits at frequencies currently up to 1 Mbps for a single generator, with the possibility of scaling up. Randomness has been qualified using a test suite from the US National Institute of Standards and Technology (NIST), as well as additional tests. Together with CERN openlab, the University of Insubria intends to evaluate the impact of the availability of the TRNG in a set of use cases, as follows:

  • Modifying the Linux OS to replace the built-in random-number generation with the Random Power stream, in order to facilitate its adoption.
  • Comparing the outcome of training generative adversarial networks with training sets driven by pseudo-random number generators (PRNGs) or by the Random Power TRNG (a minimal randomness check is sketched after this list).
  • Identifying classes of Monte Carlo simulations in high-energy physics where the use of PRNGs could be particularly critical.
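A minimal sketch of the kind of check involved is the NIST SP 800-22 frequency (monobit) test, shown below for a pseudo-random baseline; the hardware-stream file name and format are purely hypothetical.

```python
# Minimal randomness check (not the full NIST suite): the SP 800-22 frequency
# (monobit) test, which asks whether ones and zeros are equally likely.
import math
import numpy as np

def monobit_pvalue(bits):
    bits = np.asarray(bits, dtype=np.int8)
    s_obs = abs(int(np.sum(2 * bits - 1))) / math.sqrt(len(bits))   # map 0/1 -> ±1
    return math.erfc(s_obs / math.sqrt(2))      # p > 0.01 passes at the usual level

# Pseudo-random baseline; a Random Power stream would instead be read from the board.
prng_bits = np.random.default_rng(seed=1).integers(0, 2, size=1_000_000)
print("PRNG monobit p-value:", monobit_pvalue(prng_bits))
# hw_bits = np.unpackbits(np.fromfile("randompower_stream.bin", dtype=np.uint8))  # hypothetical file
# print("TRNG monobit p-value:", monobit_pvalue(hw_bits))
```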

Moreover, the availability of a low-cost platform for high-quality random numbers may open up new possibilities in the use of homomorphic encryption, relevant for privacy-preserving data analysis; this will be thoroughly evaluated.

Recent progress

-

Next steps

In 2020, we will continue to assess the merits of each of these lines of investigation. We will also continue our discussions with a range of companies and research institutes to identify areas for mutually beneficial collaboration. Where appropriate, we will work to formalise the investigations into full stand-alone projects.

 

 

More information:

Der große Traum vom Superhirn ('The big dream of the super-brain', in German) in Süddeutsche Zeitung

The past, present and future of computing in high-energy physics in physicsworld

Wisconsin Quantum Institute Awarded Grant to Advance Quantum Computing Machine Learning in HPCwire

CERN, IBM Collaborate on Quantum Computing in IBM Research Blog

Inside the High-Stakes Race to Make Quantum Computers Work in Wired

Exploring quantum computing for high-energy physics in CERN courier

Quantum thinking required in CERN courier

Publications

    E. F. Combarro, F. Carminati, S. Vallecorsa, et al., On protocols for increasing the uniformity of random bits generated with noisy quantum computers. Published in J Supercomput, 2021. cern.ch/go/9DGk
    M. Fernández-Pendás, E. F. Combarro, S. Vallecorsa, J. Ranilla, I. F. Rúa, A study of the performance of classical minimizers in the Quantum Approximate Optimization Algorithm. Published in Journal of Computational and Applied Mathematics, 2021. cern.ch/go/6mVk

Quantum optimisation for grid computing

Project goal

The goal of this project is to develop quantum algorithms to help optimise how data is distributed for storage in the Worldwide LHC Computing Grid (WLCG), which consists of 167 computing centres, spread across 42 countries. Initial work focuses on the specific case of the ALICE experiment. We are trying to determine the optimal storage, movement, and access patterns for the data produced by this experiment in quasi-real-time. This would improve resource allocation and usage, thus leading to increased efficiency in the broader data-handling workflow.

R&D topic
Quantum technologies
Project coordinator(s)
Sofia Vallecorsa
Team members
CERN: Sofia Vallecorsa, Fabio Fracas, Costin Grigoras, Latchezar Betev | Polytechnic Institute of Grenoble: Jacques Demongeot | Polytechnic University of Bucharest: Mircea-Marian Popa, Mihai Carabas, George Pantelimon Popescu

Collaborators

Project background

The WLCG has been essential to the success of the LHC’s scientific programme. It is used to store and analyse the data produced by the LHC experiments. Optimal usage of the grid’s resources is a major challenge: with the foreseen increase in the data produced by the LHC experiments, workflow optimisation — particularly for the data-placement strategy — becomes extremely important.

Simulating this complex and highly non-linear environment is very difficult; the complexity of the task goes beyond the capability of the computing hardware available today. Quantum computing could offer the possibility to address this. Our project, a collaboration with the Polytechnic Institute of Grenoble and the Polytechnic University of Bucharest, will develop quantum algorithms to optimise the storage distribution.

Recent progress

In May 2019, this project was awarded one year of funding under the European Union's ATTRACT initiative. This project, which has the full title 'Quantum optimisation of Worldwide LHC Computing Grid data placement', is one of 19 ATTRACT projects in which CERN is involved. One of the major challenges faced by this project is the difficulty of defining a suitable description of the data set extracted from MonALISA, the monitoring and scheduling tool used by the ALICE experiment for grid operations. We have developed a hybrid quantum reinforcement-learning strategy based on the implementation of a quantum Boltzmann machine using the D-Wave quantum annealer simulator available in the D-Wave Ocean software suite. We tested our approach with a simplified use case. In addition, we developed an LSTM (long short-term memory) neural network architecture; this is capable of simulating the MonALISA I/O throughput with a high level of accuracy.
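For illustration, the sketch below shows how a much simplified, hypothetical data-placement decision could be encoded as a binary quadratic model with the dimod library from the D-Wave Ocean suite; the exact solver stands in for the annealer or the quantum Boltzmann machine used in the project.

```python
# Simplified sketch (hypothetical costs, not the project's model): a toy
# data-placement choice as a binary quadratic model with dimod (D-Wave Ocean).
# x_{d@s} = 1 means dataset d is stored at site s; a penalty term enforces
# "exactly one site per dataset". ExactSolver stands in for the annealer.
import dimod

datasets, sites = ["d0", "d1"], ["CERN", "T1"]
cost = {("d0", "CERN"): 1.0, ("d0", "T1"): 3.0,   # hypothetical access/transfer costs
        ("d1", "CERN"): 2.5, ("d1", "T1"): 1.0}

penalty = 10.0
linear, quadratic = {}, {}
for d in datasets:
    for s in sites:
        # expanding penalty * (sum_s x_{d@s} - 1)**2 gives -penalty on the diagonal ...
        linear[f"{d}@{s}"] = cost[(d, s)] - penalty
    # ... and +2*penalty between the two placement options of the same dataset
    quadratic[(f"{d}@{sites[0]}", f"{d}@{sites[1]}")] = 2 * penalty

bqm = dimod.BinaryQuadraticModel(linear, quadratic, 0.0, dimod.BINARY)
best = dimod.ExactSolver().sample(bqm).first
print(best.sample, best.energy)          # expected: d0 at CERN, d1 at T1
```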

Next steps

In 2021, we plan to extend this approach to the optimisation of a linear accelerator beam configuration problem in collaboration with the CERN Beams and Accelerators department. We will compare the performance of our quantum model to a classical reinforcement learning approach. 


Presentations

    F. Carminati, Quantum Optimization of Worldwide LHC Computing Grid data placement (7 November). Presented at Conference on Computing in High Energy & Nuclear Physics (CHEP), Adelaide, 2019. cern.ch/go/nP9M
    M. Popa, F. Carminati, S. Vallecorsa, F. Fracas, C. Grigoraş, L. Betev, M. Carabaş, G. P. Popescu, Quantum Optimization of Worldwide LHC Computing Grid data placement (22 January). Presented at CERN openlab Technical Workshop, Geneva, 2020. cern.ch/go/g7Mk

Quantum support vector machines for Higgs boson classification

Project goal

This project is investigating the use of quantum support vector machines (QSVMs) for the classification of particle collision events that produce a certain type of decay for the Higgs boson. Specifically, such machines are being used to identify instances where a Higgs boson fluctuates for a very short time into a top quark and a top anti-quark, before decaying into two photons. Understanding this process — known by physicists as ttH production — is challenging, because it is rare: only 1% of Higgs bosons are produced in association with two top quarks and, in addition, the Higgs and the top quarks decay into other particles in many complex ways, or modes.

R&D topic
Quantum technologies
Project coordinator(s)
Sau-Lan Wu, Ivano Tavernelli, Sofia Vallecorsa, Alberto Di Meglio
Team members
University of Wisconsin: Chen Zhou, Shaujun San, Wen Guan | IBM Zurich: Panagiotis Barkoutsos, Jennifer Glick

Collaborators

Project background

QSVMs are among the most promising machine-learning algorithms for quantum computers. Initial quantum implementations have already shown performance comparable to their classical counterparts. QSVMs are considered suitable algorithms for early adoption on noisy, near-term quantum-computing devices. Several initiatives are studying and optimising input data representation and training strategies.

We are testing IBM’s QSVM algorithm at the ATLAS experiment. Today, identifying ttH-production events relies on classical support vector machines, as well as another machine-learning technique known as ‘boosted decision trees’. Classically, these methods are used to improve event selection and background rejection by analysing 47 high-level characteristic features.
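The quantum-kernel idea behind a QSVM can be sketched with a classical simulation: encode each event into a small quantum state, compute the fidelity kernel between events by state-vector simulation, and train a standard SVM on the precomputed kernel. The angle encoding is an assumption rather than the circuit used in the project, and the features and labels below are random stand-ins, not ATLAS data.

```python
# Classically simulated sketch of a quantum-kernel SVM (hypothetical encoding,
# synthetic data): the kernel is the fidelity |<phi(x)|phi(y)>|**2 between
# product states obtained by angle-encoding one feature per qubit.
import numpy as np
from sklearn.svm import SVC

def feature_state(x):
    state = np.array([1.0])
    for xj in x:                                     # one qubit per input feature
        state = np.kron(state, np.array([np.cos(xj / 2), np.sin(xj / 2)]))
    return state

def quantum_kernel(A, B):
    SA = np.array([feature_state(a) for a in A])
    SB = np.array([feature_state(b) for b in B])
    return np.abs(SA @ SB.T) ** 2                    # fidelity between encoded states

rng = np.random.default_rng(0)                       # synthetic stand-in for ttH features
X_train = rng.normal(size=(40, 4)); y_train = (X_train.sum(axis=1) > 0).astype(int)
X_test  = rng.normal(size=(20, 4)); y_test  = (X_test.sum(axis=1) > 0).astype(int)

clf = SVC(kernel="precomputed").fit(quantum_kernel(X_train, X_train), y_train)
print("test accuracy:", clf.score(quantum_kernel(X_test, X_train), y_test))
```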

Recent progress

Different quantum classifiers have been investigated, including QSVM and quantum kernel methods. We compared their performance to classical models in terms of classification accuracy, training dataset size and the number of input features. We studied different types of noise and how they affect the final performance. We also compared the results obtained on IBM quantum hardware for two different Higgs decay modes.

Preliminary results, obtained using the quantum simulator, show that the QSVM can achieve performance that is comparable to its classical counterpart in terms of accuracy, while also being much faster. In addition, a quantum neural network was also developed; its performance was compared with that of the other quantum methodologies.

Next steps

We will continue with the optimisation of the models developed so far. The goal is to improve their performance and increase the size of the problem in terms of number of qubits and training datasets. At the same time, we aim to reduce the classical computing resources needed for the simulation of quantum circuits, which today represent a bottleneck for most studies of this kind. This is a necessary step for creating quantum solutions suitable for more realistic problems. 


Presentations

    W. Guan, Application on LHC High Energy Physic data analysis with IBM Quantum Computing (March). Presented at 19th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT), Saas-Fee, 2019. cern.ch/go/6DnG
    S. L. Wu, Application of Quantum Machine Learning to High Energy Physics Analysis at LHC Using Quantum Computer Simulators and Quantum Computer Hardware. Published at the QuantHEP Seminar, 2020. cern.ch/go/wZW7

Quantum machine learning for supersymmetry searches

Project goal

The goal of this project is to develop quantum machine-learning algorithms for the analysis of particle collision data from the LHC experiments. The particular example chosen is the identification and classification of supersymmetry signals from the Standard Model background.

R&D topic
Quantum technologies
Project coordinator(s)
Koji Terashi (University of Tokyo)
Team members
Michiru Kaneda, Tomoe Kishimoto, Masahiko Saito, Ryu Sawada, Junichi Tanaka, Federico Carminati, Sofia Vallecorsa, Fabio Fracas

Collaborators

Project background

The analysis of LHC data for the detection of effects beyond the Standard Model requires increasing levels of precision. Various machine-learning techniques are now part of the standard analysis toolbox for high-energy physics. Deep-learning algorithms are increasingly demonstrating their usefulness in various areas of analysis, thanks to their suitability for exploring much higher-dimensional spaces.

This seems close to an ideal area of application for quantum computing, which offers a parameter space that is potentially enormous, as well as a correspondingly large level of computational parallelism. Moreover, the quasi-optimal Gibbs sampling features of quantum computers may enhance the training of the deep-learning networks.

Recent progress

In 2020, the team in Tokyo pursued the study and characterisation of several quantum algorithms, in order to understand their applicability at different points in the data-processing chain. One of the problems being addressed is the acceleration of the event-generation process. Event generators (in particular their integration step) are expected to be extremely demanding in terms of computation time, especially for the generation of multi-jet events at the experiments that will run on the High-Luminosity LHC.

In another line of work, we studied different quantum machine-learning models, such as quantum convolutional neural networks, as applied to both classical and quantum data.

With the limited qubit counts, connectivity and coherence times of present quantum computers, quantum circuit optimisation is crucial for making the best use of these devices. In addition to the algorithmic studies outlined above, we have studied a novel circuit-optimisation protocol composed of two techniques: pattern recognition of repeated sets of gates, and reduction of circuit complexity by identifying computational basis states. The developed optimisation protocol demonstrates a significant gate reduction for a quantum algorithm used to simulate parton-shower processes.
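The first of these techniques can be illustrated with a deliberately simplified sketch that only cancels adjacent self-inverse gates acting on the same qubits; the published protocol is considerably more general and tracks commutation across intervening gates.

```python
# Deliberately simplified illustration of pattern-based gate reduction: cancel
# pairs of identical self-inverse gates that are adjacent in the instruction
# list (H·H = I, X·X = I, CNOT·CNOT = I).
SELF_INVERSE = {"H", "X", "CNOT"}

def cancel_adjacent(circuit):
    """circuit: list of (gate_name, qubits) tuples; returns the reduced list."""
    reduced = []
    for gate in circuit:
        if reduced and gate == reduced[-1] and gate[0] in SELF_INVERSE:
            reduced.pop()                 # the pair multiplies to the identity
        else:
            reduced.append(gate)
    return reduced

circ = [("H", (0,)), ("H", (0,)), ("CNOT", (0, 1)),
        ("X", (1,)), ("X", (1,)), ("CNOT", (0, 1))]
print(cancel_adjacent(circ))              # -> [] : all six gates cancel
```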

Next steps

Current results are very promising. However, they have been obtained using simulations that assume ideal quantum circuits, without taking into account effects such as noise, measurement errors, or limited coherence time. All of these issues can affect quantum hardware significantly, particularly noisy intermediate-scale quantum devices. Our next steps will include a detailed study of strategies for error mitigation.

Publications

    K. Terashi, M. Kaneda, T. Kishimoto, M. Saito, R. Sawada, J. Tanaka, Event Classification with Quantum Machine Learning in High-Energy Physics (3 January). Published in SpringerLink, 2021. cern.ch/go/9WmF
    W. Guan, G. Perdue, A. Pesah, M. Schuld, K. Terashi, S. Vallecorsa, J. Vlimant, Quantum machine learning in high energy physics (31 March). Published in IOP Science, 2021. cern.ch/go/6lVW
    W. Jang, K. Terashi, M. Saito, C. W. Bauer, B. Nachman, Y. Iiyama, T. Kishimoto, R. Okubo, R. Sawada, J. Tanaka, Quantum Gate Pattern Recognition and Circuit Optimization for Scientific Applications (19 February). Published in arXiv.org, 2021. cern.ch/go/K9jM

Presentations

    W. Jang, K. Terashi, M. Saito, C. W. Bauer, B. Nachman, Y. Iiyama, T. Kishimoto, R. Okubo, R. Sawada, J. Tanaka, Quantum Gate Pattern Recognition and Circuit Optimization for Scientific Applications (20 May). Presented at 25th International Conference on Computing in High-Energy and Nuclear Physics, 2021.
    W. Jang, K. Terashi, M. Saito, Y. Iiyama, T. Kishimoto, R. Okubo, R. Sawada, J. Tanaka, Quantum Circuit Optimization for Scientific Applications (11 March). Presented at CERN openlab Technical Workshop, Geneva, 2021. cern.ch/go/lqp6

Quantum graph neural networks

Project goal

The goal of this project is to explore the feasibility of using quantum algorithms to help track the particles produced by collisions in the LHC more efficiently. This is particularly important as the rate of collisions is set to increase dramatically in the coming years.

R&D topic
Quantum technologies
Project coordinator(s)
Sofia Vallecorsa
Team members
Fabio Fracas, Cenk Tüysüz, Jean-Roch Vlimant, Bilge Demirkoz, Kristiane Novotny, Carla Rieger.
Collaborator liaison(s)
Daniel Dobos, Karolos Potamianos

Collaborators

Project background

The LHC at CERN produces collisions at unprecedented collider energy. The hundreds of particles created during the collisions are recorded by large detectors composed of several sub-detectors. At the centre of these detectors there is usually a tracking detector, which precisely records the signal of the passage of charged particles through thin layers of active material. The trajectories of the particles are bent by a magnetic field to allow the measurement of their momentum. A ten-fold increase in the number of tracks produced per bunch crossing is expected after the high-luminosity upgrade of the LHC.

The reconstruction of charged-particle trajectories is performed by classical algorithms based on the Kalman filter formalism. Although these are quite accurate, they scale worse than quadratically with the number of tracks. Several approaches are being explored to mitigate the increase in computing needs, such as new detector layouts, deep learning and code parallelisation. Quantum computing has been shown to provide speed-ups for certain problems, and different R&D initiatives are exploring how quantum tracking algorithms could exploit such capabilities. We are developing a quantum-based track-finding algorithm aimed at reducing the combinatorial background during the initial seeding stage for the Kalman filters. We are using the publicly available dataset designed for the Kaggle ‘TrackML’ challenge for this work.

Recent progress

We have developed a prototype quantum graph neural network (QGNN) algorithm for tracking the particles produced by collision events. The model uses a graph interpretation for trajectory reconstruction, representing detector hits as the nodes of a graph and segments between hits as its edges. The quantum model is based on cascaded hierarchical classifiers that identify the nodes and segments belonging to particle tracks.
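A minimal sketch of the graph-construction step (with hypothetical hit coordinates and selection thresholds) is shown below; the (Q)GNN then classifies each candidate edge as belonging to a true track or not.

```python
# Minimal sketch of hit-graph construction (hypothetical hits and thresholds):
# nodes are detector hits, and candidate edges connect hits on consecutive
# layers whose azimuthal separation is small.
import numpy as np

hits = np.array([           # columns: layer, radius [mm], phi [rad]
    [0,  30.0, 0.10], [0,  30.0, 1.50],
    [1,  70.0, 0.12], [1,  70.0, 1.55],
    [2, 110.0, 0.15], [2, 110.0, 1.48],
])

def build_segments(hits, max_dphi=0.1):
    edges = []
    for i, (layer_i, _, phi_i) in enumerate(hits):
        for j, (layer_j, _, phi_j) in enumerate(hits):
            if layer_j == layer_i + 1 and abs(phi_j - phi_i) < max_dphi:
                edges.append((i, j))      # candidate segment between consecutive layers
    return edges

print(build_segments(hits))   # the (Q)GNN then classifies each edge as true or fake
```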

Several architectures have been investigated, ranging from tree tensor networks to multi-scale entanglement renormalization ansatz (MERA) graphs, and the results were compared against classical graph neural networks (GNNs). We studied the way in which both quantum effects, such as entanglement, and the expressibility of quantum circuits impact the final performance. In addition, a student project during the summer focused on the design and optimisation of a hybrid approach to data embedding, combining a classical multi-layer perceptron with a quantum circuit. The results were presented at several international conferences and workshops, including Connecting The Dots 2020 and IOP Quantum 2020.

In parallel to the high-energy physics use case, we also started investigating the application of this methodology to the collision-avoidance systems used for commercial air traffic.

Next steps

This work represents a first complete look at particle track reconstruction using a QGNN. Our results show that the prototype can perform at a similar level to classical approaches. We have also learned how scaling the network depth can improve its performance. However, it is known that there are many obstacles to the efficient training of extended QGNNs. Therefore, we are currently maintaining a conservative attitude towards the advantages offered by such networks.

Our plan is to improve the model by introducing better strategies for encoding information. We will also continue the analysis of the properties of the quantum circuits, in order to avoid the problems typically encountered when training hybrid models, such as the effect of vanishing gradients.

Publications

    C. Tüysüz, F. Carminati, B. Demirköz, D. Dobos, F. Fracas, K. Novotny, K. Potamianos, S. Vallecorsa, J.-R. Vlimant, A Quantum Graph Neural Network Approach to Particle Track Reconstruction. arXiv e-prints, p. arXiv:2007.06868 [cs.DC], 2020. cern.ch/go/6H88
    K. Novotny, C. Tüysüz, C. Rieger, D. Dobos, K. Potamianos, B. Demirköz, F. Carminati, S. Vallecorsa, J. R. Vlimant, F. Fracas, Quantum Track Reconstruction Algorithms for non-HEP applications. Published at Proceedings of Science, 2020. cern.ch/go/cX7w
    C. Rieger, Exploring hybrid quantum-classical neural networks for particle tracking. Published at Zenodo, 2020. cern.ch/go/96wK
    C. Tüysüz, K. Novotny, C. Rieger, F. Carminati, B. Demirköz, D. Dobos, F. Fracas, K. Potamianos, S. Vallecorsa, J. R. Vlimant, Performance of Particle Tracking Using a Quantum Graph Neural Network. Published at Cornell University, 2021. cern.ch/go/J7sp
    C. Tüysüz, C. Rieger, K. Novotny, B. Demirköz, D. Dobos, K. Potamianos, S. Vallecorsa, J.R. Vlimant, R. Forster, Hybrid quantum classical graph neural networks for particle track reconstruction. Published at SpringerLink, 2021. cern.ch/go/cn9m

Presentations

    F. Carminati, Particle Track Reconstruction with Quantum Algorithms (7 November). Presented at Conference on Computing in High Energy & Nuclear Physics, Adelaide, 2019. cern.ch/go/7Ddm
    D. Dobos, HEP Graph Machine Learning for Industrial & Humanitarian Applications (26 November). Presented at Conference on HEPTECH AIME19 AI & ML, Budapest, 2019. cern.ch/go/9Rvl
    C. Tüysüz, A Quantum Graph Neural Network Approach to Particle Track Reconstruction (22 January). Presented at the CERN openlab Technical Workshop, CERN, 2020. cern.ch/go/6TMH
    C. Tüysüz, A Quantum Graph Neural Network Approach to Particle Track Reconstruction (20 April). Presented at the 6th International Workshop “Connecting the dots”, Princeton University, 2020. cern.ch/go/9Tdt
    C. Tüysüz, Quantum Graph Neural Networks for Track Reconstruction in Particle Physics and Beyond (22 October). Presented at 4th Inter-experiment Machine Learning Workshop, 2020. cern.ch/go/6QQT
    C. Tüysüz, Performance of Particle Tracking using a Quantum Graph Neural Network (8 October). Presented at BAŞARIM 2020 Konferansı, 2020. cern.ch/go/kn9v
    K. S. Novotny, Quantum Track Reconstruction Algorithms for non-HEP applications (29 July). Presented at the 40th International Conference on High Energy Physics, Prague, 2020. cern.ch/go/8HFG
    K. S. Novotny, Exploring (Quantum) Track Reconstruction Algorithms for non-HEP applications (21 April). Presented at the 6th International Workshop “Connecting the dots”, Princeton University, 2020. cern.ch/go/bMM
    C. Rieger, C. Tüysüz, K. Novotny, S. Vallecorsa, B. Demirköz, K. Potamianos, D. Dobos, J. Vlimant, R. Forster, Embedding of particle tracking data using hybrid quantum classical neural networks (20 May). Presented at the 25th International Conference on Computing in High-Energy and Nuclear Physics, 2021. cern.ch/go/6HRW
    C. Tüysüz, C. Rieger, Hybrid Quantum-Classical Graph Neural Networks for Track Reconstruction (11 March). Presented at CERN openlab Technical Workshop, Geneva, 2021. cern.ch/go/9Cjh
    C. Tüysüz, C. Rieger, Hybrid Quantum-Classical Graph Neural Networks for Track Reconstruction (26 April). Presented at Quantum Technology Initiative meeting, 2021. cern.ch/go/Qbx6
    C. Rieger (October). Presented at the IOP Quantum 2020 conference, 2020.

Quantum computing for simulation: investigating quantum generative adversarial networks and quantum random number generators

Project goal

The collaboration with Cambridge Quantum Computing (CQC) is investigating the advantages and challenges related to the integration of quantum computing into simulation workloads. This work is split into two main areas of R&D: (i) developing quantum generative adversarial networks (GANs) and (ii) testing the performance of quantum random number generators. This second area involves testing the performance of such generators with respect to modern pseudo-random number generators in the context of simulation software used in high-energy physics.

R&D topic
Quantum technologies
Project coordinator(s)
Sofia Vallecorsa
Team members
CERN: Su Yeon Chang | University of Oviedo: Elias Combarro | CQC: Simon McAdams, Ross Duncan, Mattia Fiorentini, Steven Herbert, Alec Edgington, Cameron Foreman, Florian Curchod, Chad Edwards, Kimberley Worral

Collaborators

Project background

Research in high-energy physics, as in many other scientific domains, makes extensive use of Monte Carlo calculations — both in theory and in experiments.

GANs are among the most interesting models in classical machine learning. GANs are an example of generative models (i.e. models that learn a hidden distribution from a training dataset) and can sample new synthetic data. We have been investigating their use as an alternative to Monte Carlo simulation, obtaining remarkable results. Much faster than standard Monte Carlo algorithms, GANs can generate realistic synthetic data, while retaining a high level of accuracy (see our fast simulation project). Quantum GANs could have more representational power than classical GANs, making them better able to learn more complex distributions from smaller training datasets. We are now training a quantum GAN to generate images of a few pixels. For this we are investigating two possible approaches:

  • A hybrid schema with a quantum generator learning the target PDF, using either a classical network or a variational quantum circuit as a discriminator (variational quantum generator).
  • A full quantum adversarial implementation (quGAN).

Monte Carlo methods make extensive use of random numbers. To ensure simulated data is unbiased, it is essential to understand the quality of the random-number generators, the question of true randomness, and other related issues. Modern pseudo-random number generators exhibit excellent performance; we are studying the effect of replacing them with genuine quantum random numbers in different Monte Carlo applications.
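As a minimal sketch of such a comparison, the toy Monte Carlo below estimates π from a stream of uniform numbers, so that the only thing that changes between runs is the random-number source; the quantum stream is read from a hypothetical file.

```python
# Toy comparison harness (not the project's benchmarks): estimate pi by Monte
# Carlo from uniform numbers, with a pluggable random-number source.
import numpy as np

def estimate_pi(uniforms):
    xy = np.asarray(uniforms).reshape(-1, 2)
    inside = xy[:, 0] ** 2 + xy[:, 1] ** 2 <= 1.0    # inside the unit quarter-circle
    return 4.0 * inside.mean()

def bits_to_uniforms(bits, bits_per_number=32):
    """Interpret consecutive bits as the binary fraction 0.b1 b2 ... in [0, 1)."""
    bits = np.asarray(bits, dtype=np.float64).reshape(-1, bits_per_number)
    return bits @ (0.5 ** np.arange(1, bits_per_number + 1))

print("PRNG:", estimate_pi(np.random.default_rng(42).random(2_000_000)))
# Quantum/hardware stream (hypothetical file name and raw-bit format):
# qbits = np.unpackbits(np.fromfile("qrng_bits.bin", dtype=np.uint8))
# print("QRNG:", estimate_pi(bits_to_uniforms(qbits)))
```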

Recent progress

GANs are systems composed of two networks (a generator and a discriminator) that are trained one against the other in an adversarial fashion. We developed both fully quantum GAN models and hybrid classical-quantum GAN models. In the latter, the generator is implemented as a variational quantum circuit and the discriminator is a classical neural network. We tested different configurations (in terms of the number of qubits and the circuit depth) in order to reproduce the energy pattern deposited along the longitudinal direction of a calorimeter.

In terms of testing quantum random-number generators, we investigated various Ising models and machine-learning applications. The applications tested were not sufficiently sensitive to reveal relevant differences between pseudo-random and genuine quantum random numbers.

Next steps

The results we obtained in 2020 are extremely promising. In 2021, we will explore techniques to stabilise GAN training and increase the size of the simulated output. We will also explore the effect of noise- and error-mitigation strategies on the quantum and hybrid GAN prototypes.


Presentations

    S. Y. Chang, Quantum 2020 review (19-22 October). Presented at IOP Quantum, 2020.