Building on our work from 2018, we focused on optimising a more complex model that can simulate the effects of several particle types to within 5–10% over a large energy range and under realistic kinematic conditions. The model is remarkably accurate: the GAN reproduces Monte Carlo predictions to within just a few percent.
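To make the "within a few percent" claim concrete, the sketch below shows one simple way such agreement can be quantified: histogram a physics observable for a reference sample and a generated sample, then take the bin-by-bin relative difference. The Gaussian toy distributions here are purely illustrative stand-ins, not our actual shower data, and the real validation compares full 3D energy depositions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins: an observable from a Monte Carlo reference
# and from a generative model (illustrative only, not real shower data).
mc_sample = rng.normal(loc=50.0, scale=10.0, size=100_000)
gan_sample = rng.normal(loc=50.5, scale=10.2, size=100_000)

bins = np.linspace(0.0, 100.0, 51)
mc_hist, _ = np.histogram(mc_sample, bins=bins, density=True)
gan_hist, _ = np.histogram(gan_sample, bins=bins, density=True)

# Bin-by-bin relative difference, in percent, over the well-populated bins
# (sparsely populated tails are dominated by statistical noise).
mask = mc_hist > 0.2 * mc_hist.max()
rel_diff = 100.0 * np.abs(gan_hist[mask] - mc_hist[mask]) / mc_hist[mask]
print(f"mean |GAN - MC| / MC in core bins: {rel_diff.mean():.1f}%")
```

A per-bin relative difference like this is easy to summarise as a single percentage, which is why it is a common first check before moving to more detailed shape comparisons.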
Training time, however, remains a bottleneck for the meta-optimisation of the model, which covers not only the network weights but also the architecture and convergence parameters. Much of our work in 2019 concentrated on addressing this issue.
We followed up on work started in 2018 to develop distributed versions of our training code for both GPUs and CPUs, and tested their performance and scalability in different environments, such as high-performance computing (HPC) clusters and clouds. The results are encouraging: we observed almost linear speed-up as the number of processors increased, with very limited degradation of the physics results.
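The near-linear speed-up with limited accuracy loss follows from the structure of data-parallel training: each worker computes gradients on its own data shard, and the gradients are then averaged (an all-reduce in MPI or Horovod terms), which for equally sized shards reproduces the full-batch gradient. The minimal numpy sketch below illustrates this equivalence on a toy least-squares problem; it is a simplification of real distributed GAN training, and all names and values are illustrative.

```python
import numpy as np

def worker_gradient(w, X, y):
    """Least-squares gradient on one worker's data shard (a toy stand-in
    for the per-GPU gradient computation in data-parallel training)."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def allreduce_average(grads):
    """Average gradients across workers, as an all-reduce would."""
    return np.mean(grads, axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w = np.zeros(3)
shards = np.array_split(np.arange(64), 4)  # 4 simulated workers, equal shards

# One data-parallel step: each worker processes its shard, then all-reduce.
grads = [worker_gradient(w, X[s], y[s]) for s in shards]
g_parallel = allreduce_average(grads)
g_serial = worker_gradient(w, X, y)  # single-worker, full-batch gradient

# The averaged gradient matches the full-batch gradient, which is why
# wall-clock time drops with worker count while results barely change.
print(np.allclose(g_parallel, g_serial))
```

In practice some degradation can appear because scaling out increases the effective batch size, which interacts with learning-rate and convergence settings; this is part of what the meta-optimisation has to account for.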
We also began implementing a genetic algorithm for optimisation, which performs training and hyper-parameter optimisation of our network simultaneously, making it easier to generalise our GAN to different detector geometries.
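To illustrate the genetic-algorithm idea, the sketch below evolves a small population of hyper-parameter settings through selection, crossover, and mutation. It is a deliberately simplified, self-contained toy, not our actual implementation: the fitness function, hyper-parameter ranges, and target values are invented for illustration, and a real run would score each genome by (partially) training the network.

```python
import random

random.seed(7)

# Toy fitness: pretend the best validation score is reached at
# learning_rate = 0.01 and layer width = 128 (illustrative values only;
# in reality, fitness would come from training and evaluating the GAN).
def fitness(genome):
    lr, width = genome
    return -((lr - 0.01) ** 2 * 1e4 + ((width - 128) / 128.0) ** 2)

def random_genome():
    return (random.uniform(1e-4, 0.1), random.choice([32, 64, 128, 256]))

def mutate(genome):
    lr, width = genome
    lr = min(0.1, max(1e-4, lr * random.uniform(0.5, 2.0)))
    if random.random() < 0.3:
        width = random.choice([32, 64, 128, 256])
    return (lr, width)

def crossover(a, b):
    # Uniform crossover: take each hyper-parameter from either parent.
    return tuple(random.choice(pair) for pair in zip(a, b))

population = [random_genome() for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                      # selection: keep the fittest
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(15)]
    population = parents + children               # elitism + offspring

best = max(population, key=fitness)
print(best)
```

Because each genome's fitness evaluation is independent, the population can be scored in parallel across workers, which is what makes this approach a natural fit for the distributed-training infrastructure described above.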
We will continue to investigate HPC training, working on the optimisation of physics accuracy in the distributed training mode, and we will complete the development of the genetic-algorithm approach to hyper-parameter optimisation.
More broadly, we now believe our model is mature enough for us to start planning its test integration with the classical simulation approaches currently used by the LHC experiments. In addition, we will extend the tool to cover other detectors that are not currently simulated.