High-performance cloud caching technologies

Project goal

We’re exploring the suitability of a new infrastructure for key-value storage in the data-acquisition systems of particle-physics experiments. DAQDB (Data Acquisition Database) is a scalable and distributed key-value store that provides low-latency queries. It exploits Intel Optane DC Persistent Memory, a cutting-edge non-volatile memory technology that could make it possible to decouple real-time data acquisition from asynchronous event selection.

R&D topic
Computing performance and software
Project coordinator(s)
Giovanna Lehmann Miotto
Team members
Danilo Cicalese, Fabrice Le Goff, Jeremy Love, Remigius K Mommsen
Collaborator liaison(s)
Grzegorz Jereczek, Maciej Maciejewski, Jakub Radtke, Jakub Schmiegel, Malgorzata Szychowska, Aleksandra Jereczek, Adrian Pielech, Claudio Bellini


Project background

Upgrades to the LHC mean that the data rates coming from the detectors will dramatically increase. Data will need to be buffered while waiting for systems to select interesting collision events for analysis. However, the current buffers at the readout nodes can only store a few seconds of data, owing to capacity constraints and the high cost of DRAM. It is therefore important to explore new, cost-effective solutions — capable of handling large amounts of data — that capitalise on emerging technologies.
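The capacity constraint can be made concrete with a back-of-the-envelope calculation. The numbers below are illustrative assumptions, not figures taken from the experiments:

```python
def buffer_seconds(capacity_gb: float, rate_gb_per_s: float) -> float:
    """Seconds of detector data a buffer can hold at a given input rate."""
    return capacity_gb / rate_gb_per_s

# Assumed example: a 512 GB DRAM buffer facing an aggregate
# readout rate of 100 GB/s fills in roughly five seconds.
print(buffer_seconds(512, 100))  # 5.12
```

A larger, cheaper storage tier — persistent memory and NVMe SSDs rather than DRAM alone — is what would stretch this window from seconds towards minutes or hours.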

Recent progress

During 2018, we worked to assess the potential of this new approach, collaborating closely with Intel and coordinating with them on key design choices for the project.

Dedicated servers are needed to make use of the newest hardware, such as persistent memory and NVMe SSDs. We set up the new hardware at CERN and integrated it into the existing data-acquisition software platforms for the ATLAS experiment. We then tested DAQDB thoroughly, providing feedback to the developers.

In addition, we explored a range of alternative solutions for modifying the current ATLAS data-acquisition dataflow and integrating the key-value store. We successfully integrated DAQDB with the data-acquisition software and separated the readout nodes from the storage nodes, making them independent of one another.
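The decoupling described above can be sketched as follows. `EventStore` and its `put`/`get` methods are hypothetical stand-ins for a key-value interface, not DAQDB's actual API; a plain dictionary takes the place of the persistent-memory backend:

```python
class EventStore:
    """Toy in-memory key-value store standing in for DAQDB (illustrative only)."""

    def __init__(self) -> None:
        self._data: dict = {}

    def put(self, run: int, event: int, payload: bytes) -> None:
        # Readout nodes push raw event data keyed by (run, event)
        # and return immediately, without waiting for event selection.
        self._data[(run, event)] = payload

    def get(self, run: int, event: int) -> bytes:
        # Selection nodes pull event data asynchronously, at their own pace.
        return self._data[(run, event)]

store = EventStore()
store.put(327342, 1, b"\x00" * 16)   # readout path: low-latency write
fragment = store.get(327342, 1)      # selection path: independent, later read
```

Because readout only ever writes and selection only ever reads, the two sides share nothing but the key space, which is what allows them to run on separate nodes at different rates.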

Next steps

In the first part of 2019, we will evaluate the system’s performance in a small-scale test. We will then focus on assessing and improving performance in a more realistic, large-scale scenario. We will test Intel® Optane™ DC persistent memory and Intel® Optane™ DC SSDs. In parallel, we will begin work to integrate the system into other experiments, such as CMS and ProtoDUNE.

Presentations

    G. Jereczek, The design of a distributed key-value store for petascale hot storage in data acquisition systems (12 July). Presented at 23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP), Sofia, 2018. http://cern.ch/go/6hcX
    M. Maciejewski, A key-value store for Data Acquisition Systems (12 September). Presented at ATLAS TDAQ week, Cracow, 2018.
    G. Jereczek, M. Maciejewski, Data Acquisition Database (12 November). Presented at The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC18), Dallas, 2018.
    M. Maciejewski, J. Radtke, The Design of Key-Value Store for Data Acquisition Systems (5 December). Presented at NVMe Developer Days, San Diego, 2018.