The document summarizes the software development for the new data acquisition system of the COMPASS experiment at CERN. It describes the existing DAQ system and its limitations. The new system replaces the readout buffers and event builders with custom FPGA-based hardware for improved performance and reliability. It then presents the hardware architecture, the software requirements, and the layered software design based on master and slave processes. Preliminary performance and stability tests show promising results for deployment in 2014.
1. Software development for the COMPASS experiment
Martin Bodlák1, Vladimír Jarý1∗, Igor Konorov2,
Alexander Mann2, Josef Nový1, Stephan Paul2,
Miroslav Virius1
1 Faculty of Nuclear Sciences and Physical Engineering
Czech Technical University in Prague
2 Physik-Department
Technische Universität München
Conference “Tvorba softwaru 2012”
24th May 2012, Ostrava
∗ Vladimir.Jary@cern.ch
Vladimír Jarý et al. Software development for the COMPASS experiment
2. Overview
1 Introduction
COMPASS experiment
2 Current DAQ system
Architecture of the system
DATE package
3 Control and monitoring software for a new DAQ system
Motivation and requirements
Overview of the hardware architecture
Layers of the DAQ software
Implementation details
Performance tests
4 Conclusion and outlook
3. COMPASS experiment
COMPASS: Common muon and proton apparatus for
structure and spectroscopy
experiment with a fixed target situated on the Super Proton
Synchrotron particle accelerator at CERN [1]
scientific program approved in 1997 by CERN
experiments with hadron beam (glueballs, Primakoff
scattering, charmed hadrons,. . . )
experiments with muon beam (gluon contribution to the
nucleon spin, transverse spin structure of nucleons,. . . )
multiple types of polarized target
data taking started in 2002
plans at least until 2016 as COMPASS-II
3 programs: GPDs, Drell-Yan, Primakoff scattering
international project: ca. 250 physicists from 11 countries
and 29 institutions
4. COMPASS spectrometer
polarized target on the left, length approximately 50 m
COMPASS spectrometer, image taken from [1]
spectrometer consists of detectors:
1 measurement of deposited energy (calorimeters)
2 particle identification (RICH, muon filters)
3 particle tracking (wire chambers)
5. Terminology
event: collection of data describing the flight and interactions
of a particle through the spectrometer
roles of the data acquisition system (DAQ):
1 reads data produced by detectors (readout)
2 assembles full events from fragments (event building)
3 sends events into a permanent storage (data logging)
4 enables configuration, control, and monitoring (run control)
5 preprocesses and filters data (e.g. track reconstruction,
online filter)
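The event-building step above (assembling full events from fragments) can be sketched in a few lines of Python; the fragment layout (event number, source id, payload) is illustrative, not the real COMPASS data format:

```python
from collections import defaultdict

def build_events(fragments):
    """Group subevent fragments by event number and assemble full events.

    `fragments` is an iterable of (event_number, source_id, payload)
    tuples; the field names are hypothetical, for illustration only.
    """
    events = defaultdict(dict)
    for event_number, source_id, payload in fragments:
        events[event_number][source_id] = payload
    # A full event is the concatenation of its fragments, ordered by source.
    return {
        number: b"".join(parts[sid] for sid in sorted(parts))
        for number, parts in events.items()
    }

fragments = [
    (1, "CATCH-0", b"aa"), (1, "GeSiCA-0", b"bb"),
    (2, "CATCH-0", b"cc"), (2, "GeSiCA-0", b"dd"),
]
full = build_events(fragments)
```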
trigger: selects physically interesting events (or rejects
uninteresting events) in a high-rate environment with
minimal latency
trigger efficiency: ε = N_good(selected) / N_good(produced) < 1
DAQ deadtime: D = time(system busy) / time(total)
when system is busy, it cannot accept any other trigger
which leads to loss of data
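The two figures of merit defined above translate directly into code; the numbers in this sketch are invented for illustration, not COMPASS measurements:

```python
def deadtime(busy_time, total_time):
    # D = time(system busy) / time(total)
    return busy_time / total_time

def trigger_efficiency(selected_good, produced_good):
    # ε = N_good(selected) / N_good(produced), at most 1
    return selected_good / produced_good

# Illustrative numbers:
d = deadtime(busy_time=0.5, total_time=10.0)                   # 5 % deadtime
eff = trigger_efficiency(selected_good=95, produced_good=100)  # 0.95
```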
6. Overview of the TDAQ system
Structure of the trigger and data acquisition system according to [4]
7. Current DAQ architecture
influenced by the cycle of the SPS particle accelerator:
12 s of acceleration, 4.8 s of extraction (spill/burst)
key aspects: multiple layers, parallelism, buffering
1 detector (frontend) electronics:
preamplify, digitize data
250000 data channels
2 concentrator modules (CATCH, GeSiCA):
perform readout (triggered by the Trigger Control System)
append subevent header
3 readout buffers: buffering subevents in spillbuffer PCI cards
makes use of the SPS cycle to reduce data rate to 1/3 of the
on-spill rate, roughly stable data rate on the output
(derandomization)
4 event builders:
assemble full events from subevents
send full events to the permanent storage, store
metainformation about events into the Oracle DB
additional tasks (online filter, data quality monitoring)
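The derandomization gain quoted above (output at roughly 1/3 of the on-spill rate) follows from the SPS cycle numbers quoted earlier (12 s of acceleration, 4.8 s of extraction):

```python
# Derandomization arithmetic for the SPS cycle.
acceleration = 12.0  # s, no beam, no data arriving
extraction = 4.8     # s, on-spill data taking
cycle = acceleration + extraction  # 16.8 s total

# Buffering the spill and draining it over the whole cycle reduces
# the required output rate to the duty factor of the cycle:
duty_factor = extraction / cycle   # ≈ 0.29, i.e. roughly 1/3
```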
8. Current DAQ software
based on the ALICE DATE package[2]
DATE distinguishes two kinds of processors:
1 local data concentrators (LDCs)
perform readout of subevents, correspond to readout buffers
2 global data collectors (GDCs)
perform event building, correspond to event builders
requirements on the nodes:
1 all nodes must be x86 compatible
2 all nodes must be powered by GNU/Linux OS
3 all nodes must be connected to the network supporting the
TCP/IP stack
flexible system (fixed target mode × collider mode)
scalable system (full scale LHC experiment × small
laboratory system with one processor)
performance:
40 GB/s readout
2.5 GB/s event building
1.25 GB/s storage
9. Functionality
DATE provides:
1 readout, data flow
2 event building
3 run control
4 interactive configuration (based on the MySQL database)
5 event monitoring (COOOL)
6 data quality monitoring (MurphyTV)
7 information reporting (infoLogger, infoBrowser)
8 online filter (Cinderella)
9 load balancing (EDM, optional)
10 log book
11 ...
10. Problems with existing DAQ system
Motivation
260 TB recorded during the 2002 Run, 508 TB during the
2004 Run, more than 2 PB during the 2010 Run
increasing number of detectors and detector channels,
trigger rate ⇒ increasing data rates
aging of the hardware ⇒ increasing failure rate of hardware
PCI technology deprecated
Main idea of the new system
replace ROBs and EVBs by custom FPGA-based HW
hardware based data flow control and event building
smaller number of components, higher reliability
11. Overview of the hardware architecture
frontend electronics and concentrator modules unchanged
readout buffers and event builders replaced with custom
hardware:
Field Programmable Gate Array (FPGA) technology
FPGA card designed as a module for an Advanced
Telecommunications Computing Architecture (ATCA) carrier
card; 8 carrier cards in total:
6 for data multiplexing
2 for event building
each carrier card equipped with 4 FPGA modules
different functionality, same firmware
FPGA card equipped with 4 GB of RAM, 16 serial links
(bandwidth 3.25 GB/s)
softcore processor on each card runs the control and
monitoring software, communication based on Ethernet
ROBs and EVBs will be reused as a computing farm
12. Hardware architecture
13. Requirements analysis
Requirements:
distributed system, communication based on TCP/IP
compatibility with Detector Control System
compatibility with software for physics analysis
remote control and monitoring
multiple user roles
real time not required
Decisions:
use the DIM library for communication
do not use the DATE package
possibly reuse some DATE components (COOOL,
MurphyTV )
keep data format unchanged
14. Software architecture
Roles participating in the control and monitoring software
15. Roles participating in the software
1 Master process
controls slave processes
receives commands from GUI
authenticates and authorizes users
reads and writes configuration to online database
2 Slave processes
monitor and control the hardware
receive configuration information and commands from the
master process
3 GUI
receives information about health of the system from the
master process
sends commands to the master process that distributes
these commands to the slave processes
4 Message logger: collects messages produced by other
processes and stores them into database
5 Message browser: displays the messages produced by
other processes
16. Implementation details
communication between nodes based on the DIM library
implementation in Qt framework
slave processes implemented in C++ language, without Qt
scripting in Python (e.g. starting of the slave processes)
MySQL database (compatibility with Detector Control
System and DATE)
complex system ⇒ describe behavior of the master and
the slave processes by state machines
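A state machine of the kind mentioned above can be sketched as a transition table; the state and command names here are illustrative, not the actual states of the COMPASS master or slave processes:

```python
# Hypothetical transition table: (current state, command) -> next state.
TRANSITIONS = {
    ("idle", "configure"): "configured",
    ("configured", "start_run"): "running",
    ("running", "stop_run"): "configured",
    ("configured", "reset"): "idle",
}

class ProcessFSM:
    """Minimal finite state machine for a DAQ process."""

    def __init__(self):
        self.state = "idle"

    def handle(self, command):
        key = (self.state, command)
        if key not in TRANSITIONS:
            # Reject commands that are not allowed in the current state.
            raise ValueError(
                f"command {command!r} not allowed in state {self.state!r}")
        self.state = TRANSITIONS[key]
        return self.state

fsm = ProcessFSM()
fsm.handle("configure")   # -> "configured"
fsm.handle("start_run")   # -> "running"
```

Encoding the allowed transitions explicitly makes it impossible to, for example, start a run on an unconfigured system, which is one reason to model complex control software this way.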
17. State machines
State machine describing behavior of the master process
18. DIM Library[3]
developed for the DELPHI experiment at CERN
asynchronous one-to-many communication in a
heterogeneous network environment [3]
based on TCP/IP
interfaces to C, C++, Python, Java languages
communication between servers (publishers) and clients
(subscribers) through DIM Name Server (DNS)
types of messages:
services updated at regular intervals
services updated on demand
commands
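The publish/subscribe pattern that DIM implements can be illustrated in plain Python. This in-process sketch only mimics the three roles (name server, server, client); it is not the real DIM library, which works over TCP/IP and has its own bindings:

```python
class NameServer:
    """Stand-in for the DIM Name Server: maps service names to providers."""

    def __init__(self):
        self.services = {}

    def register(self, name, provider):
        self.services[name] = provider

    def lookup(self, name):
        return self.services[name]

class Server:
    """Publisher: registers a named service and pushes updates."""

    def __init__(self, dns, name):
        self.subscribers = []
        dns.register(name, self)

    def update(self, value):
        # Updated services are pushed to all subscribed clients.
        for callback in self.subscribers:
            callback(value)

class Client:
    """Subscriber: finds the service via the name server, then listens."""

    def __init__(self, dns, name, callback):
        dns.lookup(name).subscribers.append(callback)

dns = NameServer()
server = Server(dns, "DAQ/STATUS")   # hypothetical service name
received = []
Client(dns, "DAQ/STATUS", received.append)
server.update("RUNNING")
```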
19. DIM Name Server
Position of the DIM Name Server
20. Evaluation of the system
Test scenario:
number of nodes: 2 to 16
message size: 100 B to 500 kB
COMPASS internal network during winter shutdown (Gigabit
Ethernet)
standard x86 compatible hardware (event builders)
Tests performed:
performance
is the system able to update the hardware status
information every 100 ms?
stability
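Whether the 100 ms update requirement is plausible can be checked with a back-of-the-envelope calculation for the Gigabit Ethernet link used in the tests; this idealized model ignores protocol overhead and latency, so it only gives a lower bound on transfer time:

```python
# Gigabit Ethernet: 10**9 bits per second, i.e. 125 MB/s upper bound.
LINK_BYTES_PER_S = 1e9 / 8

def transfer_time(message_bytes):
    """Idealized wire time for one message, in seconds."""
    return message_bytes / LINK_BYTES_PER_S

small = transfer_time(100)       # 100 B status message
large = transfer_time(500_000)   # 500 kB message: 4 ms, well under 100 ms
```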
21. Results of the performance tests
Transfer speed as a function of size of the message
22. Results of the stability tests
Stability of the software in time
23. Summary and outlook
1 Analysis of the existing data acquisition system
based on the DATE package
scalability issues, deprecated technologies (PCI bus)
2 Development of control and monitoring software for new
DAQ architecture
analysis of requirements on software
description of the hardware architecture
definition of roles and behavior of the system
implementation
performance tests
3 Goals:
to test system on the real hardware
to have a fully functional system in 2013
to deploy the system in 2014
24. The bibliography
[1] P. Abbon et al.: The COMPASS experiment at CERN. In:
Nucl. Instrum. Methods Phys. Res. A 577, 3 (2007),
pp. 455–518. See also the COMPASS homepage at
http://wwwcompass.cern.ch
[2] T. Anticic et al. (ALICE DAQ Project): ALICE DAQ and ECS
User's Guide. CERN EDMS 616039, January 2006
[3] C. Gaspar: Distributed Information Management System
[online]. 2011. Available at: http://dim.web.cern.ch
[4] W. Vandelli: Introduction to Data Acquisition. In: International
School of Trigger and Data Acquisition, Roma, February
2011
Acknowledgement
This work has been supported by the MŠMT grants LA08015
and SGS 11/167.