Software development for the COMPASS experiment

Martin Bodlák1, Vladimír Jarý1*, Igor Konorov2, Alexander Mann2,
Josef Nový1, Stephan Paul2, Miroslav Virius1

1 Faculty of Nuclear Sciences and Physical Engineering,
  Czech Technical University in Prague
2 Physik-Department, Technische Universität München

Conference “Tvorba softwaru 2012”
24th May 2012, Ostrava

* Vladimir.Jary@cern.ch
                         Vladimír Jarý et al.   Software development for the COMPASS experiment
Overview


   1   Introduction
           COMPASS experiment
   2   Current DAQ system
           Architecture of the system
           DATE package
   3   Control and monitoring software for a new DAQ system
           Motivation and requirements
           Overview of the hardware architecture
           Layers of the DAQ software
           Implementation details
           Performance tests
   4   Conclusion and outlook



COMPASS experiment

    COMPASS: Common Muon and Proton Apparatus for Structure
    and Spectroscopy
    fixed-target experiment situated on the Super Proton
    Synchrotron particle accelerator at CERN [1]
    scientific program approved by CERN in 1997
        experiments with hadron beams (glueballs, Primakoff
        scattering, charmed hadrons, . . . )
        experiments with muon beams (gluon contribution to the
        nucleon spin, transverse spin structure of nucleons, . . . )
        multiple types of polarized target
    data taking started in 2002
    planned to continue at least until 2016 as COMPASS-II
        3 programs: GPDs, Drell-Yan, Primakoff scattering
    international project: approximately 250 physicists from
    11 countries and 29 institutions

COMPASS spectrometer
    polarized target on the left, length approximately 50 m




           COMPASS spectrometer, image taken from [1]


    the spectrometer consists of detectors for:
      1   measurement of deposited energy (calorimeters)
      2   particle identification (RICH, muon filters)
      3   particle tracking (wire chambers)
Terminology

     event: collection of data describing the flight and interactions
     of a particle through the spectrometer
     roles of the data acquisition system (DAQ):
       1   reads data produced by detectors (readout)
       2   assembles full events from fragments (event building)
       3   sends events into permanent storage (data logging)
       4   enables configuration, control, and monitoring (run control)
       5   preprocesses and filters data (e.g. track reconstruction,
           online filter)
     trigger: selects physically interesting events (or rejects
     uninteresting ones) in a high-rate environment with
     minimal latency
     trigger efficiency: ε = N_good(selected) / N_good(produced) < 1
     DAQ dead time: D = t_busy / t_total
           while the system is busy, it cannot accept any other trigger,
           which leads to loss of data
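The two figures of merit above reduce to simple ratios; a short sketch makes the definitions concrete (the event counts and times below are hypothetical, chosen only for illustration):

```python
# Trigger efficiency: fraction of physically interesting ("good")
# events that the trigger actually selects; always < 1 in practice.
def trigger_efficiency(n_good_selected: int, n_good_produced: int) -> float:
    return n_good_selected / n_good_produced

# DAQ dead time: fraction of the total time during which the system
# is busy and therefore cannot accept another trigger (data are lost).
def dead_time(t_busy: float, t_total: float) -> float:
    return t_busy / t_total

# Hypothetical numbers for illustration only:
eff = trigger_efficiency(9_000, 10_000)   # 0.9
d = dead_time(1.2, 16.8)                  # ~0.071
```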
Overview of the TDAQ system




  Structure of the trigger and data acquisition system according to [4]


Current DAQ architecture

     influenced by the cycle of the SPS particle accelerator:
     12 s of acceleration, 4.8 s of extraction (spill/burst)
     key aspects: multiple layers, parallelism, buffering
       1   detector (frontend) electronics:
                preamplify and digitize data
                250,000 data channels
       2   concentrator modules (CATCH, GeSiCA):
                perform readout (triggered by the Trigger Control System)
                append a subevent header
       3   readout buffers: buffer subevents in spillbuffer PCI cards
                make use of the SPS cycle to reduce the data rate to 1/3 of
                the on-spill rate, giving a roughly stable output data rate
                (derandomization)
       4   event builders:
                assemble full events from subevents
                send full events to permanent storage, store
                metainformation about events in the Oracle DB
                additional tasks (online filter, data quality monitoring)
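The derandomization factor quoted for the readout buffers follows directly from the SPS cycle numbers on this slide (12 s of acceleration plus 4.8 s of extraction); a quick check:

```python
# SPS cycle as quoted on the slide
t_acceleration = 12.0   # s, no beam extracted
t_extraction = 4.8      # s, spill/burst during which data arrive
t_cycle = t_acceleration + t_extraction

# Buffering the spill and draining it over the whole cycle reduces the
# average output rate to the duty-cycle fraction of the on-spill rate.
duty_cycle = t_extraction / t_cycle
print(f"average/on-spill rate = {duty_cycle:.3f}")  # ~0.286, roughly 1/3
```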

Current DAQ software
     based on the ALICE DATE package [2]
     DATE distinguishes two kinds of processors:
      1   local data concentrators (LDCs)
              perform readout of subevents, correspond to readout buffers
      2   global data collectors (GDCs)
              perform event building, correspond to event builders
     requirements on the nodes:
      1   all nodes must be x86 compatible
      2   all nodes must run a GNU/Linux OS
      3   all nodes must be connected to a network supporting the
          TCP/IP stack
     flexible system (fixed-target mode vs. collider mode)
     scalable system (from a full-scale LHC experiment down to a
     small laboratory system with one processor)
     performance:
          40 GB/s readout
          2.5 GB/s event building
          1.25 GB/s storage
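The GDC's event-building role can be sketched as collecting one subevent fragment per LDC for each event number and emitting the full event once complete. This is a toy model of the pattern, not the actual DATE implementation; all names are illustrative:

```python
from collections import defaultdict

def build_events(fragments, n_ldcs):
    """Group (event_number, ldc_id, payload) fragments into full events.

    Yields (event_number, {ldc_id: payload}) as soon as all n_ldcs
    fragments of an event have arrived. Toy model of a GDC's
    event-building step; not the DATE implementation.
    """
    pending = defaultdict(dict)
    for event_no, ldc_id, payload in fragments:
        pending[event_no][ldc_id] = payload
        if len(pending[event_no]) == n_ldcs:
            yield event_no, pending.pop(event_no)

# Fragments may arrive interleaved across events:
frags = [(1, "ldc0", b"a"), (2, "ldc0", b"c"),
         (1, "ldc1", b"b"), (2, "ldc1", b"d")]
events = dict(build_events(frags, n_ldcs=2))
# events[1] == {"ldc0": b"a", "ldc1": b"b"}
```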
Functionality

  DATE provides:
    1   readout, data flow
    2   event building
    3   run control
    4   interactive configuration (based on the MySQL database)
    5   event monitoring (COOOL)
    6   data quality monitoring (MurphyTV)
    7   information reporting (infoLogger, infoBrowser)
    8   online filter (Cinderella)
    9   load balancing (EDM, optional)
   10   log book
   11   ...


Problems with existing DAQ system


  Motivation
      260 TB recorded during the 2002 Run, 508 TB during the
      2004 Run, more than 2 PB during the 2010 Run
      increasing number of detectors and detector channels and
      higher trigger rates ⇒ increasing data rates
      aging hardware ⇒ increasing failure rate
      PCI technology deprecated
  Main idea of the new system
      replace ROBs and EVBs with custom FPGA-based hardware
      hardware-based data flow control and event building
      smaller number of components, higher reliability



Overview of the hardware architecture

     frontend electronics and concentrator modules unchanged
     readout buffers and event builders replaced with custom
     hardware:
         Field Programmable Gate Array (FPGA) technology
         FPGA card designed as a module for an Advanced
         Telecommunications Computing Architecture (ATCA) carrier
         card; 8 carrier cards in total:
             6 for data multiplexing
             2 for event building
             each carrier card equipped with 4 FPGA modules
             different functionality, same firmware
         FPGA card equipped with 4 GB of RAM and 16 serial links
         (bandwidth 3.25 GB/s)
         soft-core processor on each card runs the control and
         monitoring software; communication based on Ethernet
     ROBs and EVBs will be reused as a computing farm

Hardware architecture




Requirements analysis

  Requirements:
      distributed system, communication based on TCP/IP
      compatibility with the Detector Control System
      compatibility with the physics analysis software
      remote control and monitoring
      multiple user roles
      real-time operation not required
  Decisions:
      use the DIM library for communication
      do not use the DATE package
      possibly reuse some DATE components (COOOL, MurphyTV)
      keep the data format unchanged

                      Vladimí Jarý et al.   Software development for the C OMPASS experiment
Software architecture




      Roles participating in the control and monitoring software



Roles participating in the software
    1   Master process
              controls the slave processes
              receives commands from the GUI
              authenticates and authorizes users
              reads and writes configuration to the online database
    2   Slave processes
              monitor and control the hardware
              receive configuration information and commands from the
              master process
    3   GUI
              receives information about the health of the system from
              the master process
              sends commands to the master process, which distributes
              them to the slave processes
    4   Message logger: collects messages produced by the other
        processes and stores them in the database
    5   Message browser: displays the messages produced by the
        other processes
Implementation details



     communication between nodes based on the DIM library
     implementation based on the Qt framework
         slave processes implemented in plain C++, without Qt
     scripting in Python (e.g. starting the slave processes)
     MySQL database (compatibility with the Detector Control
     System and DATE)
     complex system ⇒ behavior of the master and slave
     processes described by state machines
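The state-machine approach can be sketched as a transition table keyed by (state, command). The state names and transitions below are assumptions for illustration, not the actual diagram from the slides:

```python
# Illustrative transition table for a DAQ control process.
# States and commands are hypothetical, not the real diagram.
TRANSITIONS = {
    ("idle", "configure"): "configured",
    ("configured", "start"): "running",
    ("running", "stop"): "configured",
    ("configured", "reset"): "idle",
}

class MasterProcess:
    def __init__(self):
        self.state = "idle"

    def handle(self, command: str) -> str:
        """Apply a command; an illegal command leaves the state unchanged."""
        self.state = TRANSITIONS.get((self.state, command), self.state)
        return self.state

m = MasterProcess()
m.handle("configure")   # -> "configured"
m.handle("start")       # -> "running"
```

Encoding the allowed transitions in one table keeps illegal command sequences (e.g. "start" before "configure") from ever corrupting the run state.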




State machines




      State machine describing behavior of the master process


DIM library [3]


     developed for the DELPHI experiment at CERN
     asynchronous one-to-many communication in a
     heterogeneous network environment [3]
     based on TCP/IP
     interfaces for the C, C++, Python, and Java languages
     communication between servers (publishers) and clients
     (subscribers) mediated by the DIM Name Server (DNS)
     types of messages:
         services updated at regular intervals
         services updated on demand
         commands
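The publish/subscribe model above can be illustrated with a minimal in-process sketch: servers register named services with a name server, clients look services up by name and subscribe. This only illustrates the pattern; the real DIM library works across TCP/IP and has its own C/C++ API, which is not reproduced here:

```python
# Toy model of DIM's service model; class and service names are
# illustrative, not the DIM API.
class NameServer:
    def __init__(self):
        self.services = {}              # service name -> subscribe hook

    def register(self, name, subscribe_hook):
        self.services[name] = subscribe_hook

    def subscribe(self, name, callback):
        self.services[name](callback)   # look up by name, attach client

class Service:
    def __init__(self, dns, name):
        self.subscribers = []
        dns.register(name, self.subscribers.append)

    def update(self, value):
        # push the new value to every subscribed client
        for cb in self.subscribers:
            cb(value)

dns = NameServer()
status = Service(dns, "MASTER/STATUS")  # server publishes a named service
received = []
dns.subscribe("MASTER/STATUS", received.append)  # client subscribes by name
status.update("running")
# received == ["running"]
```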




DIM Name Server




            Position of the DIM Name Server




Evaluation of the system


  Test scenario:
      number of nodes: 2–16
      message size: 100 B–500 kB
      COMPASS internal network during the winter shutdown
      (Gigabit Ethernet)
      standard x86-compatible hardware (event builders)
  Tests performed:
      performance
           is the system able to update the hardware status
           information every 100 ms?
      stability
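The 100 ms requirement can be phrased as a simple budget check: with N nodes and a per-message latency, a sequential poll cycle must fit in the interval. The latency values below are placeholders for illustration, not the measured results:

```python
def fits_update_budget(n_nodes: int, latency_s: float,
                       budget_s: float = 0.100) -> bool:
    """Can the status of all nodes be refreshed sequentially in budget_s?

    A deliberately pessimistic model: one round trip per node, no
    pipelining. Latency numbers are placeholders, not measurements.
    """
    return n_nodes * latency_s <= budget_s

fits_update_budget(16, 0.002)   # 16 nodes x 2 ms = 32 ms  -> fits
fits_update_budget(16, 0.010)   # 160 ms exceeds 100 ms    -> does not fit
```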



Results of the performance tests




         Transfer speed as a function of message size

Results of the stability tests




                  Stability of the software over time

Summary and outlook

   1   Analysis of the existing data acquisition system
           based on the DATE package
           scalability issues, deprecated technologies (PCI bus)
   2   Development of control and monitoring software for the new
       DAQ architecture
           analysis of the software requirements
           description of the hardware architecture
           definition of the roles and behavior of the system
           implementation
           performance tests
   3   Goals:
           test the system on the real hardware
           have a fully functional system in 2013
           deploy the system in 2014



The bibliography
     [1] P. Abbon et al.: The COMPASS experiment at CERN.
     Nucl. Instrum. Methods Phys. Res. A 577, 3 (2007),
     pp. 455–518. See also the COMPASS homepage at
     http://wwwcompass.cern.ch
     [2] T. Anticic et al. (ALICE DAQ Project): ALICE DAQ and ECS
     User’s Guide. CERN EDMS 616039, January 2006
     [3] C. Gaspar: Distributed Information Management System
     [online]. 2011. Available at: http://dim.web.cern.ch
     [4] W. Vandelli: Introduction to Data Acquisition. International
     School of Trigger and Data Acquisition, Rome, February
     2011
  Acknowledgement
  This work has been supported by the MŠMT grants LA08015
  and SGS 11/167.
