CINECA for HPC and e-infrastructures

                                 June 5, 2012




           Casalecchio di Reno (BO)
           Via Magnanelli 6/3, 40033 Casalecchio di Reno | 051 6171411 | www.cineca.it




Next future


 •     MIUR ministry entrustment for the incorporation into CINECA of the CILEA Consortium in
       Milan and the CASPUR Consortium in Rome

 •     Creation of a single entity whose critical mass will reinforce the Italian commitment to the
       development of e-infrastructure in Italy and in Europe: HPC and HPC technologies,
       scientific data repositories and data management, cloud computing for industry and public
       administration, and the development of compute-intensive and data-intensive methods for
       science and engineering

 •     From the beginning of 2013:
        – a single offer of open access to the integrated tier-0 and tier-1 national HPC infrastructure
        – a single offer of education and training activities under the umbrella of the PRACE
           Advanced Training Centre action
        – an integrated help desk and scale-up process for HPC user support




Partnership agreements with DEMOCRITOS (SISSA), CNR NANO, and CASPUR




 DEMOCRITOS will expand its current activities towards computational molecular biophysics and new areas of
 materials science. This rapidly advancing field requires the deployment of dedicated hardware and human
 resources, in much the same way that large facilities are necessary to even conceive certain experimental
 enterprises. To optimize the use of available resources, as well as the synergies that DEMOCRITOS will create,
 the action will be deployed by implementing a 'distributed virtual laboratory'. Such a laboratory will be strongly
 supported by SISSA and by CINECA.


  CNR NANO
  Istituto Nanoscienze is a new institute of CNR devoted to frontier research in nanoscience and
  nanotechnology, established in 2010 from three former laboratories of INFM: NEST-Pisa, NNL-Lecce, and
  S3-Modena. CNR NANO's interests range from fundamental science to emerging technologies, as well as
  applied projects of direct industrial and societal interest.




   Partner: CASPUR. The goal of the EPIGENOMICS FLAGSHIP PROJECT (Progetto Bandiera Epigenomica,
   2012-2015) is to understand, using a wide range of experimental models and approaches, how epigenetic
   mechanisms regulate biological processes, determine phenotypic variation, and contribute to the onset
   and progression of diseases.


Partnership agreement with INFN


  INFN, the National Institute of Nuclear Physics, is an organization dedicated to the study of
  the fundamental constituents of matter; it conducts theoretical and experimental research in
  the fields of subnuclear, nuclear, and astroparticle physics. Fundamental research in these
  areas requires the use of cutting-edge technologies and instrumentation, which INFN develops
  both in its own laboratories and in collaboration with industry. These activities are conducted
  in close collaboration with the academic world.

  A framework agreement with CINECA is in place for the development of applications for
  theoretical simulations of physical phenomena in the domain of the fundamental constituents
  of matter.

  In particular, joint development activity will address new prototype solutions for advanced
  computing based on massively parallel accelerated architectures.

  INFN will co-invest in the development of the HPC infrastructure at CINECA, looking ahead to
  the prospective successor of the FERMI system.




Partnership agreement with LENS


 LENS, the European Laboratory for Non-Linear Spectroscopy, is a European reference point for
 research with light waves, based on a fundamentally multi-disciplinary approach. Since its birth
 in 1991, LENS has been a center of excellence of the University of Florence.

 Objectives of the partnership:
 data processing, data mining, knowledge extraction/acceleration, mission-critical system
 management, and decision-support assistance

 Data mining: Connectome Project (Consortium Connettoma)
 Data mining: cardiology (predictive cardiopathy e-networks)
 Data mining: neuro-omics (Besta)
 Data mining: Human Brain Project
 Data mining: pharmaceutical data (Novartis)
 Stream Computing
 DeepQA (Watson)




Specific agreement with LENS for the Human Brain Project




                           [Figure. Scale bars: 1 mm]

Partnership agreement with the HPC Data Centric Italian Exascale Lab

           Initial activities:
           •      Validation of the new "EURORA" PRACE prototype architecture
           •      Evaluation of new programming models (ArBB, TBB, Cilk, OpenMP)
           •      Evaluation of MIC-to-MIC MPI communications (see the sketch after this list)
           •      Evaluation of and comparison between MIC and GPU
           •      Porting of full applications:
                    –     SISSA: Quantum ESPRESSO, GROMACS
                    –     ICTP: HP-SEE
                    –     OGS: solid-Earth model and ocean/biochemistry circulation model
                    –     INFN: QCD, lattice model, grid model
           •      Hot-water cooling efficiency
           •      Measurement of power consumption by dedicated sensors
           •      Independent power-consumption measurements for each main component of the
                  system
           •      Evaluation of TCO and total energy efficiency
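
For the MIC-to-MIC MPI evaluation, the natural starting point is a ping-pong microbenchmark. The following is a minimal, hypothetical sketch (not the project's actual test harness): two MPI ranks bounce a 1 MiB buffer and report average round-trip time and effective bandwidth.

```c
/* Minimal MPI ping-pong sketch: rank 0 and rank 1 exchange a buffer
 * NITER times; rank 0 reports the average round trip and bandwidth.
 * Hypothetical example code, not from the EURORA project itself. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NITER   1000
#define MSGSIZE (1 << 20)            /* 1 MiB per message */

int main(int argc, char **argv)
{
    int rank;
    char *buf = malloc(MSGSIZE);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);     /* start both ranks together */
    double t0 = MPI_Wtime();
    for (int i = 0; i < NITER; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSGSIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSGSIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSGSIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSGSIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double dt = MPI_Wtime() - t0;

    if (rank == 0)
        printf("avg round trip %.1f us, bandwidth %.2f GB/s\n",
               dt / NITER * 1e6,
               2.0 * (double)MSGSIZE * NITER / dt / 1e9);

    free(buf);
    MPI_Finalize();
    return 0;
}
```

Run with two ranks placed on the two devices under test (for example, one rank per MIC card); sweeping MSGSIZE from a few bytes upward separates latency effects from bandwidth effects.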

           Funded by the MIUR ministry with 1 million euro for Italy's participation in the PRACE
               action, on top of the funds already assured to CINECA. All the partners co-fund a
               similar amount, either in cash or in kind.




Added-value pay-per-use agreements with:
   - INAF, ESA, and ASI for the post-processing of the observational data of the Planck and Gaia missions;
   - IIT for the Italian Drug Discovery Network initiative led by IIT;
   - INGV for the EU ESFRI RI project EPOS led by INGV.




Participation in EU-funded projects

CINECA takes part as project coordinator, hosting member, or principal infrastructure provider in projects covering:
• European middleware infrastructure
• HPC for molecular modeling
• V-MusT (virtual archaeology)
• Handbook of HPC e-Science Infrastructure Allocation Reviewing, Selection and Management
• HPC for geophysics
• Towards EXASCALE R&D (3 projects)
• A Latin American project

• Drug design
• Computational fluid dynamics
• Production management
• Meteo-climatology
• Application development
• APA Stereo 3D Movie, Award at SIGGRAPH Asia 2011

HPC infrastructure for technical computing and added-value services for industries

| Logical Name     | Production (April 2010)  | Production (April 2011)    | Production (April 2012)      | Special Appl. (June 2008) |
|------------------|--------------------------|----------------------------|------------------------------|---------------------------|
| Model            | IBM LS21 / HS21          | HP ProLiant                | IBM LS21 / HS21              | IBM HS21                  |
| Architecture     | Linux Cluster            | Linux Cluster              | Linux Cluster                | Linux Cluster             |
| Processor        | Intel Nehalem QC 2.6 GHz | Intel Westmere EC 2.6 GHz  | Intel SandyBridge 8c 2.8 GHz | Intel Xeon QC 2.8 GHz     |
| # of cores       | 10240                    | 15360                      | 2560                         | 3280                      |
| # of nodes       | 1280                     | 1280                       | 320                          | 512                       |
| # of racks       | 25                       | 25                         | 4                            | 12                        |
| Total RAM        | 45 TB                    | 45 TB                      | 10 TB                        | 4 TB                      |
| Interconnect     | Cisco InfiniBand SDR 4x  | Mellanox InfiniBand QDR 4x | InfiniBand QDR 4x            | Cisco InfiniBand SDR 4x   |
| Operating System | RedHat                   | RedHat                     | RedHat                       | RedHat                    |
| Total Power      | ~800 kW                  | ~800 kW                    | ~200 kW                      | ~380 kW                   |
| Peak Performance | >100 TFlop/s             | >120 TFlop/s               | >57 TFlop/s                  | 40 TFlop/s                |




CINECA's first step into multi-Pflop/s

             Procurement 2008, terms of reference:
   • a 100 Tflop/s system at the beginning of 2009, with an option to upgrade to
     Pflop/s size
   • a prototype to demonstrate and facilitate the technological transition
   • 1 PF in 2012 at a not-to-exceed price

                 2009: IBM selected as vendor
   • 100 Tflop/s SP6 + BlueGene/P prototype
       – 12 SP6 p575 frames + I/O nodes + 6 DCS9900 FC storage servers
       – 1 BlueGene/P rack + I/O nodes + 1 DCS9900 FC storage server
Ongoing transition

               BlueGene/Q option (2.1 PFlop/s)

   • Reuse of the water-cooling infrastructure + free cooling
       – better PUE: from 1.3 (worst case measured) to 1.15 (expected); see the
         worked example below

   • Reuse of the power-distribution infrastructure with the same power
     consumption (~900 kW), UPSs and generators included

   • Rough transition (services stop during installation)
       – room issue: no separate room space to host the new system during the
         transition (due to other businesses)
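
A short worked example of what the PUE improvement means, assuming the ~900 kW figure refers to the IT load (an assumption; the slide does not say on which side of the meter it is measured):

\[
\mathrm{PUE} = \frac{P_{\text{facility}}}{P_{\text{IT}}}, \qquad
1.30 \times 900\,\mathrm{kW} \approx 1170\,\mathrm{kW}
\quad\rightarrow\quad
1.15 \times 900\,\mathrm{kW} \approx 1035\,\mathrm{kW},
\]

i.e. roughly 135 kW less cooling and distribution overhead for the same compute load.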
Fulfilling the PRACE commitment

                Main (technological) requirements

   • Transition to water cooling (better PUE)
       – the 100 Tflop/s plant should be reused for the next (current)
         technological step (for a system of 2 PF peak)

   • Power consumption: the 100 Tflop/s system hits the megawatt (not over)
       – reuse of UPSs, power panels, etc., with the same or lower power
         consumption for the next (current) technological step

   • Smooth transition
      – business continuity
      – data migration
Hardware: the big-blocks picture

"FERMI" BlueGene/Q @CINECA

[Architecture diagram, summarized:]
• FERMI BlueGene/Q (2 Pflop/s) connects through its I/O nodes and an InfiniBand interconnect to high-performance storage (3 PBytes).
• The hybrid iDataPlex PLX (151 Tflop/s) sits on its own InfiniBand interconnect, with I/O nodes attached to Fibre Channel storage.
• Front-end/login nodes are reached via SSH from the CINECA network; jobs are submitted from the front end to both FERMI and PLX.
• A back-end/data network (10 Gbit) links the systems to a high-capacity repository (REPO, >3 PBytes; one element marked TBD in the original diagram), exposed through data protocols and an HPC Data Portal, with external access over the GARR/GEANT Internet.
HPC infrastructure for scientific computing

| Logical Name     | FERMI (June 2012)   | SCS PLX (June 2011)           |
|------------------|---------------------|-------------------------------|
| Model            | IBM BG/Q            | IBM iDataPlex                 |
| Architecture     | MPP                 | Linux Cluster                 |
| Processor        | IBM PowerA2 1.6 GHz | Intel Westmere EC 2.4 GHz     |
| # of cores       | 163840              | 3288                          |
| # of nodes       | 10240               | 274 + 548 NVIDIA Fermi GPGPUs |
| # of racks       | 10                  | 14                            |
| Total RAM        | 163 TB              | ~16 TB                        |
| Interconnect     | IBM 5D Torus        | QLogic QDR 4x                 |
| Operating System | RedHat              | RedHat                        |
| Total Power      | ~1000 kW            | ~350 kW                       |
| Peak Performance | ~2100 TFlop/s       | ~350 TFlop/s                  |
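
The FERMI peak figure can be cross-checked from the table, assuming the standard BlueGene/Q value of 8 flops per cycle per core (four double-precision fused multiply-adds from the quad FPU):

\[
163840 \ \text{cores} \times 1.6\ \text{GHz} \times 8\ \frac{\text{flops}}{\text{cycle}}
\approx 2.1\ \text{Pflop/s}.
\]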




Storage infrastructure

| System       | Frontend bandwidth (GB/s) | Capacity (TB) | LAN / SAN tech. | Disk tech. | Notes     |
|--------------|---------------------------|---------------|-----------------|------------|-----------|
| 2 x S2A9500  | 3.2                       | 140           | FCP 4 Gb/s      | FC         |           |
| 4 x S2A9500  | 3.2                       | 140           | FCP 4 Gb/s      | FC         |           |
| 6 x DCS9900  | 5.0                       | 540           | FCP 8 Gb/s      | SATA       |           |
| 4 x DCS9900  | 5.0                       | 720           | FCP 4 Gb/s      | SATA       |           |
| 3 x DCS9900  | 5.0                       | 1500          | FCP 4 Gb/s      | SATA       |           |
| Hitachi Ds   | 3.2                       | 360           | FCP 4 Gb/s      | SATA       |           |
| 4 x SFA10000 | 10.0                      | 2700          | QDR             | SATA       | June 2012 |
| 1 x 5100     | 3.2                       | 66            | FCP 8 Gb/s      | FC         |           |
| 3 x SFA12000 | 40.0                      | 3600          | QDR             | SATA       | June 2012 |

Total capacity: >9.7 PB
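
Reading the bandwidth column as per-system figures, the three SFA12000 controllers together deliver 3 x 40 GB/s = 120 GB/s over QDR InfiniBand, which lines up with the "in excess of 120 GByte/s" front-end throughput quoted for the FERMI scratch storage on the next slide.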


BlueGene/Q "FERMI": the Italian PRACE tier-0 system

    • Technology architecture: Blue Gene/Q, 2.1 PF peak; 10 full racks, 163,840
      processor cores, 1 GB of RAM per core.
    • Storage system: 3 PBytes of scratch capacity, with front-end bandwidth
      throughput in excess of 120 GByte/s.
    • File system: GPFS.
    • Dedicated front-end cluster for storage-system management.
    • Full production: July 2012.



