The document summarizes the available high performance computing (HPC) resources at CSUC. It describes the hardware facilities including the Canigó and Pirineus II clusters, totaling over 3,000 cores and 317 TFlops of processing power. The working environment uses Linux and Slurm for job scheduling. Development tools like compilers, MPI libraries, and math libraries are available to support modeling and simulation work.
2. Summary
• Who are we?
• High performance computing at CSUC
• Hardware facilities
• Working environment
• Development environment
4. What is the CSUC?
• CSUC is a public consortium born from the merger of CESCA and CBUC
• Institutions part of the consortium:
• Associated institutions:
6. Summary
• Who are we?
• High performance computing at CSUC
• Hardware facilities
• Working environment
• Development environment
7. HPC matters
• Nowadays simulation is a fundamental tool to solve and understand problems in science and engineering
[Diagram: Theory, Simulation, Experiment]
8. HPC role in science and engineering
• HPC allows researchers to solve problems that would otherwise be out of reach
• Numerical simulations are used in a wide variety of fields, such as:
– Chemistry and materials sciences
– Life and health sciences
– Mathematics, physics and engineering
– Astronomy, space and Earth sciences
9. Main applications per knowledge area
– Chemistry and materials science: Vasp, Siesta, Gaussian, ADF, CP2K
– Life and health sciences: Amber, Gromacs, NAMD, Schrödinger, VMD
– Mathematics, physics and engineering: OpenFOAM, FDS, Code Aster, Paraview
– Astronomy and Earth sciences: WRF, WPS
10. Software available
• In the following link you can find a detailed list of the installed software: https://confluence.csuc.cat/display/HPCKB/Installed+software
• If you don't find your application, ask the support team and we will be happy to install it for you or help you with the installation process
11. Demography of the service: users
• 32 research projects from 14 different
institutions are using our HPC service.
• These projects are distributed into:
– 10 Large HPC projects (> 500.000 UC)
– 1 Medium HPC project (250.000 UC)
– 21 Small HPC projects (≤ 100.000 UC)
20. Canigó
• Shared memory machines (2 nodes)
• 33.18 Tflop/s peak performance (16.59 per node)
• 384 cores (8 CPUs Intel SP Platinum 8168 per node)
• Frequency of 2.7 GHz
• 4.6 TB main memory per node
• 20 TB disk storage
21. Pirineus II
4 nodes with 2 x GPGPU:
• 48 cores (2x Intel SP Platinum 8168, 2.7 GHz)
• 192 GB main memory
• 4.7 Tflop/s per GPGPU
4 Intel KNL nodes:
• 1 x Xeon-Phi 7250 (68 cores @ 1.5 GHz, 4 hw threads)
• 384 GB main memory per node
• 3.5 Tflop/s per node
22. Pirineus II
Standard nodes (44 nodes):
• 48 cores (2x Intel SP Platinum 6148, 2.7 GHz)
• 192 GB main memory (4 GB/core)
• 4 TB disk storage per node
High memory nodes (6 nodes):
• 48 cores (2x Intel SP Platinum 6148, 2.7 GHz)
• 384 GB main memory (8 GB/core)
• 4 TB disk storage per node
23. New high performance scratch system
• New high performance storage available based on BeeGFS
• 180 TB total space available
• Very high read/write speed
• Infiniband HDR direct connection (100 Gbps) between the BeeGFS cluster and the compute nodes
27. Summary
• Who are we?
• High performance computing at CSUC
• Hardware facilities
• Working environment
• Development environment
28. Working environment
• The working environment is shared between all the users of the service.
• Each machine runs a GNU/Linux operating system (Red Hat).
• Computational resources are managed by the Slurm workload manager.
• Compilers and development tools available: Intel, GNU and PGI
29. Batch manager: Slurm
• Slurm manages the available resources in order to achieve an optimal distribution among all the jobs in the system
• Slurm assigns a different priority to each job depending on many factors
… more on this after the coffee!
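As an illustration, a minimal Slurm batch script could look like the sketch below (the std partition is the one described later in this deck; the job name, resource values and executable are placeholders):
  #!/bin/bash
  #SBATCH --job-name=my_job        # name shown in the queue
  #SBATCH --partition=std          # standard-nodes partition
  #SBATCH --ntasks=48              # number of MPI tasks
  #SBATCH --time=01:00:00          # wall-clock limit (hh:mm:ss)
  srun ./my_program                # launch the (placeholder) executable
The script would be submitted with "sbatch job.sh" and monitored with "squeue".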
30. Storage units
(*) There is a limit per project depending on the number of users; the project quotas are 4, 8 and 16 GB for 5, 10 and 20 users respectively. We are working on improving these limits right now.
31. How to access to our services?
• You can apply for a RES (Red Española de Supercomputación) project requesting to work at CSUC (on Pirineus II or Canigó). More information about this at https://www.res.es/es/acceso-a-la-res
• If you are not granted a RES project, or you are not interested in applying for one, you can still work with us. More info at https://www.csuc.cat/ca/supercomputacio/sollicitud-d-us
32. HPC Service price
Academic project¹
• Initial block:
– Group I (500.000 UC): 8.333,33 €
– Group II (250.000 UC): 5.555,55 €
– Group III (100.000 UC): 3.333,33 €
• Additional 50.000 UC block:
– When you have paid for 500.000 UC: 280 €/block
– When you have paid for 250.000 UC: 1.100 €/block
– When you have paid for 100.000 UC: 1.390 €/block
• DGR discount for Catalan academic groups: -10 %
¹ 10 % discount for Catalan entities due to the funding that we receive from the DGR
33. Accounting HPC resources
• In order to quantify the resources used, we introduce the UC as a unit.
• UC: computational unit, defined as UC = HC (computational hour) x factor
– For standard nodes, 1 HC = 1 UC. Factor = 1.
– For GPU nodes, 1 HC = 1 UC. Factor = 1. (*)
– For KNL nodes, 1 HC = 0.5 UC. Factor = 0.5. (**)
– For Canigó (SMP), 1 HC = 2 UC. Factor = 2.
(*) You need to allocate a full socket (24 cores) at minimum
(**) You need to allocate the full node (72 cores)
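As a purely illustrative calculation (assuming one HC corresponds to one core-hour): a job allocating 48 cores for 10 hours consumes 48 x 10 = 480 HC, i.e. 480 UC on standard nodes (factor 1), while the same job on Canigó (factor 2) would be accounted as 960 UC.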
34. Choosing your architecture: HPC partitions // queues
• We have 4 partitions available for the users: std, gpu, knl and mem, working on standard, GPU, KNL or shared memory nodes respectively.
• Initially a user can only use the std partition, but any user who wants to use a different architecture only needs to request permission and it will be granted.
… more on this later...
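Once permission is granted, a job can be directed to another partition with the -p / --partition option of sbatch, as in this illustrative sketch (job.sh is a placeholder script):
  sbatch -p gpu job.sh    # submit to the GPU nodes
  sbatch -p mem job.sh    # submit to the shared memory (Canigó) nodes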
38. Summary
• Who are we?
• High performance computing at CSUC
• Hardware facilities
• Working environment
• Development environment
39. Development tools @ CSUC HPC
• Compilers available for the users:
– Intel compilers
– PGI compilers
– GNU compilers
• MPI libraries:
– Open MPI
– Intel MPI
– MPICH
– MVAPICH
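As a sketch (source and binary names are placeholders), an MPI program is typically built with the compiler wrapper provided by the chosen MPI library:
  mpicc -O2 my_code.c -o my_code     # Open MPI / MPICH / MVAPICH wrapper for the C compiler
  mpiicc -O2 my_code.c -o my_code    # Intel MPI wrapper for the Intel C compiler
The resulting binary is then launched through Slurm with srun, as in the job script sketch shown earlier.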
40. Development tools @ CSUC HPC
• Intel Advisor, VTune, ITAC, Inspector
• Scalasca
• Mathematical libraries:
– Intel MKL
– Lapack
– Scalapack
– FFTW
• If you need anything that is not installed, let us know
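As a final illustrative sketch (exact flags and library paths depend on the compiler and modules loaded, so treat these as assumptions), linking against the math libraries usually amounts to adding the corresponding flags at build time:
  gcc -O2 solver.c -o solver -llapack -lblas    # LAPACK/BLAS with the GNU compiler
  gcc -O2 fft_demo.c -o fft_demo -lfftw3 -lm    # FFTW, double precision interface
  icc -O2 solver.c -o solver -mkl               # Intel MKL (newer Intel compilers use -qmkl)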