This document provides an overview of the CERN School of Computing (CSC) held in August 2010. It discusses the organization of the CSC, which covers introductory physics computing, common tools and techniques such as ROOT, and data analysis. It also covers base computing technologies: computer architecture, creating secure software, virtualization, and networking quality of service. Finally, it discusses data technologies and storage systems. The overall goal of the CSC is to give postgraduate students and researchers an overview of computing technologies relevant to particle physics experiments, together with some concepts of particle physics.
2. What is CSC?
CERN School of Computing
For postgraduate students and research workers
To give an overview of some computing technologies involved in particle physics and some concepts of this kind of physics
3. Organization
Physics Base Data
Computing Technologies Technologies
- Computer Architecture
- Intro to physics
and Performance Tuning
computing
- Creating Secure
- Tools and techniques
ROOT
Software - Data Technologies
- Tools and techniques
ROOT
- Virtualization
- Data Analysis
- Networking QoS and
Performance
3
4. Organization: Physics Computing
- Intro to physics computing
- Tools and techniques
- ROOT
- Data Analysis
5. General introduction to Physics Computing
Software and hardware components required for the processing of the experimental data, from the source to the physics analysis.
The main goal is data reduction:
- Very high event rate (40 MHz)
- Event size (>10 MB)
- Large background
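The scale of the data-reduction problem can be checked with quick arithmetic. This is a rough sketch using the slide's numbers only; actual rates and event sizes vary by experiment and year.

```python
# Back-of-envelope estimate with the slide's figures (illustrative only).
event_rate_hz = 40e6         # bunch-crossing rate: 40 MHz
event_size_bytes = 10e6      # >10 MB per event, per the slide

raw_rate = event_rate_hz * event_size_bytes      # bytes/s with no selection
print(f"Without a trigger: {raw_rate / 1e12:.0f} TB/s")

# After the trigger chain cuts the rate down to roughly 150 Hz:
stored_rate = 150 * event_size_bytes
print(f"After triggering:  {stored_rate / 1e9:.1f} GB/s")
```

Hundreds of terabytes per second are impossible to store, which is why almost all of the processing chain exists to throw events away.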
6. General introduction to Physics Computing
Online processing:
- Trigger: event selection
- Data acquisition: interface to detector HW
- Monitoring
- Control
7. General introduction to Physics Computing
Subdetectors at CMS
https://cms.web.cern.ch/cms/Resources/Website/Media/Videos/Animations/files/CMS_Slice.gif
8. General introduction to Physics Computing
CMS L1 trigger example (figure: back-to-back opposite-sign isolated muons; Rudi Frühwirth, HEPHY Vienna, CSC 2010)
- CMS Level 1 trigger: input rate 40 MHz, output rate 30-100 kHz; 2 detector systems: muons/calorimeters
- High level filter: input rate 30-100 kHz, output rate 100-150 Hz
9. General introduction to Physics Computing
CMS L1 trigger example (continued)
- CMS Level 1 trigger: input rate 40 MHz, output rate 30-100 kHz; 2 detector systems: muons/calorimeters
- High level filter: input rate 30-100 kHz, output rate 100-150 Hz
- Raw data sent to the Tier-0 farm at 325 MB/s
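The two-stage chain above amounts to large rejection factors at each step. A small sketch, using the upper bounds of the ranges quoted on the slide for concreteness:

```python
# Rejection factors implied by the trigger chain on the slide.
l1_input_hz = 40e6       # Level 1 input: 40 MHz
l1_output_hz = 100e3     # Level 1 output: 30-100 kHz (upper bound used)
hlt_output_hz = 150.0    # High level filter output: 100-150 Hz

l1_rejection = l1_input_hz / l1_output_hz       # fraction dropped at L1
hlt_rejection = l1_output_hz / hlt_output_hz    # fraction dropped in the HLT
overall = l1_input_hz / hlt_output_hz

print(f"L1 keeps 1 in {l1_rejection:.0f} events")
print(f"HLT keeps 1 in {hlt_rejection:.0f} events")
print(f"Overall: 1 in {overall:.0f}")
```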
10. General introduction to Physics Computing
Offline processing:
- Calibration
- Alignment
- Event Reconstruction
- Simulation
- Physics analysis
11. General introduction to Physics Computing
Offline processing: Calibration
(figure: Silicon Tracker calibration; an incoming particle creates electric charge in strips or pixels; Rudi Frühwirth, HEPHY Vienna)
13. General introduction to Physics Computing
Offline processing: Event Reconstruction
(figure: Neutral particles (ctd); Rudi Frühwirth, HEPHY Vienna)
15. ROOT
An object-oriented program and library developed by CERN for particle physics analysis. Development started in 1995; since 2003 it has been written in C++.
What it provides:
- Data storage, access and query system
- Statistical analysis algorithms
- Scientific visualization: 2D, 3D, PDF, LaTeX
- Geometrical modeller
- PROOF parallel query engine
16. ROOT
Same list as above, adding:
- PoD (PROOF on Demand)
21. Tools and Techniques
Software design and modern tools for Physics Computing.
As an individual:
- Testing: JUnit, CppUnit
- Memory-related problems (allocation, memory leaks): malloc, MALLOC_CHECK, memprof, ccmalloc, etc.
- Performance tools: perfAnal
As part of large code projects:
- Version control: CVS, SVN
- Releases and configuration management of systems: CMS
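As a toy illustration of the unit-testing tools named above (JUnit, CppUnit), here is the same idea in Python's built-in unittest, applied to a hypothetical invariant-mass helper that is not from the lectures:

```python
import math
import unittest

def invariant_mass(e1, p1, e2, p2):
    """Invariant mass of two particles from energies and (collinear) momenta.

    Hypothetical helper for illustration, using m^2 = E^2 - p^2 (c = 1).
    """
    e, p = e1 + e2, p1 + p2
    return math.sqrt(max(e * e - p * p, 0.0))

class TestInvariantMass(unittest.TestCase):
    def test_massless_back_to_back(self):
        # two massless particles with opposite momenta: m = 2E
        self.assertAlmostEqual(invariant_mass(5.0, 5.0, 5.0, -5.0), 10.0)

    def test_non_negative(self):
        self.assertGreaterEqual(invariant_mass(1.0, 1.0, 1.0, 1.0), 0.0)

# run the suite programmatically instead of exiting the interpreter
program = unittest.main(argv=["tests"], exit=False, verbosity=0)
```

The same pattern (one test class per unit, small focused assertions) carries over directly to CppUnit in C++.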
22. Organization
Physics Computing:
- Intro to physics computing
- Tools and techniques
- ROOT
- Data Analysis
Base Technologies:
- Computer Architecture and Performance Tuning
- Creating Secure Software
- Virtualization
- Networking QoS and Performance
Data Technologies:
- Data Technologies
23. Organization: Base Technologies
- Computer Architecture and Performance Tuning
- Creating Secure Software
- Virtualization
- Networking QoS and Performance
24. Computer Architecture and Performance Tuning
Seven dimensions of performance:
- First three dimensions: Superscalar, Pipelining, Computational width/SIMD (SIMD = Single Instruction Multiple Data; a "pseudo" dimension)
- Next dimension: Hardware multithreading
- Last three dimensions: Multiple cores, Multiple sockets, Multiple compute nodes
Introduction to processor layout.
Overall impact of programming styles and compilers.
Metrics to define application performance: CPI, #branch instructions, mispredicted branches, #SSE instructions, cache misses.
Performance monitoring with pfmon and Perfmon2.
(Sverre Jarp, CERN)
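The dimensions above multiply. A rough, illustrative estimate of peak hardware parallelism; all widths below are invented example values, not figures from the lecture:

```python
# Toy estimate: theoretical peak concurrency is roughly the product of the
# widths of the independent parallelism dimensions (illustrative numbers).
dims = {
    "superscalar issue width": 4,
    "SIMD width (floats per 128-bit SSE register)": 4,
    "hardware threads per core": 2,
    "cores per socket": 4,
    "sockets per node": 2,
    "compute nodes": 100,
}

peak = 1
for name, width in dims.items():
    peak *= width

print(f"Theoretical peak: {peak} operations in flight")
```

Real sustained performance falls far short of this product, which is why the lecture's metrics (CPI, branch mispredictions, cache misses) matter: they measure how much of each dimension the code actually exploits.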
26. Network QoS and Performance
QoS options: NSIS/RSVP, Diffserv, MPLS.
- NSIS/RSVP: a RESERVE control message is sent periodically by the sender along the flow; the receiver replies with a RESPONSE control message; the RESPONSE reserves resources on the route back; if the RESERVE is not repeated after a time-out, the resources are released.
- Diffserv principle: a priority mark is inserted before packets enter the "QoS core"; simple examination of the mark provides the same QoS to all "marked" packets with the same requirement (priority traffic P1/P2 vs. regular traffic).
- MPLS: create a "circuit" (called an MPLS path); force all traffic with the same destination to follow the same MPLS path.
TCP, UDP and RTP protocols in real-time streaming traffic over the Internet.
(François Fluckiger, CERN)
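The RSVP refresh/time-out behaviour above is "soft state": a reservation survives only as long as RESERVE messages keep arriving. A minimal toy model; real RSVP reserves per-flow resources hop by hop, which this sketch ignores:

```python
# Toy soft-state reservation table, RSVP style (much simplified).
class SoftStateRouter:
    TIMEOUT = 30.0  # seconds without a refresh before release

    def __init__(self):
        self.reservations = {}  # flow_id -> time of last RESERVE

    def reserve(self, flow_id, now):
        """Called for every periodic RESERVE message from the sender."""
        self.reservations[flow_id] = now

    def expire(self, now):
        """Drop reservations whose RESERVE was not repeated in time."""
        self.reservations = {f: t for f, t in self.reservations.items()
                             if now - t < self.TIMEOUT}

router = SoftStateRouter()
router.reserve("flow-1", now=0.0)
router.reserve("flow-2", now=0.0)
router.reserve("flow-1", now=25.0)   # flow-1 refreshed, flow-2 not
router.expire(now=40.0)
print(sorted(router.reservations))   # only flow-1 survives
```

The design choice is robustness: no explicit teardown message is required, since abandoned flows simply age out.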
27. Virtualization
Virtualization refers to technologies designed to provide a layer of abstraction between computer hardware systems and the software running on them.
28. Virtualization
- Resource virtualization: Memory, Virtual memory, Network, Storage
- Platform virtualization: OS level, Partial, Full virtualization, Paravirtualization, HW assisted
- Application virtualization
29. Virtualization: Introduction to virtualization technology
Hypervisor architecture:
- Platform virtualization hides the physical characteristics of a computing platform from the users. Host software (the hypervisor, or VMM) creates a simulated computer environment, a virtual machine, for its guest OS.
- A technique that all software-based virtualization solutions use is ring deprivileging: the operating system that originally runs on ring 0 is moved to another, less privileged ring, such as ring 1. This allows the VMM to control the guest OS access to resources. It avoids one guest OS kicking another out of memory, or a guest OS controlling the hardware directly.
33. Virtualization
Why?
- Server consolidation
- Isolated sandboxes per user: running untrusted applications will not risk the entire box
- Provisioning with no need of up-front purchase
34. Virtualization
...Why?
- Disaster recovery: restarting and relocating a VM is faster
- Development: being able to run on different platforms
- Easier management: it is easier to automate, and to scale the number of VMs up and down
36. Virtualization
Use Case: Cloud Computing
Get services on demand over the network.
Service: Software, Platform or Infrastructure.
37. Virtualization: Application of the virtualization technology
Use Case: CernVM Virtual Machine
Rethinking application deployment, with the emphasis on the application: the application dictates the platform, and not the contrary.
The application (e.g. experiment simulation SW) is bundled with its libraries, tools, databases, services and bits of OS into a virtual appliance: self-contained, self-describing, deployment ready. It runs on any virtualization platform and provides consistent and effortless installation.
What makes the application ready to run in any target execution environment, e.g. traditional, Grid, Cloud?
Configuration of a CernVM image for a specific experiment such as ALICE or LHCb, then running some experiment-specific application.
38. Virtualization
(screenshot of the CernVM demo: "…and group to 'alice' (we will need this for the next p…")
39. Organization
Physics Computing:
- Intro to physics computing
- Tools and techniques
- ROOT
- Data Analysis
Base Technologies:
- Computer Architecture and Performance Tuning
- Creating Secure Software
- Virtualization
- Networking QoS and Performance
Data Technologies:
- Data Technologies
40. Organization: Data Technologies
- Data Technologies
41. Data technologies
Storage Technologies: physical and logical connectivity.
Complexity grows from hardware components (CPU, disk, memory, motherboard) to the PC or disk server, to the cluster and local fabric, to the wide area network and worldwide Grid.
Topics: storage devices, interconnects, RAID, file systems (local, network and cluster), and many other concepts.
(Bernd Panzer-Steindel, CERN)
43. Data technologies
Storage Technologies: cluster file systems
- Aggregation of local file systems and server nodes
- The meta-data server is the new important component: mapping of files to locations; database implementation (Oracle, MySQL, …)
- Control data flows between the clients and the meta-data server; data flows directly between the clients and the disk servers
Two types of implementations:
1. Device driver implementation via the virtual file system: the application accesses the data via a file system syntax; the mount point looks like a local file system, with the same commands (ls, rm, mkdir, etc.)
2. Translation of application IO commands (open, read, write, seek, close) via a special IO library linked into the executable; special commands for ls/rm/mkdir …
(Bernd Panzer-Steindel, CERN)
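The meta-data-server split on this slide can be sketched as follows. This is a hypothetical layout (path, server names and block placement are invented); real cluster file systems add replication, locking and caching:

```python
# Sketch of a cluster-file-system meta-data server: it maps file names to
# (disk server, block) locations; clients then read blocks directly from
# the disk servers, so bulk data never passes through the meta-data server.
class MetaDataServer:
    def __init__(self):
        self.table = {}  # path -> list of (disk_server, block_id)

    def create(self, path, servers, n_blocks):
        # round-robin the blocks over the available disk servers
        self.table[path] = [(servers[i % len(servers)], i)
                            for i in range(n_blocks)]

    def locate(self, path):
        """Client asks only for locations, then contacts disk servers itself."""
        return self.table[path]

mds = MetaDataServer()
mds.create("/alice/run123/hits.root", ["ds01", "ds02", "ds03"], n_blocks=4)
print(mds.locate("/alice/run123/hits.root"))
```

This separation of control flow (client to meta-data server) from data flow (client to disk server) is exactly why the meta-data server becomes the critical component the slide highlights.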
44. If you are interested:
http://www-linux.gsi.de/~amontiel/CSC2010.pdf.gz
Speaker notes:
The school has been run for more than 30 years; I attended the 33rd edition. Organized since 1970. Director: François Fluckiger. We were about 50 students. The teachers are experts in the field, normally from CERN collaboration institutions, and also past students. 2 weeks long, from 8.30 to 19.
The lectures were divided into theoretical and practical sessions. It would be impossible to mention everything in this presentation, so I will just mention some keywords and the main concepts learnt for each module.
Decisions are quick and crucial; discarded data are lost forever.
Trigger. DAQ. Monitoring: the detector status, the DAQ performance, the trigger performance, data quality checks. Control: configure the system, start/stop data taking, initiate special runs, upload trigger tables.
Inner Tracker (pixels + strips): momentum and position of charged tracks. Electromagnetic calorimeter: energy of photons, electrons and positrons. Hadron calorimeter: energy of charged and neutral hadrons. Muon system: momentum and position of muons.
The trigger conditions are defined by the physics of the experiment.
Calorimeter trigger (two types of calorimeters: hadronic, electromagnetic):
- Local: computes energy deposits
- Regional: finds candidates for electrons, photons, jets, isolated hadrons; computes transverse energy sums
- Global: sorts candidates in all categories, does total and missing transverse energy sums, computes jet multiplicities for different thresholds
Muon trigger (three types of muon detectors):
- Local: finds track segments
- Regional: finds tracks
- Global: combines information from all regional triggers, selects the best four muons, provides energy and direction
Global trigger:
- Final decision logic
- 28 input channels (muons, jets, electrons, photons, total/missing ET)
- 128 trigger algorithms running in parallel, giving 128 decision bits; apply conditions (thresholds, windows, deltas); check isolation bits; apply topology criteria (close/opposite)
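The global-trigger idea in these notes, many algorithms running in parallel with each producing one decision bit, can be sketched as follows. Algorithm names and thresholds here are invented for illustration, not the real CMS trigger menu:

```python
# Toy global-trigger decision logic: each algorithm yields one decision bit;
# the event is accepted if any bit is set (thresholds are made up).
event = {"muon_pt": 12.0, "missing_et": 35.0, "n_jets": 1}

algorithms = {
    "single_muon": lambda e: e["muon_pt"] > 10.0,    # pT threshold
    "missing_et":  lambda e: e["missing_et"] > 50.0, # missing-ET threshold
    "multi_jet":   lambda e: e["n_jets"] >= 4,       # jet multiplicity
}

decision_bits = {name: alg(event) for name, alg in algorithms.items()}
accept = any(decision_bits.values())
print(decision_bits, "->", "ACCEPT" if accept else "REJECT")
```

In hardware the 128 algorithms run simultaneously every bunch crossing; the OR over the decision bits is what makes one fired algorithm enough to keep the event.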
Until here everything was online, about data taken in real time. Now offline. Compressing.
Calibration: convert the raw data, analogue or digital, to physical quantities such as energy or position. Silicon Tracker calibration: solve the inverse problem of reconstructing the crossing point from the charge distribution and crossing angle. Very detector dependent. Particle track data is used to check that all the current detector settings and positions are correct.
Alignment: find out the precise detector positions.
Event reconstruction: reconstruct particle tracks and vertices of all particle trajectories participating in one event; find out which particles have been created where and with which momentum. Many can be observed directly; some are short-lived and have to be reconstructed from their decay products. The difficulties: background from low-momentum particles, additional background from other interactions, energy loss.
Simulation: simulate trajectories of particles, taking into account interactions that you cannot calculate. Needed for optimization of the detector in the design phase; testing, validation and optimization of trigger and reconstruction algorithms; and computing acceptance corrections. Generate artificial events resembling real data as closely as possible.
Physics analysis: ROOT.
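The calibration step described above, converting raw readings to physical quantities, can be illustrated with a minimal sketch. The per-channel pedestal and gain constants below are invented for illustration:

```python
# Minimal calibration sketch: raw ADC counts -> energy in GeV, using
# invented per-channel constants (pedestal = reading at zero signal,
# gain = GeV per ADC count).
pedestals = {0: 100.0, 1: 98.5}
gains = {0: 0.05, 1: 0.052}

def calibrate(channel, adc_counts):
    """Convert a raw ADC reading on one channel to an energy in GeV."""
    return max(adc_counts - pedestals[channel], 0.0) * gains[channel]

print(calibrate(0, 300))  # (300 - 100) * 0.05 = 10.0 GeV
```

Real calibrations are far more involved (non-linearities, time dependence, the inverse problem mentioned in the notes), but the pattern of per-channel constants applied to raw data is the core idea.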
Data storage: sophisticated data structures optimized for write once, read many (WORM): TTrees.
Mathematical library with advanced statistics algorithms, useful for simulation.
Rendering: OpenGL.
Geometrical modeller: allows visualization of detector geometries.
PROOF parallel query engine: start the analysis locally ("client"); PROOF distributes data and code, lets CPUs ("workers") run the analysis, collects and combines (merges) the data, and shows the analysis results locally.
PoD allows starting a PROOF cluster at user request on any resource management system; it doesn't require administrator privileges.
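The PROOF pattern in these notes, distribute the data, run the same analysis on each chunk, merge the partial results, can be sketched with a process pool standing in for the workers. This is a rough analogy only; real PROOF ships code and data across a cluster:

```python
# Map/merge sketch of the PROOF workflow: per-worker analysis followed by
# a merge of partial results (a process pool stands in for PROOF workers).
from multiprocessing import Pool
from collections import Counter

def analyze(chunk):
    """Per-worker analysis: histogram values into integer bins."""
    return Counter(int(x) for x in chunk)

def merge(partials):
    """Combine the partial histograms from all workers."""
    total = Counter()
    for p in partials:
        total += p
    return total

if __name__ == "__main__":
    data = [0.1, 0.9, 1.5, 1.7, 2.2, 2.9]
    chunks = [data[:3], data[3:]]          # split the dataset across workers
    with Pool(2) as pool:
        partials = pool.map(analyze, chunks)
    print(merge(partials))                 # merged histogram
```

The key property, shared with PROOF, is that the merge step only sees small summary objects (histograms), never the raw events.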
ROOT's features:
- histogramming and graphing to visualize and analyze distributions and functions
- curve fitting (regression analysis) and minimization of functionals
- statistics tools used for data analysis
- matrix algebra
- four-vector computations, as used in high energy physics
- standard mathematical functions
- multivariate data analysis, e.g. using neural networks
- access to distributed data (in the context of the Grid)
- distributed computing, to parallelize data analyses
- persistence and serialization of objects, which can cope with changes in class definitions of persistent data
- access to databases
- 3D visualizations (geometry)
- creating files in various graphics formats, like PostScript, JPEG, SVG
- interfacing Python and Ruby code in both directions
- interfacing Monte Carlo event generators
Also, ROOT as a big software project inside a collaboration needed to follow good practices to be able to develop in such an environment.
Superscalar: executes more than one instruction during a clock cycle by simultaneously dispatching multiple instructions to redundant functional units on the processor; the CPU checks for dependencies between instructions.
Pipelining: instructions are divided into stages, and several instructions can be in flight at a time.
SIMD: using vectors.
HW multithreading.
Streaming SIMD Extensions: packed vectors, registers with 128 bits.
Impact of programming styles: take into account the memory hierarchy and cache misses; use vectorization. GNU compiler and Intel compiler: automatic autovectorization and autoparallelization.
Threat modeling: what threats will the system face? What could go wrong? How could the system be attacked, and by whom?
Risk assessment: how much to worry about them? Calculate or estimate the potential loss and its likelihood.
Security is a process, not a product. It should be present at every stage of SW development.
The human factor.
Quality of Service: to provide different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow.
RSVP (Resource Reservation Protocol) is the mechanism defined by Integrated Services for reserving resources in the network. It is called a signaling protocol because its aim is to signal to the network that a given flow is going to require certain guarantees for latencies and loss ratio, provided the flow respects a certain bit rate.
NSIS: Next Steps in Signaling.
Diffserv (Differentiated Services) is another technique, aiming at overcoming the problem of heavy classification, that is, the process by which routers learn which service class a packet belongs to. The idea is to "mark" the packets with an indication of their priority, to avoid having routers examine multiple fields. This mark is called a Diffserv Code Point (DSCP) and maps to a differentiated treatment to be applied to the packet.
MPLS (Multi-Protocol Label Switching): a label in the header of the packet. The networking nodes are neither pure IP routers nor pure switches, but hybrid objects that try to combine the good points of both systems for fast forwarding decisions. MPLS routers are provided with forwarding tables that map the incoming label to an outgoing link.
TCP (Transmission Control Protocol), UDP (User Datagram Protocol). Most real-time audio or video applications use RTP (Real-Time Transport Protocol): time-stamps, packet loss detection.
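The Diffserv idea above, classify by a single mark rather than by many header fields, can be sketched as follows. The queue names are illustrative; the code points used (46 for expedited, 26 for assured) are treated here as assumptions of the sketch:

```python
# Toy Diffserv classifier: the router examines only the DSCP mark and maps
# it to a service queue (queue names and mark table are illustrative).
QUEUES = {46: "expedited", 26: "assured", 0: "best-effort"}

def classify(packet):
    """Map a packet to its queue by examining only the DSCP mark."""
    return QUEUES.get(packet.get("dscp", 0), "best-effort")

packets = [
    {"dscp": 46, "src": "voip"},   # marked high priority at the edge
    {"dscp": 0,  "src": "web"},
    {"src": "ftp"},                # unmarked: treated as best effort
]
print([classify(p) for p in packets])
```

The point of the design is that the expensive multi-field classification happens once, at the network edge; core routers only do this cheap table lookup.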
Memory virtualization: aggregating RAM resources from networked systems into a single memory pool.
Virtual memory: giving an application program the impression that it has contiguous working memory, isolating it from the underlying physical memory implementation.
Network virtualization: creation of a virtualized network addressing space within or across network subnets. External network virtualization: one or more local networks are combined or subdivided into virtual networks, with the goal of improving the efficiency of a large corporate network or data center. Internal network virtualization: a single system is configured with containers, such as the Xen domain, combined with hypervisor control programs or pseudo-interfaces such as the VNIC, to create a "network in a box".
Storage virtualization: the process of completely abstracting logical storage from physical storage.
Cluster: high availability. Grid: throughput.
Application virtualization: encapsulate the application from the OS so that it can be executed everywhere: Wine, Java VM.
VMM: virtual machine monitor.
Platform virtualization approaches:
Operating system-level virtualization: the kernel of an operating system allows for multiple isolated user-space instances, instead of just one.
Partial virtualization: provides only a partial simulation of the underlying hardware, so that entire operating systems cannot run in the virtual machine, but many applications can.
Full virtualization: emulates enough hardware to allow an unmodified "guest" OS to run; the challenge is emulating privileged operations. QEMU, Parallels Desktop for Mac, VirtualBox, Virtual Iron, Oracle VM.
Hardware-assisted virtualization: the VMM can efficiently virtualize the entire x86 instruction set by handling the sensitive instructions using a classic trap-and-emulate model in hardware, as opposed to software. Linux KVM.
Paravirtualization: the virtual machine does not necessarily simulate hardware, but instead (or in addition) offers a special API that can only be used by modifying the "guest" OS. Such a system call to the hypervisor is called a "hypercall".
A list of reasons that may overlap with each other.
Server consolidation is an approach to the efficient usage of computer server resources, in order to reduce the total number of servers or server locations that an organization requires.
SW testing: virtual machines can cut time and money out of the software development and testing process. A set of virtual machines running a variety of platforms is attached to an execution engine where build and test jobs are executed on behalf of the submitting users.
SW development: software at the LHC means millions of lines of code; different packaging and software distribution models; complicated software installation/update/configuration procedures; a long and slow validation and certification process; it is very difficult to roll out a major OS upgrade (SLC4 -> SLC5); additional constraints are imposed by the grid middleware development, effectively locking onto one Linux flavour. The whole process is focused on middleware and not on applications.
DONATION
SaaS: the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure; the applications are accessible from various client devices through a thin-client interface such as a web browser.
PaaS: to deploy onto the cloud infrastructure consumer-created or acquired applications built using programming languages and tools supported by the provider.
IaaS: to provision processing, storage, networks, and other fundamental computing resources on which the consumer can deploy and run arbitrary software, including operating systems and applications.
A Virtual Software Appliance is a lightweight virtual machine image that combines:
- A minimal operating environment
- Specialized application functionality
- Easy end-user configuration
These appliances are designed to run under one or more of the various virtualization technologies, such as VMware, Xen, Parallels, Microsoft Virtual PC, QEMU, User-mode Linux, coLinux.
Virtual Software Appliances also aim to eliminate issues related to deployment in a traditional server environment:
- Simplified configuration procedure
- Reduced maintenance effort
The lectures were divided into theoretical and practical sessions.
It would be impossible to mention everything in this presentation, so I will just mention some keywords and the main concepts learnt in each module.
Storage Technologies
File systems make the storage devices available to user applications.
Physical: mapping of disk blocks to files.
Logical: hierarchical arrangement of directories; stores the actual file data and the structural file-system metadata.
RAID (redundant array of independent, or inexpensive, disks): a technology that provides increased storage reliability through redundancy, combining multiple relatively low-cost, less reliable disk drives into a logical unit in which all drives in the array are interdependent.
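The redundancy idea behind parity-based RAID levels (e.g. RAID-5) can be shown with a short sketch: XOR the data blocks to form a parity block, and rebuild any single lost block from the survivors plus parity. This is a simplified illustration at the byte level; real RAID works on fixed-size blocks and rotates parity across the drives.

```python
# Minimal sketch of RAID-5-style parity: XOR across data blocks.
# Illustrative only; real implementations stripe data and rotate parity.

def parity(blocks):
    """XOR all blocks together to produce the parity block."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def recover(surviving_blocks, parity_block):
    """Rebuild a single lost data block from the survivors plus parity."""
    return parity(list(surviving_blocks) + [parity_block])

data = [b"AAAA", b"BBBB", b"CCCC"]      # three "drives" worth of data
p = parity(data)                        # stored on a fourth drive
rebuilt = recover([data[0], data[2]], p)  # drive holding b"BBBB" failed
assert rebuilt == data[1]
```

Because XOR is its own inverse, recovery is just the parity computation over whatever blocks survive, which is why a RAID-5 array tolerates exactly one drive failure.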
Motivation: hands-on exercises in the shape of contests.
Examination -> CSC diploma.
Students gave their own presentations.
Excursion to Oxford and Bletchley Park; dinner at a college.
Dinner on a cruiser on the Thames.
Sports activities every day.