5. What is cluster computing?
Cluster computing is the technique of linking two or more computers into a network (usually through a local area network) in order to take advantage of the parallel processing power of those computers.
6. The clustering model can provide both high availability (HA) and high performance (HP), as well as manageability, scalability, and affordability.
Clusters are typically homogeneous and tightly coupled, and their nodes trust each other.
As the number of hardware components rises, so does the probability of failure, which increases the chance of a fault occurring during long-running applications.
7. Improve the operating speed of processors and other components.
Connect multiple processors together and coordinate their computational efforts.
Allow the sharing of a computational task among multiple processors.
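The last point, sharing one computational task among multiple processors, can be sketched with Python's standard multiprocessing pool. The work function and input range here are invented for illustration, not taken from the slides.

```python
# Minimal sketch: split one task across several worker processes.
from multiprocessing import Pool

def square(n):
    # Each worker process computes its share of the overall task.
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:           # four cooperating workers
        results = pool.map(square, range(8))  # work divided among them
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

On a real cluster the workers would be separate machines coordinated over the network (e.g. via MPI), but the divide-and-collect pattern is the same.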
8. A user submits a job to the head node. The job identifies the application to run on the cluster.
The job scheduler on the head node assigns each task defined by the job to a node and then starts each application instance on its assigned node.
Results from each of the application instances are returned to the client via files or databases.
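The scheduling step above can be sketched as a simple round-robin assignment of tasks to nodes. The task and node names, and round-robin as the policy, are assumptions for illustration; real schedulers weigh load, memory, and placement constraints.

```python
# Minimal sketch of a head-node scheduler: map each task in a job
# to a compute node, cycling through the available nodes.
from itertools import cycle

def schedule(tasks, nodes):
    """Assign every task to a node in round-robin order."""
    assignment = {}
    node_cycle = cycle(nodes)
    for task in tasks:
        assignment[task] = next(node_cycle)
    return assignment

placement = schedule(["task1", "task2", "task3"], ["nodeA", "nodeB"])
print(placement)  # {'task1': 'nodeA', 'task2': 'nodeB', 'task3': 'nodeA'}
```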
10. An early cluster computing system was built under contract by IBM in the 1950s, based on the MIT Whirlwind computer architecture.
During the 1980s, increased interest in the potential of cluster computing was marked by important experiments in research and industry.
12. These clusters are designed to maintain redundant nodes that can act as backup systems in the event of a failure.
The minimum number of nodes in an HA cluster is two - one active and one redundant - though most HA clusters use considerably more nodes.
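The two-node active/redundant pattern above can be sketched with a heartbeat check: if the active node stops reporting in, the redundant node takes over. The node names, timeout value, and heartbeat mechanism are assumptions for illustration.

```python
# Minimal sketch of active/standby failover driven by heartbeats.
import time

class Node:
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()

    def alive(self, timeout=5.0):
        # A node is considered alive if it sent a heartbeat recently.
        return time.monotonic() - self.last_heartbeat < timeout

def active_node(primary, standby, timeout=5.0):
    """Return the node that should currently serve requests."""
    return primary if primary.alive(timeout) else standby

primary, standby = Node("active"), Node("redundant")
print(active_node(primary, standby).name)  # active
primary.last_heartbeat -= 10               # simulate missed heartbeats
print(active_node(primary, standby).name)  # redundant
```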
14. Load-balancing clusters are extremely useful for those working with limited IT budgets.
Load-balancing clusters operate by routing all work through one or more load-balancing front-end nodes.
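The front-end routing just described can be sketched as a least-connections balancer: every request passes through the front end, which forwards it to the back-end node with the fewest outstanding requests. The back-end names and the least-connections policy are assumptions for illustration.

```python
# Minimal sketch of a load-balancing front-end node.
class FrontEnd:
    def __init__(self, backends):
        # Outstanding request count per back-end node.
        self.load = {name: 0 for name in backends}

    def route(self):
        """Send the next request to the least-loaded back end."""
        target = min(self.load, key=self.load.get)
        self.load[target] += 1
        return target

fe = FrontEnd(["web1", "web2"])
print([fe.route() for _ in range(4)])  # ['web1', 'web2', 'web1', 'web2']
```

With equal request costs this degenerates to round-robin; it differs once back ends finish work at different rates and their counts are decremented on completion.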
17. July 1999
1000 nodes
Used for genetic algorithm research by John Koza, Stanford University
www.genetic-programming.com/
20. There are three primary categories of applications that use parallel clusters:
1. Compute-intensive applications.
2. Data- or I/O-intensive applications.
3. Transaction-intensive applications.
22. Software: it is difficult to develop software for distributed systems.
Network: saturation and transmission overheads.
Security: easy access also applies to secret data.
23. Solve the parallel processing paradox.
Cluster-based supercomputers can be seen everywhere!
New trends in hardware and software technologies are likely to make clusters more promising and fill the SSI (Single System Image) gap.
24. The Grid is a large system of computing resources that performs tasks and provides users with a single point of access to these distributed resources, commonly through a World Wide Web interface.
Major Grid projects include NASA's Information Power Grid and two NSF Grid projects (NCSA Alliance's Virtual