In this presentation from the Dell booth at SC13, Joseph Antony from NCI describes how they are using HPC Virtualization to meet user needs.
Watch the video presentation: http://insidehpc.com/2013/12/05/panel-discussion-thought-hpc-virtualization-never-going-happen/
3. NCI – an overview
Mission:
• to foster ambitious and aspirational research objectives and to enable their realisation, in the Australian context, through world-class, high-end computing services
NCI is:
• being driven by research objectives
• a comprehensive, vertically-integrated research service
• providing national access on priority and merit, and
• being built on, and sustained by, a collaboration of national organisations and research-intensive universities
[Slide diagram: a service stack connecting Research Outcomes and Research Objectives through Communities and Institutions; Access and Services; Expertise, Support and Development; Digital Laboratories; Data Centric Services; and Compute (HPC/Cloud) and Data Infrastructure]
4. Climate Science has a solution
• Integrated, intimately connected, robust, accessible
[Slide diagram: Raijin, NCI cloud, NCI + CoE technical]
5. Impact!
ACCESS
• A collaborative tool
• Under svn
• Co-support
• CoE/BoM/CSIRO PhDs
• Shared research(ers)
CMIP-5
• A collaborative data set
• Co-supported
• Shared analyses
• CoE/BoM/CSIRO PhDs
• Shared research(ers)
[Slide diagram: Raijin, NCI cloud, NCI + CoE technical]
6. In case you’re wondering where we’re located …
• In the nation’s capital, at its national university …
7. HPC Virtualization?
• HPC procurements typically involve major CAPEX spend on Big Iron for attacking grand-challenge problems
• Typically, most large HPC centers have:
– Capability machines: land in the TOP500 around #10 to #20; these have special-purpose architectures and accelerators
– Work-horse machines: usually x86 + IB (InfiniBand)
13. Is there life beyond a batch-oriented system?
• HPC centers will be forced to evolve beyond batch-oriented systems due to an immovable iceberg – ‘Big Data’
• NCI moved to virtualization in 1999 to handle non-traditional workloads with complex data lifecycles:
– Satellite Image Processing
– CMIP5 Climate Data Processing
– Genomics Assembly Pipelines
– N-to-N Cancer Genomics Comparisons
– Interactive Volume Rendering
– Trawling YouTube and Analyzing Birthday Videos
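The N-to-N genomics comparison workload above is a good example of why fixed batch allocations struggle: the number of tasks grows quadratically with the number of samples. The slides show no code, so this is only an illustrative sketch of how such a task list could be enumerated for an elastic worker pool (the function and sample names are hypothetical):

```python
from itertools import combinations

def pairwise_tasks(sample_ids):
    """Enumerate every unordered pair of samples for an N-to-N
    comparison. The task count is N*(N-1)/2, so doubling the cohort
    roughly quadruples the work -- awkward for a fixed batch queue,
    natural for an elastically provisioned virtual cluster."""
    return [(a, b) for a, b in combinations(sample_ids, 2)]

# 100 samples already yield 4,950 independent comparison tasks.
tasks = pairwise_tasks([f"sample{i}" for i in range(100)])
print(len(tasks))
```

Each pair is independent, which is what makes the workload a candidate for cloud-style scheduling rather than a single tightly coupled batch job.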
15. Engaging with Priority Research: Environment
Goal:
• To provide a single high-performance computing environment for environment research
Partners:
• CSIRO, GA, Bureau, Research Community
• Lockheed-Martin, GA, NCI, VPAC
Requirements:
• Provide a national processing environment for key satellite data (e.g. SEADAS)
• Provide a collaborative environment for tools that produce reference Digital Elevation Maps
• Provide a data environment for a fast, easy national-nested grid
16. Engaging with Priority Research: Environment
Data Intensive Activity
• Data-processing-intensive pipelines (SEADAS) over large volumes of raw imagery
Key initial datasets
• LANDSAT archive
• MODIS
• DEMs (9s, 3s, 1s)
• LIDAR
• Derivative products
Data-intensive query and analysis environment
• e.g. Hadoop over nested grids
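“Hadoop over nested grids” implies partitioning national-scale imagery into fixed grid cells so that each map task owns one cell. The slides don’t specify NCI’s actual tiling scheme, so the tile size and key format below are assumptions purely for illustration:

```python
import math

def covering_tiles(min_lon, min_lat, max_lon, max_lat, tile_deg=1.0):
    """Return the (col, row) keys of fixed tiles, tile_deg degrees on a
    side, that cover a lon/lat bounding box. Keys like these let a
    MapReduce-style job route every observation to the task that owns
    its grid cell, so per-cell analyses run in parallel."""
    c0, c1 = math.floor(min_lon / tile_deg), math.floor(max_lon / tile_deg)
    r0, r1 = math.floor(min_lat / tile_deg), math.floor(max_lat / tile_deg)
    return [(c, r) for c in range(c0, c1 + 1) for r in range(r0, r1 + 1)]

# A box roughly around Canberra spans 2 columns x 3 rows = 6 tiles.
print(covering_tiles(148.5, -36.2, 149.8, -35.0))
```

A nested grid would repeat the same keying at finer `tile_deg` values inside busy cells; the coarse key becomes a prefix of the fine one.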
17. Engaging with Priority Research: Environment
• Collaboration to provide better and common processing environments
• Next generation of tools (able to operate at national scale)
• New aggregation of tools and techniques under the TERN e-MAST project
27. NCI’s Science Cloud Building Blocks from Dell
• From Dell, using C8000 chassis building blocks for OpenStack compute, Swift (S3) and Ceph (EBS)
• Hyperscale meets Exascale … (?)
28. Summary
• HPC in the Cloud
– Clusters-in-the-cloud
– Offload, bursting
• Big Data needs
– Complex, long-lived data processing
– Community ecosystems
• Service provider abstraction