© Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP Apollo Systems
John Gromala, Senior Director, Hyperscale, Product Management, HP Servers
HP Confidential until June 9, 2014
Solving global problems requires greater performance, efficiency, and accessibility
Industries and workloads:
• Geophysical Sciences
• Energy Research & Production
• Meteorological Sciences
• Government
• Academia
• Research & Development
• Life Sciences
• Pharmaceutical
• Entertainment
• Media Production
• Visualization & Rendering
• Computer-Aided Engineering
• Electronic Design Automation
• Financial Services
Reinventing HPC to accelerate the world of tomorrow
• HP’s advanced HPC compute solutions solve the world’s most difficult problems:
– Rapidly ramp performance for accelerated results
– Maximize rack-scale density and energy efficiency
– Unleash HPC with an infrastructure that is affordable, less complex, and easy to manage
– Flexibility to precisely meet workload needs
• Expanding access to HPC resources via HPC Innovation Hubs and HPC as a Service
Introducing: the new Apollo family
[Figure: front and rear rack views with HP A5800 Series switches (JG225A) and numbered 42U rack positions]
Introducing the HP Apollo 6000 System
The best performance for your budget

Leading performance per $ per watt
• Up to 4x more performance/$/watt
• Up to 60% less floor space

Rack-scale efficiency
• 160 x 1P servers per rack, with 10 hot-pluggable dual-server trays per 5U chassis
• Maximize rack-level energy efficiency

Tailor to the workload for lower TCO
• Mix compute, accelerator, storage, and networking to fit workload needs

Electronic Design Automation
• Computer chips
• Cellular phones
• Pacemakers
• Controls for automobiles and satellites
• Servers, routers, and switches

Monte Carlo Simulations
• Investment analysis
• Financial derivatives
• Physical sciences
• Engineering
• Computational biology
• Computer graphics
• Gaming
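Monte Carlo workloads of this kind map naturally onto many independent single-socket nodes: each node runs its own batch of random trials, and only the aggregated result needs to be combined. A minimal illustrative sketch (not from the deck) estimating π this way:

```python
import random

def monte_carlo_pi(trials, seed):
    """Estimate pi by sampling points in the unit square and
    counting the fraction that land inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(trials):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / trials

# Each "node" works an independent batch; averaging the batch
# results is the only communication needed.
batches = [monte_carlo_pi(100_000, seed) for seed in range(8)]
estimate = sum(batches) / len(batches)
print(round(estimate, 2))
```

Because the batches share nothing until the final average, adding nodes scales throughput almost linearly, which is why dense racks of independent 1P servers suit this class of workload.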
HP Apollo 6000 System Overview
First available tray
• ProLiant XL220a dual-server tray
• Front serviceable
• Rear-cabled solution
• Max power of ~169W per tray
Rack scale
• 160 nodes per 48U rack
• 5U chassis (1.0m-deep rack)
• 20 nodes per enclosure
• Front service, rear cabled
[Figure: Apollo 6000 chassis with dual-server tray detail; each tray holds 2x 1P nodes, each with its own CPU and DIMMs]
Shared power & cooling
• Efficient pooled power shelf supports up to 6 chassis
• N, N+1, and 2N redundancy configurations
• 12V DC output with a max power of 15.9kW
• Advanced Power Manager
Rack-level shared infrastructure for efficiency and flexibility

High-performance computing
• Highest frequency per core: Intel E3-12xx v3 (Haswell)
• A CPU core generation ahead
• Suited to single-threaded applications
• Max turbo frequency of 4GHz
• Low latency: no 2P cache coherency
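As a back-of-the-envelope check on the figures above (my arithmetic, not from the deck): at ~169W per dual-server tray, 10 trays per chassis, and up to 6 chassis per shelf, peak tray load stays well under the 15.9kW shelf capacity.

```python
watts_per_tray = 169        # max power per dual-server tray (from the slide)
trays_per_chassis = 10
chassis_per_shelf = 6       # pooled power shelf supports up to 6 chassis
shelf_capacity_w = 15_900

# Worst-case tray load on one fully populated power shelf
peak_load_w = watts_per_tray * trays_per_chassis * chassis_per_shelf
print(peak_load_w)                      # 10140
print(peak_load_w < shelf_capacity_w)   # True
```

The remaining headroom presumably covers chassis-level overhead such as fans and management hardware.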
Next-generation density and efficiency at scale
HP Apollo 6000 System with ProLiant XL220a servers (1 rack) vs. Dell M620 (2 1/2 racks):
• 20% more performance*
• 60% less rack space
• 46% less power**
• $3M TCO savings
[Figure: rack comparison of 2 1/2 racks of Dell M620 servers vs. 1 rack of HP Apollo 6000 with ProLiant XL220a servers]
“We are seeing up to 35% performance increase in our Electronic Design Automation application workloads. We have deployed more than 5,000 of these servers, achieving better rack density and power efficiency, while delivering higher application performance to Intel silicon design engineers.”
Kim Stevenson, CIO, Intel
Advancing the science of supercomputing
The new HP Apollo 8000 System

Leading teraflops per rack for accelerated results
• 4X the teraflops per sq. ft. of air-cooled systems
• More than 250 teraflops per rack

Efficient liquid cooling without the risk
• 40% more FLOPS/watt and 28% less energy than air-cooled systems
• Dry-disconnect servers; intelligent Cooling Distribution Unit (iCDU) monitoring and isolation

Redefining data center energy recycling
• Save up to 3,800 tons of CO2/year (equivalent to 790 cars)
• Recycle warm water to heat the facility

Scientific Computing
• Research computing
• Climate modeling
• Protein analysis
Manufacturing
• Product modeling
• Simulations
• Material analysis
Apollo 8000 System Technologies
[Figure: open-door view of 4 compute racks and a redundant CDU rack]

Dry-disconnect servers
• 100% of components water cooled
• Designed for serviceability

Management infrastructure
• HP iLO 4, IPMI 2.0, and DCMI 1.0
• Rack-level Advanced Power Manager

Power infrastructure
• Up to 80kW per rack
• Four 30A 3-phase 380-480VAC feeds

Intelligent Cooling Distribution Unit (iCDU)
• 320kW power capacity
• Integrated controls with active-active failover

Warm-water cooling
• Closed secondary loop in the CDU
• Isolated, open facility loop
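A rough consistency check on those feed ratings (my arithmetic, not from the deck; this treats apparent power as real power, i.e., ignores power factor): four 30A feeds at 3-phase 480VAC supply about 4 × √3 × 480 × 30 ≈ 99.8kVA, comfortably above the 80kW per-rack figure.

```python
import math

volts = 480.0   # upper end of the 380-480VAC range
amps = 30.0
feeds = 4

# 3-phase apparent power per feed: sqrt(3) * V_line * I_line
per_feed_kva = math.sqrt(3) * volts * amps / 1000.0
total_kva = feeds * per_feed_kva
print(round(per_feed_kva, 1), round(total_kva, 1))  # 24.9 99.8
```

At the low end of the voltage range (380VAC) the total drops to roughly 79kVA, which suggests the 80kW rack limit is sized against the full four-feed capacity.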
Differentiated: Dry-disconnect servers
New patented technology makes a liquid-cooled system as easy to service as an air-cooled one:
• Servers can be maintained without breaking a water connection
• Inside the server tray, heat is transferred from components via vapor in sealed heat pipes
• Thermal bus bars on the side of the compute tray transfer heat to the water wall in the rack
• Water flows through the thermal bus bar in the rack from supply-and-return pipes
• The fluid is fully contained under vacuum
[Diagram: sealed heat pipes connecting to thermal bus bars]
Failure is not an option
• Dry-disconnect servers: sealed heat pipes cool components
• Facility water is isolated from the IT loop; takes ASHRAE-spec water
• Vacuum in the secondary IT loop keeps water in place
• Intelligent Cooling Distribution Unit designed to minimize and isolate issues
• Comprehensive system insight and management built on Advanced Power Management and smart sensors
Efficient liquid cooling without the risk
Accelerating the science of supercomputing
HP Apollo 8000 System (liquid cooled) vs. IBM rack servers (air cooled):
• 4X teraflop performance per sq. foot
• 40% more FLOPS/watt, 28% less energy
• $1M savings
• The air-cooled alternative uses 2.6X more energy for facilities infrastructure and emits up to 3,800 tons more CO2/year
• Saves over 1,200 sq. ft. for a 1-petaflop system
• Annualized PUE of 1.06 or better
World’s largest supercomputer dedicated to advancing renewable energy research
• $1 million in annual energy savings and cost avoidance through efficiency improvements
• Petascale performance (one million billion calculations per second)
• 6-fold increase in modeling and simulation capabilities
• Average PUE of 1.06 or better
• Source of heat for ESIF’s 185,000 square feet of office and lab space, as well as the walkways
• 1MW of data center power in under 1,000 sq. ft., a very energy-dense configuration
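PUE (power usage effectiveness) is total facility power divided by IT equipment power, so a PUE of 1.06 means only about 6% overhead for cooling, power distribution, and lighting. A quick illustration (the overhead number is mine, implied by the cited figures):

```python
def pue(total_facility_kw, it_kw):
    """Power usage effectiveness: total facility power / IT power."""
    return total_facility_kw / it_kw

it_kw = 1000.0        # ~1MW of IT load, as cited above
overhead_kw = 60.0    # non-IT load implied by a 1.06 PUE (illustrative)
print(round(pue(it_kw + overhead_kw, it_kw), 2))  # 1.06
```

For comparison, typical data centers of the era ran well above this; values near 1.0 are only achievable when cooling energy is minimal or the waste heat is reused, as it is here.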
Global impact: Reduce the carbon footprint
• Top supercomputing performance has grown 944X and core counts have grown 609X over 10 years*
• Power requirements have increased 5.5X, while power costs per kW have spiked
• The HP Apollo family is a new approach to HPC, offering cost-effective and environmentally friendly solutions
• Save money while reducing the carbon footprint
Save 3,800 tons of CO2 per year, roughly the amount produced by 790 cars
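Taken together, those growth figures imply efficiency improved far faster than power draw: performance per unit power grew roughly 944 / 5.5 ≈ 172X over the decade (my arithmetic from the slide’s numbers).

```python
perf_growth = 944.0    # growth in top-system performance over 10 years
power_growth = 5.5     # growth in power requirements over the same period

# How much more work each unit of power delivers at the end of the decade
perf_per_watt_growth = perf_growth / power_growth
print(round(perf_per_watt_growth))  # 172
```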
HP Services for Apollo solutions

Consulting: data center planning and design services to achieve business outcomes
• Data center facilities
• Workload migration
• Big data, mobility, and virtualization design, planning, and implementation

Implementation: services to speed startup and build capabilities with new technology
• Factory Express
• Onsite installation
• HP Education

Support: ongoing support to help get connected and get back to business
• Datacenter Care supports your environment
• Proactive Care helps prevent problems
• Foundation Care helps solve problems faster

Financing: flexible payment plans and terms to meet your needs
• Available globally where HP Financial Services conducts business
• Technology refresh approach to allow for future scalability and upgrades
Extending HPC access to SMBs
Fueling innovation with HPC as a Service and HPC Innovation Hubs

HP–Intel HPC Innovation Hubs open up new opportunities to compete
• Partnering with universities, organizations, and ISVs
• Access to HPC resources, modeling and simulation applications, and expertise
• Result: make vehicle components safer and more reliable
• Grow the industry and improve competitiveness in the global marketplace

HPC as a Service bridges the gap between public and private cloud
• Access a common, open cloud platform across standard IT cloud and HPC cloud initiatives
• Easy to use, increasing competitive agility
• Open, efficient management of complex HPC resources
• Secure, scalable performance for demanding workloads, with leading cluster infrastructure and security integration

Contenu connexe

Plus de inside-BigData.com

HPC Impact: EDA Telemetry Neural Networks
HPC Impact: EDA Telemetry Neural NetworksHPC Impact: EDA Telemetry Neural Networks
HPC Impact: EDA Telemetry Neural Networksinside-BigData.com
 
Biohybrid Robotic Jellyfish for Future Applications in Ocean Monitoring
Biohybrid Robotic Jellyfish for Future Applications in Ocean MonitoringBiohybrid Robotic Jellyfish for Future Applications in Ocean Monitoring
Biohybrid Robotic Jellyfish for Future Applications in Ocean Monitoringinside-BigData.com
 
Machine Learning for Weather Forecasts
Machine Learning for Weather ForecastsMachine Learning for Weather Forecasts
Machine Learning for Weather Forecastsinside-BigData.com
 
HPC AI Advisory Council Update
HPC AI Advisory Council UpdateHPC AI Advisory Council Update
HPC AI Advisory Council Updateinside-BigData.com
 
Fugaku Supercomputer joins fight against COVID-19
Fugaku Supercomputer joins fight against COVID-19Fugaku Supercomputer joins fight against COVID-19
Fugaku Supercomputer joins fight against COVID-19inside-BigData.com
 
Energy Efficient Computing using Dynamic Tuning
Energy Efficient Computing using Dynamic TuningEnergy Efficient Computing using Dynamic Tuning
Energy Efficient Computing using Dynamic Tuninginside-BigData.com
 
HPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD
HPC at Scale Enabled by DDN A3i and NVIDIA SuperPODHPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD
HPC at Scale Enabled by DDN A3i and NVIDIA SuperPODinside-BigData.com
 
Versal Premium ACAP for Network and Cloud Acceleration
Versal Premium ACAP for Network and Cloud AccelerationVersal Premium ACAP for Network and Cloud Acceleration
Versal Premium ACAP for Network and Cloud Accelerationinside-BigData.com
 
Zettar: Moving Massive Amounts of Data across Any Distance Efficiently
Zettar: Moving Massive Amounts of Data across Any Distance EfficientlyZettar: Moving Massive Amounts of Data across Any Distance Efficiently
Zettar: Moving Massive Amounts of Data across Any Distance Efficientlyinside-BigData.com
 
Scaling TCO in a Post Moore's Era
Scaling TCO in a Post Moore's EraScaling TCO in a Post Moore's Era
Scaling TCO in a Post Moore's Erainside-BigData.com
 
CUDA-Python and RAPIDS for blazing fast scientific computing
CUDA-Python and RAPIDS for blazing fast scientific computingCUDA-Python and RAPIDS for blazing fast scientific computing
CUDA-Python and RAPIDS for blazing fast scientific computinginside-BigData.com
 
Introducing HPC with a Raspberry Pi Cluster
Introducing HPC with a Raspberry Pi ClusterIntroducing HPC with a Raspberry Pi Cluster
Introducing HPC with a Raspberry Pi Clusterinside-BigData.com
 
Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...
Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...
Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...inside-BigData.com
 
Adaptive Linear Solvers and Eigensolvers
Adaptive Linear Solvers and EigensolversAdaptive Linear Solvers and Eigensolvers
Adaptive Linear Solvers and Eigensolversinside-BigData.com
 
Scientific Applications and Heterogeneous Architectures
Scientific Applications and Heterogeneous ArchitecturesScientific Applications and Heterogeneous Architectures
Scientific Applications and Heterogeneous Architecturesinside-BigData.com
 
SW/HW co-design for near-term quantum computing
SW/HW co-design for near-term quantum computingSW/HW co-design for near-term quantum computing
SW/HW co-design for near-term quantum computinginside-BigData.com
 

Plus de inside-BigData.com (20)

HPC Impact: EDA Telemetry Neural Networks
HPC Impact: EDA Telemetry Neural NetworksHPC Impact: EDA Telemetry Neural Networks
HPC Impact: EDA Telemetry Neural Networks
 
Biohybrid Robotic Jellyfish for Future Applications in Ocean Monitoring
Biohybrid Robotic Jellyfish for Future Applications in Ocean MonitoringBiohybrid Robotic Jellyfish for Future Applications in Ocean Monitoring
Biohybrid Robotic Jellyfish for Future Applications in Ocean Monitoring
 
Machine Learning for Weather Forecasts
Machine Learning for Weather ForecastsMachine Learning for Weather Forecasts
Machine Learning for Weather Forecasts
 
HPC AI Advisory Council Update
HPC AI Advisory Council UpdateHPC AI Advisory Council Update
HPC AI Advisory Council Update
 
Fugaku Supercomputer joins fight against COVID-19
Fugaku Supercomputer joins fight against COVID-19Fugaku Supercomputer joins fight against COVID-19
Fugaku Supercomputer joins fight against COVID-19
 
Energy Efficient Computing using Dynamic Tuning
Energy Efficient Computing using Dynamic TuningEnergy Efficient Computing using Dynamic Tuning
Energy Efficient Computing using Dynamic Tuning
 
HPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD
HPC at Scale Enabled by DDN A3i and NVIDIA SuperPODHPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD
HPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD
 
State of ARM-based HPC
State of ARM-based HPCState of ARM-based HPC
State of ARM-based HPC
 
Versal Premium ACAP for Network and Cloud Acceleration
Versal Premium ACAP for Network and Cloud AccelerationVersal Premium ACAP for Network and Cloud Acceleration
Versal Premium ACAP for Network and Cloud Acceleration
 
Zettar: Moving Massive Amounts of Data across Any Distance Efficiently
Zettar: Moving Massive Amounts of Data across Any Distance EfficientlyZettar: Moving Massive Amounts of Data across Any Distance Efficiently
Zettar: Moving Massive Amounts of Data across Any Distance Efficiently
 
Scaling TCO in a Post Moore's Era
Scaling TCO in a Post Moore's EraScaling TCO in a Post Moore's Era
Scaling TCO in a Post Moore's Era
 
CUDA-Python and RAPIDS for blazing fast scientific computing
CUDA-Python and RAPIDS for blazing fast scientific computingCUDA-Python and RAPIDS for blazing fast scientific computing
CUDA-Python and RAPIDS for blazing fast scientific computing
 
Introducing HPC with a Raspberry Pi Cluster
Introducing HPC with a Raspberry Pi ClusterIntroducing HPC with a Raspberry Pi Cluster
Introducing HPC with a Raspberry Pi Cluster
 
Overview of HPC Interconnects
Overview of HPC InterconnectsOverview of HPC Interconnects
Overview of HPC Interconnects
 
Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...
Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...
Efficient Model Selection for Deep Neural Networks on Massively Parallel Proc...
 
Data Parallel Deep Learning
Data Parallel Deep LearningData Parallel Deep Learning
Data Parallel Deep Learning
 
Making Supernovae with Jets
Making Supernovae with JetsMaking Supernovae with Jets
Making Supernovae with Jets
 
Adaptive Linear Solvers and Eigensolvers
Adaptive Linear Solvers and EigensolversAdaptive Linear Solvers and Eigensolvers
Adaptive Linear Solvers and Eigensolvers
 
Scientific Applications and Heterogeneous Architectures
Scientific Applications and Heterogeneous ArchitecturesScientific Applications and Heterogeneous Architectures
Scientific Applications and Heterogeneous Architectures
 
SW/HW co-design for near-term quantum computing
SW/HW co-design for near-term quantum computingSW/HW co-design for near-term quantum computing
SW/HW co-design for near-term quantum computing
 

Dernier

Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Alan Dix
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxOnBoard
 
SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024Scott Keck-Warren
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitecturePixlogix Infotech
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machinePadma Pradeep
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksSoftradix Technologies
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreternaman860154
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxMalak Abu Hammad
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsMaria Levchenko
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationMichael W. Hawkins
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptxHampshireHUG
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 

Dernier (20)

Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping Elbows
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptx
 
SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food Manufacturing
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC Architecture
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 

HP Apollo Slidecast

  • 1. © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. HP Apollo Systems John Gromala, Senior Director, Hyperscale, Product Management, HP Servers
  • 2. HP Confidential until June 9, 2014. Solving global problems requires greater… Performance, Efficiency, Accessibility • Geophysical Sciences • Energy Research & Production • Meteorological Sciences • Government • Academia • Research & Development • Life Sciences • Pharmaceutical • Entertainment • Media Production • Visualization & Rendering • Computer-Aided Engineering • Electronic Design Automation • Financial Services
  • 3. Reinventing HPC to accelerate the world of tomorrow • HP’s advanced HPC compute solutions to solve the world’s most difficult problems – Rapidly ramp performance for accelerated results – Maximize rack-scale density and energy efficiency – Unleash HPC with an infrastructure that is affordable, less complex, and easy to manage – Flexibility to precisely meet workload needs • Expanding access to HPC resources via HPC Innovation Hubs and HPC as a Service. Introducing: The new Apollo family
  • 4. [Rack illustration: Apollo 6000 racks with HP A5800 Series Switch JG225A top-of-rack switches] Introducing the HP Apollo 6000 System: The best performance for your budget. Leading performance per $ per watt • up to 4x more performance/$/watt • up to 60% less floor space. Rack scale efficiency • 160 x 1P servers per rack with 10 hot-pluggable dual-server trays per 5U chassis • Maximize rack-level energy efficiency. Tailor to the workload for lower TCO • Mix compute, accelerator, storage and networking to fit workload needs. Electronic Design Automation • Computer chips • Cellular phones • Pacemakers • Controls for automobiles and satellites • Servers, routers and switches. Monte Carlo Simulations • Investment analysis • Financial derivatives • Physical sciences • Engineering • Computational biology • Computer graphics • Gaming
  • 5. HP Apollo 6000 System Overview [Rack illustration: dual 1P server tray with CPUs and DIMMs; HP A5800 Series Switch JG225A top-of-rack switches] First available tray • ProLiant XL220a dual-server tray • Front serviceable • Rear cabled solution • Max power of ~169W per tray. Rack scale • 160 nodes per 48U rack • 5U chassis (1.0m deep rack) • 20 nodes per enclosure • Front service, rear cabled. Shared power & cooling • Efficient pooled power shelf supports up to 6 chassis • N, N+1, 2N redundancy configs • 12 volts DC output with max power of 15.9kW • Advanced Power Manager. Rack-level shared infrastructure for efficiency and flexibility. High performance computing • Highest frequency per core • Intel E3-12xx v3 Haswell • CPU core generation ahead • Single-threaded applications • Max turbo frequency of 4GHz • Low latency: No 2P cache coherency
  • 6. Next generation density and efficiency at scale. Dell M620, 2 1/2 racks vs. HP Apollo 6000 System with ProLiant XL220a Servers, 1 rack: 20% more performance* • 60% less rack space • 46% less power** • $3M TCO savings. The best performance for your budget
  • 7. “We are seeing up to 35% performance increase in our Electronic Design Automation application workloads. We have deployed more than 5,000 of these servers, achieving better rack density and power efficiency, while delivering higher application performance to Intel silicon design engineers.” Kim Stevenson, CIO, Intel
  • 8. Advancing the science of supercomputing: The New HP Apollo 8000 System. Leading teraflops per rack for accelerated results • 4X the teraflops/sq. ft. of air-cooled systems • > 250 teraflops/rack. Efficient liquid cooling without the risk • 40% more FLOPS/watt and 28% less energy than air-cooled systems • Dry-disconnect servers, intelligent Cooling Distribution Unit (iCDU) monitoring and isolation. Redefining data center energy recycling • Save up to 3,800 tons of CO2/year (790 cars) • Recycle water to heat facility. Scientific Computing • Research computing • Climate modeling • Protein analysis. Manufacturing • Product modeling • Simulations • Material analysis
  • 9. Apollo 8000 System Technologies: Advancing the science of supercomputing. [Illustration: open door view of 4 compute & redundant CDU racks] Dry disconnect servers • 100% water cooled components • Designed for serviceability. Management infrastructure • HP iLO4, IPMI 2.0 and DCMI 1.0 • Rack-level Advanced Power Manager. Power infrastructure • Up to 80kW per rack • Four 30A 3-phase 380-480VAC. Intelligent Cooling Distribution Unit • 320 kW power capacity • Integrated controls with active-active failover. Warm water • Closed secondary loop in CDU • Isolated and open facility loop
  • 10. Differentiated: Dry disconnect servers • Enables maintenance of servers without breaking a water connection • Inside the server tray, heat is transferred from components via vapor in sealed heat pipes • Thermal bus bars on the side of the compute tray transfer heat to the water wall in the rack • Water flows through the thermal bus bar in the rack from supply-and-return pipes • Fluid fully contained under vacuum. New patented technology making a liquid-cooled system as easy to service as air-cooled. [Diagram labels: thermal bus bars, sealed heat pipes]
  • 11. Efficient liquid cooling without the risk: Failure is not an option • Dry disconnect servers: sealed heat pipes cool components • Facility water isolated from IT loop • Takes ASHRAE spec water • Secondary IT loop vacuum keeps water in place • Intelligent Cooling Distribution Unit designed to minimize and isolate issues • Comprehensive system insight and management built on Advanced Power Management and smart sensors
  • 12. Accelerating the science of supercomputing. HP Apollo 8000 system (liquid cooled) vs. IBM rack servers (air cooled): 4X teraflop performance per sq. foot • 40% more FLOPs/watt • 28% less energy • $1M savings • air-cooled systems consume 2.6X more energy for facilities infrastructure and emit up to 3,800 tons more CO2/year • save over 1,200 sq. ft. for a 1 petaflop system • annualized PUE 1.06 or better
  • 13. World’s largest supercomputer dedicated to advancing renewable energy research • $1 million in annual energy savings and cost avoidance through efficiency improvements • Petascale (one million billion calculations/second) • 6-fold increase in modeling and simulation capabilities • Average PUE of 1.06 or better • Source of heat for ESIF’s 185,000 square feet of office and lab spaces, as well as the walkways • 1MW of data center power in under 1,000 sq. ft., a very energy-dense configuration
  • 14. Global impact: Reduce the carbon footprint • Top supercomputing performance has grown 944X and the number of cores has exploded by 609X over 10 years* • Power requirements have increased 5.5X, while power costs per kW have spiked • The HP Apollo family is a new approach to HPC, with cost-effective and environmentally friendly HPC solutions • Save while reducing carbon footprint: save 3,800 tons of CO2 per year, roughly the amount of CO2 produced by 790 cars
  • 15. HP Services for Apollo solutions. Consulting Services: data center planning and design services to achieve business outcomes • Data center facilities • Workload migration • Big data, mobility, virtualization design, planning and implementation. Implementation: services to speed startup and build capabilities with new technology • Factory Express • Onsite installation • HP Education. Support: ongoing support to help get connected and get back to business • Datacenter Care supports your environment • Proactive Care helps prevent problems • Foundation Care helps solve problems faster. Financing: flexible payment plans and terms to meet your needs • Available globally where HP Financial Services conducts business • Technology refresh approach to allow for future scalability and upgrades
  • 16. Fueling innovation with HPC as a Service and HPC Innovation Hubs: extending HPC access to SMBs. HP – Intel HPC Innovation Hubs open up new opportunities to compete • Partnering with universities, organizations and ISVs • Access to HPC resources, modeling and simulation applications and expertise • Result: Make vehicle components safer and more reliable • Grow the industry, improve competitiveness in the global marketplace. HPC as a Service bridges the gap between public and private cloud • Access a common, open cloud platform across standard IT cloud and HPC cloud initiatives • Easy to use to increase competitive agility • Open, efficient management of complex HPC resources • Secure, scalable performance for your demanding workloads with leading cluster infrastructure and security integration
  • 17. © Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.

Editor's notes

  1. As I said earlier, it’s all about the right server for the right workload at the right economics . . . Every time. So, let’s take a look at some of the HPC workloads. They range from computer-aided engineering and electronic design automation (EDA) to life sciences, oil & gas, visualization and rendering, research, and financial services. While they may seem dramatically different, they actually have a few things in common. They are all looking for the highest possible levels of performance and efficiency. And the smaller organizations, such as small and medium-sized manufacturers, are looking for access to these tools without necessarily having to make as much of an investment in the hardware, software and HPC expertise.
  2. HP is reinventing HPC today to accelerate the world of tomorrow with the announcement of the new HP Apollo portfolio Continuing on HP’s strategy to deliver the right compute, for the right workload, and the right economics every time, HP is announcing the new Apollo portfolio to provide the performance, efficiency and accessibility necessary to transform the HPC industry. Leading researchers and engineers will be able to take advantage of HP’s advanced HPC compute solutions essential to doing the analysis and simulation necessary to solve the world’s most difficult problems (i.e. global warming, origins of the universe, designing airplanes, increasing safety by simulating car crash tests or cracking the genetic code).   The new family of Apollo High Performance Computing solutions is designed for high performance computing at rack scale. Apollo combines a rack level infrastructure with ProLiant technologies to provide a unique HPC compute solution. The new portfolio rapidly ramps performance to accelerate answers such as determining climate patterns and finding new medicines and treatments with an industry leading 4X teraflops per sq ft when compared to air cooled solutions. The HP Apollo portfolio maximizes rack-scale efficiency, delivering significant CAPEX and OPEX savings as well as reducing the world’s carbon footprint, such as up to 4x better performance per watt per dollar when compared to the competition. Finally, the HP Apollo solutions are designed to unleash HPC to the masses with an infrastructure that is affordable, less complex, and easy to manage and new services to enable enterprises of any size to take advantage of these rack-scale HPC solutions.   The HP Apollo 6000 provides the flexibility to tailor the system to precisely meet the needs of single-threaded applications such as design automation or financial service risk analysis. 
The HP Apollo 6000 delivers 4x better performance per dollar per watt than the competition using 67% less floor space. From the beginning, HP designed this platform for scalability and efficiency at rack-scale delivering a TCO savings of $3.25M/1000 servers over 3 years. The system has up to 160 x 1P servers/48U racks with efficiencies fueled by HP’s unique external power shelf, dynamically allocating power to help maximize rack-level energy utilization while providing the right amount of redundancy for your business. HP plans to continue to extend the Apollo 6000 capabilities in the future to address a broader range of HPC workloads.   For large compute problems such as predicting agricultural parameters such as crop growth or finding a medical cure, researchers are excited about the new HP Apollo 8000 System, fueling ground-breaking research in science and engineering with HP’s leading-edge technology. The HP Apollo 8000 System reaches new heights of performance density, with 144 teraflops/rack, that’s up to 8x the teraflops per sq. ft. and up to 40% more FLOPs/watt than comparable air-cooled servers. At the same time, it helps reduce your carbon footprint, saving up to 3,800 tons of CO2 per year. That’s about the same amount of CO2 produced/year by 790 cars! In fact, the environmental advantages of the HP Apollo 8000 System can be taken one step further by leveraging the water used to cool the solution to heat your facilities which NREL estimates will save them $1,000,000 in OPEX including the money that would otherwise be used to heat the building.   Additionally, on top of the technology advancements, HP is introducing new business model innovations to make HPC accessible to enterprises of any size. HP’s new HPC as a Service and HPC Innovation Hubs empower organizations to rapidly deliver more competitive products and insights by making HPC resources easier to access. 
HPC as a Service, based on HPC-optimized, secured private clouds, provides a self-service portal that makes using high performance compute resources as easy as using a familiar application enabling faster access to these HPC compute resources to more users and projects. This solution makes HPC accessible and manageable for more organizations by allowing them to manage it themselves or have HP HPC experts implement and manage it for them with a pay for use model so they can stay focused on delivering products and innovation. Additionally, HP in partnership with universities and neighboring communities/organizations is implementing HPC Innovation Hubs to support local enterprises of any size with their high performance computing requirements. By working together in a close relationship with ISV partners, channel partners and universities, HP is able to bring all the participants together to foster centers of excellence. As a result, enterprises are able to leverage all of the advantages of HPC in a pay as you go model, eliminating out of the pocket upfront expenditures while lowering the expertise required to support the infrastructure and minimizing ongoing costs. Through HPC as a Service and HPC Innovation Hubs, HP is unleashing HPC to the masses!     Partner with HP to accelerate your pace of innovation and solve the world’s most challenging problems!
  3. The HP Apollo 6000 System delivers 4x better performance per dollar per watt than a competing blade using 60% less floor space. From the beginning, HP designed this platform for scalability and efficiency at rack-scale delivering a TCO savings of $3M/1000 servers over 3 years. The Apollo 6000 provides the flexibility to tailor the system to precisely meet the needs of your workloads. The first available server tray, the ProLiant XL220a is a great fit for single-threaded applications with more options coming soon, the system has up to 160 x 1P servers/48U rack. Efficiency at rack scale is fueled by HP’s unique external power shelf, dynamically allocating power to help maximize rack-level energy efficiency while providing the right amount of redundancy for your business.
  4. Advanced Power Manager • See and manage shared infrastructure, server, chassis and rack-level power from a single console • Simplify, and save >80% by avoiding spend on serial concentrators, adaptors, cables and switches • Flex to meet workload demands with dynamic power allocation, measurement and control
  5. For Synopsys VCS data measurements see: SNUG (Synopsys User Group) Silicon Valley Conference – March 2014 https://www.synopsys.com/community/snug/pages/ProceedingLP.aspx?loc=Silicon%20Valley&locy=2014&ploc=Boston WB-07 Compute Farm Resource Selection and Management CPU Choice, Server Architecture, and BIOS Settings for EDA Tool Performance Kamran Casim - HP; Manish Neema, Glenn Newell - Synopsys, Inc. SandyBridge, Ivybridge, Haswell? Single socket, dual, quad? Hyper-Threading on or off? Turbo? Default BIOS vs. energy saving vs. max performance and the effect on EDA tool performance. Traditional servers are two or four sockets, but vendors are bringing faster single socket servers to market in new dense form factors, but with fewer cores. We will demystify the options and present relative performance results using Synopsys EDA Tools, including Design Compiler, PrimeTime, VCS, HSPICE, Liberty NCX, Proteus, TCAD, CATS, and ZeBu Compile. ============================================== 20% more performance for single threaded applications Synopsys VCS data   46% less energy at the system HP measured vs. Dell calculator (AC input) Dell M620 1x2P vs. 1x1P HP pwr number: 153W measured Dell: 281 power calculator 153/281 =.54 (46% savings)   49% less cost than a competing blade Pricing data with Lynn   4x better performance per $ per watt saving x$/rack = solution x% lower cost or TCO, pay for itself in x years (20%/54%) (1.2/.54)/.51=4.36 time better ================================= ¼ the floor space of traditional 1U servers Typical 1U servers in   67% less space than a competing blade Dell M620 10U/16 servers HP 160 in 48U 160/16=10 Dell enclosures 10x10=100U 100U/48= 2.08 racks  3 racks 1/3 = 67% less space Going conservative on 60% less rack space   $157 Opex savings per year per server HP: 153W per server@ AC input Dell: 281W per server AC input w pwr calc. 
(281W-153W) = 128W 128*160 servers = 20.47kW 20.47kW* $0.14 kWH * 24H * 365days/yr =$25.1k/yr Per rack $25.1k/160 servers = $157 pwr savings per server/yr   (*$0.14 from Dell pwr. Calculator) *No energy efficiency story =========================================   $3.25M TCO savings over 3 yrs @1000 servers Assuming 1000 (5677.31 * 1000) = 5, 677, 310 5, 677, 310 * .49 = 2,781,881.90 HP server 1000 * $157 = 157,000 157,000 * 3 = $471, 000 (savings in power) 2,781,881.90 + 471,000 = 3,252,881.90 savings over 3 yrs @ 1000 servers Individual server saving is 3,252.59 per server per 3 yr.   M620 w/ single processor (E5-2667 CPU)* see Brian’s spreadsheet vs. Longclaw   Which is 6.25 racks.   Payback of $ savings within “x” # of weeks? Avg./typical no. of servers deployed
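The scratch arithmetic in the note above can be reproduced as a quick sanity-check script. This is only a sketch of the note's own figures (153 W measured for the HP server, 281 W from the Dell power calculator, 160 servers per rack, $0.14/kWh, a $5,677.31 competing-server price at 49% lower cost, and the 20%-performance / 54%-power / 51%-price ratios behind the 4x claim); none of the numbers are independent measurements.

```python
# Sanity-check the power and TCO savings arithmetic from the note above.
# All inputs are the note's own figures, not independent measurements.
hp_watts = 153          # HP ProLiant XL220a node, measured (AC input)
dell_watts = 281        # Dell M620, vendor power calculator (AC input)
servers_per_rack = 160
rate_per_kwh = 0.14     # $/kWh, the rate used in the Dell calculator
hours_per_year = 24 * 365

# Per-rack and per-server energy savings
delta_kw = (dell_watts - hp_watts) * servers_per_rack / 1000.0   # ~20.5 kW/rack
rack_savings = delta_kw * rate_per_kwh * hours_per_year          # ~$25.1k/yr
per_server = rack_savings / servers_per_rack                     # ~$157/server/yr

# 3-year TCO savings at 1,000 servers: 49% lower acquisition cost on a
# $5,677.31 competing server, plus three years of the power savings above
acq_savings = 5677.31 * 1000 * 0.49                 # $2,781,881.90
tco_savings = acq_savings + per_server * 1000 * 3   # ~$3.25M

# 4x performance/$/watt claim: 20% more performance, 54% of the power,
# 51% of the price -> (1.20 / 0.54) / 0.51
perf_per_dollar_per_watt = (1.20 / 0.54) / 0.51     # ~4.36x

print(f"delta per rack: {delta_kw:.2f} kW")
print(f"savings per server per year: ${per_server:.0f}")
print(f"3-yr TCO savings @ 1000 servers: ${tco_savings:,.0f}")
print(f"perf/$/W advantage: {perf_per_dollar_per_watt:.2f}x")
```

Because the note rounds the energy figure to $157/server/yr before multiplying, its 3-year total ($3,252,881.90) comes out slightly above the unrounded computation here; both round to the ~$3.25M headline.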
  6. -expand recycle story here Performance per rack Leading teraflops per rack for accelerated results With XL730f, 144 teraflops/rack With NVIDIA GPUs XL750f, 273 teraflops/rack With Xeon Phi XL740F, 246 teraflops/rack Up to 144 servers in 50U   8X teraflops per sq ft compared to air cooled 2.12x teraflops/sq ft compared to water-cooled IBM Power 775 2.36x teraflops/per sq ft compared to Cray CS300 LC Save 1217 sq. ft. compared to air cooled 1-petaflop system Save 195 sq. ft. vs. IBM Power 775 Save 236 sq. ft. vs. Cray CS300 LC ========================================== Efficiency Industry’s only efficient liquid cooling without the risk Save up to $1M in energy over 5 years for each MW of IT in the data center compared to air cooled systems   Get up to 40% more FLOPS/watt than air cooled systems   For the same performance, HP Apollo 8000 System consumes up to 28% less energy than air cooled systems   Air cooled systems consume 2.6X more energy for facility infrastructure as compared to HP Apollo 8000   Air cooled systems use up to 1.2X more MW power per 5.6 petaflop data center   Avoid up to $7.1M of new facility CapEx with water cooling to reach 5.6 petaflops ==================================================== Sustainability Redefining data center energy recycling Save up to 3,800 tons of CO2 per year vs. an air cooled data center 3MW water vs. air cooled   CO2 = 790 of cars
  7. CDU density is remarkable compared to current CDUs, which are typically 30”x28”x84” for 80-100 kW
  8. Performance per rack Leading teraflops per rack for accelerated results With XL730f, 144 teraflops/rack With NVIDIA GPUs XL750f, 273 teraflops/rack With Xeon Phi XL740F, 246 teraflops/rack Up to 144 servers in 50U   8X teraflops per sq ft compared to 2U air cooled, 4x teraflops per sq ft compared to 1U air cooled 2.12x teraflops/sq ft compared to water-cooled IBM Power 775 2.36x teraflops/per sq ft compared to Cray CS300 LC Save 1217 sq. ft. compared to air cooled 1-petaflop system Save 195 sq. ft. vs. IBM Power 775 Save 236 sq. ft. vs. Cray CS300 LC ========================================== Efficiency Industry’s only efficient liquid cooling without the risk Save up to $1M in energy over 5 years for each MW of IT in the data center compared to air cooled systems   Get up to 40% more FLOPS/watt than air cooled systems   For the same performance, HP Apollo 8000 System consumes up to 28% less energy than air cooled systems   Air cooled systems consume 2.6X more energy for facility infrastructure as compared to HP Apollo 8000   Air cooled systems use up to 1.2X more MW power per 5.6 petaflop data center   Avoid up to $7.1M of new facility CapEx with water cooling to reach 5.6 petaflops ==================================================== Sustainability Redefining data center energy recycling Save up to 3,800 tons of CO2 per year vs. an air cooled data center 3MW water vs. air cooled   CO2 = 790 of cars
  9. http://h20195.www2.hp.com/v2/GetPDF.aspx%2F4AA5-0069ENW.pdf http://goparallel.sourceforge.net/peregrine-supercomputer-takes-flight-in-search-of-renewable-energy/
  10. “Leveraging the efficiency of HP Apollo 8000, we expect to save $800,000 in operating expenses per year,” Steve Hammond, director, Computational Science Center, NREL. “Because we are capturing and using waste heat, we estimate we will save another $200,000 that would otherwise be used to heat the building. So, we are looking at saving $1 million per year in operations costs for a data center that cost less to build than a typical data center.” A new compute model is required that is sustainable and environmentally friendly in the use of power and space. Current data centers are not designed to support the extensive floor space and power infrastructure necessary to run that level of processing power. Over the past 10 years, the power requirements necessary to support the performance growth have increased 5.5X. At the same time power costs per kW have spiked. A new approach to HPC is required to fit the processing power within energy and floor space envelopes, and researchers need to implement cost-effective and environmentally friendly HPC solutions in order to minimize operational expenditures, which can be in the millions of dollars per year, and at the same time reduce the world’s carbon footprint. The HP Apollo 8000 System helps reduce your carbon footprint, saving up to 3,800 tons of CO2 per year. That’s about the same amount of CO2 produced/year by 790 cars! In fact, the environmental advantages of the HP Apollo 8000 System can be taken one step further by leveraging the water used to cool the solution to heat your facilities, which NREL estimates will save them $1,000,000 in OPEX including the money that would otherwise be used to heat the building.
  11. HP Technology Services is ready to engage early as you consider the Apollo solution. We offer consulting services to help you analyze and prioritize your needs for power and cooling as well as more detailed design and datacenter implementation planning. HP Factory Express will oversee the implementation of your new Apollo system from the HP factory floor to your datacenter floor. Our HPC Cluster specialists are ready to configure your cluster software solution and any third party integration needed. Once your new Apollo system is in place and supporting your HPC projects, you’ll want easy access to expertise for routine hardware replacements and the ability to get high level visibility if a more complex situation arises. Call on the experts in our Datacenter Care team to help with a holistic view of the Apollo HPC environment and any additional IT you want to include in this level of service.
  12. Script per announcement message map See GMU/NCMS case study Another example: http://www.hpcwire.com/2014/01/28/navigating-digital-divide/ http://blog.industrysoftware.automation.siemens.com/blog/2014/04/02/who-says-you-cant-have-it-all/