In this slidecast, John Gromala from HP describes the company's new Apollo series of HPC servers. Tailor-made for the HPC market, the Apollo Series combines a modular design with innovative power distribution and air- and liquid-cooling techniques for extreme performance at rack scale, providing up to four times more performance per square foot than standard rack servers.
Watch the video presentation: http://wp.me/p3RLHQ-clp
As I said earlier, it’s all about the right server for the right workload at the right economics, every time. So let’s take a look at some of the HPC workloads. They range from computer-aided engineering (CAE) and electronic design automation (EDA) to life sciences, oil and gas, visualization and rendering, research, and financial services. While they may seem dramatically different, they actually have a few things in common: they are all looking for the highest possible levels of performance and efficiency. And smaller organizations, such as small and medium-sized manufacturers, want access to these tools without necessarily having to make as much of an investment in hardware, software, and HPC expertise.
HP is reinventing HPC today to accelerate the world of tomorrow with the announcement of the new HP Apollo portfolio.
Continuing HP’s strategy to deliver the right compute, for the right workload, at the right economics, every time, HP is announcing the new Apollo portfolio to provide the performance, efficiency and accessibility necessary to transform the HPC industry. Leading researchers and engineers will be able to take advantage of HP’s advanced HPC compute solutions, which are essential to the analysis and simulation needed to solve the world’s most difficult problems (e.g. modeling global warming, probing the origins of the universe, designing airplanes, increasing safety by simulating car crash tests, or cracking the genetic code).
The new family of Apollo High Performance Computing solutions is designed for high performance computing at rack scale. Apollo combines rack-level infrastructure with ProLiant technologies to provide a unique HPC compute solution. The new portfolio rapidly ramps performance to accelerate answers, such as determining climate patterns or finding new medicines and treatments, with an industry-leading 4x the teraflops per square foot of comparable air-cooled solutions. The HP Apollo portfolio maximizes rack-scale efficiency, delivering significant CAPEX and OPEX savings while reducing the world’s carbon footprint, with up to 4x better performance per dollar per watt than the competition. Finally, HP Apollo solutions are designed to unleash HPC to the masses, with an infrastructure that is affordable, less complex, and easy to manage, and with new services that enable enterprises of any size to take advantage of these rack-scale HPC solutions.
The HP Apollo 6000 provides the flexibility to tailor the system to precisely meet the needs of single-threaded applications such as design automation or financial services risk analysis. The HP Apollo 6000 delivers 4x better performance per dollar per watt than the competition using 67% less floor space. From the beginning, HP designed this platform for scalability and efficiency at rack scale, delivering a TCO savings of $3.25M per 1,000 servers over 3 years. The system holds up to 160 1P servers in a 48U rack, with efficiencies fueled by HP’s unique external power shelf, which dynamically allocates power to help maximize rack-level energy utilization while providing the right amount of redundancy for your business. HP plans to extend the Apollo 6000’s capabilities in the future to address a broader range of HPC workloads.
For large compute problems, such as predicting agricultural outcomes like crop growth or finding a medical cure, researchers are excited about the new HP Apollo 8000 System, fueling ground-breaking research in science and engineering with HP’s leading-edge technology. The HP Apollo 8000 System reaches new heights of performance density with 144 teraflops per rack; that’s up to 8x the teraflops per sq. ft. and up to 40% more FLOPS per watt than comparable air-cooled servers. At the same time, it helps reduce your carbon footprint, saving up to 3,800 tons of CO2 per year. That’s about the same amount of CO2 produced per year by 790 cars! In fact, the environmental advantages of the HP Apollo 8000 System can be taken one step further by using the water that cools the solution to heat your facilities, which NREL estimates will save them $1,000,000 per year in OPEX, including the money that would otherwise be used to heat the building.
Additionally, on top of the technology advancements, HP is introducing new business model innovations to make HPC accessible to enterprises of any size. HP’s new HPC as a Service and HPC Innovation Hubs empower organizations to rapidly deliver more competitive products and insights by making HPC resources easier to access. HPC as a Service, based on HPC-optimized, secured private clouds, provides a self-service portal that makes using high performance compute resources as easy as using a familiar application, bringing faster access to HPC compute resources to more users and projects. This solution makes HPC accessible and manageable for more organizations: they can manage it themselves, or have HP HPC experts implement and manage it for them with a pay-for-use model so they can stay focused on delivering products and innovation. Additionally, HP, in partnership with universities and neighboring communities and organizations, is implementing HPC Innovation Hubs to support local enterprises of any size with their high performance computing requirements. By working closely with ISV partners, channel partners and universities, HP brings all the participants together to foster centers of excellence. As a result, enterprises can leverage all of the advantages of HPC in a pay-as-you-go model, eliminating upfront out-of-pocket expenditures while lowering the expertise required to support the infrastructure and minimizing ongoing costs. Through HPC as a Service and HPC Innovation Hubs, HP is unleashing HPC to the masses!
Partner with HP to accelerate your pace of innovation and solve the world’s most challenging problems!
The HP Apollo 6000 System delivers 4x better performance per dollar per watt than a competing blade using 60% less floor space. From the beginning, HP designed this platform for scalability and efficiency at rack scale, delivering a TCO savings of $3M per 1,000 servers over 3 years. The Apollo 6000 provides the flexibility to tailor the system to precisely meet the needs of your workloads. The first available server tray, the ProLiant XL220a, is a great fit for single-threaded applications, with more options coming soon; the system holds up to 160 1P servers per 48U rack. Efficiency at rack scale is fueled by HP’s unique external power shelf, which dynamically allocates power to help maximize rack-level energy efficiency while providing the right amount of redundancy for your business.
Advanced Power Manager
-See and manage shared infrastructure, server, chassis and rack-level power from a single console
-Simplify and save >80% by avoiding spend on serial concentrators, adaptors, cables and switches
-Flex to meet workload demands with dynamic power allocation, measurement and control
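The dynamic power allocation idea in the bullets above can be sketched in a few lines. This is an illustrative model only, not HP's actual Advanced Power Manager logic: a shared power shelf grants each chassis its demand when the rack budget allows, and scales all grants proportionally when the rack is oversubscribed.

```python
# Illustrative sketch (NOT HP's actual firmware): proportional rack-level
# power allocation from a shared budget.
def allocate_power(budget_w, demands_w):
    """Grant each chassis its demand if the budget allows; otherwise
    scale all grants proportionally so they fit the shared budget."""
    total = sum(demands_w)
    if total <= budget_w:
        return list(demands_w)          # budget covers all demands
    scale = budget_w / total            # oversubscribed: scale to fit
    return [d * scale for d in demands_w]

# Three chassis asking for 800 W, 1200 W and 600 W:
print(allocate_power(3000, [800, 1200, 600]))  # fits: granted as requested
caps = allocate_power(2000, [800, 1200, 600])  # oversubscribed by 600 W
print([round(c) for c in caps])                # scaled to the 2 kW budget
```

A real controller would also enforce per-chassis caps and redundancy policy; the point here is only the budget-sharing arithmetic.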
For Synopsys VCS data measurements see:
SNUG (Synopsys User Group) Silicon Valley Conference – March 2014
https://www.synopsys.com/community/snug/pages/ProceedingLP.aspx?loc=Silicon%20Valley&locy=2014&ploc=Boston
WB-07 Compute Farm Resource Selection and Management
CPU Choice, Server Architecture, and BIOS Settings for EDA Tool Performance
Kamran Casim - HP; Manish Neema, Glenn Newell - Synopsys, Inc.
Sandy Bridge, Ivy Bridge, Haswell? Single socket, dual, quad? Hyper-Threading on or off? Turbo? Default BIOS vs. energy saving vs. max performance, and the effect on EDA tool performance. Traditional servers are two or four sockets, but vendors are bringing faster single socket servers to market in new dense form factors, albeit with fewer cores. We will demystify the options and present relative performance results using Synopsys EDA Tools, including Design Compiler, PrimeTime, VCS, HSPICE, Liberty NCX, Proteus, TCAD, CATS, and ZeBu Compile.
==============================================
20% more performance for single threaded applications
Synopsys VCS data
46% less energy at the system
HP measured vs. Dell power calculator (AC input)
Dell M620, 1x2P vs. 1x1P
HP power number: 153W measured
Dell: 281W from the power calculator
153/281 = 0.54 (46% savings)
49% less cost than a competing blade
Pricing data with Lynn
4x better performance per $ per watt, saving x$/rack = solution x% lower cost or TCO, pays for itself in x years
(20%/54%)
(1.2/0.54)/0.51 = 4.36 times better
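The ratio arithmetic in the notes above can be reproduced directly, using the rounded figures the notes themselves use (0.54 power ratio, 0.51 cost ratio):

```python
# Performance-per-dollar-per-watt advantage, per the backup notes.
perf_ratio = 1.20    # 20% more single-threaded performance (Synopsys VCS)
power_ratio = 0.54   # HP 153 W measured vs. Dell 281 W (153/281, rounded)
cost_ratio = 0.51    # 49% less cost than a competing blade

advantage = perf_ratio / power_ratio / cost_ratio
print(round(advantage, 2))  # 4.36
```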
=================================
¼ the floor space of traditional 1U servers
Typical 1U servers in
67% less space than a competing blade
Dell M620 10U/16 servers
HP 160 in 48U
160/16 = 10 Dell enclosures
10 × 10U = 100U
100U/48U = 2.08 racks, rounded up to 3 racks
1/3 the space = 67% less space
Going conservative with 60% less rack space
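The floor-space comparison above follows this arithmetic: 160 HP 1P servers fit one 48U rack, while Dell M620 blades pack 16 servers per 10U enclosure, so the equivalent blade deployment spills into a third rack.

```python
import math

# Rack-space comparison from the notes.
servers = 160
dell_enclosures = servers // 16               # 10 enclosures
dell_rack_units = dell_enclosures * 10        # 100U of blade enclosures
dell_racks = math.ceil(dell_rack_units / 48)  # 2.08 racks -> 3 physical racks
hp_racks = 1

space_saved = 1 - hp_racks / dell_racks
print(dell_racks, f"{space_saved:.0%}")  # 3 67%
```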
$157 Opex savings per year per server
HP: 153W per server @ AC input
Dell: 281W per server @ AC input (power calculator)
281W - 153W = 128W
128W × 160 servers = 20.48kW
20.48kW × $0.14/kWh × 24h × 365 days/yr = $25.1k/yr per rack
$25.1k / 160 servers = $157 power savings per server/yr
(*$0.14/kWh from the Dell power calculator)
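The per-server power OPEX figure follows from the watt delta and the Dell calculator's electricity rate:

```python
# Per-server power OPEX saving, following the arithmetic in the notes.
delta_w = 281 - 153   # W saved per server (Dell vs. HP, AC input)
servers = 160         # one fully populated 48U rack
rate = 0.14           # $/kWh, from the Dell power calculator

rack_kw = delta_w * servers / 1000             # 20.48 kW per rack
annual_rack_saving = rack_kw * rate * 24 * 365
per_server = annual_rack_saving / servers
print(round(annual_rack_saving), round(per_server))  # 25117 157
```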
*No energy efficiency story
=========================================
$3.25M TCO savings over 3 yrs @1000 servers
Assuming 1,000 servers: $5,677.31 × 1,000 = $5,677,310
$5,677,310 × 0.49 = $2,781,881.90
HP power: 1,000 servers × $157 = $157,000
$157,000 × 3 = $471,000 (savings in power)
$2,781,881.90 + $471,000 = $3,252,881.90 savings over 3 yrs @ 1,000 servers
Individual server saving is $3,252.88 per server per 3 yrs.
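The 3-year TCO figure combines the acquisition saving with three years of power savings; the $5,677.31 baseline server price is taken from the notes above:

```python
# 3-year TCO saving at 1,000 servers, per the backup notes.
baseline_price = 5677.31   # baseline price per server, from the notes
n_servers = 1000

capex_saving = baseline_price * n_servers * 0.49  # 49% lower acquisition cost
power_saving = 157 * n_servers * 3                # $157/server/yr over 3 yrs

total = capex_saving + power_saving
print(round(total, 2))               # 3252881.9 total over 3 years
print(round(total / n_servers, 2))   # 3252.88 per server
```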
M620 w/ single processor (E5-2667 CPU)* see Brian’s spreadsheet vs. Longclaw
Which is 6.25 racks (1,000 servers / 160 servers per rack).
Payback of $ savings within “x” # of weeks?
Avg./typical no. of servers deployed
-expand recycle story here
Performance per rack
Leading teraflops per rack for accelerated results
With XL730f, 144 teraflops/rack
With NVIDIA GPUs, XL750f, 273 teraflops/rack
With Xeon Phi, XL740f, 246 teraflops/rack
Up to 144 servers in 50U
8X teraflops per sq ft compared to 2U air cooled, 4x teraflops per sq ft compared to 1U air cooled
2.12x teraflops/sq ft compared to water-cooled IBM Power 775
2.36x teraflops/per sq ft compared to Cray CS300 LC
Save 1217 sq. ft. compared to air cooled 1-petaflop system
Save 195 sq. ft. vs. IBM Power 775
Save 236 sq. ft. vs. Cray CS300 LC
==========================================
Efficiency
Industry’s only efficient liquid cooling without the risk
Save up to $1M in energy over 5 years for each MW of IT in the data center compared to air cooled systems
Get up to 40% more FLOPS/watt than air cooled systems
For the same performance, HP Apollo 8000 System consumes up to 28% less energy than air cooled systems
Air cooled systems consume 2.6X more energy for facility infrastructure as compared to HP Apollo 8000
Air cooled systems use up to 1.2X more MW power per 5.6 petaflop data center
Avoid up to $7.1M of new facility CapEx with water cooling to reach 5.6 petaflops
====================================================
Sustainability
Redefining data center energy recycling
Save up to 3,800 tons of CO2 per year vs. an air cooled data center (3MW, water vs. air cooled)
Equivalent to the annual CO2 output of 790 cars
CDU density is incredible compared to current CDUs, which are typically 30”x28”x84” for 80-100 kW
“Leveraging the efficiency of HP Apollo 8000, we expect to save $800,000 in operating expenses per year,” said Steve Hammond, director, Computational Science Center, NREL. “Because we are capturing and using waste heat, we estimate we will save another $200,000 that would otherwise be used to heat the building. So, we are looking at saving $1 million per year in operations costs for a data center that cost less to build than a typical data center.”
A new compute model is required that is sustainable and environmentally friendly in its use of power and space. Current data centers are not designed to support the extensive floor space and power infrastructure necessary to run that level of processing power. Over the past 10 years, the power requirements necessary to support performance growth have increased 5.5x, and at the same time power costs per kW have spiked. A new approach to HPC is required to fit the processing power within energy and floor space envelopes, and researchers need to implement cost-effective and environmentally friendly HPC solutions in order to minimize operational expenditures, which can run to millions of dollars per year, while at the same time reducing the world’s carbon footprint.
HP Technology Services is ready to engage early as you consider the Apollo solution. We offer consulting services to help you analyze and prioritize your needs for power and cooling as well as more detailed design and datacenter implementation planning.
HP Factory Express will oversee the implementation of your new Apollo system from the HP factory floor to your datacenter floor. Our HPC Cluster specialists are ready to configure your cluster software solution and any third party integration needed.
Once your new Apollo system is in place and supporting your HPC projects, you’ll want easy access to expertise for routine hardware replacements and the ability to get high level visibility if a more complex situation arises. Call on the experts in our Datacenter Care team to help with a holistic view of the Apollo HPC environment and any additional IT you want to include in this level of service.
Script per announcement message map
See GMU/NCMS case study
Another example: http://www.hpcwire.com/2014/01/28/navigating-digital-divide/
http://blog.industrysoftware.automation.siemens.com/blog/2014/04/02/who-says-you-cant-have-it-all/