Ericsson White Paper
Uen 284 23-3264 | February 2015

Next-generation data center infrastructure

MAKING HYPERSCALE AVAILABLE
In the Networked Society, enterprises will need 10 times their current IT capacity – but
without 10 times the budget. Leading cloud providers have already changed the game by
developing their own hyperscale computing approaches, and operators and enterprises
can also adopt hyperscale infrastructure that enables a lower total cost of ownership.
This paper discusses the opportunity for a competitive economic, operational and
technical solution to deliver on the future needs of enterprises.

This paper was developed in collaboration between Ericsson and Intel.
Introduction
The current rate of technical change is exponential, not linear. As the book Exponential
Organizations by Salim Ismail explains: “Together, all indications are that we are shifting to an
information-based paradigm…when you shift to an information-based environment, the pace of
development jumps onto an exponential growth path and performance/price doubles every year
or two” [1].
This means that when a technical challenge such as mapping the human genome is 1 percent
accomplished, it is already halfway to completion in time: on a doubling curve, the climb from
1 percent to 100 percent takes about as many doublings as the long ramp that preceded it. This
also explains why the amount of change in one year is consistently overestimated, while the
amount of change in 10 years is consistently underestimated.
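As a quick worked check, under the paper's own assumption that progress doubles every year or
two, the distance from 1 percent to completion is small:

\[
\log_2\!\left(\frac{100\%}{1\%}\right) \approx 6.6 \ \text{doublings}
\]

So a project that took years to reach 1 percent needs only around seven more doubling periods
to finish – broadly the trajectory of the genome mapping effort cited above.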
In the Networked Society, everything that benefits from being connected will be connected.
As a result, in the near future the vast majority of end points on networks are going to be machines
independent of humans, rather than humans holding machines. New use cases will include data
collection and control signaling, alongside voice, media and messaging. And in terms of traffic
flow, the new use cases will be predominantly upload (data collection), not download. We do not
know exactly what these use cases will require from the underlying compute, storage and network
infrastructures, but early indications are that today’s design assumptions will be challenged, and
that the future will require a new hyperscale approach.
A different model
All businesses are becoming both software companies and “information enabled.” In the near
future, companies will be dependent on 10 times the IT capacity of today, yet they will not have
10 times the budget to deliver. Their approach must therefore change. The companies that have
already accomplished this transformation are the public cloud providers (Google, Facebook and
Amazon, among others), which have developed their own “hyperscale” computing approaches,
and changed the rules of the game in data center design, construction and management.
They have moved on from traditional IT practice around data center design, hardware
procurement, life cycle management and business operation. To deliver hyperscale computing,
they no longer buy from traditional IT vendors but have instead created their own technology
and operations in-house.
Through discipline and focus on power, cooling, servers, storage, networking, automation
and governance, these leading cloud providers have reached new levels of efficiency, performance
and agility, realizing what the industry is starting to call "web-scale IT."
This approach enables them to rapidly scale up or down, to adapt and deliver highly focused
and resilient customer-centric solutions, while dramatically minimizing resource wastage and
operational costs. The problem for the rest of the world is that the leaders in this space are not
vendors, while the traditional IT vendors have not stepped up to meet the new demand.
Most enterprises do not have access to the resources needed to develop their own hyperscale
architecture. Instead, they must rely on equipment from traditional IT vendors or on custom
management solutions with little control or vendor support. This puts them at risk of being left
behind economically.
Meanwhile, the traditional IT vendors are using a model that ensures margin maintenance for
the vendor, but at the expense of the customer that must compete against hyperscale cloud
performance and characteristics. This means that most companies have a bad choice: either
use the public hyperscale cloud providers and be constrained to their business mandates, or
carry on buying from traditional IT vendors and risk not being competitive in their market.
Disaggregated hardware architectures
As the massive growth of information technology services places increasing demand on the data
center, it is important to re-architect the underlying infrastructure, allowing companies and users
to benefit from an increasingly services-oriented world.
The core of a disaggregated architecture is a range of compute, fabric, storage and management
modules that work together to build a wide range of virtual systems. This flexibility can be utilized
differently by different solution stacks. In such a system, the following four pillars of capability
are essential (a minimal composition sketch in Python follows the list):
• Manager for multi-rack management: this includes hardware, firmware and software
application programming interfaces (APIs) that enable the management of resources and
policies across the total operation and expose a standard interface to both the hardware below
it and the orchestration layer above it.
• Pooled system: this enables the creation of a system using pooled resources that include
compute, network and storage, based on workload requirements.
• Scalable multi-rack storage: this includes Ethernet-connected storage that can be loaded
with storage algorithms to support a range of uses. Different configurations of storage hardware
and compute nodes with local storage, as well as multi-rack resources, should be made
available.
• Efficient configurable network fabric: the networking hardware, interconnect (cables,
backplane) and management that support a wide range of cost-effective network topologies.
Designs should include current "top of rack" switch designs but also extend to designs that
utilize distributed switches in the platforms, which remove levels of the switch hierarchy.
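As an illustration of the pooled-system pillar, here is a minimal sketch assuming a hypothetical
in-memory resource model; the names (Resource, PooledSystem, compose) are inventions for this
illustration, not an actual rack-scale management API:

from dataclasses import dataclass, field

# Illustrative resource model only: class and function names are hypothetical,
# not taken from any product interface.

@dataclass
class Resource:
    kind: str            # "compute", "storage" or "network"
    rack: str
    allocated: bool = False

@dataclass
class PooledSystem:
    name: str
    members: list = field(default_factory=list)

def compose(pool, name, requirements):
    """Assemble a virtual system from free pooled resources.

    `requirements` maps a resource kind to the number of units needed,
    e.g. {"compute": 2, "storage": 2, "network": 1}.
    """
    system = PooledSystem(name)
    for kind, count in requirements.items():
        free = [r for r in pool if r.kind == kind and not r.allocated]
        if len(free) < count:
            raise RuntimeError(f"not enough free {kind} resources in the pool")
        for resource in free[:count]:
            resource.allocated = True
            system.members.append(resource)
    return system

# A small pool spanning two racks.
pool = [Resource("compute", "rack-1"), Resource("compute", "rack-2"),
        Resource("storage", "rack-1"), Resource("storage", "rack-2"),
        Resource("network", "rack-1")]

db_tier = compose(pool, "db-tier", {"compute": 2, "storage": 2, "network": 1})
print([(r.kind, r.rack) for r in db_tier.members])

The point of the sketch is the workload-driven composition: the caller states requirements, and
the manager decides which physical resources – possibly in different racks – back the resulting
virtual system.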
Combining a disaggregated hardware architecture with optical interconnect removes the traditional
distance and capacity limitations of electrical connections. It enables efficient pooling of resources
while removing connectivity bottlenecks, which is critical for real-time services, for example.
As data center demand grows, high-speed optical connectivity helps organizations to scale
out by enabling software to span across large hardware “pools” that are geographically distant
and do not simply scale up with bigger and faster hardware locally.
Hyperscale performance
In an exponentially changing world, hyperscale performance is not a static state, and in the real
world, it has to start from where an organization finds itself today. One methodology to achieve
continuously improving hyperscale performance comes from adoption and continuous iteration
through the industrialization cycle. This drives organizations to have one approach to all they do,
since this is where best performance has the optimum chance of being found. Adoption of the
approach enables disruptive economics, uses “catalytic modernization” capabilities, and depends
on having an infrastructure approach that can break
the traditional refresh cycle. The ability to answer
seemingly basic questions such as: “How many
servers are in operation?”, “Where are they?”, and
“What are they running?” highlights the maturity of
any organization in its industrialization journey. The
public cloud providers have this information in real
time as a side effect of how they approach their
total business and infrastructure management.
The industrialization cycle shown in Figure 1
represents an operational improvement model
continuously driven by business case
argumentation.
STANDARDIZE
The first step to modernization is to standardize
one set of technologies. Google is a great example
of a company that has standardized on a single
hardware, software, operational and economic
strategy – all on an end-to-end basis. The key
benefit of standardization is increased efficiency
– the continuous focus on standardization enables
continuous improvement. Relevant questions that
organizations should ask here include:
• To what extent has your organization standardized along the dimensions
of facilities, hardware, software, operations and economics?
• How effectively is your organization reaping the benefits of continuous
improvement from standardization?
COMBINE
Organizations need a combine and consolidate strategy. “Consolidate” means taking everything
and bringing it onto one platform, and “combine” means leaving selected legacy systems where
they are. It is critical to recognize that some things sit perfectly fine on a legacy mainframe, and
should remain there. However, data and capabilities need to be combined so that they are
exposed through a single platform. Relevant questions that organizations should ask here include:
• As an organization, where are you selectively consolidating systems?
• Where and how are you pursuing a combination strategy?
• Are data center economics a problem? How are you achieving high occupancy rates, so that
your data center economics are not dominated by real estate costs?
Figure 1: The industrialization cycle – Standardize, Combine, Abstract, Automate, Govern –
applied across the data center and central office.
ABSTRACT
Organizations need to create proper levels of abstraction to expose cloud functionality. A key
advantage of cloud systems is that they are not a black box; they are in fact highly accessible.
Relevant questions that organizations should ask here include:
• How have you approached the abstraction of your cloud-based systems?
• At what layers in the cloud stack have you modularized and abstracted? How well is this
working for you?
• Are there functions or capabilities where complete programmability is lacking?
AUTOMATE
The process of automation delivers benefits in terms of cost savings and agility. Taking out the
human element provides opex benefits and facilitates responsiveness. However, it also introduces
the question of security and control – automation and governance really go hand-in-hand.
Relevant questions that organizations should ask here include:
• To what extent have you been able to reduce the human element in your cloud system? Where
would you like to push harder to derive additional opex benefits?
• How much of your budget is spent on maintaining what you currently have?
• What needs to happen for your organization to accelerate its automation journey?
• How would you prioritize the opportunities for automation and the level of effort required?
GOVERN
Normally, security is managed by limiting access. But as already noted, clouds are highly
accessible. Think about the public cloud – a situation where unknown people with unknown
credentials are handling unknown workloads and have complete access to hardware. This is the
very definition of compromise. Security is the biggest inhibitor to cloud adoption, and there is
no question that robust security models have to complete the industrialization cycle. Relevant
questions that organizations should ask here include:
• Security: How secure would you say your key cloud implementations are? Why? How are you
approaching cloud security today?
• Governance: What trade-offs are you making in exchange for governance (for example, speed
or agility)?
• Governance: How much time does it take for a new policy to be enacted across all your data
centers? For example, would you be able to change all passwords with a single command (see
the sketch after this list)? If not, how distant is this from your reality today?
• Compliance: Do you have processes for procedures that come up regularly, such as the
disposal of a server?
• Compliance: At any point in time, have you risked breaking the law because of jurisdiction or
data location issues?
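A fleet-wide policy change of the kind asked about above reduces, in the simplest case, to one
action applied over a complete, machine-readable inventory. The sketch below assumes a
hypothetical inventory and a stand-in set_password call; a real system would drive the
management plane rather than print:

import secrets

# Hypothetical: a real deployment would discover INVENTORY from the
# infrastructure manager rather than hard-coding it.
INVENTORY = ["dc1-node-001", "dc1-node-002", "dc2-node-001"]

def set_password(node: str, new_password: str) -> None:
    # Stand-in for a management-plane call; here we only log the action.
    print(f"{node}: credential rotated")

def rotate_all_passwords() -> None:
    """One governance action enacted across every data center."""
    for node in INVENTORY:
        set_password(node, secrets.token_urlsafe(16))

rotate_all_passwords()

The governance question is then not whether such a loop can be written, but whether the
inventory it iterates over is complete and current.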
Short-term vendor management on “lowest price wins” will not translate to business success
unless placed in the context of true economic cycle improvement and thus continuous business
improvement. If there is no continuous focus on standardizing facility, hardware, software,
operations and economic strategy, there will not be continuous improvement. And if there is no
continuous combine/consolidate strategy, then there will not be capex savings.
Similarly, if there is no focus on abstracting underlying complexity into simple APIs, then
programmability will not be possible and automation will not happen, translating to no opex
savings or agility since humans are still required. And if there is no focus on governance, then
any decision taken may or may not work, and may or may not create liability – and this will not
be known until the first problems arise.
Disruptive economics
The effects of exponential technology change are all around us. Digital infrastructure is no different
– and defines the future market opportunity – but delivering it requires adoption of best practice
hyperscale approaches. Two backend engineers can now scale a system to support over 30
million active users [2].
Web-scale IT is a pattern of global-class computing that delivers the capabilities of large cloud
service providers within an enterprise IT setting by rethinking positions across several dimensions.
By 2017, web-scale IT will be an architectural approach found operating in 50 percent of global
enterprises, up from less than 10 percent in 2013 [3].
The benefits of adopting hyperscale hardware include:
• New levels of capex and opex savings while decreasing time to market.
• New levels of utilization and operating resource efficiencies.
• On-demand infrastructure treated as a single system, with one focus on building, managing
and operating from an application delivery cost perspective – application-defined infrastructure.
• Lower total cost of ownership (TCO), resulting from full awareness of hardware infrastructure
and system workloads as well as process change.
In addition to designing hyperscale data centers, telecom operators have a particular opportunity
to utilize under-occupied central office real estate to install modern data center infrastructure as
a complement to (or replacement for) traditional IT equipment.
Done the right way, this will result in huge increases in data center capacity and performance,
as well as significant improvements in TCO and utilization per square meter of real estate. However,
such an undertaking will require careful planning of the data center environment when it comes
to, for example, power supply and cooling, which might look very different compared with central
office usage.
Designing for high energy performance, with the target of reduced energy consumption, is one of
the key requirements in the development of hyperscale infrastructure. Traditionally, energy
efficiency in data centers has focused primarily, or even exclusively, on the efficiency of the site
equipment. Hyperscale infrastructure takes energy performance one step further by focusing on
the network functionality platform itself.
There are multiple reasons for the increased attention given to high network energy performance.
Energy-related opex is a significant cost for data center operations. In addition, there is
increasing awareness of – and concrete demand for – resource-efficient and sustainable solutions
among operators.
Transparent asset management and server usage across the data center combined with
software-defined infrastructure can take server utilization to the next step, beyond what has
already been achieved with virtualization. This enables a more efficient packing of resources onto
fewer hardware units during low-traffic periods. High server utilization, in combination with the
ability to set unused servers in various types of sleep modes (or even to turn them off completely),
has the potential to reduce power consumption – and thereby energy-related opex – significantly.
This is enabled by modern server components, including the central processing unit (CPU), whose
state-of-the-art energy-saving features enable a significant reduction of energy consumption,
at small loads in particular. Furthermore, making all intra-data center transmission optical
removes the more energy-consuming electrical alternative, which not only enables high
performance but also minimizes the energy consumed by extensive intra-data center transmission.
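To make the consolidation idea concrete, here is a minimal sketch, assuming workloads can be
migrated freely and that surplus servers expose a sleep state; the capacity figure and loads are
invented for illustration:

# First-fit-decreasing packing of low-traffic workloads onto as few servers
# as possible; whatever is left over is eligible for a sleep state.
SERVER_CAPACITY = 100          # abstract utilization units per server (assumed)

def pack(workloads):
    """Return per-server load lists after first-fit-decreasing packing."""
    servers = []
    for load in sorted(workloads, reverse=True):
        for server in servers:
            if sum(server) + load <= SERVER_CAPACITY:
                server.append(load)
                break
        else:                  # no existing server had room
            servers.append([load])
    return servers

night_loads = [12, 7, 30, 5, 22, 9, 14]   # invented low-traffic demand
active = pack(night_loads)
fleet_size = 6
print(f"active servers: {len(active)}, eligible for sleep: {fleet_size - len(active)}")

Real placement must of course respect memory, affinity and redundancy constraints, but the opex
effect is the same: fewer awake servers during the quiet hours.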
Although high energy performance in hyperscale infrastructure starts with decreased energy
consumption at the node and platform level, the site aspects remain crucial and must also be
adequately addressed. Modern data center building practices for cooling and high voltage power
feeds have in many places already realized significant savings in energy and cost. Modern cooling
techniques – involving the separation of hot and cold air, free cooling, cooling close to the
source and sometimes liquid cooling – can often reduce energy use (and CO2 emissions) by nearly
50 percent for the complete data center compared with older techniques. This gives them
significant potential in modernizing older central office building practices.
For operators, this further supports industry initiatives around software-defined networking
and Network Functions Virtualization. These initiatives enable a transformation to a unified
hyperscale architecture, which brings significant benefits. Chief among these is greater agility
in operations, application creation and delivery, and network provisioning and adaptation. TCO
is greatly reduced when resources can be rapidly and dynamically shared across multiple
processes, applications and customers.
In summary, hyperscale infrastructure has a positive effect on the environment as well as on
operational cost.
Perpetual refresh
Hyperscale infrastructure enables life cycle management and TCO to move from unit-based
servers to individual components. The introduction of hardware disaggregation breaks the
three-to-five-year refresh cycle, enables the replacement of the components that benefit most
from refresh, and eliminates forced replacement of entire systems. Compute and memory have
the shortest refresh cycles (one to two years) while the chassis has the longest (10-plus years).
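Component-level life cycle management can be represented as nothing more than a per-component
schedule. In the sketch below, the compute/memory and chassis intervals follow the figures quoted
above, while the storage and network intervals – and the install dates – are invented for
illustration:

# Which disaggregated components are due for refresh, and which can stay?
REFRESH_YEARS = {"compute": 2, "memory": 2,      # quoted above: one to two years
                 "storage": 4, "network": 5,     # assumed for illustration
                 "chassis": 10}                  # quoted above: 10-plus years

def due_for_refresh(installed, current_year):
    """Return components whose refresh interval has elapsed."""
    return [component for component, year in installed.items()
            if current_year - year >= REFRESH_YEARS[component]]

installed = {"compute": 2013, "memory": 2013, "storage": 2012,
             "network": 2011, "chassis": 2008}
print(due_for_refresh(installed, 2015))   # only these parts are replaced

In a unit-based server model, all five components would be forced onto the same
three-to-five-year clock; disaggregation lets each follow its own.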
Disaggregation enables scale up or down at the component level, as well as the customization
of hardware resources for unique application requirements. The customization can be defined
per workload on an on-demand basis. Disaggregation also enables the use of the most
up-to-date components, which drive a continuously best-in-class return on investment and
capabilities, as well as a higher degree of flexibility.
For example, disaggregation makes it possible to run a data center in a multi-vendor environment
in which components can be procured from the best source. Such a scenario requires advanced
infrastructure management tools, both for managing the use of the resources and for automating
processes.
A unified infrastructure management architecture can improve utilization by enabling the
allocation of shared resources when a new application or workload is introduced, rather than
requiring additional data center hardware.
Catalytic modernization
Not all infrastructure can be replaced at the same time. However, through innovative equipment
management software, the old and new hardware can be brought under one intelligence,
governance and life cycle model.
For example, it should be possible to perform natural language search for every component
in all infrastructure. It should be possible to interrogate the basic input/output system (BIOS)
version across the total installation of, say, 10,000 machines and replace a vulnerable version
with one that is secured – in one action. This should be possible across any vendor and any
device, all managed from one single interface.
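Sketched in the simplest possible terms, such a one-action remediation might look as follows;
query_bios and the in-memory fleet are hypothetical stand-ins for a unified, vendor-neutral
management interface, and the version strings are invented:

# 10,000 machines, a subset of which run a vulnerable BIOS build (invented data).
FLEET = {f"node-{i:05d}": ("1.4.2" if i % 7 else "1.3.9") for i in range(10_000)}
VULNERABLE = {"1.3.9"}
TARGET = "1.4.2"

def query_bios(node: str) -> str:
    # Stand-in for an inventory query against the management interface.
    return FLEET[node]

def remediate(target: str = TARGET) -> int:
    """One action: find every vulnerable BIOS and stage the secured version."""
    patched = 0
    for node in FLEET:
        if query_bios(node) in VULNERABLE:
            FLEET[node] = target    # real code would schedule a firmware update
            patched += 1
    return patched

print(f"machines remediated: {remediate()}")

What makes this hard in practice is not the loop but the premise: a complete inventory and a
uniform query/update interface across every vendor and device.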
To run high performance operations at hyperscale, it is important to enable analysis of all state
information in real time. There is a need for comprehensive intelligence on performance, state
and reliability.
And to enable simple scale actions as referred to above, there is a need for simple, scalable
workflows for driving complex, massive-scale configurations that can enable deployment of new
machines in minutes, not months.
When evaluating catalytic modernization, it is important to understand the six axes of
scalability in hardware design. These represent independent "levers" that directly influence
and address a variety of workloads, as shown in Figure 2. The six axes, modeled as a simple
record in the sketch after the list, are:
1. CPU core count
2. dynamic random access memory (DRAM) size (memory available to applications)
3. storage capacity
4. storage performance (read/write performance)
5. network capacity (bandwidth available)
6. network performance (speed and latency of the network).
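A minimal record makes the "independent levers" point visible: scaling one axis for a given
workload profile leaves the other five untouched. Field names follow the list above; the example
shapes and values are invented:

from dataclasses import dataclass, replace

# The six axes as one record per node shape (field values are illustrative).
@dataclass(frozen=True)
class NodeShape:
    cpu_cores: int
    dram_gb: int
    storage_tb: int
    storage_iops: int     # storage performance
    net_gbps: int         # network capacity
    net_latency_us: int   # network performance

standard = NodeShape(cpu_cores=16, dram_gb=64, storage_tb=4,
                     storage_iops=50_000, net_gbps=10, net_latency_us=50)

# Each workload profile moves exactly one lever:
high_memory  = replace(standard, dram_gb=512)            # large memory footprint
high_disk_io = replace(standard, storage_iops=500_000)   # fast read/write
high_network = replace(standard, net_gbps=100)           # bandwidth intensive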
The differential rates of change in these six
components drive the need to consider
them separately as part of overall data
center life cycle management.
Together with the disaggregated
hardware approach, this enables
organizations to implement a framework
that systematically addresses all of the
major workloads using a single system, as
shown in Figure 3.
The benefits extend beyond the positive
capex and depreciation impacts that come
from only replacing what needs to be
replaced.

Figure 2: The six axes of scalability. CPU core count and DRAM size directly impact the number
of applications the system can support; storage capacity impacts the amount of data the system
can store; storage performance impacts the system's disk I/O capabilities; network capacity and
network performance impact inter- and intra-system messaging, communications and throughput.

Figure 3: A single system architecture addresses all major workloads: standard, high memory
(large memory footprint), high CPU (intensive CPU needs), high network I/O (bandwidth
intensive), high storage (large storage, low I/O) and high disk I/O (fast disk read/write).

The six axes of scalability – in combination with the governance, automation, introspection
and control of an
infrastructure manager – enable organizations to gain additional operational agility from large-
scale resource orchestration and automation. They also help ensure optimal service experience
by tailoring resources to meet specific workloads.
Furthermore, they improve operational cost efficiency: market-leading cloud providers
have used these methodologies to move from the industry-standard system administrator-to-server
ratio of 1:300 to a new industry benchmark of 1:25,000. This kind of efficiency
should be enabled by any new data center hardware.
Conclusion
The rate of technical change is exponential, not linear. In the future, companies will be dependent
on 10 times their current IT capacity, yet they will not have 10 times the budget to deliver.
The leading public cloud providers (Amazon, Facebook and Google, among others) have
developed their own hyperscale computing approaches and have changed the rules of the game
in data center design, construction and management. They have moved on from traditional IT
practice around data center design, hardware procurement and life cycle management, and
business operation.
The new infrastructure will enable life cycle management and TCO to move from
unit-based servers to individual components. The introduction of hardware disaggregation breaks
the three-to-five-year refresh cycle, enables the replacement of the components that benefit most
from refresh, and eliminates forced replacement of entire systems. It will also make it possible
to perform natural language search for every component in all infrastructure, and to manage any
vendor and any device from a single interface.
The approach outlined in this paper creates the opportunity for a competitive economic,
operational and technical solution to deliver on the future needs of hyperscale infrastructure.
One aspect of this is improved resource utilization, leading to lower energy consumption and a
positive environmental effect. The industrialization cycle represents a continuous
operational improvement model that, if followed, creates lasting, total improvement rather than
a short-term strategy.
The future will combine the roads to 5G and the next-generation cloud [4], and will give
organizations the platform they need to transform from a sense of “deploy and hope” to one of
trust and security.
GLOSSARY
API	application programming interface
BIOS	basic input/output system
CPU	central processing unit
DRAM	dynamic random access memory
I/O	input/output
TCO	total cost of ownership
REFERENCES
[1] Salim Ismail, 2014, Exponential Organizations: Why New Organizations Are Ten Times
Better, Faster, and Cheaper than Yours (and What to Do About It), Diversion Books.
[2] The Economic Times, May 2012, Cloud computing: How tech giants like Google, Facebook,
Amazon store the world's data, available at:
http://articles.economictimes.indiatimes.com/2012-05-27/news/31860969_1_instagram-largest-online-retailer-users
[3] Gartner, March 2014, Gartner Says By 2017 Web-Scale IT Will Be an Architectural
Approach Found Operating in 50 Percent of Global Enterprises, available at:
http://www.gartner.com/newsroom/id/2675916
[4] Ericsson, February 2015, White Paper: 5G systems – enabling industry and society
transformation, available at:
http://www.ericsson.com/news/150126-5g-systems-enabling-industry-and-society-transformation_244069647_c?categoryFilter=white_papers_1270673222_c
© 2015 Ericsson AB – All rights reserved
References

Contenu connexe

Tendances

Efficient Data Centers Are Built On New Technologies and Strategies
Efficient Data Centers Are Built On New Technologies and StrategiesEfficient Data Centers Are Built On New Technologies and Strategies
Efficient Data Centers Are Built On New Technologies and StrategiesCMI, Inc.
 
Survivors guide to the cloud whitepaper
Survivors guide to the cloud whitepaperSurvivors guide to the cloud whitepaper
Survivors guide to the cloud whitepaperOnomi
 
Why Infrastructure matters?!
Why Infrastructure matters?!Why Infrastructure matters?!
Why Infrastructure matters?!Gabi Bauer
 
Andmekeskuse hüperkonvergents
Andmekeskuse hüperkonvergentsAndmekeskuse hüperkonvergents
Andmekeskuse hüperkonvergentsPrimend
 
Modernizing the Enterprise Monolith: EQengineered Consulting Green Paper
Modernizing the Enterprise Monolith: EQengineered Consulting Green PaperModernizing the Enterprise Monolith: EQengineered Consulting Green Paper
Modernizing the Enterprise Monolith: EQengineered Consulting Green PaperMark Hewitt
 
Techaisle SMB Cloud Computing Adoption Market Research Report Details
Techaisle SMB Cloud Computing Adoption Market Research Report DetailsTechaisle SMB Cloud Computing Adoption Market Research Report Details
Techaisle SMB Cloud Computing Adoption Market Research Report DetailsTechaisle
 
Accenture hana-in-memory-pov
Accenture hana-in-memory-povAccenture hana-in-memory-pov
Accenture hana-in-memory-povK Thomas
 
Fog-Computing-Virtualizing-Industry-White-Paper
Fog-Computing-Virtualizing-Industry-White-PaperFog-Computing-Virtualizing-Industry-White-Paper
Fog-Computing-Virtualizing-Industry-White-PaperGraham Beauregard
 
Why Infrastructure Matters for Big Data & Analytics
Why Infrastructure Matters for Big Data & AnalyticsWhy Infrastructure Matters for Big Data & Analytics
Why Infrastructure Matters for Big Data & AnalyticsRick Perret
 
M&A Trends in Telco Analytics
M&A Trends in Telco AnalyticsM&A Trends in Telco Analytics
M&A Trends in Telco AnalyticsOpen Analytics
 
"How CenturyLink is Setting the standard for the Next Generation of Cloud Ser...
"How CenturyLink is Setting the standard for the Next Generation of Cloud Ser..."How CenturyLink is Setting the standard for the Next Generation of Cloud Ser...
"How CenturyLink is Setting the standard for the Next Generation of Cloud Ser...Lillian Hiscox
 
Big Data Infrastructure and Analytics Solution on FITAT2013
Big Data Infrastructure and Analytics Solution on FITAT2013Big Data Infrastructure and Analytics Solution on FITAT2013
Big Data Infrastructure and Analytics Solution on FITAT2013Erdenebayar Erdenebileg
 
Smart Analytics For The Utility Sector
Smart Analytics For The Utility SectorSmart Analytics For The Utility Sector
Smart Analytics For The Utility SectorHerman Bosker
 
Introduction: Real-Time Analytics on Data in Motion
Introduction: Real-Time Analytics on Data in MotionIntroduction: Real-Time Analytics on Data in Motion
Introduction: Real-Time Analytics on Data in MotionAvadhoot Patwardhan
 
Ibm big dataandanalytics_28433_archposter_wht_mar_2014_v4
Ibm big dataandanalytics_28433_archposter_wht_mar_2014_v4Ibm big dataandanalytics_28433_archposter_wht_mar_2014_v4
Ibm big dataandanalytics_28433_archposter_wht_mar_2014_v4Friedel Jonker
 
ViON Corporation: Surviving IT Change
ViON Corporation: Surviving IT ChangeViON Corporation: Surviving IT Change
ViON Corporation: Surviving IT ChangeGovCloud Network
 

Tendances (20)

Efficient Data Centers Are Built On New Technologies and Strategies
Efficient Data Centers Are Built On New Technologies and StrategiesEfficient Data Centers Are Built On New Technologies and Strategies
Efficient Data Centers Are Built On New Technologies and Strategies
 
Ibm sbp hw2_kapatsoulias_vasileios
Ibm sbp hw2_kapatsoulias_vasileiosIbm sbp hw2_kapatsoulias_vasileios
Ibm sbp hw2_kapatsoulias_vasileios
 
Survivors guide to the cloud whitepaper
Survivors guide to the cloud whitepaperSurvivors guide to the cloud whitepaper
Survivors guide to the cloud whitepaper
 
Survivors Guide To The Cloud
Survivors Guide To The CloudSurvivors Guide To The Cloud
Survivors Guide To The Cloud
 
Why Infrastructure matters?!
Why Infrastructure matters?!Why Infrastructure matters?!
Why Infrastructure matters?!
 
Andmekeskuse hüperkonvergents
Andmekeskuse hüperkonvergentsAndmekeskuse hüperkonvergents
Andmekeskuse hüperkonvergents
 
Modernizing the Enterprise Monolith: EQengineered Consulting Green Paper
Modernizing the Enterprise Monolith: EQengineered Consulting Green PaperModernizing the Enterprise Monolith: EQengineered Consulting Green Paper
Modernizing the Enterprise Monolith: EQengineered Consulting Green Paper
 
Techaisle SMB Cloud Computing Adoption Market Research Report Details
Techaisle SMB Cloud Computing Adoption Market Research Report DetailsTechaisle SMB Cloud Computing Adoption Market Research Report Details
Techaisle SMB Cloud Computing Adoption Market Research Report Details
 
Accenture hana-in-memory-pov
Accenture hana-in-memory-povAccenture hana-in-memory-pov
Accenture hana-in-memory-pov
 
Fog-Computing-Virtualizing-Industry-White-Paper
Fog-Computing-Virtualizing-Industry-White-PaperFog-Computing-Virtualizing-Industry-White-Paper
Fog-Computing-Virtualizing-Industry-White-Paper
 
Technology Trends 2012
Technology Trends 2012Technology Trends 2012
Technology Trends 2012
 
Why Infrastructure Matters for Big Data & Analytics
Why Infrastructure Matters for Big Data & AnalyticsWhy Infrastructure Matters for Big Data & Analytics
Why Infrastructure Matters for Big Data & Analytics
 
M&A Trends in Telco Analytics
M&A Trends in Telco AnalyticsM&A Trends in Telco Analytics
M&A Trends in Telco Analytics
 
"How CenturyLink is Setting the standard for the Next Generation of Cloud Ser...
"How CenturyLink is Setting the standard for the Next Generation of Cloud Ser..."How CenturyLink is Setting the standard for the Next Generation of Cloud Ser...
"How CenturyLink is Setting the standard for the Next Generation of Cloud Ser...
 
Big Data Infrastructure and Analytics Solution on FITAT2013
Big Data Infrastructure and Analytics Solution on FITAT2013Big Data Infrastructure and Analytics Solution on FITAT2013
Big Data Infrastructure and Analytics Solution on FITAT2013
 
Smart Analytics For The Utility Sector
Smart Analytics For The Utility SectorSmart Analytics For The Utility Sector
Smart Analytics For The Utility Sector
 
Introduction: Real-Time Analytics on Data in Motion
Introduction: Real-Time Analytics on Data in MotionIntroduction: Real-Time Analytics on Data in Motion
Introduction: Real-Time Analytics on Data in Motion
 
Ibm big dataandanalytics_28433_archposter_wht_mar_2014_v4
Ibm big dataandanalytics_28433_archposter_wht_mar_2014_v4Ibm big dataandanalytics_28433_archposter_wht_mar_2014_v4
Ibm big dataandanalytics_28433_archposter_wht_mar_2014_v4
 
IBM: Redefining Enterprise Systems
IBM: Redefining Enterprise SystemsIBM: Redefining Enterprise Systems
IBM: Redefining Enterprise Systems
 
ViON Corporation: Surviving IT Change
ViON Corporation: Surviving IT ChangeViON Corporation: Surviving IT Change
ViON Corporation: Surviving IT Change
 

Similaire à next-generation-data-centers

Ericsson hds 8000 wp 16
Ericsson hds 8000 wp 16Ericsson hds 8000 wp 16
Ericsson hds 8000 wp 16Mainstay
 
The data center impact of cloud, analytics, mobile, social and security rlw03...
The data center impact of cloud, analytics, mobile, social and security rlw03...The data center impact of cloud, analytics, mobile, social and security rlw03...
The data center impact of cloud, analytics, mobile, social and security rlw03...Diego Alberto Tamayo
 
Dynamic Hyper-Converged Future Proof Your Data Center
Dynamic Hyper-Converged Future Proof Your Data CenterDynamic Hyper-Converged Future Proof Your Data Center
Dynamic Hyper-Converged Future Proof Your Data CenterDataCore Software
 
Enabling Hybrid Cloud Today With Microsoft-technologies-v1-0
Enabling Hybrid Cloud Today With Microsoft-technologies-v1-0Enabling Hybrid Cloud Today With Microsoft-technologies-v1-0
Enabling Hybrid Cloud Today With Microsoft-technologies-v1-0David J Rosenthal
 
Supply Chain Transformation on the Cloud |Accenture
Supply Chain Transformation on the Cloud |AccentureSupply Chain Transformation on the Cloud |Accenture
Supply Chain Transformation on the Cloud |Accentureaccenture
 
Clabby Analytics White Paper: Beyond Virtualization: Building A Long Term Inf...
Clabby Analytics White Paper: Beyond Virtualization: Building A Long Term Inf...Clabby Analytics White Paper: Beyond Virtualization: Building A Long Term Inf...
Clabby Analytics White Paper: Beyond Virtualization: Building A Long Term Inf...IBM India Smarter Computing
 
Data Virtualization, a Strategic IT Investment to Build Modern Enterprise Dat...
Data Virtualization, a Strategic IT Investment to Build Modern Enterprise Dat...Data Virtualization, a Strategic IT Investment to Build Modern Enterprise Dat...
Data Virtualization, a Strategic IT Investment to Build Modern Enterprise Dat...Denodo
 
A Study on the Application of Web-Scale IT in Enterprises in IoT Era
A Study on the Application of Web-Scale IT in Enterprises in IoT EraA Study on the Application of Web-Scale IT in Enterprises in IoT Era
A Study on the Application of Web-Scale IT in Enterprises in IoT Era Hassan Keshavarz
 
Thought Leader Interview: Dr. William Turner on the Software­-Defined Future ...
Thought Leader Interview: Dr. William Turner on the Software­-Defined Future ...Thought Leader Interview: Dr. William Turner on the Software­-Defined Future ...
Thought Leader Interview: Dr. William Turner on the Software­-Defined Future ...Iver Band
 
A Special Report on Infrastructure Futures: Keeping Pace in the Era of Big Da...
A Special Report on Infrastructure Futures: Keeping Pace in the Era of Big Da...A Special Report on Infrastructure Futures: Keeping Pace in the Era of Big Da...
A Special Report on Infrastructure Futures: Keeping Pace in the Era of Big Da...IBM India Smarter Computing
 
IDC: Selecting the Optimal Path to Private Cloud
IDC: Selecting the Optimal Path to Private CloudIDC: Selecting the Optimal Path to Private Cloud
IDC: Selecting the Optimal Path to Private CloudEMC
 
CSC Journey to the Digital Enterprise
CSC Journey to the Digital EnterpriseCSC Journey to the Digital Enterprise
CSC Journey to the Digital EnterpriseKristof Breesch
 
Digital Transformation and Application Decommissioning - THE RESEARCH
Digital Transformation and Application Decommissioning - THE RESEARCHDigital Transformation and Application Decommissioning - THE RESEARCH
Digital Transformation and Application Decommissioning - THE RESEARCHTom Rieger
 
DRIVERS AND IMPEDIMENTS TO DIGITAL TRANSFORMATION - THE RESEARCH
DRIVERS AND IMPEDIMENTS TO DIGITAL TRANSFORMATION - THE RESEARCHDRIVERS AND IMPEDIMENTS TO DIGITAL TRANSFORMATION - THE RESEARCH
DRIVERS AND IMPEDIMENTS TO DIGITAL TRANSFORMATION - THE RESEARCHTom Rieger
 

Similaire à next-generation-data-centers (20)

Ericsson hds 8000 wp 16
Ericsson hds 8000 wp 16Ericsson hds 8000 wp 16
Ericsson hds 8000 wp 16
 
The data center impact of cloud, analytics, mobile, social and security rlw03...
The data center impact of cloud, analytics, mobile, social and security rlw03...The data center impact of cloud, analytics, mobile, social and security rlw03...
The data center impact of cloud, analytics, mobile, social and security rlw03...
 
FINAL VER - 2015_09
FINAL VER - 2015_09FINAL VER - 2015_09
FINAL VER - 2015_09
 
Sa*ple
Sa*pleSa*ple
Sa*ple
 
Dynamic Hyper-Converged Future Proof Your Data Center
Dynamic Hyper-Converged Future Proof Your Data CenterDynamic Hyper-Converged Future Proof Your Data Center
Dynamic Hyper-Converged Future Proof Your Data Center
 
Enabling Hybrid Cloud Today With Microsoft-technologies-v1-0
Enabling Hybrid Cloud Today With Microsoft-technologies-v1-0Enabling Hybrid Cloud Today With Microsoft-technologies-v1-0
Enabling Hybrid Cloud Today With Microsoft-technologies-v1-0
 
Supply Chain Transformation on the Cloud |Accenture
Supply Chain Transformation on the Cloud |AccentureSupply Chain Transformation on the Cloud |Accenture
Supply Chain Transformation on the Cloud |Accenture
 
Clabby Analytics White Paper: Beyond Virtualization: Building A Long Term Inf...
Clabby Analytics White Paper: Beyond Virtualization: Building A Long Term Inf...Clabby Analytics White Paper: Beyond Virtualization: Building A Long Term Inf...
Clabby Analytics White Paper: Beyond Virtualization: Building A Long Term Inf...
 
Data Virtualization, a Strategic IT Investment to Build Modern Enterprise Dat...
Data Virtualization, a Strategic IT Investment to Build Modern Enterprise Dat...Data Virtualization, a Strategic IT Investment to Build Modern Enterprise Dat...
Data Virtualization, a Strategic IT Investment to Build Modern Enterprise Dat...
 
A Study on the Application of Web-Scale IT in Enterprises in IoT Era
A Study on the Application of Web-Scale IT in Enterprises in IoT EraA Study on the Application of Web-Scale IT in Enterprises in IoT Era
A Study on the Application of Web-Scale IT in Enterprises in IoT Era
 
Thought Leader Interview: Dr. William Turner on the Software-Defined Future ...
Thought Leader Interview:  Dr. William Turner on the Software-Defined Future ...Thought Leader Interview:  Dr. William Turner on the Software-Defined Future ...
Thought Leader Interview: Dr. William Turner on the Software-Defined Future ...
 
Thought Leader Interview: Dr. William Turner on the Software­-Defined Future ...
Thought Leader Interview: Dr. William Turner on the Software­-Defined Future ...Thought Leader Interview: Dr. William Turner on the Software­-Defined Future ...
Thought Leader Interview: Dr. William Turner on the Software­-Defined Future ...
 
FS Netmagic - Whitepaper
FS Netmagic - WhitepaperFS Netmagic - Whitepaper
FS Netmagic - Whitepaper
 
A Special Report on Infrastructure Futures: Keeping Pace in the Era of Big Da...
A Special Report on Infrastructure Futures: Keeping Pace in the Era of Big Da...A Special Report on Infrastructure Futures: Keeping Pace in the Era of Big Da...
A Special Report on Infrastructure Futures: Keeping Pace in the Era of Big Da...
 
IDC: Selecting the Optimal Path to Private Cloud
IDC: Selecting the Optimal Path to Private CloudIDC: Selecting the Optimal Path to Private Cloud
IDC: Selecting the Optimal Path to Private Cloud
 
CSC Journey to the Digital Enterprise
CSC Journey to the Digital EnterpriseCSC Journey to the Digital Enterprise
CSC Journey to the Digital Enterprise
 
Openstack
OpenstackOpenstack
Openstack
 
Digital Transformation and Application Decommissioning - THE RESEARCH
Digital Transformation and Application Decommissioning - THE RESEARCHDigital Transformation and Application Decommissioning - THE RESEARCH
Digital Transformation and Application Decommissioning - THE RESEARCH
 
DRIVERS AND IMPEDIMENTS TO DIGITAL TRANSFORMATION - THE RESEARCH
DRIVERS AND IMPEDIMENTS TO DIGITAL TRANSFORMATION - THE RESEARCHDRIVERS AND IMPEDIMENTS TO DIGITAL TRANSFORMATION - THE RESEARCH
DRIVERS AND IMPEDIMENTS TO DIGITAL TRANSFORMATION - THE RESEARCH
 
WTM Technical Writing example.pdf
WTM Technical Writing example.pdfWTM Technical Writing example.pdf
WTM Technical Writing example.pdf
 

next-generation-data-centers

  • 1. MAKING HYPERSCALE AVAILABLE In the Networked Society, enterprises will need 10 times their current IT capacity – but without 10 times the budget. Leading cloud providers have already changed the game by developing their own hyperscale computing approaches, and operators and enterprises can also adopt hyperscale infrastructure that enables a lower total cost of ownership. This paper discusses the opportunity for a competitive economic, operational and technical solution to deliver on the future needs of enterprises. ericsson White paper Uen 284 23-3264 | February 2015 Next-generation data center infrastructure This paper was developed in collaboration between Ericsson and Intel.
  • 2. NEXT-GENERATION DATA CENTER INFRASTRUCTURE • INTRODUCTION 2 Introduction The current rate of technical change is exponential, not linear. As the book Exponential Organizations by Salim Ismail explains: “Together, all indications are that we are shifting to an information-based paradigm…when you shift to an information-based environment, the pace of development jumps onto an exponential growth path and performance/price doubles every year or two” [1]. This means that when a technical challenge is 1 percent accomplished, such as mapping the human genome, it is actually halfway to completion.This also explains why the amount of change in one year is always overestimated, but the amount of change in 10 years is always underestimated. In the Networked Society, everything that benefits from being connected will be connected. As a result, in the near future the vast majority of end points on networks are going to be machines independent of humans, rather than humans holding machines. New use cases will include data collection and control signaling, alongside voice, media and messaging. And in terms of traffic flow, the new use cases will be predominantly upload (data collection), not download. We do not know exactly what these use cases will require from the underlying compute, storage and network infrastructures, but early indications are that today’s design assumptions will be challenged, and that the future will require a new hyperscale approach.
  • 3. NEXT-GENERATION DATA CENTER INFRASTRUCTURE • A DIFFERENT MODEL 3 A different model All businesses are becoming both software companies and “information enabled.” In the near future, companies will be dependent on 10 times the IT capacity of today, yet they will not have 10 times the budget to deliver.Their approach must therefore change.The companies that have already accomplished this transformation are the public cloud providers (Google, Facebook and Amazon, among others), which have developed their own “hyperscale” computing approaches, and changed the rules of the game in data center design, construction and management. They have moved on from traditional IT practice around data center design, hardware procurement and life cycle management and business operation.To deliver hyperscale computing, they no longer buy from traditional IT vendors but instead have created their own technology and operations in-house. By having discipline and focus on issues of power, cooling, server, storage, network, automation and governance, these leading cloud providers have reached new levels of efficiency, performance and agility to realize what the industry is starting to call “web-scale IT.” This approach enables them to rapidly scale up or down, to adapt and deliver highly focused and resilient customer-centric solutions, while dramatically minimizing resource wastage and operational costs. The problem for the rest of the world is that the leaders in this space are not vendors, while the traditional IT vendors have not stepped up to meet the new demand. Most enterprises do not have access to the resources needed to develop their own hyperscale architecture. Instead, they must rely on equipment from traditional IT vendors or custom management solutions with little control or vendor support.This puts them at risk of getting left behind economically. Meanwhile, the traditional IT vendors are using a model that ensures margin maintenance for the vendor, but at the expense of the customer that must compete against hyperscale cloud performance and characteristics. This means that most companies have a bad choice: either use the public hyperscale cloud providers and be constrained to their business mandates, or carry on buying from traditional IT vendors and risk not being competitive in their market.
  • 4. NEXT-GENERATION DATA CENTER INFRASTRUCTURE • DISAGGREGATED HARDWARE ARCHITECTURES 4 Disaggregated hardware architectures As the massive growth of information technology services places increasing demand on the data center, it is important to re-architect the underlying infrastructure, allowing companies and users to benefit from an increasingly services-oriented world. The core of a disaggregated architecture is a range of compute, fabric, storage and management modules that work together to build a wide range of virtual systems.This flexibility can be utilized differently by different solution stacks. In such a system, the following four pillars of capability are essential: manager for multi-rack management: this includes hardware, firmware and software application programming interfaces (APIs) that enable the management of resources and policies across total operation and expose a standard interface to both hardware below it and the orchestration layer above it. pooled system: this enables the creation of a system using pooled resources that include compute, network and storage based on workload requirements. scalable multi-rack storage: this includes Ethernet connected storage that can be loaded with storage algorithms to support a range of uses. Different configurations of storage hardware and compute nodes with local storage, as well as multi-rack resources, should be made available. efficient configurable network fabric: the networking hardware, interconnect (cables, backplane) and management that support a wide range of cost-effective network topologies. Designs should include current “top of rack” switch designs but also extend to designs that utilize distributed switches in the platforms, which remove levels of the switch hierarchy. Combining a disaggregated hardware architecture with optical interconnect removes the traditional distance and capacity limitations of electrical connections. It enables efficient pooling of resources while removing connectivity bottlenecks, which is critical for real-time services, for example. As data center demand grows, high-speed optical connectivity helps organizations to scale out by enabling software to span across large hardware “pools” that are geographically distant and do not simply scale up with bigger and faster hardware locally.
  • 5. NEXT-GENERATION DATA CENTER INFRASTRUCTURE • HYPERSCALE PERFORMANCE 5 Hyperscale performance In an exponentially changing world, hyperscale performance is not a static state, and in the real world, it has to start from where an organization finds itself today. One methodology to achieve continuously improving hyperscale performance comes from adoption and continuous iteration through the industrialization cycle.This drives organizations to have one approach to all they do, since this is where best performance has the optimum chance of being found. Adoption of the approach enables disruptive economics, uses “catalytic modernization” capabilities, and depends on having an infrastructure approach that can break the traditional refresh cycle. The ability to answer seemingly basic questions such as: “How many servers are in operation?”, “Where are they?”, and “What are they running?” highlights the maturity of any organization in its industrialization journey.The public cloud providers have this information in real time as a side effect of how they approach their total business and infrastructure management. The industrialization cycle shown in Figure 1 represents an operational improvement model continuously driven by business case argumentation. STANDARDIZE The first step to modernization is to standardize one set of technologies. Google is a great example of a company that has standardized on a single hardware, software, operational and economic strategy – all on an end-to-end basis. The key benefit of standardization is increased efficiency – the continuous focus on standardization enables continuous improvement. Relevant questions that organizations should ask here include: To what extent has your organization standardized along the dimensions of facilities, hardware, software, operations and economics? How effectively is your organization reaping the benefits of continuous improvement from standardization? COMBINE Organizations need a combine and consolidate strategy. “Consolidate” means taking everything and bringing it onto one platform, and “combine” means leaving selected legacy systems where they are. It is critical to recognize that some things sit perfectly fine on a legacy mainframe, and should remain there. However, data and capabilities need to be combined so that they are exposed through a single platform. Relevant questions that organizations should ask here include: As an organization, where are you selectively consolidating systems? Where and how are you pursuing a combination strategy? Are data center economics a problem? How are you achieving high occupancy rates, so that your data center economics are not dominated by real estate costs? Govern Automate Abstract Combine Standardize Data center and central office Figure 1: The industrialization cycle.
  • 6. NEXT-GENERATION DATA CENTER INFRASTRUCTURE • HYPERSCALE PERFORMANCE 6 ABSTRACT Organizations need to create proper levels of abstraction to expose cloud functionality. A key advantage of cloud systems is that they are not a black box; they are in fact highly accessible. Relevant questions that organizations should ask here include: How have you approached the abstraction of your cloud-based systems? At what layers in the cloud stack have you modularized and abstracted? How well is this working for you? Are there functions or capabilities where complete programmability is lacking? AUTOMATE The process of automation delivers benefits in terms of cost savings and agility. Taking out the human element provides opex benefits and facilitates responsiveness. However, it also introduces the question of security and control – automation and governance really go hand-in-hand. Relevant questions that organizations should ask here include: To what extent have you been able to reduce the human element in your cloud system? Where would you like to push harder to derive additional opex benefits? How much of your budget is spent on maintaining what you currently have? What needs to happen for your organization to accelerate its automation journey? How would you prioritize the opportunities for automation and the level of effort required? GOVERN Normally, security is managed by limiting access. But as already noted, clouds are highly accessible. Think about the public cloud – a situation where unknown people with unknown credentials are handling unknown workloads and have complete access to hardware.This is the very definition of compromise. Security is the biggest inhibitor to cloud adoption, and there is no question that robust security models have to complete the industrialization cycle. Relevant questions that organizations should ask here include: Security: How secure would you say your key cloud implementations are? Why? How are you approaching cloud security today? Governance: What trade-offs are you making in exchange for governance (for example, speed or agility)? Governance: How much time does it take for a new policy to be enacted across all your data centers? For example, would you be able to change all passwords with a single command? If not, how distant is this from your reality today? Compliance: Do you have processes for procedures that come up regularly, such as the disposal of a server? Compliance: At any point in time, have you risked breaking the law because of jurisdiction or data location issues? Short-term vendor management on “lowest price wins” will not translate to business success unless placed in the context of true economic cycle improvement and thus continuous business improvement. If there is no continuous focus on standardizing facility, hardware, software, operations and economic strategy, there will not be continuous improvement. And if there is no continuous combine/consolidate strategy, then there will not be capex savings. Similarly, if there is no focus on abstracting underlying complexity into simple APIs, then programmability will not be possible and automation will not happen, translating to no opex savings or agility since humans are still required. And if there is no focus on governance, then all decisions taken may work or may not, may cause liability or may not. It will simply not be known until the first problems arise.
  • 7. NEXT-GENERATION DATA CENTER INFRASTRUCTURE • DISRUPTIVE ECONOMICS 7 Disruptive Economics The effects of exponential technology change are all around us. Digital infrastructure is no different – and defines the future market opportunity – but delivering it requires adoption of best practice hyperscale approaches. Two backend engineers can now scale a system to support over 30 million active users [2]. Web-scale IT is a pattern of global-class computing that delivers the capabilities of large cloud service providers within an enterprise IT setting by rethinking positions across several dimensions. By 2017, web-scale IT will be an architectural approach found operating in 50 percent of global enterprises, up from less than 10 percent in 2013 [3]. The benefits of adoption of hyperscale hardware include: New levels of capex and opex savings while decreasing time to market. New levels of utilization and operating resource efficiencies. On-demand infrastructure is treated as a single system with one focus on building, managing and operating from an application delivery cost perspective. It is application-defined infrastructure. Lower total cost of ownership (TCO) results from full awareness of hardware infrastructure and system workloads as well as process change. In addition to designing hyperscale data centers, telecom operators have a particular opportunity to utilize under-occupied central office real estate to install modern data center infrastructure as a complement to (or replacement for) traditional IT equipment. Done the right way, this will result in huge increases in data center capacity and performance, as well as significant improvements inTCO and utilization per square meter of real estate. However, such an undertaking will require careful planning of the data center environment when it comes to, for example, power supply and cooling, which might look very different compared with central office usage. Designing for high-energy performance targeting reduced energy consumption is one of the key requirements in the development of hyperscale infrastructure.Traditionally, energy efficiency in data centers has primarily, or even exclusively, focused on the efficiency of the site equipment. Hyperscale infrastructure takes energy performance one step further by focusing on the network functionality platform itself. There are multiple reasons for the increased attention given to high network energy performance. Energy-related opex is a significant cost for data center operations. In addition, there is also an increasing awareness of – and concrete requests for – resource efficient and sustainable solutions among operators. Transparent asset management and server usage across the data center combined with software-defined infrastructure can take server utilization to the next step, beyond what has already been achieved with virtualization. This enables a more efficient packing of resources to fewer hardware units during low traffic periods. High server utilization, in combination with the ability to set unused servers in various types of sleep modes (or even to turn them off completely), has the potential to reduce power consumption – and thereby energy-related opex – significantly. This is enabled by modern server components, including the central processing unit (CPU), with state of the art energy saving features that enable significant reduction of energy consumption, at small loads in particular. 
Furthermore, making all intra-data center transmission optical avoids the more energy-consuming alternative of electrical transmission. This not only enables high performance but also minimizes the energy consumed by extensive intra-data center traffic.

Although high energy performance in hyperscale infrastructure starts with decreased energy consumption at the node and platform level, the site aspects remain crucial and must also be adequately addressed. Modern data center building practices for cooling and high-voltage power feeds have in many places already realized significant savings in energy and cost. Modern cooling techniques – involving the separation of hot and cold air, free cooling, cooling close to the source and sometimes liquid cooling – can often reduce energy consumption (and CO2 emissions) for the complete data center by nearly 50 percent compared with older techniques (a back-of-the-envelope sketch of the savings follows at the end of this section). This gives them significant potential in the modernization of older central office buildings.

For operators, this further supports industry initiatives around software-defined networking and Network Functions Virtualization. These initiatives enable a transformation to a unified hyperscale architecture, which brings significant benefits. Chief among these is greater agility in operations, application creation and delivery, and network provisioning and adaptation. TCO is greatly reduced when resources can be rapidly and dynamically shared across multiple processes, applications and customers.

In summary, hyperscale infrastructure has a positive effect on the environment as well as on operational cost.
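As a back-of-the-envelope check on the cooling claim above, the following sketch computes facility energy and cost under assumed numbers. The IT load, cooling overheads and electricity price are illustrative assumptions only, chosen to mirror the near-50-percent figure quoted above rather than taken from any measured site.

    # Hedged sketch: every figure below is an assumption for illustration.
    it_load_kw      = 500.0  # assumed IT equipment load
    legacy_overhead = 1.2    # assumed: 1.2 kW of cooling/site load per IT kW (PUE 2.2)
    modern_overhead = 0.1    # assumed: best-practice free cooling (PUE 1.1)
    hours_per_year  = 8760
    usd_per_kwh     = 0.10   # assumed electricity price

    for label, overhead in [("legacy", legacy_overhead), ("modern", modern_overhead)]:
        total_kw = it_load_kw * (1 + overhead)
        annual_cost = total_kw * hours_per_year * usd_per_kwh
        print(f"{label}: PUE {1 + overhead:.1f}, {total_kw:.0f} kW, ~${annual_cost:,.0f}/year")

    legacy = it_load_kw * (1 + legacy_overhead)
    modern = it_load_kw * (1 + modern_overhead)
    print(f"total facility energy reduced by {100 * (legacy - modern) / legacy:.0f}%")

Under these assumed overheads, the fixed IT load draws the same power in both cases; it is the halving and better of the site overhead that cuts the facility total by roughly half.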
Perpetual refresh
Hyperscale infrastructure enables life cycle management and TCO to move from unit-based servers to individual components. The introduction of hardware disaggregation breaks the three-to-five-year refresh cycle, enables the replacement of the components that benefit most from a refresh, and eliminates the forced replacement of entire systems. Compute and memory have the shortest refresh cycles (one to two years), while the chassis has the longest (10-plus years).

Disaggregation enables scaling up or down at the component level, as well as the customization of hardware resources for unique application requirements, defined per workload on an on-demand basis. It also enables the use of the most up-to-date components, which drives continuously best-in-class return on investment and capabilities, as well as a higher degree of flexibility. For example, disaggregation makes it possible to run a data center in a multi-vendor environment in which components can be procured from the best source.

Such a scenario requires advanced infrastructure management tools, both for managing the use of resources and for automating processes. A unified infrastructure management architecture can improve utilization by enabling the allocation of shared resources when a new application or workload is introduced, rather than requiring additional data center hardware.
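The shift from unit-based to component-based life cycle management lends itself to simple tooling. Below is a minimal sketch of per-component refresh tracking; the intervals mirror the ranges mentioned above, while the inventory format, component identifiers and dates are assumptions for illustration.

    # Hedged sketch: intervals follow the text above; inventory is invented.
    from datetime import date

    # component type -> assumed refresh interval in years
    REFRESH_YEARS = {"cpu": 2, "memory": 2, "storage": 4, "nic": 5, "chassis": 10}

    inventory = [  # (component id, type, installation date) -- illustrative
        ("cpu-0017",     "cpu",     date(2012, 3, 1)),
        ("dimm-0412",    "memory",  date(2014, 6, 1)),
        ("chassis-0003", "chassis", date(2009, 1, 1)),
    ]

    def due_for_refresh(inventory, today=date(2015, 2, 1)):
        """Yield only the components whose own cycle has elapsed."""
        for comp_id, ctype, installed in inventory:
            age_years = (today - installed).days / 365.25
            if age_years >= REFRESH_YEARS[ctype]:
                yield comp_id, ctype, round(age_years, 1)

    for comp in due_for_refresh(inventory):
        print("replace:", comp)  # the CPU is due; the chassis and DIMM are not

Each component type ages against its own clock, so only the CPU is flagged here; in a unit-based model the whole server, chassis included, would have been replaced together.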
Catalytic modernization
Not all infrastructure can be replaced at the same time. However, through innovative equipment management software, old and new hardware can be brought under one intelligence, governance and life cycle model. For example, it should be possible to perform a natural language search for every component in all infrastructure. It should be possible to interrogate the BIOS version across a total installation of, say, 10,000 machines and replace a vulnerable version with a secured one – in one action. This should be possible across any vendor and any device, all managed from one single interface (a minimal sketch of such an action appears at the end of this section).

To run high-performance operations at hyperscale, it is important to be able to analyze all state information in real time. There is a need for comprehensive intelligence on performance, state and reliability. And to enable simple scale actions like those referred to above, there is a need for simple, scalable workflows that can drive complex, massive-scale configurations and enable the deployment of new machines in minutes, not months.

When evaluating catalytic modernization, it is important to understand the six axes of scalability in hardware design. These represent independent "levers" that directly influence and address a variety of workloads, as shown in Figure 2. The six axes are:
1. CPU core count
2. dynamic random access memory (DRAM) size (memory available to applications)
3. storage capacity
4. storage performance (read/write performance)
5. network capacity (bandwidth available)
6. network performance (speed and latency of the network).

The differential rates of change in these six components drive the need to consider them separately as part of overall data center life cycle management. Together with the disaggregated hardware approach, this enables organizations to implement a framework that systematically addresses all of the major workloads using a single system, as shown in Figure 3. The benefits extend beyond the positive capex and depreciation impacts that come from replacing only what needs to be replaced.

[Figure 2: The six axes of scalability. CPU core count and DRAM size directly impact the number of applications the system can support; storage capacity impacts the amount of data the system can store; storage performance impacts the system's disk I/O capabilities; network capacity and network performance impact inter- and intra-system messaging, communications and throughput.]

[Figure 3: A single system architecture addresses all major workloads: standard, high memory (large memory footprint), high CPU (intensive CPU needs), high network I/O (bandwidth intensive), high storage (large storage, low I/O) and high disk I/O (fast disk read/write).]
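To show how the six axes can be used in practice, here is a minimal sketch that describes workloads as profiles along the six axes and checks them against a disaggregated resource pool. All units, profile values and pool sizes are assumptions for illustration.

    # Hedged sketch: axis units and all values below are invented examples.
    AXES = ["cpu_cores", "dram_gb", "storage_tb",
            "storage_iops", "net_gbps", "net_latency_us"]

    # illustrative workload profiles keyed to the Figure 3 categories
    profiles = {
        "high_memory":  {"cpu_cores": 8, "dram_gb": 512, "storage_tb": 2,
                         "storage_iops": 10_000, "net_gbps": 10, "net_latency_us": 500},
        "high_disk_io": {"cpu_cores": 8, "dram_gb": 64, "storage_tb": 4,
                         "storage_iops": 200_000, "net_gbps": 10, "net_latency_us": 500},
    }

    pool = {"cpu_cores": 64, "dram_gb": 256, "storage_tb": 100,
            "storage_iops": 500_000, "net_gbps": 40, "net_latency_us": 100}

    def check(profile, pool):
        """Return the first deficient axis, or None if the profile fits.
        Latency must stay at or below the bound; every other axis must meet it."""
        for axis in AXES:
            ok = (pool[axis] <= profile[axis]) if axis == "net_latency_us" \
                 else (pool[axis] >= profile[axis])
            if not ok:
                return axis
        return None

    for name, prof in profiles.items():
        bad = check(prof, pool)
        print(name, "fits" if bad is None else f"needs more {bad}")

In a disaggregated system, a profile that does not fit would trigger scaling along only the deficient axis – here, adding DRAM – rather than procuring whole new servers.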
The six axes of scalability – in combination with the governance, automation, introspection and control of an infrastructure manager – enable organizations to gain additional operational agility from large-scale resource orchestration and automation. They also help ensure an optimal service experience by tailoring resources to specific workloads. Furthermore, they improve operational cost efficiency: market-leading cloud providers have used these methodologies to move from the industry standard of one system administrator per 300 servers to a new industry benchmark of one per 25,000. This kind of efficiency should be enabled by any new data center hardware.
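As a concrete rendering of the "one action" example earlier in this section – interrogating the BIOS version across 10,000 machines and replacing a vulnerable one – here is a minimal sketch. The unified inventory and the update step are hypothetical stand-ins for a real infrastructure manager's API; a production workflow would batch the updates, verify each flash and roll back on failure.

    # Hedged sketch: inventory, version numbers and update call are invented.
    machines = [  # illustrative unified inventory spanning two vendors
        {"id": f"node-{i:05d}",
         "vendor": "vendor-a" if i % 2 else "vendor-b",
         "bios": "1.0.3" if i % 7 else "0.9.8"}  # 0.9.8 plays the vulnerable role
        for i in range(10_000)
    ]

    VULNERABLE, SECURED = "0.9.8", "1.0.4"

    def remediate(machines, bad, good):
        """One action: query by BIOS version, then apply the secured image."""
        targets = [m for m in machines if m["bios"] == bad]
        for m in targets:  # a real manager would batch, verify and roll back
            m["bios"] = good
        return len(targets)

    updated = remediate(machines, VULNERABLE, SECURED)
    print(f"updated {updated} of {len(machines)} machines to BIOS {SECURED}")

The query-then-act shape is the point: because every device, regardless of vendor, is visible through one inventory, the remediation is a single scripted action rather than 10,000 manual ones.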
Conclusion
The rate of technical change is exponential, not linear. In the future, companies will depend on 10 times their current IT capacity, yet they will not have 10 times the budget to deliver it.

The leading public cloud providers (Amazon, Facebook and Google, among others) have developed their own hyperscale computing approaches and have changed the rules of the game in data center design, construction and management. They have moved on from traditional IT practice around data center design, hardware procurement and life cycle management, and business operation.

The new infrastructure will enable life cycle management and TCO to move from unit-based servers to individual components. The introduction of hardware disaggregation breaks the three-to-five-year refresh cycle, enables the replacement of the components that benefit most from a refresh, and eliminates the forced replacement of entire systems. It will also make it possible to perform a natural language search for every component in all infrastructure, and to manage any vendor and any device from a single interface.

The approach outlined in this paper creates the opportunity for a competitive economic, operational and technical solution that delivers on the future needs of hyperscale infrastructure. One aspect of this is improved resource utilization, leading to lower energy consumption and a positive environmental effect. The concluding industrialization cycle represents a continuous operational improvement model that, if followed, creates total improvement rather than short-term strategy.

The future will combine the roads to 5G and the next-generation cloud [4], and will give organizations the platform they need to move from a sense of "deploy and hope" to one of trust and security.
GLOSSARY
API   application programming interface
CPU   central processing unit
DRAM  dynamic random access memory
I/O   input/output
TCO   total cost of ownership
References
[1] Salim Ismail, 2014, Exponential Organizations: Why New Organizations Are Ten Times Better, Faster, and Cheaper than Yours (and What to Do About It), Diversion Books.
[2] The Economic Times, May 2012, Cloud computing: How tech giants like Google, Facebook, Amazon store the world's data, available at: http://articles.economictimes.indiatimes.com/2012-05-27/news/31860969_1_instagram-largest-online-retailer-users
[3] Gartner, March 2014, Gartner Says By 2017 Web-Scale IT Will Be an Architectural Approach Found Operating in 50 Percent of Global Enterprises, available at: http://www.gartner.com/newsroom/id/2675916
[4] Ericsson, February 2015, White Paper: 5G systems – enabling industry and society transformation, available at: http://www.ericsson.com/news/150126-5g-systems-enabling-industry-and-society-transformation_244069647_c?categoryFilter=white_papers_1270673222_c

© 2015 Ericsson AB – All rights reserved