2. A BRIEF HISTORY OF IT INFRASTRUCTURE
Where did we come from?
The Mainframe World!
• Single hardware vendor for
CPU/Memory/Storage (well, mostly). It
was already Integrated.
• Very reliable, with native redundancy
• Well understood staffing model
AND … It got the job done. Why did we
change?
• Vendor lock-in
• Proprietary and Expensive
• Costly to expand
• Did not address the business drivers that
were emerging based on the advent of:
• The Personal Computer
So, where did we go from the Mainframe?
What were its characteristics?
3. OPEN SYSTEMS!
We started buying:
• Servers with local storage …
• Networking (Ethernet) for connectivity …
• Big storage arrays …
[Diagram: Server 1, Server 2, and Server 3, each running an Application, connected via Ethernet.]
And we found out that …
• The hardware was Cheaper
• The Software was Cheaper
But we found that the staffing costs were vastly higher
4. THE COST OF PROGRESS
For the next 10 to 15 years, IT spent
enormous amounts of $$’s and Time building:
• Massive datacenters designed to house
servers/networks/storage;
• Huge IT Operations organizations designed
to integrate and support these huge
datacenters;
• Huge IT Development organizations
designed to build/acquire software to run
in these huge datacenters;
5. THE ADVENT OF VIRTUALIZATION
In 2007-08, Virtualization *finally* began to change
the landscape of those datacenters and how IT
worked.
• Consolidation of “servers” began to happen in the
form of VM proliferation;
• Server provisioning could now occur in minutes
(not weeks);
• Hardware vendor “lock-in” became a moot point
BUT
IT organizations were still in the business of
integrating all of the hardware to support this
6. LET'S TAKE A LOOK AT THAT BASTION OF
KNOWLEDGE:
WIKIPEDIA
• Converged Infrastructure operates by grouping
multiple information technology (IT) components into
a single, optimized computing package. Components
of a converged infrastructure may include servers,
data storage devices, networking equipment and
software for IT infrastructure management,
automation and orchestration.
7. (CI is..) Grouping multiple information technology
(IT) components into a single, optimized computing
package.
LET'S PEEL THIS DEFINITION APART
8. Components of a converged infrastructure may
include servers, data storage devices, networking
equipment and software …
MAYBE THE SECOND PART HOLDS THE
CLUE
9. … may include … software for IT infrastructure
management, automation and orchestration.
AHH, HERE IT IS
10. SO, WHAT ARE WE LEFT WITH?
We are left with the vendors telling us what a
Converged Infrastructure really is.
The first successfully marketed Converged
Infrastructure was the vBlock by VCE (now Converged
Platforms Division)
12. THE VBLOCK
The vBlock design is defined by VCE, built by VCE
and rolled onto the Datacenter floor by VCE. The
IT staff have nothing to do with the integration of
the parts. Provisioning software is built by VCE
and is used to configure the entire “System”.
13. THE VBLOCK
But the vBlock is a (nearly) completely closed
system.
Did we come full circle?
14. THE VBLOCK CANNOT BE THE ONLY
CONVERGED INFRASTRUCTURE!
In fact, it is not. There are others, and they fall
under the heading of:
REFERENCE ARCHITECTURES
15. CONVERGED INFRASTRUCTURE REFERENCE
ARCHITECTURE DEFINED
A reference architecture in the field of software
architecture or enterprise architecture provides a
template solution for an architecture for a particular
domain.
(again Wikipedia.org)
16. SO, WHAT DID ALL OF THIS BUY US?
One thing: A SINGLE SUPPORT MODEL.
With VCE, you call VCE for support regardless of
Cisco/EMC/VMware issues. The same goes for the
Reference Architecture vendors.
Missing Items:
• Single Management Plane
• Automation
• Orchestration
17. SO, WHAT’S THE VERDICT?
The Verdict is:
• Converged Infrastructure was and is a stepping stone
to get IT out of the “Integration” business and to start
focusing on Service Delivery.
• BUT, it falls short in:
• The Management/Automation and Orchestration
arena.
• Being Cost Effective
• Being Flexible
SO, WHERE DO WE GO FROM CI?
18. HYPERCONVERGED INFRASTRUCTURE (HCI)
HCI is an outgrowth of the older CI systems,
but where Storage is now as tightly woven
into the infrastructure fabric as
Virtualization was into the server/CPU
infrastructure.
WHAT MADE HCI PRACTICAL?
19. Really, two things:
- CPUs with vastly more cores
- Flash (SSDs)
Flash technology has allowed IT to:
- Deliver more IOPS/$$ than any other storage
technology
- Eliminate planning around “RAID groups” or
worrying about mixing workloads (i.e., random vs.
sequential).
- Deliver “set and forget” Compression and
Deduplication.
WHAT WE HAVE NOW ….
20. HYPER CONVERGED INFRASTRUCTURE
[Diagram: Node 1, Node 2, … Node N, each running a Hypervisor hosting multiple VMs, joined over 10 Gb networking into a single storage pool. Scale-out storage: add nodes as needed to match the workload.]
SO, WHAT'S THE CATCH?
Advantages of HCI:
- Your infrastructure is Managed as a whole. You
get closer to the “Single Pane of Glass”.
- The amount of Rackspace used is typically about
one-third that of Traditional or CI systems.
- Vastly less complex
- Can use commodity or “whitebox” hardware.
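The "single storage pool" idea at the heart of HCI can be sketched in a few lines. The class and attribute names below are illustrative, not any vendor's API:

```python
# Minimal sketch of the HCI storage model: each node contributes its
# local disks to one cluster-wide pool. Names are illustrative only,
# not any vendor's actual API.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    local_capacity_tb: float

@dataclass
class StoragePool:
    nodes: list = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        # Scale out: adding a node grows compute AND storage together.
        self.nodes.append(node)

    @property
    def capacity_tb(self) -> float:
        # The pool is simply the sum of every node's local disks.
        return sum(n.local_capacity_tb for n in self.nodes)

pool = StoragePool()
for i in range(3):
    pool.add_node(Node(f"node-{i + 1}", local_capacity_tb=20.0))

print(pool.capacity_tb)  # one pool spanning all nodes
```

Note that `add_node` is the only way to grow the pool, which is exactly the coupling the next slide complains about: you cannot add storage without also adding compute.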
21. HCI ISSUES
- Storage scales out – but at the cost of being forced to
add compute that I might not need.
- Adding HCI nodes is typically more expensive than
adding incremental compute/storage to traditional
systems or to CI systems.
- Despite the claim that HCI is all “Software Defined” – we
still need physical devices in order to get to the outside
world!
- Pushing the “We can do HCI on Whitebox Hardware”
agenda simply pulls IT staff back into the “Integrators”
sandbox.
22. WE SEE THE CLUE IN … THIS!
• We keep trying to make the hardware better, but our
efforts fall dismally short.
• Our Application Development still treats our Infrastructure
as if it is that old Mainframe. CI and HCI simply made
things “a little easier” to handle.
• What we truly need is what we started long ago –
applications that care little about where things are running
(remember Client/Server?)
Which Means We Must Go Back To The
Beginning
23. LET'S BACK UP A BIT …
We had the 1st Paradigm – “Big Iron”
Monolithic software running in a tightly controlled
environment
Then the 2nd Paradigm – “The Intel Era – Personal
Computing”
Monolithic software running on islands of inexpensive
PCs.
Then came the 3rd Paradigm – “Web Servers and Web
Farms”
Monolithic software running lightweight interfaces on
farms of web servers.
24. “SERVERLESS COMPUTING”
What we have been working toward is an
Infrastructure that mimics the characteristics of
our home utilities.
We now have the beginnings of the “Serverless
Computing” environment that is built around
Intentional computing running on containerized
services.
These services are managed by systems like
Docker, Kubernetes, and NGINX, where small units
of code run on demand.
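The "small units of code run on demand" idea can be shown with a toy dispatcher. This is a conceptual sketch only; the `register`/`invoke` names and the `thumbnail` function are made up for illustration, not Docker or Kubernetes code:

```python
# Toy sketch of the "serverless" idea: small functions are registered
# with a platform and invoked on demand; no long-lived server process
# owns them. Names here are illustrative, not a real FaaS API.

functions = {}

def register(name):
    """Decorator: register a small unit of code under a name."""
    def wrap(fn):
        functions[name] = fn
        return fn
    return wrap

@register("thumbnail")
def thumbnail(width, height):
    # Stand-in for real work (e.g., image resizing).
    return f"{width}x{height} thumbnail created"

def invoke(name, *args, **kwargs):
    # The platform runs the code only when intent is expressed,
    # much like flipping a light switch on a utility.
    return functions[name](*args, **kwargs)

print(invoke("thumbnail", 120, 80))
```

The caller never knows, or cares, where `thumbnail` actually executed; that locality-blindness is the utility model the slide describes.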
25. THIS IS KNOWN AS THE 4TH PARADIGM --
“CLOUD NATIVE APPLICATIONS”
• CPU, Memory, Storage are treated as UTILITIES.
• You buy/use what you need – and no more.
• Everything is “Serverless”
• The concept of spinning up a permanent “server” disappears
• Single Page Applications: HTML5, CSS, & JavaScript
• API calls to “cloud services”
• Single Git continuous development for code, content, and
infrastructure
• Intent based and demand reactive
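"Intent based and demand reactive" is essentially a declared-state reconciliation loop, loosely modeled here on how cloud-native controllers behave. The function and variable names are illustrative assumptions:

```python
# Sketch of "intent based and demand reactive": you declare desired
# state, and a reconciliation loop converges actual state toward it.
# Modeled loosely on cloud-native controller behavior; names are
# illustrative, not any platform's API.

def reconcile(desired: int, actual: int) -> int:
    """Move the actual replica count one step toward the declared intent."""
    if actual < desired:
        return actual + 1   # scale up: demand exceeds capacity
    if actual > desired:
        return actual - 1   # scale down: buy/use only what you need
    return actual           # intent satisfied

desired, actual = 3, 0
while actual != desired:
    actual = reconcile(desired, actual)
print(actual)  # converges to the declared intent: 3
```

The operator never issues "spin up a server" commands; they state the intent (3 replicas) and the platform reacts, which is the break from the permanent-server model the slide describes.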
26. IN CLOSING ……
We see now that Converged and Hyper-Converged
Infrastructure were simply support models, designed to
make life easier for Operations Teams.
But we also see that the Infrastructure
(CPU/Memory/Storage/Networks) needed to catch up in
order to support the software methods attempted so long
ago.
HCI has actually proven to be a stepping stone to this “4th
Paradigm” in that it has shown that we can now build
“Software Defined Infrastructure” that will support this new
paradigm.
Editor's notes
- Many today remember the days of the mainframe
So, what is Converged Infrastructure?
Note these important defining terms:
MULTIPLE: We are assembling items that come from various areas of the tech world
OPTIMIZED: Somehow, these various items are expected to be enhanced and enabled to be “Greater than the parts”.
MAY: Hmmmmm, the various components are – Optional???
Then the Holy Grail – Management/Automation/Orchestration
Hasn’t IT been doing this for – decades?
Haven't we been taking storage/networking/compute/software and doing our level best to optimize the operation?
But we never called this “Converged Infrastructure” – we called it “Our Job”. So, what makes CI different from our job?
Starting with the end of the definition – we see that the parts can be completely optional as noted by the term “MAY”. So does this mean we can remove any of these items and still have a CI?
No, not really.
So, what is optional?
What we find is that the software that performs these wonderful features – Management, Automation, Orchestration – is completely optional.
What are we left with? Our job. IT organizations, again, have been doing this work for ages. The vendors have simply tried to make it easier by pre-packaging the infrastructure.
What generally happens with pre-packaging? It becomes more expensive, with, in many cases, less bang for the buck.
VCE took EMC storage, Cisco switches and compute, and VMware software, and built a preconfigured rack.
The vBlock/VxBlock is built, installed, maintained, and updated by VCE.
This type of system is a closed architecture – meaning that the IT staff CANNOT make changes to the vBlock without approval by VCE. Versions of BIOS/VMware ESXi, etc., are all encased in CONCRETE. This means that the underlying system is untouchable by the IT staff. The vBlock is designed to be fully self-enclosed, with connections for outside power and networking.
Adding capacity means adding another vBlock, and again, and again.
Sounds strangely familiar …….. Unless you don’t remember these …….
All of these vendors created what are called “Validated Reference Architectures” – basically, whitepapers that listed the parts, how to plug the parts together and how to obtain support.
But, these are just “suggestions” -- there is no prescription, and in fact IT is placed into being integrators once again.
In the end – we get one “Throat to Choke” – maybe.
We did this in an effort to remove the task of IT staff being integrators – to allow IT staff to better focus on the delivery of IT services rather than the task of figuring out how to make gear from different vendors work together.
But we forgot – being an integrator is what made IT fun – in the beginning.
A software centric architecture that combines two or more components into a single unit that is easily scalable
Namely the combination of the compute and storage modules
Datasets become localized to the node that is performing the processing
Storage does truly scale out ---- BUT, requires the addition of Compute as well. If all I need is storage then I am stuck once again adding stuff I don’t need
We haven’t solved the need to have externally managed networking, regardless of the hype from software vendors that “The Network is Irrelevant”
Still relies on outside networking, UNLESS the nodes are contained in a larger chassis (i.e., original Nutanix on SuperMicro)
This becomes a tradeoff – do we sacrifice incremental increases in favor of less complexity and builtin integration?
This places IT staff back into the role of being an integrator.
Fine, if that is what you want your IT staff doing. Not fine if you are wanting to deliver business outcomes/results
Our infrastructure failed us – we demanded too much of the CPUs, the networks, and the storage. So we spent the last couple of decades making the infrastructure better, capable of handling that demand.
Water, electricity, gas – all sitting there poised to perform work, but for which we pay nothing until our INTENT is expressed in the form of flipping a switch or turning a knob.
There is no real “Server” – and no real need to even know where the computing is taking place. The computing platform can be anywhere; datasets are automatically copied to the locality of the compute, reduced, and the results returned.