This document discusses the challenges IT organizations face in managing ever-larger amounts of data on complex legacy infrastructure. It proposes that hyperconverged infrastructure can help by reducing physical storage needs by 90%, eliminating separate backup devices and software, and providing global management from a single interface. The CEO of Fantasy Analytics, which uses predictive analytics for daily fantasy sports, explains how implementing hyperconverged infrastructure helped the company overcome challenges around data growth, backup windows, and staffing. He invites readers interested in a 3x TCO improvement to try a proof of concept.
Hyperconverged Fantasy Analytics
1. Is Your IT Organization Delivering the Business Outcomes You Need?
Jerry Jermann
CEO, Fantasy Analytics
(312) 218 3994
OR: Are they putting band-aids on legacy technologies?
Reduce physical storage by 90%
Eliminate backup windows / devices
Maintain near-realtime RTO/RPOs
Seamlessly back up to cloud
No special software – just another virtual resource
Global management via a single vCenter screen
Eliminate 5 to 6 devices / licenses from the legacy stack
2. Ten Questions to Ask:
Do I need more IOPS?
Are backups taking too long?
Are my colo/DR costs too high?
Do I need a new appliance to address an issue?
Do I need more time/staff?
Is my data center too complex?
Are software licenses costing too much?
Do I have more than one tool/screen to manage my global infrastructure?
Do my users want faster response time to projects?
Am I compromising the quality of my business analytics?
If you answered YES to any of these questions, read on – otherwise you are one of the elite.
3. False Inhibitors To Change
We are an XXX shop
Our data is unique
We do not want to learn new tools
You are not on our approved vendor list
It’s Tuesday
Or just pick your excuse of the moment to close your mind
Disruptive claims:
Most (if not all) enterprise IT organizations have stopped thinking creatively and allow OEMs and/or user groups to think for them (the exceptions are Web 2.0 companies that depend on data and user experience as the core of their business model and see IT as critical to their survival).
OEMs cannot fully embrace disruptive technologies without inflicting damage (maybe terminal damage) on their core business.
End-user organizations will find and embrace the solutions they need to achieve their business objectives, in direct conflict with internal IT, and shadow IT will continue to expand.
At least five levels of complexity can be removed from a typical data center – specialty appliances are too expensive, add complexity to both operations and training, and are becoming obsolete.
4. Who Is Fantasy Analytics?
Fantasy Analytics is a self-funded organization dedicated to predicting most probable outcomes by storing and analyzing near-realtime information feeds.
Our initial motivation was to prove that Daily Fantasy Sports is truly a game of skill and that you can consistently win contests by trusting the data and removing emotion from the decision process.
This project was so successful that outcomes from the predictive engine are published daily on multiple sports sites.
The engine is also used by a leading DFS site for contest information, as well as for "What-If" analysis of new contest formats.
5. Fantasy Analytics Successes
Daily Fantasy Lineup Optimizer (see www.rotopicker.com for a sample)
Project Duration: 3 months
Client Cost: $12,000 (Computer Resources, Algorithm Design, Automated Data Feeds, Program Development, Documentation)
• Fantasy Analytics' exclusive lineup optimizer analyzes historical sports statistics and uses machine learning algorithms to provide predictive insights beyond what has ever been done before.
• We leverage cloud-based database resources to store years of historical data, which are fed into the machine learning system that determines which historical factors and matchup statistics are most important, and with what weights, to provide the most accurate predictions available.
• The lineup optimizer then combines these projections with the roster structure, player salaries, and scoring system of various popular daily fantasy sites and sifts through millions of lineup combinations, using a pruning function to increase the speed of analysis, to output the combination of players with the highest projection while remaining under the total salary cap. Our exclusive pruning function is what sets Fantasy Analytics apart from other quantitative sports agencies, allowing us to run our algorithms at large scale without sacrificing the speed and efficiency of our analysis (a simplified sketch of such a search follows below).
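To make the pruning idea concrete, here is a minimal Python sketch of a salary-cap lineup search with branch-and-bound pruning. All player names, salaries, and projections below are invented, position rules are omitted, and the production optimizer's weighting and pruning logic are proprietary; this only illustrates the kind of search being pruned.

```python
# Minimal sketch of a salary-cap lineup search with pruning.
# Player data is invented; position constraints and the proprietary
# projection model are omitted for clarity.

PLAYERS = [
    # (name, salary, projected_points)
    ("QB_A", 7800, 22.5), ("QB_B", 6400, 18.1),
    ("RB_A", 9200, 24.0), ("RB_B", 5600, 13.7), ("RB_C", 4800, 11.2),
    ("WR_A", 8800, 21.3), ("WR_B", 7100, 17.9), ("WR_C", 5200, 12.4),
]
SALARY_CAP = 28000
LINEUP_SIZE = 4  # simplified; real DFS rosters also enforce positions

def best_lineup(players, cap, size):
    # Sort by projection so the optimistic bound below is tight.
    players = sorted(players, key=lambda p: p[2], reverse=True)
    best = {"points": 0.0, "lineup": []}

    def search(i, chosen, salary, points):
        if len(chosen) == size:
            if points > best["points"]:
                best["points"], best["lineup"] = points, list(chosen)
            return
        if i == len(players):
            return
        # Prune: even filling the remaining slots with the best unused
        # projections cannot beat the incumbent lineup.
        need = size - len(chosen)
        if points + sum(p[2] for p in players[i:i + need]) <= best["points"]:
            return
        name, sal, pts = players[i]
        if salary + sal <= cap:                 # branch 1: take player i
            chosen.append(players[i])
            search(i + 1, chosen, salary + sal, points + pts)
            chosen.pop()
        search(i + 1, chosen, salary, points)   # branch 2: skip player i

    search(0, [], 0, 0.0)
    return best["points"], best["lineup"]

pts, lineup = best_lineup(PLAYERS, SALARY_CAP, LINEUP_SIZE)
print(f"{pts:.1f} projected points:", [p[0] for p in lineup])
```

The bound check is what lets a search like this discard millions of hopeless branches instead of scoring every combination.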
Point Threshold Contest Risk Analysis
Delivery of Results from Date of End-User Request: 10 Days
Client Cost: $3,200 (Computer Resources, Algorithm Design, Analytics, Generation of Risk Analysis Report, Acceptance by Liability Issuer)
• Leveraged our years of historical data to run high-volume lineup simulations for the implementation of a daily fantasy sports Point Threshold contest.
• Built for a leading Daily Fantasy site – a contest that is the first of its kind and revolutionary for the fantasy sports industry.
• Previously, daily fantasy sites needed high volume in order to offer high payouts; with our quantitative analysis, however, a leading Daily Fantasy site has the freedom to offer big-dollar payouts to a user playing against the house, with point-threshold risk mitigated by historical data (a simulation sketch follows below).
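Here is a hedged Monte Carlo sketch of the kind of threshold risk analysis described above: estimate how often a lineup clears the payout threshold, then price the house's expected exposure. The normal score model and every number below are illustrative assumptions; the production analysis samples from historical lineup data instead.

```python
# Toy Monte Carlo estimate of point-threshold payout risk.
# The normal(mu, sigma) score model and all figures are illustrative
# assumptions, not Fantasy Analytics' actual model or client numbers.
import random

def payout_probability(mu, sigma, threshold, trials=200_000, seed=7):
    """Estimate P(entrant score >= threshold) by simulation."""
    rng = random.Random(seed)
    hits = sum(rng.gauss(mu, sigma) >= threshold for _ in range(trials))
    return hits / trials

# If optimized lineups score roughly 145 +/- 20 points, how often does
# one clear a 210-point threshold the house would have to pay out on?
p = payout_probability(mu=145.0, sigma=20.0, threshold=210.0)
entry_fee = 25       # hypothetical fee per entry
payout = 10_000      # hypothetical big-dollar prize per winning entry
print(f"P(score >= threshold) ~ {p:.5f}")
print(f"expected house margin per entry ~ ${entry_fee - p * payout:.2f}")
```

Setting the threshold so that expected liability stays below the entry fee is what frees the site from needing high contest volume.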
6. What Legacy IT Challenges Did We Need to Overcome?
• Existing data center models were too expensive and prone to human error
• Data was growing at exponential rates
• The maximum value of analytical data is in its primary state – aggregation of data limits that value
• We needed tools that made provisioning of resources automatic – developers able to provision their own resources
• Competitive business advantage directly correlated to time to results – we needed instant-on resources
• Large data sets required too much bandwidth/expense to replicate and clone for test/dev
• Needed to be able to host large new data feeds easily, without a crowbar
7. Goals
• Without system admin training, I needed to be able to manage and provision resources for a project through a single screen
• A simple infrastructure not requiring niche devices (backup, replication, WAN accelerators), to minimize license costs and training and to free staff for innovation
• Replicate large data sets across small pipes, without the use of WAN accelerators, while maintaining near-realtime RTO/RPO
• Store at least 10X the virtual data in the same physical capacity while increasing query performance by reducing IOPS (a toy deduplication sketch follows this list)
• Automate backup to the cloud, a remote site, and locally, without any additional devices or software licensing
• Deliver global unified management across a federated data store
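As a concrete illustration of the 10X storage goal, here is a toy content-addressed deduplication store in Python. The 4 KB chunk size, SHA-256 addressing, and in-memory dictionaries are simplifying assumptions for the sketch, not a description of any vendor's implementation.

```python
# Toy content-addressed dedup store: identical blocks are kept once,
# so "10X the virtual data in the same physical capacity" becomes a
# matter of how much data actually repeats.
import hashlib

CHUNK = 4096  # bytes; real systems tune this and add compression

class DedupStore:
    def __init__(self):
        self.chunks = {}   # sha256 digest -> chunk bytes, stored once
        self.files = {}    # name -> list of digests (the file's "recipe")

    def write(self, name, data):
        recipe = []
        for i in range(0, len(data), CHUNK):
            block = data[i:i + CHUNK]
            digest = hashlib.sha256(block).hexdigest()
            # Dedup at inception: an already-seen block is never stored twice.
            self.chunks.setdefault(digest, block)
            recipe.append(digest)
        self.files[name] = recipe

    def read(self, name):
        return b"".join(self.chunks[d] for d in self.files[name])

store = DedupStore()
feed = b"stats-row;" * 100_000            # ~1 MB of highly repetitive data
store.write("feed_monday", feed)
store.write("feed_tuesday", feed)         # a full clone for test/dev
logical = 2 * len(feed)
physical = sum(len(b) for b in store.chunks.values())
print(f"logical {logical:,} B -> physical {physical:,} B")
assert store.read("feed_tuesday") == feed
```

Because a clone only adds recipe entries pointing at chunks that already exist, test/dev copies cost metadata rather than capacity – the same property that shrinks replication traffic.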
8. The Problem
• Disk drives are too slow
• SSD/Flash is a band-aid for the data problem
• Legacy infrastructures are too complex and expensive
• Early converged architectures (e.g., FlexPod, VPLEX, VCE) make installation/ordering easier, but fail to address the data problem, and they are too expensive from both an OpEx and a CapEx perspective
• Next-generation architectures (e.g., Nutanix) converge the storage and server layers and start to integrate enterprise features, but still do not address the data problem or reduce software licenses
• Existing architectures cannot be retrofitted to solve the data problem, and technologies like SSD and Flash add complexity without solving it
9. Evolution of Convergence
[Slide diagram: the legacy stack – SSD array, backup appliance, WAN optimization, cloud gateway, storage caching – compared across three generations of convergence]
Gen 1: Integrated – does not integrate the legacy stack; still no answer for data; still needs a lot of software. $$$
Gen 2: Convergence – does not integrate enterprise capabilities: data protection, efficiency, performance. $$$$$$
Gen 3: Hyperconvergence – delivers enterprise capabilities on x86; eliminates software; optimizes data; incorporates cloud as just another data center; single view of infrastructure. 10X less physical storage; fewer IOPS = performance; transparent integration into infrastructure. SIMPLE, EFFICIENT, LESS OPEX/CAPEX. $
10. Generation 3 Hyperconvergence Addresses the Data Problem
Transformation to Hyperconvergence: servers + VMware – NO CHANGES. 10X less data stored; 3X TCO savings. Less data = fewer IOPS. Data protection apps – $$$ saved.
• One building block – x86 and vendor agnostic
• Leverage existing enterprise skill sets
• At inception – once and forever – dedup, compression, and data optimization
• Global unified view of IT resources – single screen
• Eliminate backup windows
• Less rackspace / energy
Gone and integrated – dollars saved: storage switch, HA shared storage, SSD array, backup appliance, WAN optimization, cloud gateway, storage caching. (A toy replication sketch follows below.)
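To show why dedup "at inception" also eliminates backup windows and WAN accelerators, here is a hedged sketch of digest-based replication between two sites, mirroring the toy store from the Goals slide. Once both sides index data by chunk digest, a sync ships only chunks the destination has never seen; all names and sizes are illustrative, and this is not SimpliVity's actual protocol.

```python
# Toy digest-based replication: only chunks missing at the destination
# cross the WAN. Site layout and data are invented for illustration.
import hashlib

CHUNK = 4096

class Site:
    """Toy dedup store at one site: digest -> chunk, plus file recipes."""
    def __init__(self):
        self.chunks, self.files = {}, {}

    def write(self, name, data):
        recipe = []
        for i in range(0, len(data), CHUNK):
            block = data[i:i + CHUNK]
            digest = hashlib.sha256(block).hexdigest()
            self.chunks.setdefault(digest, block)
            recipe.append(digest)
        self.files[name] = recipe

def replicate(src, dst, name):
    """Ship the recipe, then only the chunks dst does not already hold."""
    recipe = src.files[name]
    missing = [d for d in recipe if d not in dst.chunks]
    for digest in missing:               # the only bulk traffic on the WAN
        dst.chunks[digest] = src.chunks[digest]
    dst.files[name] = list(recipe)
    return len(missing)

primary, dr_site = Site(), Site()
primary.write("analytics_db", b"row-data;" * 200_000)
print("initial sync shipped", replicate(primary, dr_site, "analytics_db"), "chunks")
primary.write("analytics_db_snap", b"row-data;" * 200_000)   # nightly copy
print("next sync shipped", replicate(primary, dr_site, "analytics_db_snap"), "chunks")
```

The second sync ships zero chunks because the destination already holds every block, which is what makes near-realtime RTO/RPO plausible over small pipes.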
11. What If All of This Were Available to You Today?
Would you invest in a Proof of Concept to see if the claims are real?
Would you be interested in a 3X improvement in TCO?
Thanks for Reading
Comments Are Welcome
My cell is 312-218-3994 if my opinions make sense
Jerry
Key Points:
Convergence 1.0 only combines existing servers + storage under a management interface and sometimes a single support SKU for faster deployment
Convergence 2.0 virtualized storage and combined it with compute on x86 scale-out resources, but did nothing "below the line," forcing a tradeoff between "web scale" and enterprise capabilities
Only SimpliVity provides convergence 3.0 with the best of both worlds: x86 cloud economics without sacrificing enterprise capabilities
Script:
Let’s take a look at the evolution of convergence.
CLICK
In 2009, some of the integrated systems and reference architectures started being built. These are the VCE and FlexPods of the world. Now, they deserve a lot of credit for really kick-starting this industry. But they did nothing to fundamentally change the data architecture. What they did was basically take the top half of the legacy stack, the servers and storage, and package it.
CLICK
At around the same time, convergence 2.0 companies like Nutanix started their development. There’s some good innovation here as they took servers and storage and created a single shared resource pool, mostly for the purpose of VDI.
There are definitely some benefits, but they didn’t take it far enough. They stopped at server and storage and didn’t innovate below the line. No deduplication, compression and optimization. No global unified management. No integrated data protection at the VM level.
The only company that provides a single shared resource pool across the entire legacy stack; the only company that provides 40:1 data efficiency on average while increasing performance; the only company that combines all IT below the hypervisor, including built-in data protection. The only company that stands in the category of Convergence 3.0
CLICK
is SimpliVity.
We often use this metaphor: if you’re baking a cake, you have to do everything by design, up-front. If you take the cake out of the oven, and then realize you forgot an egg, it’s too late. You physically cannot get that egg into the cake without starting from scratch. This applies perfectly to SimpliVity. We baked the egg in the cake, and it’s why we have deduplication, compression, and optimization for all data, inline, in real-time at inception once and forever, and others are trying to catch up.
CLICK
The convergence 1.0 vendors provide enterprise capabilities.
CLICK
The convergence 2.0 vendors provide some cloud economics.
CLICK
Only SimpliVity offers the best of both worlds.
CLICK
All three waves of convergence started their development in 2009. VCE first came to market after 8 months. They were able to come to market so quickly because they didn’t actually build anything new. They took existing product and they packaged it. They didn’t build a new architecture and they stopped at servers + storage.
CLICK
After 18 months, Nutanix came to market. And they were able to do this as they also only focused on servers + storage, for the purposes of VDI.
From experience, we knew that to start a new project with VC funding, or to start a new project within a big enterprise like EMC or IBM, you must demonstrate revenue within 18-24 months or the project doesn't get funded. Therefore, the problem you set out to solve must fit within that 18-24 month timeframe. Well, what if the problem you are trying to solve is bigger than that?
You have two options: 1. you release anyway and then try to bolt on features afterwards (aka, you try to put the egg into the cake after it is already finished baking); or 2. you follow SimpliVity's model and fund the project in other creative ways to build what needs to be built by design, from the beginning.
You bake the cake right the first time, with the egg in the recipe from the start.
CLICK
SimpliVity is able to offer true hyperconvergence because we took our time, 43 months to be exact, and we got it right.