FLASH IMPLICATIONS IN ENTERPRISE STORAGE ARRAY DESIGNS
ABSTRACT
This white paper examines some common practices in enterprise storage array design and their resulting trade-offs and limitations. The goal of the paper is to help the reader understand the opportunities XtremIO found to improve enterprise storage systems by thinking with random, rather than sequential, I/O in mind, and to illustrate why similar advancements simply cannot be achieved using a hard drive or hybrid SSD/hard drive array design.
Copyright © 2013 EMC Corporation. All Rights Reserved.
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change
without notice.
The information in this publication is provided “as is.” EMC Corporation makes no representations or warranties of any kind with
respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a
particular purpose.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
EMC2, EMC, the EMC logo, and the RSA logo are registered trademarks or trademarks of EMC Corporation in the United States
and other countries. VMware is a registered trademark of VMware, Inc. in the United States and/or other jurisdictions. All other
trademarks used herein are the property of their respective owners. © Copyright 2013 EMC Corporation. All rights reserved.
Published in the USA. 01/13 White Paper

TABLE OF CONTENTS
EXECUTIVE SUMMARY
STORAGE ARRAY DESIGN PRIMER
THE I/O BLENDER
STORAGE ENLIGHTENMENT
DEDUPLICATION
SNAPSHOTS
THIN PROVISIONING
DATA PROTECTION
THE INEVITABLE CONCLUSION
HOW TO LEARN MORE
EXECUTIVE SUMMARY
Enterprise storage arrays are highly sophisticated systems that have evolved over decades to provide levels of performance,
reliability, and functionality unavailable in the underlying hard disk drives upon which they’re based. A significant aspect of the
engineering work to make this possible is designing the array hardware and software stack to extract the most capability out of
hard drives, which are physical machines with mechanically moving parts and vastly different performance profiles for
sequentially (good performance) versus randomly (poor performance) accessed data. A good enterprise storage array attempts
to sequentialize the workload seen by the hard drives, regardless of the native data pattern at the host level.
In recent years, Solid State Drives (SSDs) based on flash media and not bound by mechanical limitations have come to market.
A single SSD is capable of random I/O that would require hundreds of hard drives to match. However, obtaining this
performance from an SSD, much less in an enterprise storage array filled with them, is challenging. Existing storage arrays
designed to sequentialize workloads are simply optimized along a dimension that is no longer relevant. Furthermore, since an
array based on flash media is capable of approximately two orders of magnitude greater performance than an array with a
comparable number of hard drives, array controller designs must be entirely rethought, or they simply become the bottleneck.
The challenge isn’t just limited to performance. Modern storage arrays offer a wide variety of features such as deduplication,
snapshots, clones, thin provisioning, and replication. These features are built on top of the underlying disk management engine,
and are based on the same rules and limitations favoring sequential I/O. Simply substituting flash for hard drives won’t break
these features, but neither does it enhance them.
XtremIO has developed a new class of enterprise data storage system based entirely on flash media. XtremIO's approach was not simply to substitute flash in an existing storage controller design or software stack, but rather to engineer an entirely new array from the ground up to unlock flash's full performance potential and deliver array-based capabilities that are counterintuitive when thought about in the context of current storage system features and limitations. This approach requires a 100% commitment to flash (or, more accurately, to a storage medium that can be accessed randomly without performance penalty), and XtremIO's products use flash SSDs exclusively.
This white paper examines some common practices in enterprise storage array design and their resulting trade-offs and limitations. The goal of the paper is to help the reader understand the opportunities XtremIO found to improve enterprise storage systems by thinking with random, rather than sequential, I/O in mind, and to illustrate why similar advancements simply cannot be achieved using a hard drive or hybrid SSD/hard drive array design.

	
  

STORAGE ARRAY DESIGN PRIMER
At their core, storage arrays perform a few basic functions:
• Aggregate several physical disk drives for combined performance
• Provide data protection in the event of a disk drive failure (RAID)
• Present several physical disk drives as a single logical volume to host computers
• Enable multiple hosts to share the array
Since physical disk drives perform substantially better when accessed sequentially, array designs hold incoming data in cache until enough data accumulates to enable a sequential write to disk.
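To make this coalescing behavior concrete, the sketch below models a hypothetical write cache (the class name, flush threshold, and block addresses are illustrative assumptions, not any array's actual implementation) that buffers random host writes and flushes them as one large sequential write once a contiguous run accumulates:

```python
# Minimal sketch of write coalescing in an array cache (hypothetical, for
# illustration only): random host writes are buffered and flushed as one
# large sequential write once enough contiguous data accumulates.

FLUSH_THRESHOLD = 8  # assumed number of contiguous blocks worth buffering before a flush

class CoalescingCache:
    def __init__(self):
        self.buffer = {}  # logical block address -> data

    def write(self, lba, data):
        self.buffer[lba] = data
        self._try_flush()

    def _try_flush(self):
        # Look for a run of consecutive addresses long enough to write sequentially.
        addresses = sorted(self.buffer)
        run = [addresses[0]] if addresses else []
        for lba in addresses[1:]:
            if lba == run[-1] + 1:
                run.append(lba)
            else:
                run = [lba]
            if len(run) >= FLUSH_THRESHOLD:
                self._flush(run)
                return

    def _flush(self, run):
        print(f"sequential write of blocks {run[0]}..{run[-1]}")
        for lba in run:
            del self.buffer[lba]

cache = CoalescingCache()
for lba in [3, 17, 4, 5, 9, 6, 7, 8, 2, 10]:
    cache.write(lba, b"x")
# After the write to block 2, blocks 2..9 form a contiguous run and are
# flushed to disk as a single sequential write.
```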
When multiple hosts share the array, the workload is inherently randomized, since each host acts independently of the others and the array sees a mix of traffic from all hosts.

I/O RANDOMIZATION
Multiple hosts sharing storage causes I/O randomization. Here, three hosts, A, B, and C, each have logical volumes on the same physical disk drives. As the array fulfills I/O requests to each host, the drive heads must seek to the portion of the drive allocated to each host, lowering performance. Maintaining sequential I/O to the drives is all but impossible in this common scenario.

Storage arrays worked well enough for many years but have become increasingly challenged due to advancements in server
design and virtualization technology that create what has come to be known in the industry as the “I/O Blender”. In short, the
I/O Blender is an effect that causes high levels of workload randomization as seen by a storage array.

	
  

THE I/O BLENDER
CPUs have historically gained power through increases in transistor count and clock speed. More recently, a shift has been made
to multi-core CPUs and multi-threading. This, combined with server virtualization technology, allows massive consolidation of
applications onto a single physical server. The result is intensive randomization of the workload as seen by the storage array.
Imagine a dual socket server with six cores per socket and two threads per core. With virtualization technology this server can
easily present shared storage with a workload that intermixes twenty-four unique data streams (2 sockets x 6 cores per socket
x 2 threads per core). Now imagine numerous servers on a SAN sharing that same storage array. The array’s workload very
quickly becomes completely random I/O coming from hundreds or thousands of intermixed sources. This is the I/O Blender.
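This blending effect is easy to reproduce. The short simulation below is purely illustrative (the host names and address ranges are made up): each host issues a strictly sequential stream, yet the interleaved stream the array receives jumps between distant regions on every request:

```python
# Illustrative sketch of the I/O Blender: each host issues a purely sequential
# stream of logical block addresses, but the array sees an interleaved mix.
import random

random.seed(0)

# Three hosts, each reading its own region of the shared array sequentially.
streams = {
    "A": iter(range(0, 1000)),        # host A: blocks 0..999
    "B": iter(range(50000, 51000)),   # host B: blocks 50000..50999
    "C": iter(range(90000, 91000)),   # host C: blocks 90000..90999
}

merged = []
while streams:
    host = random.choice(list(streams))   # hosts act independently of each other
    try:
        merged.append((host, next(streams[host])))
    except StopIteration:
        del streams[host]

# Each host's own requests are still in ascending order, but consecutive
# requests at the array jump between distant regions of the media.
print(merged[:10])
```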

THE I/O BLENDER
Multi-core CPUs, multi-threading, and virtualization all randomize the workload seen by the storage array. Three physical hosts no longer present three unique workloads, instead presenting dozens.

Storage array designs, which depend on the ability to sequentialize I/O to disk, very quickly face performance challenges in
these increasingly common server environments. Common techniques to address this challenge are:
• Increase array cache sizes: It is not uncommon for enterprise arrays to support 1TB or more of cache. With more cache capacity there is a better chance to aggregate disparate I/O, since more data can be held in cache, opportunistically waiting for enough random data to accumulate to allow sequential writes. The downside to this approach is that cache memory is very expensive and read performance is not addressed (random reads result in cache misses and random disk I/O).
• Spread the workload across more spindles: If the workload becomes more randomized, then more disk IOPS are needed to service it. Adding more spindles to the array yields more potential IOPS. The downsides to this approach are numerous: cost, space, power consumption, inefficient capacity utilization (having much more capacity than the applications require, simply because so many spindles are needed for performance), and the need to plan application deployments based on how their I/O requirements will map to the array.
• Add a flash cache or flash tier: This approach adds a higher-performing media tier inside the array. There are numerous benefits to this approach; however, array controllers designed for hard drives are internally optimized, in both hardware and software, around hard drive behavior. Adding a flash cache or tier simply allows the performance potential of the array controller to be reached with fewer drives, but it doesn't improve the overall capabilities of the array.

Of note, when flash is added as a cache, it will experience write cycles more frequently (data is moved in and out of the cache as it becomes "hot" or "cold"), necessitating expensive SLC flash that provides longer endurance.
The efficacy of caching and tiering can vary based on application requirements. Even if only 10% of I/Os go to spinning disk, the application will often become limited by the response time of those transactions due to serialization. Maintaining consistently high and predictable performance is not always possible in caching/tiering designs.
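The arithmetic behind this is straightforward. Assuming, purely for illustration, a 0.2ms flash service time and a 10ms random disk access, even a 90% hit rate leaves average latency dominated by the slow 10%:

```python
# Illustrative latency math for a 90% flash hit rate (the 0.2ms and 10ms
# figures are assumptions for the example, not measurements).
hit_rate = 0.90
flash_ms = 0.2    # assumed flash/cache service time
disk_ms = 10.0    # assumed random disk access time

avg_ms = hit_rate * flash_ms + (1 - hit_rate) * disk_ms
print(f"average latency: {avg_ms:.2f} ms")   # 1.18 ms -- roughly 6x slower than flash alone
```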

	
  

Attempting to resolve the I/O Blender problem with these methods helps, but what happens next year when CPU core counts double again? And again the year after that? Eventually the array's RAM cache and flash cache tiers need to become so large that effectively all data (except purely archival information) must reside on them. With array controller designs sized and optimized for the performance of hard drives, the full potential of flash will not be realized.
This fact alone is enough to warrant a ground-up new storage system design. But so far we have only discussed the drivers around performance. Enterprise storage systems also have many advanced features for data management. Yet just as performance can be enhanced by a ground-up flash approach, so can advanced array features.

STORAGE ENLIGHTENMENT
In this section we'll examine four common array-based data management features: deduplication, snapshots, thin provisioning, and data protection. For each feature we'll examine how it works, how hard disk drives impact the performance of the feature, and why a simple substitution of flash media for hard drives doesn't unlock the full potential of the feature. While we've chosen four examples here for illustrative purposes, the implications are common across nearly all array-based features, including replication, cloning, and more.

DEDUPLICATION
Deduplication has predominantly been deployed as a backup technology for two reasons. First, it is clear that multiple backup jobs run over a period of days, weeks, and months will deduplicate very well. But more importantly for this discussion, deduplication has never been suitable in primary storage applications because it has a negative impact on performance: always for reading data, and in most implementations, for writing it as well.
Deduplication affects access performance on primary data because it leads to logical fragmentation of the data volume. Whenever duplicate data is eliminated and replaced with a pointer to a unique data block, the data stream for the duplicate blocks is no longer sequentially laid out on disk.
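As a rough illustration of the mechanism (a generic content-addressed scheme, not XtremIO's or any specific array's implementation), deduplication stores each unique block once and keeps only references in the logical volume, so a logically sequential volume ends up pointing at physically scattered blocks:

```python
# Generic sketch of inline block deduplication: identical blocks are stored
# once and the volume keeps only pointers (here, the block's hash).
import hashlib

block_store = {}   # fingerprint -> physical block contents
volume = []        # logical order of the volume, as a list of fingerprints

def write_block(data: bytes) -> None:
    fp = hashlib.sha256(data).hexdigest()
    if fp not in block_store:
        block_store[fp] = data      # unique block: store it
    volume.append(fp)               # duplicate block: just reference it

for chunk in [b"alpha", b"beta", b"alpha", b"alpha", b"gamma", b"beta"]:
    write_block(chunk)

print(len(volume), "logical blocks,", len(block_store), "physical blocks stored")
# A sequential read of the volume now dereferences pointers into the shared
# block store rather than reading one contiguous region of the media.
```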


FLASH MANAGEMENT
All types of flash media support a limited number of write cycles. And while SSD vendors are adept at managing endurance within the drive itself, much more can be done at the array level to extend the useful life of the drives. A flash-based array must be designed to:
• Minimize the number of writes to flash by performing as much real-time data processing as possible and avoiding post-processing operations
• Spread writes across all SSDs in the array to ensure even wear
• Avoid movement of data once it is written
These guiding principles are another reason why a clean-sheet design is warranted with flash. The processing operations and data placement algorithms of disk-based storage systems did not need to consider flash endurance.
DEDUPLICATION PERFORMANCE IMPACT
Once data is deduplicated, it is no longer possible to perform sequential reads as any deduplicated block requires a disk
seek to the location of the stored block. In this example, what would have required a single disk seek followed by a
sequential data read now requires 21 disk seeks to accomplish. With a disk seek taking roughly 15ms, a single read
has just increased from 15ms to 315ms, far outside acceptable performance limits.
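The caption's numbers generalize: with roughly 15ms per seek, read latency grows linearly with the number of fragments a deduplicated read must touch. A quick check of the figures above:

```python
# Reproducing the caption's arithmetic: read latency grows linearly with the
# number of seeks a fragmented (deduplicated) read requires.
seek_ms = 15          # approximate hard drive seek time used in the caption
for seeks in (1, 21):
    print(f"{seeks:2d} seek(s): ~{seeks * seek_ms} ms")   # 15 ms vs. 315 ms
```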

	
  

But how does deduplication affect write performance? Most deduplication implementations are post-process operations.
Incoming data is first written to disk without being deduplicated. Then, the array (either on a schedule or by administrator
command) processes the data to deduplicate it. This processing necessitates reading the data and writing it back to disk in
deduplicated form. Every I/O operation consumed during the deduplication processing is unavailable for servicing host I/O.
Furthermore, the complex processing involved in deduplication is computationally intensive and slows the performance of the
array controller. This has the additional drawback of requiring a large amount of capacity to “land” the data while waiting for the
deduplication process to reduce it.
When flash is substituted for hard drives, performance improves because the random I/O that deduplication creates is more favorably handled by SSDs, and the extra operations of the deduplication post-processing can be completed more quickly. But the post-processing operation itself requires multiple writes to the media: the initial landing of the data, and then the subsequent write of the deduplicated data. This consumes IOPS that would otherwise be available for host I/O, and more importantly the extra writes negatively impact flash endurance (see the FLASH MANAGEMENT sidebar above), which is an undesirable outcome. Other problems, such as the computational burden deduplication places on the controller and the inability of many deduplication implementations to operate globally across all array volumes or to scale across the array's full capacity, are also not addressed by merely swapping from disk drives to flash.

	
  

SNAPSHOTS
Snapshots are point-in-time images of a storage volume allowing the volume to be rolled back or replicated within the array.
While some snapshot implementations literally make full copies of the source volume (split mirror snapshots), we will focus our
discussion on space-efficient snapshots, which store only the changes made to the source volume. There are two types of space-efficient snapshots: copy-on-write and redirect-on-write.
In a copy-on-write snapshot, changes to the source volume invoke a copy operation in the array. The original block is first
copied to a designated snapshot location, and then the new block is written in its place. Copy-on-write snapshots keep the
source volume layout intact, but “old” blocks in the snapshot volume are heavily fragmented.

COPY-ON-WRITE SNAPSHOT
A change to an existing data block (1) requires first that the existing data block be copied to the snapshot volume (2). The new data block then takes its place (3).

This results in two penalties. First, there are two array writes for every host write: the copy operation and the new write. This severely impacts write performance. Second, reads from the snapshots become highly fragmented, causing disk I/O both from the source volume and from random locations in the snapshot reserve volume. This makes the snapshots unsuitable for any performance-oriented application (for example, as a test/development copy of a database volume).
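The write amplification is easy to see in a simplified model (illustrative Python, not any product's snapshot engine): the first overwrite of each block after a snapshot is taken costs two media writes:

```python
# Simplified model of a copy-on-write snapshot: the first overwrite of each
# block costs two media writes (copy the old block, then write the new one).
source = {0: "A0", 1: "B0", 2: "C0"}    # live volume: block -> data
snapshot = {}                            # preserved "old" blocks
media_writes = 0

def cow_write(block, data):
    global media_writes
    if block not in snapshot:            # first change since the snapshot was taken
        snapshot[block] = source[block]  # write 1: copy the original block aside
        media_writes += 1
    source[block] = data                 # write 2: the host's new data
    media_writes += 1

cow_write(1, "B1")
cow_write(1, "B2")    # block already preserved: only one media write this time
print(media_writes, "media writes for 2 host writes")   # 3
```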
The other type of space-efficient snapshot is redirect-on-write. In a redirect-on-write snapshot, once the snapshot is taken, new
writes are redirected to a snapshot reserve volume rather than copying the existing volume’s data. This avoids the extra write
penalty of copy-on-write designs.

REDIRECT-ON-WRITE SNAPSHOT
In a redirect-on-write snapshot, new writes (1) are redirected to the snapshot reserve volume. This avoids the extra write penalty of copy-on-write snapshots, but results in fragmentation of the active volume and poor read performance.

However, reads from the source volume must now occur from two locations: the original source volume and the snapshot reserve volume. This results in poor read performance as changes to the source volume accumulate and as more snapshots are taken, since the data is no longer stored contiguously and numerous disk seeks ensue.
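A comparable sketch for redirect-on-write (again a simplified, assumed model) shows the other side of the trade-off: each write lands once in the reserve area, but a read of the current volume may have to consult either location:

```python
# Simplified model of a redirect-on-write snapshot: new writes go to the
# reserve area, so current reads must check the reserve first, then the
# original volume -- two possible locations per block.
original = {0: "A0", 1: "B0", 2: "C0"}  # volume contents when the snapshot was taken
reserve = {}                             # blocks written after the snapshot

def row_write(block, data):
    reserve[block] = data                # single media write, no copy needed

def row_read(block):
    # Current data may live in either location; fragmentation grows as
    # more blocks migrate into the reserve area.
    return reserve.get(block, original[block])

row_write(1, "B1")
print([row_read(b) for b in (0, 1, 2)])   # ['A0', 'B1', 'C0']
```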
As with deduplication, a simple substitution of flash in place of hard drives will yield some benefits, but the full potential
of flash cannot be achieved. In a copy-on-write design, there are still two writes for every new write (bad for flash endurance).
A redirect-on-write design can take better advantage of flash, but challenges come in other areas. For example, mixing
deduplication with snapshots is extremely complex from a metadata management perspective. Blocks of data may exist on disk,
or only as pointers. They may be in a source volume or in a snapshot. Figuring out which blocks need to be kept in the face of
changing data, deleting data, and creating, deleting, and cloning snapshots is non-trivial. In deduplicating storage arrays, the
performance impact when snapshots are utilized can be acute.
A final area of concern in snapshot technology is cloning. A clone is a copy of a volume created using a snapshot, but one that may subsequently be altered and can diverge from the source volume. Clones are particularly challenging to implement with hard-drive-based storage because changes to the source volume and the clone must both be tracked, and they lead to fragmentation of both volumes.

CLONES
When clones (writeable snapshots) are created, the clone may diverge from the source volume. This causes inherent fragmentation as both the source volume and clone now contain their own unique change sets, neither of which is complete without the source volume's snapshot. Reading from the source volume or the clone now requires seeking to various locations on disk.

Cloning is an especially complex feature that often does not live up to performance expectations (and thus intended use cases) in disk-based arrays. As previously noted, merely substituting flash media doesn't change the inherent way in which snapshots or clones are implemented (e.g. copy-on-write or redirect-on-write) or the underlying metadata structures, and thus does not solve the underlying problems.

THIN PROVISIONING
Thin provisioning allows volumes to be created at any arbitrary size without having to pre-allocate storage space to the volume. The array dynamically allocates free space to the volume as data is written, improving storage utilization rates. Storage arrays allocate free space in fairly large chunks, typically 1MB or more, in order to preserve the ability to write sequentially into the chunks. However, most operating system and application I/O does not align perfectly to the storage system's allocation boundaries. The result is thin provisioning "creep", where the storage system has allocated more space to the volume than the OS or application thinks it has used. Some more advanced thin provisioning arrays have post-processing operations that periodically re-pack data to reclaim the dead space, but this results in performance issues, both from the repacking operation itself and from the resultant volume fragmentation and random I/O.
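The creep effect follows directly from the allocation granularity. In the worked example below, the 1MB chunk size and 4KB write size are assumptions chosen only to illustrate the gap between what the host writes and what the array allocates:

```python
# Illustration of thin provisioning "creep": the array allocates in large
# chunks, so scattered small writes consume far more space than the host uses.
CHUNK = 1 * 1024 * 1024     # assumed allocation unit: 1 MB
WRITE = 4 * 1024            # assumed host write size: 4 KB

allocated_chunks = set()
host_bytes = 0

# 1,000 small writes scattered across the logical address space.
for i in range(1000):
    offset = i * 10 * CHUNK               # widely spaced logical offsets
    allocated_chunks.add(offset // CHUNK)  # each write touches a new chunk
    host_bytes += WRITE

array_bytes = len(allocated_chunks) * CHUNK
print(f"host wrote {host_bytes // 1024} KB, array allocated {array_bytes // (1024 * 1024)} MB")
# host wrote 4000 KB, array allocated 1000 MB -- the gap is the "creep"
```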
Substituting flash for hard drives provides no benefit because the underlying allocation size in the storage array hasn’t changed.
Allocation sizes are not simple parameters that can be tuned. They are deeply woven into the storage system design, data
structures, and layouts on disk.

THIN PROVISIONING
Problems resulting from large allocation sizes: wasted space (creep) and volume fragmentation leading to additional disk seeks during read operations.
DATA PROTECTION
The RAID algorithms developed for disk-based storage arrays protect data in the event of one or more disk drive failures. Each
algorithm has a trade-off between its protection level, its performance and its capacity overhead. Consider the following
comparison of typical RAID levels used in current storage systems:
RAID Algorithm            Relative Read Performance   Relative Write Performance   Relative Capacity Overhead   Data Protection
RAID 1 – Mirroring        Excellent                   Good                         Poor                         Single disk failure
RAID 5 – single parity    Good                        OK                           Good                         Single disk failure
RAID 6 – double parity    Good                        Poor                         OK                           Dual disk failure

Regardless of which RAID level is chosen, there is no way to achieve the best performance, the lowest capacity overhead, and
the best data protection simultaneously. Much of the reason for this is rooted in the way these RAID algorithms place data on
disk and the need for data to be laid out sequentially.
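The capacity side of these trade-offs is simple arithmetic. The sketch below computes usable capacity for an assumed 10-drive group under each scheme (a generic calculation, not tied to any particular array):

```python
# Usable capacity fraction for common RAID levels in an assumed 10-drive group.
drives = 10
schemes = {
    "RAID 1 (mirroring)":     0.5,                      # half the drives hold copies
    "RAID 5 (single parity)": (drives - 1) / drives,    # one drive's worth of parity
    "RAID 6 (double parity)": (drives - 2) / drives,    # two drives' worth of parity
}
for name, usable in schemes.items():
    print(f"{name}: {usable:.0%} usable, {1 - usable:.0%} overhead")
```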
Using standard RAID algorithms with flash media is possible, but it doesn’t change these trade-offs. There are clear gains to be
realized by rethinking data protection algorithms to leverage random-access media that would be all but impossible to replicate
using hard drives. In fact, a storage system designed for flash can provide nearly optimal read performance, write performance,
capacity overhead, and data protection simultaneously.
Another important consideration in the data protection design is how to handle flash “hiccups”, or periods where the flash drive
does not respond as expected due to internal operations. RAID schemes created for hard drives sometimes have “on-the-fly”
drive rebuilding and/or journaling to handle hard drive “hiccups”. However, these hiccup management techniques are
implemented within the constraints of the core RAID algorithm and would not function properly with flash media.

THE INEVITABLE CONCLUSION
What we have shown is that while utilizing flash in existing storage systems will lead to some performance gains, the true potential of flash media cannot be realized without a ground-up array design specifically engineered for its unique random I/O capabilities. The implications for an all-flash storage array include:
• It must be a scale-out design. Flash simply delivers performance levels beyond the capabilities of any scale-up (i.e. dual-controller) architecture.
• The system must automatically balance itself. If balancing does not take place, then adding capacity will not increase performance.
• Attempting to sequentialize workloads no longer makes sense, since flash media easily handles random I/O. Rather, the unique ability of flash to perform random I/O should be exploited to provide new capabilities.
• Cache-centric architectures should be rethought, since today's data patterns are increasingly random and inherently do not cache well.
• Any storage feature that performs multiple writes to function must be completely rethought, for two reasons. First, the extra writes steal available I/O operations from serving hosts. And second, with flash's finite write cycles, extra writes must be avoided to extend the usable life of the array.
• Array features that have been implemented over time are typically functionally separate. For example, in most arrays snapshots have existed for a long time and deduplication (if available) is fairly new. The two features do not overlap or leverage each other. But there are significant benefits to be realized through having a unified metadata model for deduplication, snapshots, thin provisioning, replication, and other advanced array features.

XtremIO was founded because of a realization not just that storage arrays must be redesigned to handle the performance
potential flash offers, but that practically every aspect of storage array functionality can be improved by fully exploiting flash’s
random-access nature. XtremIO’s products are thus about much more than higher IOPS and lower latency.
They are about delivering business value through new levels of array functionality.

HOW TO LEARN MORE
For a detailed presentation explaining XtremIO’s storage array capabilities and how it substantially improves performance,
operational efficiency, ease-of-use, and total cost of ownership, please contact XtremIO at info@xtremio.com. We will schedule a
private briefing in person or via web meeting. XtremIO has benefits in many environments, but is particularly effective for virtual
server, virtual desktop, and database applications.

	
  
	
  

CONTACT US
To learn more about how EMC products, services, and solutions can help solve your business and IT challenges, contact your local representative or authorized reseller, or visit us at www.EMC.com.

	
  

EMC2, EMC, the EMC logo, XtremIO and the XtremIO logo are registered trademarks or
trademarks of EMC Corporation in the United States and other countries. VMware is a registered
trademark of VMware, Inc., in the United States and other jurisdictions. © Copyright 2013
EMC Corporation. All rights reserved. Published in the USA. 10/13 EMC White Paper H12459
EMC believes the information in this document is accurate as of its publication date.
The information is subject to change without notice.

"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii SoldatenkoFwdays
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024Lonnie McRorey
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostZilliz
 

Dernier (20)

"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DayH2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine Tuning
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
 
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdfHyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clash
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
 

reliability, and functionality unavailable in the underlying hard disk drives upon which they're based. A significant aspect of the engineering work to make this possible is designing the array hardware and software stack to extract the most capability out of hard drives, which are physical machines with mechanically moving parts and vastly different performance profiles for sequentially (good performance) versus randomly (poor performance) accessed data. A good enterprise storage array attempts to sequentialize the workload seen by the hard drives, regardless of the native data pattern at the host level.

In recent years, Solid State Drives (SSDs) based on flash media, and not bound by mechanical limitations, have come to market. A single SSD is capable of random I/O that would require hundreds of hard drives to match. However, obtaining this performance from an SSD, much less from an enterprise storage array filled with them, is challenging. Existing storage arrays designed to sequentialize workloads are simply optimized along a dimension that is no longer relevant. Furthermore, since an array based on flash media is capable of roughly two orders of magnitude greater performance than an array with a comparable number of hard drives, array controller designs must be entirely rethought or they simply become the bottleneck.

The challenge isn't limited to performance. Modern storage arrays offer a wide variety of features such as deduplication, snapshots, clones, thin provisioning, and replication. These features are built on top of the underlying disk management engine and are based on the same rules and limitations favoring sequential I/O. Simply substituting flash for hard drives won't break these features, but neither does it enhance them.

XtremIO has developed a new class of enterprise data storage system based entirely on flash media. XtremIO's approach was not simply to substitute flash into an existing storage controller design or software stack, but rather to engineer an entirely new array from the ground up to unlock flash's full performance potential and deliver array-based capabilities that are counterintuitive in the context of current storage system features and limitations. This approach requires a 100% commitment to flash (or, more accurately, to a storage medium that can be accessed randomly without performance penalty), and XtremIO's products use flash SSDs exclusively.

This white paper examines some common practices in enterprise storage array design and their resulting trade-offs and limitations. The goal of the paper is to help the reader understand the opportunities XtremIO found to improve enterprise storage systems by thinking with random, rather than sequential, I/O in mind, and to illustrate why similar advancements simply cannot be achieved using a hard drive or hybrid SSD/hard drive array design.
STORAGE ARRAY DESIGN PRIMER
At their core, storage arrays perform a few basic functions:
• Aggregate several physical disk drives for combined performance
• Provide data protection in the event of a disk drive failure (RAID)
• Present several physical disk drives as a single logical volume to host computers
• Enable multiple hosts to share the array

Since physical disk drives perform substantially better when accessed sequentially, array designs hold incoming data in cache until enough data accumulates to enable a sequential write to disk. When multiple hosts share the array, the workload is inherently randomized, since each host acts independently of the others and the array sees a mix of traffic from all hosts.

FIGURE: I/O RANDOMIZATION
Multiple hosts sharing storage causes I/O randomization. Here, three hosts (A, B, and C) each have logical volumes on the same physical disk drives. As the array fulfills I/O requests for each host, the drive heads must seek to the portion of the drive allocated to that host, lowering performance. Maintaining sequential I/O to the drives is all but impossible in this common scenario.

Storage arrays worked well enough for many years, but have become increasingly challenged by advancements in server design and virtualization technology that create what has come to be known in the industry as the "I/O Blender". In short, the I/O Blender is an effect that causes high levels of workload randomization as seen by a storage array.
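To make the randomization effect concrete, here is a minimal Python sketch of the scenario in the figure. The layout and numbers are purely illustrative assumptions (three hosts, each owning a 100-block region of one shared drive, with the array servicing them round-robin); it is not a model of any particular array.

```python
# Minimal sketch of I/O interleaving on a shared hard drive.
# Regions and stream lengths are illustrative assumptions only.
from itertools import zip_longest

# Each host writes sequential blocks inside its own region of the drive.
regions = {"A": range(0, 100), "B": range(1000, 1100), "C": range(2000, 2100)}

def seeks(lba_stream):
    """Count head repositionings: any jump that is not the next LBA."""
    count, prev = 0, None
    for lba in lba_stream:
        if prev is not None and lba != prev + 1:
            count += 1
        prev = lba
    return count

# One host alone: purely sequential, essentially zero seeks.
print("Host A alone :", seeks(regions["A"]), "seeks")

# Three hosts interleaved round-robin by the array: almost every I/O seeks.
blended = [lba for group in zip_longest(*regions.values())
           for lba in group if lba is not None]
print("A+B+C blended:", seeks(blended), "seeks for", len(blended), "I/Os")
```

Each host's stream is perfectly sequential on its own, yet once the array interleaves them, nearly every I/O forces the heads to move, which is exactly the randomization the figure depicts.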
THE I/O BLENDER
CPUs have historically gained power through increases in transistor count and clock speed. More recently, the shift has been to multi-core CPUs and multi-threading. This, combined with server virtualization technology, allows massive consolidation of applications onto a single physical server. The result is intensive randomization of the workload as seen by the storage array.

Imagine a dual-socket server with six cores per socket and two threads per core. With virtualization technology, this server can easily present shared storage with a workload that intermixes twenty-four unique data streams (2 sockets x 6 cores per socket x 2 threads per core). Now imagine numerous servers on a SAN sharing that same storage array. The array's workload very quickly becomes completely random I/O coming from hundreds or thousands of intermixed sources. This is the I/O Blender.

FIGURE: THE I/O BLENDER
Multi-core CPUs, multi-threading, and virtualization all randomize the workload seen by the storage array. Three physical hosts no longer present three unique workloads; they present dozens.

Storage array designs, which depend on the ability to sequentialize I/O to disk, very quickly face performance challenges in these increasingly common server environments. Common techniques to address this challenge are:

• Increase array cache sizes: It is not uncommon for enterprise arrays to support 1TB or more of cache. With more cache capacity there is a better chance to aggregate disparate I/O, since more data can be held in cache, opportunistically waiting for enough random data to accumulate to allow sequential writes. The downside to this approach is that cache memory is very expensive, and read performance is not addressed (random reads result in cache misses and random disk I/O).

• Spread the workload across more spindles: If the workload becomes more randomized, then more disk IOPS are needed to service it. Adding more spindles to the array yields more potential IOPS. The downsides to this approach are numerous: cost, space, power consumption, inefficient capacity utilization (having much more capacity than the applications require, simply because so many spindles are needed for performance), and the need to plan application deployments based on how their I/O requirements will map to the array.

• Add a flash cache or flash tier: This approach adds a higher-performing medium as a tier inside the array. There are numerous benefits to this approach; however, array controllers designed to work with hard drives are internally optimized, both in hardware and software, for hard drives. Adding a flash cache or tier simply allows the performance potential of the array controller to be reached with fewer drives, but it doesn't improve the overall capabilities of the array. Of note, when flash is added as a cache it experiences write cycles more frequently (data is moved in and out of the cache as it becomes "hot" or "cold"), necessitating expensive SLC flash that provides longer endurance.

The efficacy of caching and tiering can vary with application requirements. Even if only 10% of I/Os go to spinning disk, the application will often become limited by the response time of those transactions due to serialization, as the calculation below illustrates. Maintaining consistently high and predictable performance is not always possible in caching/tiering designs.
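The caching and tiering caveat can be quantified with a back-of-the-envelope calculation. The latencies below are assumptions chosen for illustration (roughly 0.2 ms for an I/O served from flash, 10 ms for a random I/O that misses to disk); the point is the shape of the result, not the exact figures.

```python
# Back-of-the-envelope effect of cache/tier misses on average latency.
# Latency figures are assumptions chosen for illustration only.
FLASH_MS = 0.2   # assumed service time for an I/O satisfied from flash
DISK_MS  = 10.0  # assumed service time for a random I/O that misses to disk

for hit_rate in (0.90, 0.99, 1.00):
    avg = hit_rate * FLASH_MS + (1 - hit_rate) * DISK_MS
    print(f"hit rate {hit_rate:.0%}: average latency {avg:.2f} ms")

# Even at a 90% hit rate, the misses contribute 1.0 ms of the ~1.18 ms
# average, so the application is still paced largely by the hard drives.
```

Under these assumptions, a 90% flash hit rate still leaves the average response time dominated by the 10% of I/Os that land on spinning disk.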
Attempting to resolve the I/O Blender problem with these methods helps, but what happens next year when CPU core counts double again? And again the year after that? Eventually the array's RAM cache and flash cache tiers need to become so large that effectively all data (excepting purely archival information) must reside on them. With array controller designs sized and optimized for the performance of hard drives, the full potential of flash will not be realized. This fact alone is enough to warrant a ground-up new storage system design. But so far we have only discussed the drivers around performance. Enterprise storage systems also have many advanced features for data management, and just as performance can be enhanced by a ground-up flash approach, so can advanced array features.

SIDEBAR: FLASH MANAGEMENT
All types of flash media support a limited number of write cycles. And while SSD vendors are adept at managing endurance within the drive itself, much more can be done at the array level to extend the useful life of the drives. A flash-based array must be designed to:
• Minimize the number of writes to flash by performing as much real-time data processing as possible and avoiding post-processing operations
• Spread writes across all SSDs in the array to ensure even wear
• Avoid movement of data once it is written
These guiding principles are another reason why a clean-sheet design is warranted with flash. The processing operations and data placement algorithms of disk-based storage systems did not need to consider flash endurance.

STORAGE ENLIGHTENMENT
In this section we'll examine four common array-based data management features: deduplication, snapshots, thin provisioning, and data protection. For each feature we'll examine how it works, how hard disk drives affect its performance, and why a simple substitution of flash media for hard drives doesn't unlock its full potential. While we've chosen four examples here for illustrative purposes, the implications are common across nearly all array-based features, including replication, cloning, and more.

DEDUPLICATION
Deduplication has predominantly been deployed as a backup technology for two reasons. First, it is clear that multiple backup jobs run over a period of days, weeks, and months will deduplicate very well. But more importantly for this discussion, deduplication has never been suitable for primary storage because it has a negative impact on performance: always for reading data and, in most implementations, for writing it as well.

Deduplication affects access performance on primary data because it leads to logical fragmentation of the data volume. Whenever duplicate data is eliminated and replaced with a pointer to a unique data block, the data stream for the duplicate blocks is no longer sequentially laid out on disk.
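As a concrete illustration of the pointer substitution just described, here is a minimal content-addressed block store in Python. It is a generic sketch of block-level deduplication (SHA-1 fingerprints and a flat dict standing in for the block store and volume map), not a description of any particular array's implementation.

```python
# Minimal sketch of block-level deduplication via content fingerprints.
# Generic illustration only; real arrays use far more elaborate metadata.
import hashlib

block_store = {}   # fingerprint -> unique block contents
volume_map = []    # logical block number -> fingerprint (the "pointers")

def write_block(data: bytes) -> None:
    fp = hashlib.sha1(data).hexdigest()
    if fp not in block_store:          # only unique data is stored
        block_store[fp] = data
    volume_map.append(fp)              # duplicates become pointers

# Write a stream in which most blocks repeat.
for payload in [b"alpha", b"beta", b"alpha", b"alpha", b"gamma", b"beta"]:
    write_block(payload)

print("logical blocks written:", len(volume_map))    # 6
print("unique blocks stored  :", len(block_store))   # 3

# The logical volume is now a list of pointers; reading it back sequentially
# means chasing fingerprints to wherever each unique block happens to live,
# which on a hard drive turns one sequential read into many seeks.
```

The capacity saving is real (six logical blocks, three stored), but the volume map no longer corresponds to a contiguous layout on disk, which is precisely the fragmentation problem examined next.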
FIGURE: DEDUPLICATION PERFORMANCE IMPACT
Once data is deduplicated, it is no longer possible to perform sequential reads, since any deduplicated block requires a disk seek to the location of the stored block. In this example, what would have required a single disk seek followed by a sequential data read now requires 21 disk seeks. With a disk seek taking roughly 15ms, a single read has just increased from 15ms to 315ms, far outside acceptable performance limits.

But how does deduplication affect write performance? Most deduplication implementations are post-process operations. Incoming data is first written to disk without being deduplicated. Then the array (either on a schedule or by administrator command) processes the data to deduplicate it. This processing requires reading the data and writing it back to disk in deduplicated form. Every I/O operation consumed by the deduplication processing is unavailable for servicing host I/O. Furthermore, the complex processing involved in deduplication is computationally intensive and slows the array controller. This approach has the additional drawback of requiring a large amount of capacity to "land" the data while it waits for the deduplication process to reduce it.

When flash is substituted for hard drives, performance improves, as the random I/O deduplication creates is more favorably handled by SSDs, and the extra operations of the deduplication post-processing can be completed more quickly. But the post-processing operation itself still requires multiple writes to the media: the initial landing of the data, and then the subsequent write of the deduplicated data. This consumes IOPS that would otherwise be available for host I/O, but more importantly the extra writes hurt flash endurance (see the FLASH MANAGEMENT sidebar above), which is an undesirable outcome. Other problems, such as the computational burden deduplication places on the controller and the inability of many deduplication implementations to operate globally across all array volumes or to scale across the array's full capacity, are also not addressed by merely swapping disk drives for flash.
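The 21-seek arithmetic in the figure caption is easy to reproduce, and extending it with an assumed flash access time shows why random-access media changes the conclusion. The seek time below is the 15 ms figure used above; the 0.2 ms flash read latency is an assumed round number for comparison.

```python
# Read amplification from deduplication-induced fragmentation.
# HDD seek time matches the 15 ms figure in the caption above;
# the flash latency is an assumed round number for comparison.
SEEKS_BEFORE, SEEKS_AFTER = 1, 21
HDD_SEEK_MS = 15.0
FLASH_READ_MS = 0.2   # assumption for a random flash read

print("HDD, contiguous    :", SEEKS_BEFORE * HDD_SEEK_MS, "ms")   # 15 ms
print("HDD, deduplicated  :", SEEKS_AFTER * HDD_SEEK_MS, "ms")    # 315 ms
print("Flash, deduplicated:", SEEKS_AFTER * FLASH_READ_MS, "ms")  # ~4 ms

# 15 ms becomes 315 ms on disk, but only about 4 ms on flash: random access
# removes the read penalty that makes deduplication unattractive for
# primary data on hard drives.
```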
SNAPSHOTS
Snapshots are point-in-time images of a storage volume, allowing the volume to be rolled back or replicated within the array. While some snapshot implementations literally make full copies of the source volume (split-mirror snapshots), we will focus on space-efficient snapshots, which only write changes to the source volume. There are two types of space-efficient snapshots: copy-on-write and redirect-on-write.

In a copy-on-write snapshot, changes to the source volume invoke a copy operation in the array. The original block is first copied to a designated snapshot location, and then the new block is written in its place. Copy-on-write snapshots keep the source volume layout intact, but "old" blocks in the snapshot volume are heavily fragmented.

FIGURE: COPY-ON-WRITE SNAPSHOT
A change to an existing data block (1) requires first that the existing data block be copied to the snapshot volume (2). The new data block then takes its place (3).

This results in two penalties. First, there are two array writes for every host write: the copy operation and the new write. This severely impacts write performance. Second, reads from the snapshots become highly fragmented, causing disk I/O both from the source volume and from random locations in the snapshot reserve volume. This makes the snapshots unsuitable for any performance-oriented application (for example, as a test/development copy of a database volume).

The other type of space-efficient snapshot is redirect-on-write. In a redirect-on-write snapshot, once the snapshot is taken, new writes are redirected to a snapshot reserve volume rather than copying the existing volume's data. This avoids the extra write penalty of copy-on-write designs.

FIGURE: REDIRECT-ON-WRITE SNAPSHOT
In a redirect-on-write snapshot, new writes (1) are redirected to the snapshot reserve volume. This avoids the extra write penalty of copy-on-write snapshots, but results in fragmentation of the active volume and poor read performance.

However, reads from the source volume must now occur from two locations: the original source volume and the snapshot reserve volume. This results in poor read performance as changes to the source volume accumulate and more snapshots are taken, since the data is no longer stored contiguously and numerous disk seeks ensue.

As with deduplication, a simple substitution of flash in place of hard drives will yield some benefits, but the full potential of flash cannot be achieved. In a copy-on-write design there are still two writes for every new write (bad for flash endurance). A redirect-on-write design can take better advantage of flash, but challenges come in other areas. For example, mixing deduplication with snapshots is extremely complex from a metadata management perspective. Blocks of data may exist on disk, or only as pointers. They may be in a source volume or in a snapshot. Figuring out which blocks need to be kept in the face of changing data, deleted data, and created, deleted, and cloned snapshots is non-trivial. In deduplicating storage arrays, the performance impact when snapshots are utilized can be acute.
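To illustrate the write-amplification difference between the two snapshot styles described above, the short sketch below models a volume as a dict of blocks and counts media writes per host write. It is a toy model under obvious simplifying assumptions (a single snapshot, no metadata writes), not any vendor's design.

```python
# Toy comparison of copy-on-write vs. redirect-on-write snapshots.
# Counts only data-block writes; metadata updates are ignored.

def copy_on_write(volume, snapshot, block, data):
    """Overwrite a block, preserving the original in the snapshot area."""
    writes = 0
    if block not in snapshot:
        snapshot[block] = volume[block]   # extra copy of the old block
        writes += 1
    volume[block] = data                  # the actual host write
    return writes + 1

def redirect_on_write(reserve, block, data):
    """Overwrite a block by writing the new data into the reserve area."""
    reserve[block] = data
    return 1                              # one media write per host write

volume = {b: f"old{b}" for b in range(4)}
snapshot, reserve = {}, {}

cow = sum(copy_on_write(volume, snapshot, b, f"new{b}") for b in range(4))
row = sum(redirect_on_write(reserve, b, f"new{b}") for b in range(4))

print("copy-on-write    :", cow, "media writes for 4 host writes")   # 8
print("redirect-on-write:", row, "media writes for 4 host writes")   # 4

# The redirect-on-write volume now lives in two places (old blocks in
# `volume`, new blocks in `reserve`), which is exactly the fragmentation
# that hurts read performance on hard drives.
```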
A final area of concern in snapshot technology is cloning. A clone is a copy of a volume created using a snapshot, but one that may be subsequently altered and can diverge from the source volume. Clones are particularly challenging to implement with hard-drive-based storage because changes to the source volume and to the clone must both be tracked, and they lead to fragmentation of both volumes.

FIGURE: CLONES
When clones (writeable snapshots) are created, the clone may diverge from the source volume. This causes inherent fragmentation, as the source volume and the clone now each contain their own unique change sets, neither of which is complete without the source volume's snapshot. Reading from the source volume or the clone now requires seeking to various locations on disk.

Cloning is an especially complex feature that often does not live up to performance expectations (and thus intended use cases) in disk-based arrays. As previously noted, merely substituting flash media doesn't change the way in which snapshots or clones are implemented (e.g. copy-on-write or redirect-on-write) or the underlying metadata structures, and thus does not solve the underlying problems.

THIN PROVISIONING
Thin provisioning allows volumes to be created at any arbitrary size without pre-allocating storage space to the volume. The array dynamically allocates free space to the volume as data is written, improving storage utilization rates. Storage arrays allocate free space in fairly large chunks, typically 1MB or more, in order to preserve the ability to write sequentially into the chunks. However, most operating system and application I/O does not align perfectly to the storage system's allocation boundaries. The result is thin provisioning "creep", where the storage system has allocated more space to the volume than the OS or application thinks it has used. Some more advanced thin provisioning arrays have post-processing operations that periodically re-pack data to reclaim the dead space, but this causes performance issues, both from the repacking operation itself and from the resulting volume fragmentation and random I/O.

FIGURE: THIN PROVISIONING
Problems resulting from large allocation sizes: wasted space (creep) and volume fragmentation leading to additional disk seeks during read operations.

Substituting flash for hard drives provides no benefit here because the underlying allocation size in the storage array hasn't changed. Allocation sizes are not simple parameters that can be tuned. They are deeply woven into the storage system design, data structures, and layouts on disk.
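The allocation-granularity effect can be sized with a few lines of arithmetic. The sketch assumes the 1MB allocation unit mentioned above and a worst-case pattern of scattered 4KB host writes; both the I/O size and the write count are illustrative assumptions.

```python
# Thin provisioning "creep": space the array allocates vs. space the host uses.
# 1 MB allocation unit per the text; the write pattern is an assumption.
ALLOC_UNIT = 1 * 1024 * 1024   # bytes the array allocates at a time
IO_SIZE    = 4 * 1024          # bytes the host actually writes per I/O

# 1,000 small writes scattered across the logical address space, each
# landing in a different allocation unit (the worst case for creep).
scattered_writes = 1000
host_used   = scattered_writes * IO_SIZE
array_alloc = scattered_writes * ALLOC_UNIT

print(f"host has written: {host_used / 2**20:.1f} MB")    # ~3.9 MB
print(f"array allocated : {array_alloc / 2**20:.1f} MB")  # 1000.0 MB
print(f"utilization     : {host_used / array_alloc:.1%}")

# Under these assumptions, ~4 MB of data can pin ~1000 MB of allocated
# capacity until a post-process repacking pass reclaims the dead space.
```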
DATA PROTECTION
The RAID algorithms developed for disk-based storage arrays protect data in the event of one or more disk drive failures. Each algorithm trades off protection level, performance, and capacity overhead. Consider the following comparison of typical RAID levels used in current storage systems:

RAID Algorithm            Relative Read    Relative Write    Relative Capacity    Data Protection
                          Performance      Performance       Overhead
RAID 1 – Mirroring        Excellent        Good              Poor                 Single disk failure
RAID 5 – single parity    Good             OK                Good                 Single disk failure
RAID 6 – double parity    Good             Poor              OK                   Dual disk failure

Regardless of which RAID level is chosen, there is no way to achieve the best performance, the lowest capacity overhead, and the best data protection simultaneously. Much of the reason is rooted in the way these RAID algorithms place data on disk and the need for data to be laid out sequentially. Using standard RAID algorithms with flash media is possible, but it doesn't change these trade-offs. There are clear gains to be realized by rethinking data protection algorithms to leverage random-access media, gains that would be all but impossible to replicate using hard drives. In fact, a storage system designed for flash can provide nearly optimal read performance, write performance, capacity overhead, and data protection simultaneously.

Another important consideration in the data protection design is how to handle flash "hiccups", periods in which a flash drive does not respond as expected due to internal operations. RAID schemes created for hard drives sometimes employ on-the-fly drive rebuilding and/or journaling to handle hard drive hiccups. However, these hiccup management techniques are implemented within the constraints of the core RAID algorithm and would not function properly with flash media.
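The trade-offs in the table can be made numeric. The sketch below computes usable capacity and the classic small-write penalty (drive I/Os per host write) for mirroring, single parity, and double parity on an assumed 8-drive group of 1TB drives; the figures follow the standard textbook formulas, not any specific array's implementation.

```python
# Standard RAID trade-offs for an assumed 8-drive group of 1 TB drives.
# Usable fractions and small-write penalties follow the textbook formulas.
DRIVES, DRIVE_TB = 8, 1.0

schemes = {
    # name: (usable fraction of raw capacity, drive I/Os per small host write)
    "RAID 1 (mirroring)":     (0.5,                   2),  # write both copies
    "RAID 5 (single parity)": ((DRIVES - 1) / DRIVES, 4),  # read+write data and parity
    "RAID 6 (double parity)": ((DRIVES - 2) / DRIVES, 6),  # as RAID 5 plus second parity
}

for name, (usable, penalty) in schemes.items():
    print(f"{name:24s} usable {usable * DRIVES * DRIVE_TB:4.1f} TB "
          f"({usable:.0%}), {penalty} drive I/Os per host write")

# No column wins everywhere: better protection costs capacity and/or write
# performance, which is the trade-off the table above summarizes.
```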
THE INEVITABLE CONCLUSION
What we have shown is that while utilizing flash in existing storage systems will lead to some performance gains, the true potential of flash media cannot be realized without a ground-up array design specifically engineered for its unique random I/O capabilities. The implications for an all-flash storage array include:

• It must be a scale-out design. Flash simply delivers performance levels beyond the capabilities of any scale-up (i.e. dual-controller) architecture.
• The system must automatically balance itself. If balancing does not take place, then adding capacity will not increase performance.
• Attempting to sequentialize workloads no longer makes sense, since flash media easily handles random I/O. Rather, the unique ability of flash to perform random I/O should be exploited to provide new capabilities.
• Cache-centric architectures should be rethought, since today's data patterns are increasingly random and inherently do not cache well.
• Any storage feature that performs multiple writes to function must be completely rethought, for two reasons. First, the extra writes steal available I/O operations from serving hosts. Second, with flash's finite write cycles, extra writes must be avoided to extend the usable life of the array.
• Array features that have been implemented over time are typically functionally separate. For example, in most arrays snapshots have existed for a long time while deduplication (if available) is fairly new; the two features do not overlap or leverage each other. But there are significant benefits to be realized from a unified metadata model for deduplication, snapshots, thin provisioning, replication, and other advanced array features.

XtremIO was founded on the realization not just that storage arrays must be redesigned to handle the performance potential flash offers, but that practically every aspect of storage array functionality can be improved by fully exploiting flash's random-access nature. XtremIO's products are thus about much more than higher IOPS and lower latency. They are about delivering business value through new levels of array functionality.

HOW TO LEARN MORE
For a detailed presentation explaining XtremIO's storage array capabilities and how it substantially improves performance, operational efficiency, ease of use, and total cost of ownership, please contact XtremIO at info@xtremio.com. We will schedule a private briefing in person or via web meeting. XtremIO has benefits in many environments, but is particularly effective for virtual server, virtual desktop, and database applications.

CONTACT US
To learn more about how EMC products, services, and solutions can help solve your business and IT challenges, contact your local representative or authorized reseller, or visit us at www.EMC.com.

EMC2, EMC, the EMC logo, XtremIO and the XtremIO logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. VMware is a registered trademark of VMware, Inc., in the United States and other jurisdictions. © Copyright 2013 EMC Corporation. All rights reserved. Published in the USA. 10/13 EMC White Paper H12459

EMC believes the information in this document is accurate as of its publication date. The information is subject to change without notice.