19. Business Management Applications – Microsoft Dynamics NAV, Microsoft CRM, SharePoint
21. Our partners… Blue Chip recognises that to deliver the best, we must work with the best! Through carefully selected and managed alliances, Blue Chip holds strategic partnerships with the world's best-of-breed manufacturers.
23. vSphere: Virtualisation is the Foundation for Cloud. "Virtualization is a modernization catalyst and unlocks cloud computing." ―Gartner, May 2010
32. New vSphere: in 2011 VMware introduced a major upgrade of the entire cloud infrastructure stack. Cloud Infrastructure Launch: vCloud Director 1.5, vShield 5.0 (security), vCenter SRM 5.0 (management), vSphere 5.0.
42. Demo: new hardware hot-add (CPU, memory); resources – guest memory lock; VMware hardware status monitor; Web Client – a Linux or Mac client can now manage vCenter; resume tasks; advanced search – history of VMs; customised views; iPad client.
43. Storage vMotion – Introduction In vSphere 5.0, a number of new enhancements were made to Storage vMotion. Storage vMotion will now work with Virtual Machines that have snapshots, which means coexistence with other VMware products & features such as VCB, VDR & HBR. Storage vMotion will support the relocation of linked clones. Storage vMotion has a new use case – Storage DRS – which uses Storage vMotion for Storage Maintenance Mode & Storage Load Balancing (Space or Performance).
61. Storage Capabilities & VM Storage Profiles: Storage Capabilities are surfaced by VASA or user-defined; a VM Storage Profile references those Storage Capabilities and is associated with a VM; the VM is then reported as Compliant or Not Compliant.
62. VM Storage Profile Compliance Policy Compliance is visible from the Virtual Machine Summary tab.
63. Demo: Storage-Driven Profiles – show a datastore's storage profile; assign a storage profile to a VM; profile compliance; create a new VM and place it on a storage cluster (placement then depends on load); Storage DRS; storage load balancing; storage anti-affinity; Storage I/O.
69. More Auto Deploy New host deployment method introduced in vSphere 5.0: Based on PXE Boot Works with Image Builder, vCenter Server, and Host Profiles How it works: PXE boot the server ESXi image profile loaded into host memory via Auto Deploy Server Configuration applied using Answer File / Host Profile Host placed/connected in vCenter Benefits: No boot disk Quickly and easily deploy large numbers of ESXi hosts Share a standard ESXi image across many hosts Host image decoupled from the physical server Recover host w/out recovering hardware or having to restore from backup
70. Host Profiles Enhancements New feature enables greater flexibility and automation Using an Answer File, administrators can configure host-specific settings to be used in conjunction with the common settings in the Host Profile, avoiding the need to type in any host-specific parameters. This feature enables the use of Host Profiles to fully configure a host during an automated deployment. Host Profiles now has support for a greatly expanded set of configurations, including: iSCSI FCoE Native Multipathing Device Claiming and PSP Device Settings Kernel Module Settings And more
74. Benefits by component – Server: vMotion, VMware Fault Tolerance, High Availability, DRS Maintenance Mode, NIC Teaming; Storage: Storage vMotion, enhanced scalability, Multipathing.
75. What’s New in vSphere 5 High Availability? Complete re-write of vSphere HA: Provides a foundation for increased scale and functionality Eliminates common issues (DNS resolution) Multiple Communication Paths Can leverage storage as well as the management network for communications Enhances the ability to detect certain types of failures and provides redundancy IPv6 Support Enhanced Error Reporting One log file per host eases troubleshooting efforts Enhanced User Interface
76. vSphere HA Primary Components: every host runs an agent, referred to as 'FDM' or Fault Domain Manager. One of the agents within the cluster is chosen to assume the role of the Master; there is only one Master per cluster during normal operations. All other agents assume the role of Slaves. There is no longer a Primary/Secondary concept with vSphere HA.
77. Storage-Level Communications: one of the most exciting new features of vSphere HA is its ability to use a storage subsystem for communication. The datastores used for this are referred to as 'Heartbeat Datastores' and provide increased communication redundancy. Heartbeat datastores are used as a communication channel only when the management network is lost, such as in the case of isolation or network partitioning.
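The fallback logic on this slide can be sketched as a toy model; the function name and return values are illustrative, not VMware's API. The master consults the heartbeat datastore only after the management-network heartbeat is lost, to distinguish an isolated or partitioned host from a failed one.

```python
# Toy model of the vSphere HA communication fallback described above.
# Names and return values are illustrative, not VMware's API.

def host_state(network_heartbeat: bool, datastore_heartbeat: bool) -> str:
    """Classify a slave host as seen by the master."""
    if network_heartbeat:
        return "alive"
    # Management network lost: the heartbeat datastore tells the master
    # whether the host is merely isolated/partitioned or actually down.
    return "isolated-or-partitioned" if datastore_heartbeat else "failed"
```

Without the storage channel, a network-isolated host would be indistinguishable from a dead one, which is exactly the redundancy this feature adds.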
90. Remote offices / branch offices and Tier 2/3 apps – not protected. Small sites – not protected. (Diagram: application stacks across Small Business, Remote Office / Branch Office and the Corporate Datacenter.)
91. SRM Provides Broad Choice of Replication Options: vSphere Replication – simple, cost-efficient replication for Tier 2 applications and smaller sites; storage-based replication – high-performance replication for business-critical applications in larger sites. (Diagram: vCenter Server and Site Recovery Manager at each site, with VMs replicated between vSphere environments.)
105. Key Concepts – Example: each vSphere Enterprise Edition license entitles the customer to 64GB of vRAM, so 4 licenses of vSphere Enterprise Edition provide a vRAM pool of 256GB (4 * 64GB) across Hosts A and B. The customer creates 20 VMs with 4GB vRAM each, so consumed vRAM = 80GB. Compliance = 12-month rolling average of consumed vRAM < pooled vRAM entitlement.
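The arithmetic in this example can be checked with a short sketch; the constants come from the slide, while the helper names are ours:

```python
# vSphere 5 vRAM pooling math from the example above (illustrative helpers).

VRAM_PER_ENT_LICENSE_GB = 64  # Enterprise Edition entitlement per license

def vram_pool_gb(licenses: int) -> int:
    """Pooled vRAM entitlement across all licenses."""
    return licenses * VRAM_PER_ENT_LICENSE_GB

def is_compliant(avg_consumed_gb: float, licenses: int) -> bool:
    """Compliance: 12-month rolling average of consumed vRAM < pooled entitlement."""
    return avg_consumed_gb < vram_pool_gb(licenses)

pool_gb = vram_pool_gb(4)  # 4 licenses -> 256 GB pool
consumed_gb = 20 * 4       # 20 VMs x 4 GB vRAM each = 80 GB
```

With 80 GB consumed against a 256 GB pool, this customer stays well within the entitlement.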
112. EMC: The VMware Choice – 2 out of 3 CIOs pick EMC for their VMware environments ("Which vendor(s) supplied the networked (SAN or NAS) storage used for your virtual server environment?"; "Which is your storage vendor of choice in a virtual server environment?"). A trusted storage platform for the most critical and demanding VMware environments; advanced integration and functionality that maximises the value of a virtualised data center; the flexibility to meet business and technical needs; and the knowledge, experience and partnerships to make your virtual data center a reality. "EMC remains the clear storage leader in virtualized environments."
113. 3x Better Performance – more users, more transactions, better response time. The VNX platform with FAST Cache and FAST VP is 3x faster than the previous CX/NS platforms.
116. FAST VP, supporting both file and block, optimises storage pools automatically, ensuring only active data is served from SSDs while cold data is moved to lower-cost disk tiers.
117. Real-time caching with FAST Cache and scheduled optimisation with FAST VP: together they deliver a fully automated FLASH 1st storage strategy for optimal performance at the lowest attainable cost, across Flash SSD, high-performance HDD and high-capacity HDD tiers.
119. If not, the FAST Cache driver checks the map to determine where the page is located.
123. Dirty pages are copied back to disk drives as a background activity. (Diagram: DRAM cache, FAST Cache, Policy Engine, Driver, Disk Drives.)
124. FAST VP for Block & File Access – optimise VNX for minimum TCO. Automates the movement of hot or cold blocks across tiers (Tier 0: most activity; Tier 1: neutral activity; Tier 2: least activity); optimises the use of high-performance and high-capacity drives; improves cost and performance.
125. VNX Thin Provisioning – capacity on demand. Only allocate the actual capacity required by the application; capacity oversubscription allows intelligent use of resources (file systems, FC and iSCSI LUNs; logical size greater than physical size). VNX Thin Provisioning safeguards to avoid running out of space: monitoring and alerting, plus automatic and dynamic extension past the logical size (automatic NAS file system extension, FC and iSCSI dynamic LUN extension). (Diagram: Users A, B and C each see 10 GB logically, backed by a 4 GB physical allocation and 2 GB of physically consumed storage each.)
126. VNX Virtual Provisioning. Thick pool LUN: full capacity allocation; near RAID-group LUN performance; capacity reserved at LUN creation; 1 GB chunks allocated as the relative block address is written. Thin pool LUN: only allocates capacity as data is written by the host; capacity allocated in 1 GB chunks; 8 KB blocks written contiguously within each 1 GB chunk; the 8 KB mapping incurs some performance overhead.
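The thin-pool behaviour above (claim capacity in 1 GB chunks only when the host writes) can be sketched as follows; the class and method names are illustrative, not an EMC API:

```python
# Sketch of thin-pool LUN allocation as described on the slide:
# capacity is claimed in 1 GB chunks the first time a write lands there.

CHUNK = 1 << 30  # 1 GB allocation unit

class ThinLUN:
    def __init__(self, logical_size_bytes: int):
        self.logical_size = logical_size_bytes  # what the host sees
        self.allocated_chunks = set()           # what is physically backed

    def write(self, offset: int, length: int) -> None:
        # Allocate every 1 GB chunk the write touches.
        first = offset // CHUNK
        last = (offset + length - 1) // CHUNK
        self.allocated_chunks.update(range(first, last + 1))

    def allocated_bytes(self) -> int:
        return len(self.allocated_chunks) * CHUNK

lun = ThinLUN(10 * CHUNK)  # 10 GB logical size
lun.write(0, 8 * 1024)     # a single 8 KB write claims one 1 GB chunk
```

This is the oversubscription trade-off on the slide: the host sees 10 GB, but only written chunks consume pool capacity.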
127. VNX Series Software – software solutions made simple with attractively priced packs and suites: Total Efficiency Pack; FAST Suite; Security and Compliance Suite; Total Protection Pack; Local Protection Suite; Remote Protection Suite; Application Protection Suite.
128. VNX: Faster than the Rest – highest number of transactions and lowest response time on SPECsfs2008 NFSv3; 3x faster than IBM, ahead of HP and NetApp. (Chart: response time in ms on the y-axis, lower is better; transactions on the x-axis up to 500,000, higher is better.)
129. VNX Series for Virtual Desktop – 4x the number of virtual desktop users with VNX Series, FAST VP & FAST Cache at sustained performance; up to 70% reduction in storage cost for the same I/O performance. Boot storm: 3x faster – boot and settle 500 desktops in 8 min vs. 27 min; FAST Cache absorbs the majority of the boot workload (i.e. I/O to spinning drives). Desktop refresh: refresh 500 desktops in 50 min vs. 130 min; FAST Cache serviced the majority of the I/O during refresh and prevents linked clones from overloading. (Configurations: Celerra NS with 183x 300GB 15K FC disks vs. VNX series with 5x 100GB SSD, 21x 300GB 15K SAS, 15x 2TB NL-SAS.)
130. VNX Demo – Unisphere console: dashboard (customised view); system disks; system properties; FAST Cache; storage pools; LUNs; compression (compression on a LUN); thin provisioning; auto-tiering; hosts / storage groups / virtualisation; Analyser (monitoring and alerting); USM.
132. vSphere 5 Training Offers – take advantage of any of the below VMware course offers, which take place at our Southampton Training Centre, and receive a FREE place on Deploying & Managing Microsoft System Center Virtual Machine Manager, worth £895.
VMware vSphere: Troubleshooting – Duration: 4 days; Cost: £2,075.00 + VAT per delegate; Dates: 03-06 October; Offer: book 1 space and save 20% or book 2 spaces and save 30%.
VMware vSphere: Install, Configure & Manage – Duration: 5 days; Cost: £2,595.00 + VAT per delegate; Dates: 10-14 October (v4.1), 17-21 October (v5) & 12-16 December (v5); Offer: book 1 space and save 15% or book 2 spaces and save 25%; Exam: includes a free exam voucher.
VMware vSphere: Skills for Operators – Duration: 2 days; Cost: £1,095.00 + VAT per delegate; Dates: 29-30 September & 07-08 November; Offer: buy 2 spaces, get 1 free.
133. For further information on vSphere 5, or to book a one to one consultation, please contact your account manager or email ict@bluechip.uk.com
134. Blue Chip Change is the only constant in business... ...evolution is the key to survival www.bluechip.uk.com
Editor's notes
Every one of our customers has existing applications, running in existing datacenters, that represents significant investments and ongoing value. The first thing we are doing with these customers, is helping them stand-up a Private Cloud, to get the most efficiency and agility out of their existing assets. And this can be done in a pragmatic, evolutionary way. We have over 250,000 customers worldwide that are already on this path, because they are leveraging vSphere to virtualize the entire fabric of the datacenter, including CPU & memory, storage, and networking. And because they are using vSphere, they get built-in high-availability, and automated, dynamic resource scheduling to give them the cloud attributes of elastic, pooled capacity. <click>With virtualization in place, the independent silos are broken down, enabling us to automate many of the mundane, repetitive administration tasks with our vCenter management suite, further decreasing opex in the datacenter.
With vSphere 5.0, multiple enhancements have been introduced to increase efficiency of the Storage vMotion process, to improve overall performance, and for enhanced supportability. Storage vMotion in vSphere 5.0 now also supports the migration of virtual machines with a vSphere Snapshot and the migration of linked clones.
The Storage vMotion control logic is in the VMX. The Storage vMotion thread first creates the destination disk. After that, a stun/unstun of the VM allows the SVM mirror driver to be installed. I/Os to the source will be mirrored to the destination. The new driver will leverage the Data Mover to implement a single-pass block copy of the source to the destination disk. In addition to this it will mirror I/O between the two disks. This is a synchronous write, meaning that the mirror driver will acknowledge the write to the Guest OS only when it has received the acknowledgement from both the source and the destination.
Accelerate VM storage placement decisions to a storage pod by: capturing VM storage SLA requirements; mapping to the storage with the right characteristics and spare space.
Storage DRS provides initial placement recommendations to datastores in a Storage DRS-enabled datastore cluster based on I/O and space capacity. During the provisioning of a virtual machine, a datastore cluster can be selected as the target destination for the virtual machine or virtual machine disk, after which a recommendation for initial placement is made based on I/O and space capacity. As just mentioned, initial placement in a manual provisioning process has proven to be very complex in most environments, and as such important provisioning factors like current I/O load or space utilization are often ignored. Storage DRS ensures initial placement recommendations are made in accordance with space constraints and with respect to the goals of space and I/O load balancing. Although people are really excited about automated load balancing, it is initial placement where most people will start off and where most will benefit, as it reduces the operational overhead associated with provisioning virtual machines.
Ongoing balancing recommendations are made when one or more datastores in a datastore cluster exceed the user-configurable space utilization or I/O latency thresholds. These thresholds are typically defined during the configuration of the datastore cluster. Storage DRS utilizes vCenter Server's datastore utilization reporting mechanism to make recommendations whenever the configured utilized-space threshold is exceeded. I/O load is currently evaluated by default every 8 hours, with a default latency threshold of 15 ms. Only when this I/O latency threshold is exceeded will Storage DRS calculate all possible moves to balance the load accordingly, while considering the cost and the benefit of the migration. If the benefit doesn't last for at least 24 hours, Storage DRS will not make the recommendation.
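The trigger conditions in this note can be sketched as a toy check. The 15 ms latency default comes from the note; the 80% space threshold is an assumed example value, not a quoted default:

```python
# Illustrative sketch (not VMware code) of when Storage DRS considers
# rebalancing a datastore. The 15 ms latency default is from the note;
# the 80% space threshold is an assumed example value.

SPACE_THRESHOLD = 0.80
LATENCY_THRESHOLD_MS = 15.0

def needs_rebalance(used_gb: float, capacity_gb: float,
                    avg_latency_ms: float) -> bool:
    over_space = used_gb / capacity_gb > SPACE_THRESHOLD
    over_latency = avg_latency_ms > LATENCY_THRESHOLD_MS
    return over_space or over_latency
```

In the real product a positive check only starts the cost/benefit evaluation of candidate moves; it does not by itself trigger a migration.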
Today: currently we identify the requirements of the virtual machine, try to find the optimal datastore based on those requirements, and create the virtual machine or disk. In some cases customers even periodically check whether VMs are compliant, but in many cases this is neglected. Storage DRS: Storage DRS only solves that problem partly, as we still need to manually identify the correct datastore cluster, and even when grouping datastores into a cluster we need to manually verify that all LUNs are "alike". And again there is that periodic manual check. Storage DRS and Profile Driven Storage: when using Profile Driven Storage and Storage DRS in conjunction, these problems are solved. Datastore clusters can be created based on the characteristics provided through VASA or the custom tags. When deploying virtual machines, a storage profile can be selected, ensuring that the virtual machine will be compliant!
Step 1: the diagram we just showed gave a total overview, but most customers are concerned about just one thing, compliance, so how does this work? As mentioned, capabilities are surfaced through VASA. Step 2: these capabilities are linked to a specific VM Storage Profile. Step 3: a new virtual machine is created or an existing virtual machine is tagged. Step 4: the result will be either compliant or not compliant. It is as simple as that.
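The compliance result in Step 4 reduces to a set check: the VM is compliant when the datastore surfaces every capability its storage profile requires. A minimal sketch, with illustrative capability names of our own:

```python
# Toy compliance check for the steps above: compare the capabilities a
# VM Storage Profile requires with those the datastore surfaces (via
# VASA or user-defined tags). Capability names are illustrative.

def is_profile_compliant(required: set, surfaced: set) -> bool:
    # Every required capability must be present on the datastore.
    return required <= surfaced

gold_profile = {"replication", "ssd"}
datastore_caps = {"replication", "ssd", "dedupe"}
```

Extra capabilities on the datastore are harmless; only a missing required capability makes the VM non-compliant.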
Auto Deploy is a new method for provisioning ESXi hosts in vSphere 5.0. At a high level, the ESXi host boots over the network (using PXE/gPXE) and contacts the Auto Deploy Server, which loads ESXi into the host's memory. After loading the ESXi image, the Auto Deploy Server coordinates with vCenter Server to configure the host using Host Profiles and Answer Files (Answer Files are new in 5.0). Auto Deploy eliminates the need for a dedicated boot device, enables rapid deployment of many hosts, and also simplifies ESXi host management by eliminating the need to maintain a separate "boot image" for each host.
The agent is ~50 KB in size. The FDM agent is not tied to vpxd at all.
Customers are getting hit by core and physical memory restrictions: "How will I license vSphere when my CPUs are over 6 or 12 cores?" CPU core and physical entitlements are tied to a single server and cannot be shared among multiple ones, reducing flexibility and utilization. The rapid introduction of new hardware technologies requires constant amendments to the licensing model, creating uncertainty over planning: "What happens if I use SSD or hyperthreading, etc.?" Hardware-based entitlements make it difficult for customers to transition to the usage-based cost and chargeback models that characterize cloud computing and IT as a Service.
The FAST Suite improves performance and maximizes storage efficiency by deploying this FLASH 1st strategy. FAST Cache, an extendable cache of up to 2 TB, gives a real-time performance boost by ensuring the hottest data is served from the highest-performing Flash drives for as long as needed. FAST VP then complements FAST Cache by optimizing storage pools on a regular, scheduled basis. You define how and when data is tiered using policies that dynamically move the most active data to high-performance drives (e.g., Flash) and less active data to high-capacity drives, all in one-gigabyte increments for both block and file data. Together, they automatically optimize for the highest system performance and the lowest storage cost simultaneously.
The slide above shows how FAST Cache works. FAST Cache is based on the locality of reference of the dataset requested by a host. Systems with high locality of reference confine the majority of I/Os to a relatively small capacity, whereas systems with low locality of reference spread I/Os more evenly across the total capacity; this is also sometimes referred to as skew. A dataset with high locality of reference/skew (blocks close to one another tending to be accessed together) is a good candidate to be copied to FAST Cache. By promoting this dataset to FAST Cache, any subsequent access to this data for read-write is serviced faster from Flash drives. This reduces the workload on back-end disk drives. A write operation works in similar fashion: writes with high locality of reference are directed to Flash drives, and when the time comes to flush this data to disk, the flushing operation is significantly faster as writes are now at Flash drive speeds. This can have a big impact on heavy-write workloads that require a large system cache to be flushed to the underlying disks more frequently. The FAST Cache map is maintained in the DRAM cache and consumes DRAM space, so care should be taken in choosing which pools and RAID-group LUNs it is enabled for. EMC TS resources have tools, available to our direct and channel champions community, to analyze existing environments for the best candidates. FAST Cache operates at a 64 KB granularity for increased efficiency. If a 64 KB block is referenced 3 times in a given period of time (the period depends on the I/O activity of the system), the block is promoted into FAST Cache. As the data ages and becomes less active, it falls out of FAST Cache to be replaced by a more active chunk of data.
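The promotion rule in this note (a 64 KB block is promoted after 3 references) can be modelled minimally. The counters and data structures here are illustrative, not EMC's implementation, and eviction/flushing is omitted:

```python
# Toy model of FAST Cache promotion: a 64 KB block is promoted to Flash
# after 3 references within the tracking window (per the note above).

BLOCK = 64 * 1024
PROMOTE_AFTER = 3

class FastCache:
    def __init__(self):
        self.ref_counts = {}  # per-block reference counters
        self.cached = set()   # blocks currently promoted to Flash

    def access(self, lba: int) -> str:
        block = lba // BLOCK
        if block in self.cached:
            return "flash"  # hit: served from Flash drives
        self.ref_counts[block] = self.ref_counts.get(block, 0) + 1
        if self.ref_counts[block] >= PROMOTE_AFTER:
            self.cached.add(block)  # hot enough: promote the 64 KB block
        return "disk"       # this access still goes to spinning disk
```

A block's first three accesses are serviced from disk; from the fourth access onward it is served from Flash, which is the read-path speedup the note describes.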
The second feature provided in the FAST Suite, which is highly complementary to FAST Cache, is FAST for Virtual Pools.
The combination of FAST Cache and FAST VP addresses the perennial storage management problem: the cost of optimizing the storage system. Prior to FAST and FAST Cache, it was in many cases simply too resource-intensive to perform manual optimization, and many customers simply overprovisioned storage to ensure the performance requirements of a data set were met. With the arrival of Flash drives and the FAST Suite, we have a better way to achieve this fine cost/performance balance. The classic approach to storage provisioning can be repetitive and time-consuming and often produces uncertain results. It is not always obvious how to match capacity to the performance requirements of a workload's data, and even when a match is achieved, requirements change, and a storage system's provisioning may require constant adjustment. Storage tiering is one solution: it puts several different types of storage devices into an automatically managed storage pool, and LUNs use the storage capacity they need from the pool, on the devices with the performance they need. Fully Automated Storage Tiering for Virtual Pools (FAST VP) is the EMC VNX feature that allows a single LUN to leverage the advantages of Flash, SAS, and Near-line SAS drives through the use of pools. FAST solves these issues by providing automated sub-LUN-level tiering. FAST collects I/O activity statistics at the 1 GB granularity level (known as a slice). The relative activity level of each slice is used to determine which slices should be promoted to higher tiers of storage. Relocation is initiated at the user's discretion, through either manual initiation or an automated scheduler. Through the frequent relocation of 1 GB slices, FAST continuously adjusts to the dynamic nature of modern storage environments. This removes the need for manual, resource-intensive LUN migrations while still providing the performance levels required by the most active dataset, thereby optimizing for cost and performance simultaneously.
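The sub-LUN tiering described here (rank 1 GB slices by activity, fill the fastest tier first) can be sketched as follows. The function shape and tier capacities are assumptions for illustration, not EMC's relocation algorithm:

```python
# Illustrative sketch of a FAST VP relocation pass: rank 1 GB slices by
# I/O activity and fill tiers fastest-first. Inputs are assumed shapes.

def relocate(slice_activity: dict, tier_capacities: list) -> dict:
    """slice_activity: {slice_id: io_count}.
    tier_capacities: max slices per tier, fastest tier first.
    Returns {slice_id: tier_index} for the next relocation window."""
    placement = {}
    ranked = sorted(slice_activity, key=slice_activity.get, reverse=True)
    tier, used = 0, 0
    for s in ranked:
        # Spill to the next (slower) tier once the current one is full.
        while tier < len(tier_capacities) and used >= tier_capacities[tier]:
            tier, used = tier + 1, 0
        placement[s] = min(tier, len(tier_capacities) - 1)
        used += 1
    return placement
```

Re-running this on fresh activity statistics each scheduled window is what lets hot slices migrate up to Flash and cooled slices drift down to NL-SAS.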
The Culham is managed by Unisphere, and the base software includes file deduplication & compression, block compression, virtual provisioning and SAN Copy. Rather than ordering a number of "a la carte" products, we've simplified the optional software into five attractively priced suites. The FAST Suite improves performance and maximizes storage efficiency; it includes FAST VP, FAST Cache, Unisphere Analyzer, and Unisphere Quality of Service Manager. The Security and Compliance Suite helps ensure that data is protected from unwanted changes, deletions, and malicious activity; it includes the event enabler for anti-virus, quota management & auditing, file-level retention, and Host Encryption. The Local Protection Suite delivers any-point-in-time recovery with DVR-like roll-back capabilities, and copies of production data can also be used for development, testing, decision support and backup; this suite includes SnapView, SnapSure and RecoverPoint/SE CDP. The Remote Protection Suite delivers unified block and file replication, giving customers one way to protect everything better; it includes Replicator, MirrorView and RecoverPoint/SE CRR. The Application Protection Suite automates application-consistent copies and proves customers can recover to defined service levels; this suite includes Replication Manager and Data Protection Advisor for Replication. Finally, the Total Efficiency Pack and Total Protection Pack bundle the suites to further simplify ordering and lower costs.
The EMC VNX Series also had the lowest overall response time (ORT) of the systems tested, taking the top spot with a response time of 0.96 milliseconds. EMC's response time is 3 times faster than the IBM offering in second place. Faster response times enable end users to access information more quickly and efficiently. Chris Mellor, in The Register blog entry "EMC kills SPEC benchmark with all-flash VNX" (http://www.theregister.co.uk/2011/02/23/enc_vnx_secsfs2008_benchmark/), writes about IBM, HP and NetApp: "For all three companies, any ideas they previously had of having top-level SPECsfs2008 results using disk drives have been blown out of the water by this EMC result. It is a watershed benchmark moment. ®"