Note to Presenter: The presenter should review the following white papers before presenting this solution:

EMC Performance for Oracle—EMC VNX, Enterprise Flash Drives, FAST Cache, VMware vSphere
http://www.emc.com/collateral/hardware/white-papers/h8850-oracle-performance-vnx-fastcache-wp.pdf
This white paper describes the benefits of using EMC FAST Cache for Oracle OLTP databases in both physical and virtual environments. The Oracle RAC 11g database was configured to access EMC VNX7500 file storage over NFS, using the Oracle dNFS Client. VMware vSphere provided the virtualization platform for the virtual environment.

Deploying Oracle Database Applications on EMC VNX Unified Storage
http://www.emc.com/collateral/hardware/white-papers/h8242-deploying-oracle-vnx-wp.pdf
This white paper introduces how the EMC VNX unified storage platform can be used effectively to deploy enterprise Oracle Database applications. It also captures most of the Oracle performance testing done by EMC performance engineering and covers some best practices for deploying database applications.

EMC CLARiiON, Celerra Unified, and VNX FAST Cache
http://www.emc.com/collateral/software/white-papers/h8046-clariion-celerra-unified-fast-cache-wp.pdf
This white paper introduces the EMC FAST Cache technology in CLARiiON, Celerra unified, and VNX storage systems. It describes the implementation of the FAST Cache feature and provides details on using it with Unisphere Manager and the CLI. Usage guidelines and major customer benefits are also included.
One example of a customer that has virtualized its Oracle environments and realized tremendous cost savings is CLAL Insurance, one of the largest insurance providers in Israel.

“Before virtualizing, utilization rates were typically in the range of 15-20% for our Oracle database servers. Since deploying VMware vSphere® on the EMC Symmetrix VMAX storage platform we have almost doubled our server utilization rates through leveraging more Oracle instances on the same infrastructure, increasing our return on investment. In addition, migrating our Oracle database servers to the VMware platform and Linux has led to increased database performance and we can now leverage the functionality of VMware vSphere for faster server failover and high availability,” said Haim Inger, Chief Technology Officer, CLAL.
Note to Presenter: View in Slide Show mode for animation.

While FAST provides automated and efficient tiering over time, FAST Cache leverages enterprise Flash drives to extend existing cache capacities to automatically absorb unpredicted spikes in application workloads, thereby speeding system and application performance for data that is not already at the Flash tier.

Where FAST with Sub-LUN Tiering works at a very granular level of 1 GB chunks, FAST Cache takes this concept one step further by working at the 64 KB I/O level. By doing so, FAST Cache acts more like dynamic, but persistent, controller cache. By extending controller cache with Flash, the cache-hit ratio is dramatically improved. As a result, the new goal for most application workloads is to strive for a 90 to 95 percent cache-hit rate. This is achievable because the size of Flash-based cache is up to 64 times larger than the controller's original DRAM (dynamic random access memory) cache. Cache-hit rates will typically go from one out of five I/Os served from cache to nine out of 10 I/Os served from cache, a 4.5-times improvement.

FAST Cache may be added to existing LUN configurations and acts as a system-wide resource. With FAST Cache, you now have multi-terabyte, read-write, non-volatile cache—an absolute first for storage platforms in the midtier market. Because data is written to enterprise Flash drives, when the system returns from a power failure or planned outage, the cache is already warmed up and service levels can readily resume at the point they were before the disruption.

The size of FAST Cache is more than ample to catch transitory spikes in I/O demand. Should large amounts of Flash be needed to meet service-level agreements, it is important to know that FAST Cache works in unison with FAST Sub-LUN Tiering and that the two technologies fully complement each other.

Note to Presenter: EMC's FAST Cache works for both reads and writes.
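The cache-hit arithmetic above can be sanity-checked with a short sketch. The hit rates come from the slide; the latency figures are illustrative assumptions, not measured values:

```python
# Illustrative check of the cache-hit improvement described above.
# Latencies are assumed example figures, not measurements.

dram_hit_rate = 0.20   # ~1 in 5 I/Os served from DRAM cache alone
flash_hit_rate = 0.90  # ~9 in 10 I/Os served once FAST Cache is added

improvement = flash_hit_rate / dram_hit_rate
print(f"Cache-hit improvement: {improvement:.1f}x")  # 4.5x, as cited

# Effective latency with a larger cache (hypothetical latencies in ms)
cache_latency, hdd_latency = 0.5, 8.0
for rate in (dram_hit_rate, flash_hit_rate):
    eff = rate * cache_latency + (1 - rate) * hdd_latency
    print(f"hit rate {rate:.0%}: effective latency {eff:.2f} ms")
```

The second loop shows why the hit rate matters: the small fraction of misses that still go to HDD dominates the average latency.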
Competitors such as NetApp frequently implement Flash only in proprietary read-only caching schemes.
Note to Presenter: View in Slide Show mode for animation.

Over the past two to three years, EMC has had 30 PhDs in its engineering department work with hundreds of customers and has analyzed thousands of applications' access patterns. EMC has collected over 2 PB of information and analyzed more than 80 billion I/O transactions. As a result, EMC discovered two 80/20 rules. The first 80/20 rule is that 20 percent of the volumes in an infrastructure are hot, and the other 80 percent are not. The second 80/20 rule is that when you look inside a volume, 20 percent of the datasets inside the volume are hot, and the other 80 percent are not.

So if you do the math and multiply 20 percent by 20 percent, you find that 4 percent of the data in your information infrastructure is active, and the other 96 percent is not. This means that an important design aspect in the building of FAST technology is that smaller is better. In this case, the smaller the granularity of data movement, the better the overall solution is for performance and cost.
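The two 80/20 rules compound multiplicatively, and the 4 percent figure falls out directly:

```python
# The two 80/20 rules described above compound multiplicatively.
hot_volume_fraction = 0.20   # fraction of volumes that are hot
hot_data_fraction = 0.20     # fraction of data that is hot within a volume

active_fraction = hot_volume_fraction * hot_data_fraction
print(f"Active data: {active_fraction:.0%} of the infrastructure")  # 4%
```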
Let’s walk through, at a very high level, how EMC implemented FAST.

EMC’s FAST technology is architected around three engines. The first is a statistics engine, which gathers statistics about how your applications are accessing information.

Note to Presenter: Click now in Slide Show mode for animation.

The second engine is the analytics engine. The statistics are fed into the analytics engine, which analyzes which datasets are hot and which are cold, and how that rate of change is occurring; that is, how hot or how cold they are getting.

Note to Presenter: Click now in Slide Show mode for animation.

The third engine is the movement engine. Recommendations from the analytics engine are fed into the data movement engine, which acts on them: it promotes datasets to better-performing storage, such as enterprise Flash technology, and demotes datasets to more cost-efficient, energy-efficient, and space-efficient storage such as SATA.

Note to Presenter: Click now in Slide Show mode for animation.

Then the cycle repeats, continuing to optimize your information even while you are at your desk checking email, sleeping, or on vacation.
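The statistics-analytics-movement cycle can be sketched as a simple control loop. This is a conceptual illustration only; the class name, tier names, and the hot-data threshold are invented for this sketch and do not reflect EMC's actual implementation:

```python
# Conceptual sketch of the FAST statistics -> analytics -> movement
# cycle. All names and thresholds here are hypothetical illustrations.
from collections import defaultdict

class FastSketch:
    def __init__(self):
        self.io_counts = defaultdict(int)  # statistics engine state
        self.placement = {}                # dataset -> current tier

    def record_io(self, dataset):
        """Statistics engine: gather access counts per dataset."""
        self.io_counts[dataset] += 1

    def analyze(self, hot_threshold=100):
        """Analytics engine: classify datasets as hot or cold."""
        return {ds: ("hot" if n >= hot_threshold else "cold")
                for ds, n in self.io_counts.items()}

    def move(self, temperatures):
        """Movement engine: promote hot data, demote cold data."""
        for ds, temp in temperatures.items():
            self.placement[ds] = "FLASH" if temp == "hot" else "SATA"
        self.io_counts.clear()  # start the next statistics window

sketch = FastSketch()
for _ in range(150):
    sketch.record_io("orders_index")  # heavily accessed dataset
sketch.record_io("archive_2009")      # rarely accessed dataset
sketch.move(sketch.analyze())
print(sketch.placement)  # orders_index -> FLASH, archive_2009 -> SATA
```

Clearing the counters after each movement pass is what makes the cycle repeat, mirroring the continuous optimization described above.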
Note to Presenter: View in Slide Show mode for animation.

Enginuity 5875 builds on the original EMC Fully Automated Storage Tiering (FAST) technology introduced in 2009. By employing user-defined policies, FAST is able to automatically move hot and cold data to the appropriate storage tiers, thereby taking advantage of enterprise Flash drives for heavily utilized LUNs and low-cost SATA drives for less frequently utilized LUNs. By automatically recognizing hot and cold data activity—since some LUNs are more active than others—and taking appropriate action, FAST is able to get the right data to the right place at the right time.
This diagram shows three methods of storage tuning/tiering:

Method 1 is the manual method, which was not used in this use case.
Method 2 is deploying FAST VP (that is, automated tiering for virtual pools), which was not used in this use case.
Method 3 is the FAST Cache method, which was used in this use case.

As can be seen, the manual method takes over 9 hours and has to be repeated manually, whereas the FAST Cache method requires only a one-time analysis of the application workload, after which everything is done automatically and continuously.

To summarize: Manual tiering involves a repeated process that takes 9 hours or more to complete each time. In contrast, both FAST VP and FAST Cache operate automatically, eliminating the need to manually identify and move or cache hot data. As shown in the figure, configuring FAST Cache is a one-off process taking 50 minutes or less, and hot and cold data is then cached in and out of FAST Cache continuously and automatically.
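Using the per-cycle figures quoted on this slide (roughly 9 hours per manual tuning pass versus a one-time 50-minute FAST Cache configuration), the cumulative administrator time can be compared in a few lines:

```python
# Cumulative administrator time over repeated tuning cycles, using the
# per-cycle figures quoted on this slide (illustrative comparison only).
manual_hours_per_cycle = 9
fast_cache_setup_hours = 50 / 60  # one-off ~50-minute configuration

for cycles in (1, 4, 12):
    manual_total = manual_hours_per_cycle * cycles
    print(f"{cycles:>2} tuning cycles: manual ~{manual_total} h, "
          f"FAST Cache ~{fast_cache_setup_hours:.1f} h (one-time)")
```

The gap widens with every tuning cycle, since the FAST Cache setup cost is paid only once.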
This bar chart shows the impact of FAST Cache on IOPS in both the physical and virtual RAC deployments. For both deployments, it shows the IOPS without and with FAST Cache enabled.

The figure shows the increase in average IOPS for the data file systems. In both the physical and virtual environments, as more and more hot data was cached in by FAST Cache, a 170% improvement in IOPS was observed.
Summary

The testing discussed in this presentation demonstrates that when FAST Cache is introduced into the physical and virtual Oracle RAC OLTP environments, it reduces I/O accesses to the HDDs and directs them to the Flash drives, which dramatically increases OLTP throughput while maintaining very low response times. The overall application performance improves significantly as a result.

FAST Cache technology creates a faster medium, on Flash drives, for accessing frequently used data at lower latencies. Hot data is cached in and cold data flushed out of FAST Cache automatically and transparently, depending on data usage patterns. This eliminates the need for administrators to manually classify hot and cold data.

The solution benefits are the following:

Performance
By creating a FAST Cache with just four Flash drives, transactions-per-minute performance improved by over 100% in both the physical and virtual environments. Enabling FAST Cache improved the average response time by 84% in the physical environment and by 79% in the virtual environment. Using FAST Cache as a secondary cache delivered a 170% improvement in IOPS. FAST Cache serviced approximately 95% of the read and write IOPS in both the physical and virtual environments. A FAST Cache miss can still be a cache hit if the data is in the storage processor (SP) cache.

Cost Savings
Another important FAST Cache benefit is improved TCO. Using FAST Cache reduces the I/O to the back-end HDDs. This means that an existing set of HDDs can deliver the performance typically provided by a faster drive configuration, such as more HDDs or a different RAID type.
In fact, over a period of time, the number of faster SAS drives may be reduced, or they may be replaced with slower NL-SAS drives, while maintaining the same application performance.

Ease of Use
FAST Cache is configured with a few simple steps and can be enabled or disabled for individual LUNs with a single click.

Efficiency
Data is cached in and out of FAST Cache automatically and non-disruptively.

Nondisruptive
Live migration of the Oracle RAC 11g R2 database from a physical to a virtual environment was achieved without loss of service.
And here’s an example of what that means to customers. Compared to single-tier systems, FAST VP delivers up to 40% more application performance at a 40% lower cost while requiring 87% fewer disks, 65% less footprint, and 75% less power. In addition to the cost savings, FAST greatly simplifies management by allowing customers to create tiering policies and letting the VMAX optimize data placement across storage tiers with no additional storage administration.

Not only does smart storage make life easier for IT organizations, it actually makes life better for everyone by being more “green.” Power, cooling, and footprint continue to be major data center concerns. The energy efficiencies of VMAX, combined with features like FAST, will enable customers to reduce power consumption in 2011 by over 270 million kilowatt-hours – enough to power 24,400 homes.
Historically, DSS workloads have not been a sweet spot for the CX4, except possibly for the CX4-960. The VNX changes this position. This solution does not leverage Flash drives or the FAST suite, as large sequential workloads do not lend themselves to this technology. The huge improvements in total throughput (particularly in the lower-end platforms) can drive up to 4.5x the bandwidth: the CX4-120 can achieve around 750 MB/s, while the VNX5300 can achieve around 3,500 MB/s! The cost of the comparable configuration in this case (block-only VNX5300) is 84% higher than the CX4; however, to achieve the throughput provided by the VNX5300 with a CX4 would require a CX4-960, which would be considerably more expensive than the VNX5300. Main takeaway: VNX is a GREAT solution for DSS workloads.
VFCache puts Flash in the server as a cache to dramatically improve application performance. It is a hardware and software solution that leverages PCIe Flash technology and intelligent caching software to reduce latency and increase throughput. VFCache works in conjunction with the back-end storage array to provide two significant benefits. First, as an extension of EMC FAST array-based technology, it facilitates an intelligent end-to-end data tiering strategy from the storage to the server. Second, it provides this performance and intelligence with protection. EMC has been the leader in Flash since it introduced solid state drives in 2008. Now, with VFCache, that lead has been extended yet again.
Next, let’s look at event-based backup, a capability of NetWorker and the NetWorker Module for Databases and Applications. Event-based backups use a NetWorker feature known as Probes to trigger backups based on an external event rather than a scheduled time. Because data can be quite dynamic, the best time to back up may not always be on a scheduled day and time. Some condition on the application server may be a better indicator of when a backup is needed: for instance, when a disk is nearly full, a business condition is met, or a log count reaches a threshold.
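Conceptually, a probe is a check that reports whether a backup should run now. The sketch below is a hypothetical probe written in Python; the check functions, thresholds, and the exit-code convention shown (0 to trigger a backup, non-zero to skip) are assumptions made for this illustration, so consult the NetWorker documentation for the actual probe interface:

```python
# Hypothetical event-based probe: decide whether a backup should run
# based on server conditions rather than a fixed schedule. Thresholds
# and the 0 = "backup needed" convention are illustrative assumptions.
import shutil

def disk_nearly_full(path="/", threshold=0.90):
    """True when the filesystem holding `path` is above the threshold."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total >= threshold

def too_many_logs(log_count, max_logs=50):
    """True when unarchived log files have accumulated past a limit."""
    return log_count >= max_logs

def probe(log_count):
    """Return 0 when any triggering condition is met, else 1."""
    return 0 if (disk_nearly_full() or too_many_logs(log_count)) else 1

# A real probe script would exit with this code so the backup server
# can decide whether to start the backup; 60 logs exceed the limit here.
print("probe exit code:", probe(log_count=60))
```

Combining several conditions with a logical OR, as above, means any one event is enough to trigger the backup.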
RecoverPoint is an advanced enterprise-class disaster recovery solution designed with the performance, reliability, and flexibility required for enterprise applications in heterogeneous storage and server environments. It provides bi-directional local and remote data replication, with no distance limits and minimal performance degradation.

RecoverPoint data protection options
RecoverPoint provides the following replication options for both physical and VMware virtualized environments:

Continuous remote replication (CRR): CRR supports synchronous and asynchronous replication between remote sites over Fibre Channel (FC) or a wide area network (WAN). Synchronous replication is supported when the remote sites are connected through FC and provides an RPO of zero. Asynchronous replication provides crash-consistent protection and recovery to specific points in time, with a limited-data-loss RPO.

Continuous data protection (CDP): CDP continuously captures and stores data modifications locally, enabling local recovery from any point in time, with no data loss.

Continuous local and remote (CLR) data protection: CLR is a combination of CRR and CDP and provides concurrent local and remote data protection.
So far we’ve only talked about how to fail over using SRM with RecoverPoint. Let’s look at other recovery scenarios.

Failover is usually a last resort. Many situations do not merit a full site failover and must be dealt with using other recovery methods, for example, logical corruption. Failing over in the event of logical corruption doesn’t make much sense. RecoverPoint is constantly recording consistent bookmarks in its journals that can be used at any time to roll back your application or database to a previous point in time. To take advantage of these bookmarks in an environment controlled by SRM, you must first put the consistency group into maintenance mode.

Prior to testing recovery, we took a bookmark and named it Pre-Corruption. This was for ease of use only; any image prior to the corruption event could be selected.

To test recovery of the Oracle Database using RecoverPoint images, we introduced some corruption into our database environment: we went into the database and deleted all of the data files. This meant that the database could no longer be started and was in effect corrupt.

Once the database was corrupt, recovery was necessary to be able to start it. On the local site, the production VM was shut down. From the RecoverPoint GUI, on either the local or remote site, select Enable Image Access from the menu. From here, a list of possible images is presented. The administrator can select an image and recover to production.
Once the data has been rolled back to the selected image on the production site, the Enable Image Access icon appears beside the production copy. The administrator must select an image to present to the production host following the synchronization of data to production. It’s worth noting that, depending on how much data is being rolled back, the length of time this takes can vary.

The administrator can then select Resume Production and start the database to verify data integrity.
Please go to emc.com and download the accompanying white paper:

EMC Performance for Oracle - EMC VNX, Enterprise Flash Drives, FAST Cache, VMware vSphere
http://www.emc.com/collateral/hardware/white-papers/h8850-oracle-performance-vnx-fastcache-wp.pdf

This white paper describes the benefits of using EMC FAST Cache for Oracle OLTP databases in both physical and virtual environments. The Oracle RAC 11g database was configured to access EMC VNX7500 file storage over NFS, using the Oracle dNFS Client. VMware vSphere provided the virtualization platform for the virtual environment.

Thank you.