20. The Price of Underutilized Servers (chart: actual software cost per CPU plotted against average CPU utilization rate, based on a $20K-per-processor purchase price; the gap between purchase price and effective cost is the underutilized premium)
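The chart's point can be sketched with simple arithmetic: the effective cost per utilized CPU is the purchase price divided by the average utilization rate. The $20K-per-processor price is from the slide; the utilization rates below are illustrative assumptions, not figures from the presentation.

```python
# Effective cost per utilized CPU: purchase price divided by utilization.
# $20K per processor is from the slide; the utilization rates are illustrative.
PRICE_PER_CPU = 20_000  # dollars

def effective_cost_per_cpu(utilization: float) -> float:
    """Cost of the CPU capacity actually used, given average utilization in (0, 1]."""
    return PRICE_PER_CPU / utilization

for util in (0.10, 0.25, 0.50, 1.00):
    cost = effective_cost_per_cpu(util)
    premium = cost - PRICE_PER_CPU
    print(f"{util:>4.0%} utilization -> ${cost:>9,.0f} effective cost "
          f"(underutilized premium ${premium:,.0f})")
```

At 10% average utilization the enterprise is effectively paying ten times the sticker price for the compute it actually uses, which is the "underutilized premium" the slide refers to.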
21. Grid Automated Quality of Service (diagram: Web, J2EE, and DB tiers with Sales, Search, and BI resource pools ranked from most important to least important across EMEA, NA, and APAC; CPU and storage resources are allocated against response time objectives)
22. The Price of Underutilized Storage (chart: effective cost of 48 TB of raw storage purchased at $5/GB as the storage utilization rate varies)
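The storage chart follows the same logic as the server chart: the effective price per gigabyte actually used is the raw price divided by the utilization rate. The 48 TB capacity and $5/GB price come from the slide; the utilization rates are illustrative assumptions.

```python
# Effective cost per GB of *used* storage. The 48 TB raw capacity and $5/GB
# price come from the slide; the utilization rates below are illustrative.
RAW_GB = 48 * 1024          # 48 TB expressed in GB
PRICE_PER_GB = 5.0          # dollars
TOTAL_COST = RAW_GB * PRICE_PER_GB

def cost_per_used_gb(utilization: float) -> float:
    """Dollars per GB of data actually stored, given utilization in (0, 1]."""
    return PRICE_PER_GB / utilization

for util in (0.25, 0.50, 0.75):
    print(f"{util:.0%} utilized: ${cost_per_used_gb(util):.2f}/GB effective "
          f"(total spend ${TOTAL_COST:,.0f})")
```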
36. Oracle Maximum Availability Architecture: eliminate the cost of planned downtime with online add/remove of storage, nodes, and CPUs; online table redefinition and reorganization; production testing and reporting; undo of human error; online upgrades; and online patching.
47. Oracle Database Strategy: Oracle Database 11g. Charlie Garry, Director, Product Management, Oracle Server Technologies.
Editor's Notes
Reduce capital costs by a factor of 5x. Reduce storage costs by a factor of 4x. Improve performance by at least 10x. Eliminate redundancy. And much more.
This simplified schematic is still representative of most enterprise data centers today. It is characterized by multiple networks, multiple storage technologies and hardware platforms, and multiple operating systems and software technologies in a silo environment. For large enterprises this picture is duplicated across multiple data centers; it isn't unusual for large enterprises to have thousands of servers, and even medium-sized enterprises to have hundreds. Just keeping all these individual machines running is a mammoth task, and it is made far more complex when multiple versions of operating systems are taken into account.

Infrastructure:
- Complexity arises from heterogeneous infrastructure plus multiple operating systems and software stacks, which greatly complicates configuration management.
- Infrastructure is inefficient because it is sized for peak load, with redundancy and provision for failover. The single-shot budget process means IT buys for the worst-case scenario based on estimated growth over a 3-5 year period. Typically each server runs only one or two applications.
- It requires multiple skill sets to manage and maintain.

Apps:
- Monolithic, a consequence of best-of-breed policies, which also contributes to infrastructure heterogeneity.
- Customization and integration drive up complexity and cost of ownership: everything must be retested when a change occurs.

Consequences:
- Cost and effort are consumed in maintaining stability.
- The environment is inefficient and slow to react to changing needs in an increasingly unpredictable business world.

The silo approach created an isolated, static, expensive monolithic SMP and storage environment. While it gives application owners greater control, it consumes financial, human, and environmental resources wastefully compared to the alternative that Oracle's Grid Architecture offers.
For large enterprises this picture is duplicated across multiple data centers, and it isn't unusual for large enterprises to have thousands of heterogeneous servers and for medium-sized enterprises to have hundreds. Just keeping all these individual machines running is a mammoth task, made far more complex when multiple versions of different operating systems are taken into account.

The siloed infrastructure is inefficient because it is sized for peak load, with redundancy and provisioning of additional, often idle, resources for failover. The enterprise ends up provisioning hardware and software for the worst-case scenario based on estimated workload growth over a 3-5 year period: an inefficient use of capital that in the early years generates little or no return. These heterogeneous environments also create problems when it's time to upgrade or patch system components; invariably, multiple long outages are required to maintain them.

Also, with the number of applications and organizations that comprise today's businesses, it is hard to define, prioritize, measure, and ultimately maintain service level agreements. As the overall systems become stressed, which applications or users should give way to higher-priority workloads? Does this happen in a predictable, disciplined way, or is every occasion a fire drill? Oftentimes these organizations are caught by surprise because a slowly growing, unnoticed change in workload volume all of a sudden emerges as a significant performance problem.
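The capital-efficiency point above can be illustrated with a toy calculation: if capacity is purchased on day one for the projected year-5 peak, but the workload grows linearly toward that peak, much of the capacity sits idle in the early years. All numbers here are illustrative assumptions, not figures from the presentation.

```python
# Sketch of why sizing for the worst case up front wastes capital: capacity is
# bought for the year-5 peak while the workload ramps up linearly, so average
# utilization over the period is well below 100%. Numbers are illustrative.

def average_utilization(initial: float, peak: float, years: int = 5) -> float:
    """Average utilization of peak-sized capacity under linear workload growth."""
    # Workload level in each year, growing linearly from `initial` to `peak`.
    loads = [initial + (peak - initial) * t / (years - 1) for t in range(years)]
    return sum(loads) / (years * peak)

# Workload starts at 20% of the eventual peak and reaches it in year 5.
print(f"Average utilization over 5 years: {average_utilization(0.2, 1.0):.0%}")
```

Under these assumptions only about 60% of the purchased capacity is used on average over the period, which is the "little or no return in the early years" problem the notes describe.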
On the right is a depiction of the Grid Computing infrastructure. Grid Computing is a technology architecture that virtualizes and pools IT resources, such as compute power, storage, and network capacity, into a set of shared services that can be distributed and redistributed as needed. It is applicable for database, system, and storage administrators who seek a high-performance, scalable, manageable systems infrastructure that offers industry-leading cost savings.
Slide Goal: To provide a virtual demonstration of the product in action. SLIDE IS ANIMATED.

Modern application performance is made up of several interlocking pieces that span the technology stack. Much effort has been focused on delivering and deploying an application; however, that is not ultimately what an end user sees. The end-user experience is defined by the runtime performance of an application. While many tools allow for monitoring an application's runtime performance, monitoring alone is not enough. What is required is active runtime quality-of-service management that can both identify bottlenecks and adjust resources to ensure the most important applications maintain their required service levels across ever-changing demand.

Here we have an RTI data center with 3-tier and 2-tier systems operating within their response time objectives. We have three pools in each of the top three tiers and a common storage pool, for a total of 10 managed pools.
1. <CLICK> Demand for the EMEA Sales application rises and the SLO is violated.
2. <CLICK> The QoS system compensates by adjusting a resource, such as CPU shares, while still meeting objectives.
3. <CLICK> Suddenly our most important DB server pool goes red for all Sales apps.
4. <CLICK> Resources, such as a server, are reallocated from our least important DB server group to restore performance.

We are instrumenting the entire Oracle stack to enable true QoS management, ultimately allowing you to run your applications on "cruise control".
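The click sequence above can be modeled as a toy priority-based rebalancer: when a pool misses its response time objective, a server is moved from the least important pool that can spare one. The pool names and the `Pool`/`rebalance` structures are hypothetical illustrations of the idea, not Oracle APIs.

```python
# Toy model of the QoS reallocation shown in the animation: a violating pool
# receives a server taken from the least important pool with spare capacity.
# Pool names and this data model are hypothetical, not part of any Oracle API.
from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    importance: int        # lower number = more important
    servers: int
    response_ms: float     # current response time
    objective_ms: float    # response time objective (SLO)

def rebalance(pools: list[Pool]) -> None:
    """Move one server to each violating pool, taken from the least important donor."""
    for pool in sorted(pools, key=lambda p: p.importance):
        if pool.response_ms > pool.objective_ms:
            donors = [p for p in pools if p is not pool and p.servers > 1]
            if not donors:
                continue
            donor = max(donors, key=lambda p: p.importance)  # least important
            donor.servers -= 1
            pool.servers += 1
            print(f"moved 1 server: {donor.name} -> {pool.name}")

pools = [
    Pool("Sales DB",  importance=1, servers=4, response_ms=250, objective_ms=200),
    Pool("Search DB", importance=2, servers=4, response_ms=90,  objective_ms=150),
    Pool("BI DB",     importance=3, servers=4, response_ms=500, objective_ms=900),
]
rebalance(pools)  # Sales DB is violating its SLO; BI DB is the least important donor
```

This mirrors click 4 in the animation: the most important violating pool (Sales) gains a server at the expense of the least important one (BI), rather than every incident being handled as a fire drill.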
Intelligent Data Placement:
- Leverage disk performance regions on disk drives; there is roughly a 50% performance difference from the outer to the inner tracks.
- Mark an ASM file as HOT or COLD: ALTER DISKGROUP dgname MODIFY FILE 'xxx' ATTRIBUTES (HOT|COLD), or set it via a template at creation time.
- Rebalance to migrate the file to the HOT or COLD ODP region; ODP regions are dynamic.
- New V$ASMFILE statistics record I/O by region.
- The ODP feature is better leveraged when ASM disks are whole disks.

ACFS snapshots:
- Dynamic, fast, space-efficient, point-in-time copies of ASM file system files that capture ASM FS file block/extent updates.
- An enabler for online backups: an online, disk-based file backup model using snapshots and individual file recoveries.
- Up to 64 snapshot images per ASM file system.
- Policy-based snapshots: schedule snapshots on an interval basis (every 5 seconds, every 30 minutes, daily, ...) with recycling, using EM.
- ACFS CLIs support creation and removal of snapshots; ACFS snapshot functions are integrated with EM.
Compression ratios are based on the Hybrid Columnar Compression "Query Default" and "Archive High" settings.
Automated:
Reduce capital costs by a factor of 5x. Reduce storage costs by a factor of 4x. Improve performance by at least 10x. Eliminate redundancy. And much more.