EXADATA CONSOLIDATION SUCCESS STORY
Karl Arao, Enkitec
ABSTRACT
In today’s competitive business climate, companies are under constant pressure to reduce costs without sacrificing quality. Many companies see database and server consolidation as the key to meeting this goal. Since its introduction, Exadata has become the obvious choice for database and server consolidation projects; it is the next step in the evolutionary process. But managing highly consolidated environments is difficult, especially for mixed workload environments, and if not done properly the quality of service suffers. In this paper we will touch on how to do accurate provisioning and capacity planning, and the tools you’ll need to ensure that your consolidation story has a happy ending.
TARGET AUDIENCE
Target audiences are DBAs, Architects, Performance Engineers, and Capacity Planners.
Learners will be able to:
• Describe the provisioning process and implementation challenges
• Apply the tools and methodology to successfully migrate to an Exadata platform
• Develop a resource management model (CPU instance caging & IORM), detailed in the presentation
BACKGROUND
The whole consolidation workflow relies on the basic capacity planning formula:
Utilization = Requirements / Capacity
Capacity planning plays a very important role in ensuring that the proper resources are available to handle both expected and unexpected workloads. Exadata is not really a different animal when it comes to provisioning; although it has intelligent storage, the database servers still have limited capacity. The primary principle is to ensure that the application workload requirements will fit into the available capacity of the database servers.
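As a minimal illustration (a sketch in Python, not part of the worksheet itself), the formula is simply a ratio, usually expressed as a percentage:

    # Utilization = Requirements / Capacity, expressed as a percentage.
    def utilization_pct(requirements, capacity):
        return 100.0 * requirements / capacity

    # Example using numbers from the scenario below: 28 CPUs of requirements
    # against a 96-CPU cluster.
    print(round(utilization_pct(28, 96), 1))   # 29.2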
A SIMPLE CONSOLIDATION SCENARIO
Let’s say we have a half rack Exadata. That would consist of four (4) database servers and seven (7) storage cells. Each database server has a total CPU capacity of 24 Logical CPUs (cat /proc/cpuinfo); multiply that by the number of nodes (4) and you get the total CPU capacity for the whole cluster (96 CPUs).
When we migrate and consolidate databases on Exadata, each database has a CPU requirement. For now, treat the “CPU requirement” as the number of CPUs it needs to run on Exadata. We would like to have a good balance between the CPU requirements of the databases and the available CPU capacity across the database nodes, while also making sure that we don’t max out the CPU capacity on any of the nodes.
Let’s say we have the following databases to migrate to Exadata:
In the diagram above, each database (A to G) has a requirement of four (4) CPUs and will run on two nodes. The first row is read as “Database A has a CPU requirement of 4 and will run on nodes 1 and 2”. Each database is essentially a two node RAC spread out across the four database servers. In a RAC environment the users are load balanced between the available RAC instances, so if a database running on a two node RAC has a CPU requirement of 4 and the load is equally distributed across the nodes, then each instance gets a 50% share of the CPU requirement.
At the bottom is the grand total of the CPU requirement for all the databases, which is 28 CPUs. We can then say that out of 96 total CPUs across four nodes, we are only using 29%. That is correct, but we also want to know how each server is utilized (depicted by the red circles across the same node numbers), because we may have one node that is 80% utilized while the rest of the nodes are in the 10% range, and an equal balance of utilization is critical to capacity planning.
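To make the per-node accounting concrete, here is a small Python sketch of the same math (the worksheet does this with spreadsheet formulas). Only Database A’s node assignment is stated above; the other assignments are made up for illustration:

    # Each database: (CPU requirement, nodes its two RAC instances run on).
    databases = {
        "A": (4, [1, 2]),   # from the first row of the diagram
        "B": (4, [3, 4]),   # hypothetical assignments from here on
        "C": (4, [1, 2]),
        "D": (4, [3, 4]),
        "E": (4, [1, 2]),
        "F": (4, [3, 4]),
        "G": (4, [1, 3]),
    }
    cpus_per_node = 24
    nodes = [1, 2, 3, 4]

    # Spread each database's requirement evenly across its assigned nodes.
    per_node = {n: 0.0 for n in nodes}
    for cpu_req, assigned in databases.values():
        for n in assigned:
            per_node[n] += cpu_req / len(assigned)

    for n in nodes:
        print(f"node{n}: {per_node[n]:.1f} CPUs "
              f"({100 * per_node[n] / cpus_per_node:.1f}%)")

    total = sum(per_node.values())
    print(f"cluster: {total:.0f} of {cpus_per_node * len(nodes)} CPUs "
          f"({100 * total / (cpus_per_node * len(nodes)):.1f}%)")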
Here’s another view of the node layout where we distribute the CPUs of the instances based on their node assignments.
Each block on the right side is one CPU. That is a total of 24 Logical CPUs, which corresponds to the number of CPU threads on the server and is what the CPU_COUNT database parameter and /proc/cpuinfo report. We also set or take the number of CPUs from CPU_COUNT whenever we do instance caging, and we use this metric to stay consistent with the monitoring in OEM and AWR.
From the image above, the cluster level utilization is 29.2%, while each compute node is still below the 70% threshold, which is the ideal utilization we want for the compute nodes.
Now what we don’t want to happen is to end up with unbalanced utilization if we change the node layout and assign more instances to node 2 while still using the same CPU requirement for each database.
On the cluster level it will be the same utilization, but on the per compute node level we end up with node2 at 83% utilization while the rest are pretty much idle.
THE PROVISIONING WORKSHEET
Enkitec has developed a tool called the Provisioning Worksheet that is mainly used for sizing and consolidation of databases. The sections below describe the overall workflow of the tool.
General Tour and Workflow
The Provisioning Worksheet has four main tabs that also represent the workflow of the whole provisioning project.
Below you’ll see the mind map and the overview of the 4-step process:
• Data gathering
o The “System & DB Detail” tab on the worksheet. It’s the section that we send to the customers to fill out.
The sheet will contain all the important information about the databases that will be migrated to the destination (target) server.
• Define the target server
o The “Exadata Capacity” tab on the worksheet. The default capacity is a half rack Exadata, but depending on the customer’s capacity requirements the default values may not be enough, so a review of the end resource utilization is a must. At the end of the provisioning process there is a section where the minimum “Recommended Hardware” capacity is shown and should be matched.
• Create a provisioning plan
o The “Exadata Layout” tab on the worksheet. This is where we input the final list of databases that will be migrated to the target server. The instance mapping, where we spread out the instances if they’ll be running as two or four node RAC, and the failure scenarios are also done here.
• Review resource utilization
o The “Summary & Graphs” tab on the worksheet. Once everything is finalized, this section gives you the visualization and summary report of the end utilization of the target server. A red highlight on the utilization number for any of the resources means there is not enough capacity and the “Exadata Capacity” section should be revisited to add more resources.
Data gathering
The first step in the provisioning process is getting all the capacity requirements. This section of the worksheet is extracted and saved in a separate Excel file, then sent to the customers for them to fill out. Once the customer sends it back, we can start grouping the servers according to platform or workload type and also get started with the migration planning with the customer. Ultimately this sheet serves as a scratchpad and you are free to add more columns to help with the categorization of the environments that will be migrated.
Below is a sample output:
The data gathering sheet is divided into four parts:

Host details
• DB Name
• DB Version
• App Type/Front End
• Workload Type (OLTP/DW/Mix)
• Node count (single instance/RAC)
• Hostname
• Server Make & Model
• OS

CPU
• CPU Type
• CPU Speed
• # of Logical CPUs
• CPU Utilization Avg/Peak

Memory
• Physical Memory (GB)
• SGA Size (GB)
• PGA Size (GB)

Storage
• Storage Make & Model
• Disk RAID Level
• # of IO Channels
• Database Size (GB)
• Backup Size (GB)
• Peak R + W IOPS
• Peak R IOPS
• Peak W IOPS
• Peak R + W MB/s
• Peak R MB/s
• Peak W MB/s
• Peak R/W Ratio
The gathered raw data points will then be fed into the provisioning plan and accounted against the available capacity.
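If you prefer to script the sizing math instead of keeping everything in Excel, the same data points map naturally to a small structure. The field names below simply mirror the sheet columns, and the values are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class SourceDatabase:
        """One row of the data gathering sheet (a subset of the columns)."""
        db_name: str
        workload_type: str        # OLTP / DW / Mix
        hostname: str
        logical_cpus: int         # "# of Logical CPUs" on the source host
        cpu_util_peak_pct: float  # "CPU Utilization Avg/Peak" (peak)
        sga_gb: float
        pga_gb: float
        db_size_gb: float
        peak_rw_iops: float
        peak_rw_mbps: float

    # Hypothetical example entry.
    erp = SourceDatabase("erp1", "OLTP", "srv01", 16, 40.0,
                         24.0, 8.0, 500.0, 3000.0, 250.0)
    # Peak CPUs actually busy on the source host.
    print(erp.logical_cpus * erp.cpu_util_peak_pct / 100)   # 6.4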
Define the target server
This is the section of the provisioning worksheet where we input the capacity of the Exadata that the customer currently has, or the hardware that they have in mind. What we input here is just the initial hardware capacity against which the requirements will be accounted; at the end of the provisioning process we can come back here and add more resources if there are any exceptions (red highlights) on the utilization summary report. (A small code sketch of these inputs follows the list below.)
• Node Count
o For the node count you put 2, 4, or 8, which is equivalent to a quarter, half, or full rack of Exadata
• Exadata Speed (SPEC)
o Get the SPECint_rate equivalent of the Exadata processor so we can compare the speed of the Exadata CPU against the source servers
o See the section CPU -> The “Speed SPEC” or the “SPECint_rate2006/core” in Appendix B for more details on how to get the value for a particular server.
• Exadata CPUs/node
o The number of Logical CPUs which is equivalent to the CPU_COUNT parameter or /proc/cpuinfo
• Exadata Mem/node (G)
o Each node has 96GB of memory on an Exadata
• Exadata Storage (G)
o Disk space is dependent on ASM redundancy and DATA/RECO allocation. Input the raw GB space
capacity.
• Backup Factor
o The backup factor is usually set to 1, which means we want to have space for at least 1 full backup of the database
• Table Compression Factor
o The table compression factor lets us gain more disk space as we compress the big tables. I usually set this to
zero for a conservative sizing.
• Offload Factor
o The offload factor is the amount of CPU resources that will be offloaded to the storage cells. I usually set this to zero for a conservative sizing.
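A rough sketch of how these capacity inputs could be held and applied in code; the way the backup, compression, and offload factors are applied below is my reading of the descriptions above, not the worksheet’s exact formulas, and all numbers are hypothetical:

    # "Exadata Capacity" inputs, mirroring the bullets above.
    capacity = {
        "node_count": 4,                 # half rack
        "exadata_spec_per_core": 36.0,   # hypothetical SPECint_rate2006/core
        "cpus_per_node": 24,
        "mem_per_node_gb": 96,
        "raw_storage_gb": 100_000,       # hypothetical raw capacity
        "backup_factor": 1,              # keep space for 1 full backup
        "compression_factor": 0,         # conservative: no compression credit
        "offload_factor": 0,             # conservative: no CPU offload credit
    }

    # One plausible way the factors feed the sizing math:
    db_size_gb = 20_000
    space_needed_gb = (db_size_gb
                       * (1 + capacity["backup_factor"])
                       * (1 - capacity["compression_factor"]))
    print(space_needed_gb)        # 40000: data plus one full backup

    cpu_requirement = 30.0
    cpu_after_offload = cpu_requirement * (1 - capacity["offload_factor"])
    print(cpu_after_offload)      # unchanged with an offload factor of zero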
Create a provisioning plan
This is the section where we input the data points from the Data Gathering (“System & DB Detail”) tab. Then we play around with the node layout, spreading the instances across the compute nodes according to the customer’s preferences and what transpired during the migration planning. There is underlying capacity planning math that processes the data points of each database according to its node layout and accounts them against the overall and node-level capacity. (See Appendix B for the formulas)
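The formulas themselves are in Appendix B (not reproduced here), but the central idea of normalizing a source server’s CPU usage to Exadata CPUs by the SPECint_rate2006/core ratio can be sketched as follows, with illustrative numbers only:

    def exadata_cpu_requirement(source_cpus_used, source_spec_per_core,
                                exadata_spec_per_core):
        """Scale CPUs busy on the source host by the per-core speed ratio."""
        return source_cpus_used * (source_spec_per_core / exadata_spec_per_core)

    # Hypothetical example: 8 cores busy at peak on an older host whose cores
    # run at roughly half the speed of the Exadata cores.
    print(exadata_cpu_requirement(8, 18.0, 36.0))   # 4.0 Exadata CPUs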
The node layout
There are three values for the node layout:
• P = preferred node (green)
o primary node
o accepts client connections
• F = failover node (red)
o secondary node
o does not accept client connections
o instance is down and does not consume resources
• A = available node (blue)
o secondary node
o client connections are just pre-connected; sessions will fail over only when the preferred node fails or shuts down
o instance is up and running and has provisioned resources
Here’s how to interpret the image above (a small code sketch of this accounting follows the list):
o The DBFS database runs across 4 nodes
o hcm2tst only runs on node1 and has a failover instance on node2
o bi2tst runs on node3 and has a pre-connect instance on node4
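As a sketch of how that layout drives the resource accounting (the worksheet does this with spreadsheet formulas), the P/F/A assignments can be encoded and the per-node CPU figures derived from them; only P and A instances consume resources, while F instances do not. The CPU requirements below are hypothetical:

    # One letter per node, in node order:
    # P = preferred, A = available (pre-connect), F = failover, "-" = no instance.
    layout = {
        "DBFS":    "PPPP",
        "hcm2tst": "PF--",
        "bi2tst":  "--PA",
    }
    cpu_req = {"DBFS": 2, "hcm2tst": 6, "bi2tst": 8}   # hypothetical values
    cpus_per_node = 24

    for node in range(4):
        running = [db for db, plan in layout.items() if plan[node] in "PA"]
        # Each running (P or A) instance gets an even share of its database's
        # requirement; failover (F) instances are down and consume nothing.
        node_cpus = sum(
            cpu_req[db] / sum(1 for c in layout[db] if c in "PA")
            for db in running
        )
        print(f"node{node + 1}: {len(running)} instances, "
              f"{node_cpus:.1f} CPUs ({100 * node_cpus / cpus_per_node:.0f}%)")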
Above the node layout are node-level utilization reports which contain the following:
• The number of instances running on that node
• CPU utilization
• Memory utilization
• Recommended minimum Huge Pages allocation
o The value shown has a 10% allowance for SGA growth/resize
o To convert the GB value to the actual Huge Pages setting, use the following formula (sketched in code after this list)
§ (HPages GB * 1024) / 2
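In code, that conversion (including the 10% SGA allowance mentioned above) looks roughly like this; the divisor of 2 is the 2MB Linux huge page size:

    import math

    def hugepages_setting(sga_gb_total, allowance=0.10, hugepage_mb=2):
        """Recommended vm.nr_hugepages for the combined SGAs on one node."""
        hpages_gb = sga_gb_total * (1 + allowance)
        return math.ceil(hpages_gb * 1024 / hugepage_mb)

    # Hypothetical example: 40GB of combined SGA on a node.
    print(hugepages_setting(40))   # 22528 huge pages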
Most of the columns come from the Data Gathering (“System & DB Detail”) sheet, but there is also a Host Speed (SPEC) column which represents the “SPECint_rate2006/core” value of the source server. See the section CPU -> The “Speed SPEC” or the “SPECint_rate2006/core” in Appendix B for more details on how to get the value for a particular server.
Node failure scenario
The node layout will also depend on the failure scenarios: if one node goes down, the end resource utilization of the remaining nodes should still be within an acceptable range (below 80%).
Below you’ll see a scenario where node1 goes down (changed from P to F): the remaining preferred nodes catch all the failed-over sessions, which causes an increase in resource utilization in terms of CPU and memory. Here the CPU is at a critical level of 120% utilization when this scenario happens.
The node failure scenario is essential to the availability planning of the whole cluster. This is the part where we do trial and error until we get to the sweet spot of the node layout, where we have already failed each of the nodes in turn and the end utilization of the remaining nodes is still within an acceptable range (below 80%).
Review resource utilization
As we change the node layout we can quickly check the “Summary & Graphs” tab to see the effects of the layout change. The graphs are rendered in a split second, and we can visually check the allocation of resources and quickly spot any imbalance in the provisioning plan in terms of CPU, memory, and storage. On the top section of the sheet are the Overall Utilization and the Recommended Hardware.
Overall Utilization
While the node layout section has the “per node” utilization, which quickly alerts us to any resource imbalance between nodes, the “Overall Utilization” is very useful for monitoring the resource allocation at the cluster level. Here are some important points about this summary section:
• It has conditional formatting that puts a “red highlight” on any resource component that reaches 75% and above
• A “red highlight” means revisiting the provisioning plan (removing databases or reducing allocations) or adding more resources to the capacity, which could be any of the following:
o Additional compute nodes
o Memory upgrade
o Additional storage
o All of the above
• The goal is to get rid of the “red highlight”
Recommended Hardware
All the data points gathered translate into resource requirements, and then into the amount or size of hardware needed to run smoothly. This section is very helpful for validating whether the hardware that we currently have, or are planning to buy, is enough to run all the databases that will be migrated or consolidated. Here are some important points about this summary section:
• The Equivalent compute nodes figure has a 35% allowance on the CPUs: if the Total CPU used is 64.15, that translates to 2.67 compute nodes (divide by 24 Logical CPUs); with the 35% allowance it becomes 86.6, which translates to 3.6 compute nodes. This allowance is for workload growth/spikes or any unforeseen CPU workload profile change that could be caused by the migration (plan changes, etc.); the arithmetic is worked through after this list.
• The Equivalent compute nodes only accounts for the CPU
• In the example below, 3.6 nodes satisfies the minimum number of nodes required to run the 64.15 CPU requirement while staying below 75% overall CPU utilization. But for the memory that is not the case, so we either have to move to more compute nodes or upgrade the memory from 96GB to 128GB. These resource capacity values can be modified on the “Exadata Capacity” tab.
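The arithmetic behind the 35% allowance, worked through in a few lines (the allowance value and the 24-CPU node size come from the text above):

    import math

    total_cpu_used = 64.15
    cpus_per_node = 24
    allowance = 0.35

    raw_nodes = total_cpu_used / cpus_per_node
    cpu_with_allowance = total_cpu_used * (1 + allowance)
    equivalent_nodes = cpu_with_allowance / cpus_per_node

    print(round(raw_nodes, 2))           # 2.67 compute nodes before the allowance
    print(round(cpu_with_allowance, 1))  # 86.6 CPUs with the 35% allowance
    print(round(equivalent_nodes, 1))    # 3.6 equivalent compute nodes
    print(math.ceil(equivalent_nodes))   # 4, the whole nodes you would provision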
13

Contenu connexe

Tendances

Oracle Rac Performance Tunning Tips&Tricks
Oracle Rac Performance Tunning Tips&TricksOracle Rac Performance Tunning Tips&Tricks
Oracle Rac Performance Tunning Tips&TricksZekeriya Besiroglu
 
PGConf.ASIA 2019 Bali - AppOS: PostgreSQL Extension for Scalable File I/O - K...
PGConf.ASIA 2019 Bali - AppOS: PostgreSQL Extension for Scalable File I/O - K...PGConf.ASIA 2019 Bali - AppOS: PostgreSQL Extension for Scalable File I/O - K...
PGConf.ASIA 2019 Bali - AppOS: PostgreSQL Extension for Scalable File I/O - K...Equnix Business Solutions
 
Netezza workload management
Netezza workload managementNetezza workload management
Netezza workload managementBiju Nair
 
Inside HDFS Append
Inside HDFS AppendInside HDFS Append
Inside HDFS AppendYue Chen
 
Connecting Hadoop and Oracle
Connecting Hadoop and OracleConnecting Hadoop and Oracle
Connecting Hadoop and OracleTanel Poder
 
Accumulo Summit 2015: Performance Models for Apache Accumulo: The Heavy Tail ...
Accumulo Summit 2015: Performance Models for Apache Accumulo: The Heavy Tail ...Accumulo Summit 2015: Performance Models for Apache Accumulo: The Heavy Tail ...
Accumulo Summit 2015: Performance Models for Apache Accumulo: The Heavy Tail ...Accumulo Summit
 
AWR Ambiguity: Performance reasoning when the numbers don't add up
AWR Ambiguity: Performance reasoning when the numbers don't add upAWR Ambiguity: Performance reasoning when the numbers don't add up
AWR Ambiguity: Performance reasoning when the numbers don't add upJohn Beresniewicz
 
SOS: Optimizing Shuffle I/O with Brian Cho and Ergin Seyfe
SOS: Optimizing Shuffle I/O with Brian Cho and Ergin SeyfeSOS: Optimizing Shuffle I/O with Brian Cho and Ergin Seyfe
SOS: Optimizing Shuffle I/O with Brian Cho and Ergin SeyfeDatabricks
 
An Adaptive Execution Engine for Apache Spark with Carson Wang and Yucai Yu
An Adaptive Execution Engine for Apache Spark with Carson Wang and Yucai YuAn Adaptive Execution Engine for Apache Spark with Carson Wang and Yucai Yu
An Adaptive Execution Engine for Apache Spark with Carson Wang and Yucai YuDatabricks
 
Testing Delphix: easy data virtualization
Testing Delphix: easy data virtualizationTesting Delphix: easy data virtualization
Testing Delphix: easy data virtualizationFranck Pachot
 
Accelerating Apache Spark Shuffle for Data Analytics on the Cloud with Remote...
Accelerating Apache Spark Shuffle for Data Analytics on the Cloud with Remote...Accelerating Apache Spark Shuffle for Data Analytics on the Cloud with Remote...
Accelerating Apache Spark Shuffle for Data Analytics on the Cloud with Remote...Databricks
 
ORACLE 12C DATA GUARD: FAR SYNC, REAL-TIME CASCADE STANDBY AND OTHER GOODIES
ORACLE 12C DATA GUARD: FAR SYNC, REAL-TIME CASCADE STANDBY AND OTHER GOODIESORACLE 12C DATA GUARD: FAR SYNC, REAL-TIME CASCADE STANDBY AND OTHER GOODIES
ORACLE 12C DATA GUARD: FAR SYNC, REAL-TIME CASCADE STANDBY AND OTHER GOODIESLudovico Caldara
 
Apache Arrow-Based Unified Data Sharing and Transferring Format Among CPU and...
Apache Arrow-Based Unified Data Sharing and Transferring Format Among CPU and...Apache Arrow-Based Unified Data Sharing and Transferring Format Among CPU and...
Apache Arrow-Based Unified Data Sharing and Transferring Format Among CPU and...Databricks
 
Oracle RAC 12c and Policy-Managed Databases, a Technical Overview
Oracle RAC 12c and Policy-Managed Databases, a Technical OverviewOracle RAC 12c and Policy-Managed Databases, a Technical Overview
Oracle RAC 12c and Policy-Managed Databases, a Technical OverviewLudovico Caldara
 
How Impala Works
How Impala WorksHow Impala Works
How Impala WorksYue Chen
 
Multi-Tenant Data Cloud with YARN & Helix
Multi-Tenant Data Cloud with YARN & HelixMulti-Tenant Data Cloud with YARN & Helix
Multi-Tenant Data Cloud with YARN & HelixKishore Gopalakrishna
 
The Top 12 Features new to Oracle 12c
The Top 12 Features new to Oracle 12cThe Top 12 Features new to Oracle 12c
The Top 12 Features new to Oracle 12cDavid Yahalom
 
Oracle Database Performance Tuning Concept
Oracle Database Performance Tuning ConceptOracle Database Performance Tuning Concept
Oracle Database Performance Tuning ConceptChien Chung Shen
 
Hadoop operations-2015-hadoop-summit-san-jose-v5
Hadoop operations-2015-hadoop-summit-san-jose-v5Hadoop operations-2015-hadoop-summit-san-jose-v5
Hadoop operations-2015-hadoop-summit-san-jose-v5Chris Nauroth
 

Tendances (20)

Oracle Rac Performance Tunning Tips&Tricks
Oracle Rac Performance Tunning Tips&TricksOracle Rac Performance Tunning Tips&Tricks
Oracle Rac Performance Tunning Tips&Tricks
 
PGConf.ASIA 2019 Bali - AppOS: PostgreSQL Extension for Scalable File I/O - K...
PGConf.ASIA 2019 Bali - AppOS: PostgreSQL Extension for Scalable File I/O - K...PGConf.ASIA 2019 Bali - AppOS: PostgreSQL Extension for Scalable File I/O - K...
PGConf.ASIA 2019 Bali - AppOS: PostgreSQL Extension for Scalable File I/O - K...
 
Netezza workload management
Netezza workload managementNetezza workload management
Netezza workload management
 
Inside HDFS Append
Inside HDFS AppendInside HDFS Append
Inside HDFS Append
 
Connecting Hadoop and Oracle
Connecting Hadoop and OracleConnecting Hadoop and Oracle
Connecting Hadoop and Oracle
 
Accumulo Summit 2015: Performance Models for Apache Accumulo: The Heavy Tail ...
Accumulo Summit 2015: Performance Models for Apache Accumulo: The Heavy Tail ...Accumulo Summit 2015: Performance Models for Apache Accumulo: The Heavy Tail ...
Accumulo Summit 2015: Performance Models for Apache Accumulo: The Heavy Tail ...
 
AWR Ambiguity: Performance reasoning when the numbers don't add up
AWR Ambiguity: Performance reasoning when the numbers don't add upAWR Ambiguity: Performance reasoning when the numbers don't add up
AWR Ambiguity: Performance reasoning when the numbers don't add up
 
SOS: Optimizing Shuffle I/O with Brian Cho and Ergin Seyfe
SOS: Optimizing Shuffle I/O with Brian Cho and Ergin SeyfeSOS: Optimizing Shuffle I/O with Brian Cho and Ergin Seyfe
SOS: Optimizing Shuffle I/O with Brian Cho and Ergin Seyfe
 
Incredible Impala
Incredible Impala Incredible Impala
Incredible Impala
 
An Adaptive Execution Engine for Apache Spark with Carson Wang and Yucai Yu
An Adaptive Execution Engine for Apache Spark with Carson Wang and Yucai YuAn Adaptive Execution Engine for Apache Spark with Carson Wang and Yucai Yu
An Adaptive Execution Engine for Apache Spark with Carson Wang and Yucai Yu
 
Testing Delphix: easy data virtualization
Testing Delphix: easy data virtualizationTesting Delphix: easy data virtualization
Testing Delphix: easy data virtualization
 
Accelerating Apache Spark Shuffle for Data Analytics on the Cloud with Remote...
Accelerating Apache Spark Shuffle for Data Analytics on the Cloud with Remote...Accelerating Apache Spark Shuffle for Data Analytics on the Cloud with Remote...
Accelerating Apache Spark Shuffle for Data Analytics on the Cloud with Remote...
 
ORACLE 12C DATA GUARD: FAR SYNC, REAL-TIME CASCADE STANDBY AND OTHER GOODIES
ORACLE 12C DATA GUARD: FAR SYNC, REAL-TIME CASCADE STANDBY AND OTHER GOODIESORACLE 12C DATA GUARD: FAR SYNC, REAL-TIME CASCADE STANDBY AND OTHER GOODIES
ORACLE 12C DATA GUARD: FAR SYNC, REAL-TIME CASCADE STANDBY AND OTHER GOODIES
 
Apache Arrow-Based Unified Data Sharing and Transferring Format Among CPU and...
Apache Arrow-Based Unified Data Sharing and Transferring Format Among CPU and...Apache Arrow-Based Unified Data Sharing and Transferring Format Among CPU and...
Apache Arrow-Based Unified Data Sharing and Transferring Format Among CPU and...
 
Oracle RAC 12c and Policy-Managed Databases, a Technical Overview
Oracle RAC 12c and Policy-Managed Databases, a Technical OverviewOracle RAC 12c and Policy-Managed Databases, a Technical Overview
Oracle RAC 12c and Policy-Managed Databases, a Technical Overview
 
How Impala Works
How Impala WorksHow Impala Works
How Impala Works
 
Multi-Tenant Data Cloud with YARN & Helix
Multi-Tenant Data Cloud with YARN & HelixMulti-Tenant Data Cloud with YARN & Helix
Multi-Tenant Data Cloud with YARN & Helix
 
The Top 12 Features new to Oracle 12c
The Top 12 Features new to Oracle 12cThe Top 12 Features new to Oracle 12c
The Top 12 Features new to Oracle 12c
 
Oracle Database Performance Tuning Concept
Oracle Database Performance Tuning ConceptOracle Database Performance Tuning Concept
Oracle Database Performance Tuning Concept
 
Hadoop operations-2015-hadoop-summit-san-jose-v5
Hadoop operations-2015-hadoop-summit-san-jose-v5Hadoop operations-2015-hadoop-summit-san-jose-v5
Hadoop operations-2015-hadoop-summit-san-jose-v5
 

En vedette

Histograms in 12c era
Histograms in 12c eraHistograms in 12c era
Histograms in 12c eraMauro Pagano
 
Is your SQL Exadata-aware?
Is your SQL Exadata-aware?Is your SQL Exadata-aware?
Is your SQL Exadata-aware?Mauro Pagano
 
SQLT XPLORE - The SQLT XPLAIN Hidden Child
SQLT XPLORE -  The SQLT XPLAIN Hidden ChildSQLT XPLORE -  The SQLT XPLAIN Hidden Child
SQLT XPLORE - The SQLT XPLAIN Hidden ChildEnkitec
 
Same plan different performance
Same plan different performanceSame plan different performance
Same plan different performanceMauro Pagano
 
Mastering the Oracle Data Pump API
Mastering the Oracle Data Pump APIMastering the Oracle Data Pump API
Mastering the Oracle Data Pump APIEnkitec
 
Oracle statistics by example
Oracle statistics by exampleOracle statistics by example
Oracle statistics by exampleMauro Pagano
 
Java ain't scary - introducing Java to PL/SQL Developers
Java ain't scary - introducing Java to PL/SQL DevelopersJava ain't scary - introducing Java to PL/SQL Developers
Java ain't scary - introducing Java to PL/SQL DevelopersLucas Jellema
 
Chasing the optimizer
Chasing the optimizerChasing the optimizer
Chasing the optimizerMauro Pagano
 
My First 100 days with a MySQL DBMS
My First 100 days with a MySQL DBMSMy First 100 days with a MySQL DBMS
My First 100 days with a MySQL DBMSGustavo Rene Antunez
 
Writing Java Stored Procedures in Oracle 12c
Writing Java Stored Procedures in Oracle 12cWriting Java Stored Procedures in Oracle 12c
Writing Java Stored Procedures in Oracle 12cMartin Toshev
 
RMAN in 12c: The Next Generation (WP)
RMAN in 12c: The Next Generation (WP)RMAN in 12c: The Next Generation (WP)
RMAN in 12c: The Next Generation (WP)Gustavo Rene Antunez
 
En rhel-deploy-oracle-rac-database-12c-rhel-7
En rhel-deploy-oracle-rac-database-12c-rhel-7En rhel-deploy-oracle-rac-database-12c-rhel-7
En rhel-deploy-oracle-rac-database-12c-rhel-7Rotua Damanik
 
In Memory Database In Action by Tanel Poder and Kerry Osborne
In Memory Database In Action by Tanel Poder and Kerry OsborneIn Memory Database In Action by Tanel Poder and Kerry Osborne
In Memory Database In Action by Tanel Poder and Kerry OsborneEnkitec
 
In Search of Plan Stability - Part 1
In Search of Plan Stability - Part 1In Search of Plan Stability - Part 1
In Search of Plan Stability - Part 1Enkitec
 
Take Full Advantage of the Oracle PL/SQL Compiler
Take Full Advantage of the Oracle PL/SQL CompilerTake Full Advantage of the Oracle PL/SQL Compiler
Take Full Advantage of the Oracle PL/SQL CompilerSteven Feuerstein
 
Performance Tuning With Oracle ASH and AWR. Part 1 How And What
Performance Tuning With Oracle ASH and AWR. Part 1 How And WhatPerformance Tuning With Oracle ASH and AWR. Part 1 How And What
Performance Tuning With Oracle ASH and AWR. Part 1 How And Whatudaymoogala
 
Oracle 12c and its pluggable databases
Oracle 12c and its pluggable databasesOracle 12c and its pluggable databases
Oracle 12c and its pluggable databasesGustavo Rene Antunez
 

En vedette (20)

Histograms in 12c era
Histograms in 12c eraHistograms in 12c era
Histograms in 12c era
 
Is your SQL Exadata-aware?
Is your SQL Exadata-aware?Is your SQL Exadata-aware?
Is your SQL Exadata-aware?
 
SQLd360
SQLd360SQLd360
SQLd360
 
SQLT XPLORE - The SQLT XPLAIN Hidden Child
SQLT XPLORE -  The SQLT XPLAIN Hidden ChildSQLT XPLORE -  The SQLT XPLAIN Hidden Child
SQLT XPLORE - The SQLT XPLAIN Hidden Child
 
Same plan different performance
Same plan different performanceSame plan different performance
Same plan different performance
 
Mastering the Oracle Data Pump API
Mastering the Oracle Data Pump APIMastering the Oracle Data Pump API
Mastering the Oracle Data Pump API
 
Oracle statistics by example
Oracle statistics by exampleOracle statistics by example
Oracle statistics by example
 
Java ain't scary - introducing Java to PL/SQL Developers
Java ain't scary - introducing Java to PL/SQL DevelopersJava ain't scary - introducing Java to PL/SQL Developers
Java ain't scary - introducing Java to PL/SQL Developers
 
Chasing the optimizer
Chasing the optimizerChasing the optimizer
Chasing the optimizer
 
My First 100 days with a MySQL DBMS
My First 100 days with a MySQL DBMSMy First 100 days with a MySQL DBMS
My First 100 days with a MySQL DBMS
 
Writing Java Stored Procedures in Oracle 12c
Writing Java Stored Procedures in Oracle 12cWriting Java Stored Procedures in Oracle 12c
Writing Java Stored Procedures in Oracle 12c
 
RMAN in 12c: The Next Generation (WP)
RMAN in 12c: The Next Generation (WP)RMAN in 12c: The Next Generation (WP)
RMAN in 12c: The Next Generation (WP)
 
En rhel-deploy-oracle-rac-database-12c-rhel-7
En rhel-deploy-oracle-rac-database-12c-rhel-7En rhel-deploy-oracle-rac-database-12c-rhel-7
En rhel-deploy-oracle-rac-database-12c-rhel-7
 
Web Development In Oracle APEX
Web Development In Oracle APEXWeb Development In Oracle APEX
Web Development In Oracle APEX
 
In Memory Database In Action by Tanel Poder and Kerry Osborne
In Memory Database In Action by Tanel Poder and Kerry OsborneIn Memory Database In Action by Tanel Poder and Kerry Osborne
In Memory Database In Action by Tanel Poder and Kerry Osborne
 
In Search of Plan Stability - Part 1
In Search of Plan Stability - Part 1In Search of Plan Stability - Part 1
In Search of Plan Stability - Part 1
 
Take Full Advantage of the Oracle PL/SQL Compiler
Take Full Advantage of the Oracle PL/SQL CompilerTake Full Advantage of the Oracle PL/SQL Compiler
Take Full Advantage of the Oracle PL/SQL Compiler
 
Performance Tuning With Oracle ASH and AWR. Part 1 How And What
Performance Tuning With Oracle ASH and AWR. Part 1 How And WhatPerformance Tuning With Oracle ASH and AWR. Part 1 How And What
Performance Tuning With Oracle ASH and AWR. Part 1 How And What
 
Oracle 12c and its pluggable databases
Oracle 12c and its pluggable databasesOracle 12c and its pluggable databases
Oracle 12c and its pluggable databases
 
Oracle GoldenGate
Oracle GoldenGate Oracle GoldenGate
Oracle GoldenGate
 

Similaire à Whitepaper: Exadata Consolidation Success Story

A PeopleSoft & OBIEE Consolidation Success Story
A PeopleSoft & OBIEE Consolidation Success StoryA PeopleSoft & OBIEE Consolidation Success Story
A PeopleSoft & OBIEE Consolidation Success StoryEnkitec
 
Azure Data Factory Data Flow Performance Tuning 101
Azure Data Factory Data Flow Performance Tuning 101Azure Data Factory Data Flow Performance Tuning 101
Azure Data Factory Data Flow Performance Tuning 101Mark Kromer
 
Netezza fundamentals for developers
Netezza fundamentals for developersNetezza fundamentals for developers
Netezza fundamentals for developersBiju Nair
 
How should I monitor my idaa
How should I monitor my idaaHow should I monitor my idaa
How should I monitor my idaaCuneyt Goksu
 
Scaling out SSIS with Parallelism, Diving Deep Into The Dataflow Engine
Scaling out SSIS with Parallelism, Diving Deep Into The Dataflow EngineScaling out SSIS with Parallelism, Diving Deep Into The Dataflow Engine
Scaling out SSIS with Parallelism, Diving Deep Into The Dataflow EngineChris Adkin
 
Oracle: Binding versus caging
Oracle: Binding versus cagingOracle: Binding versus caging
Oracle: Binding versus cagingBertrandDrouvot
 
Sap memory management ,workload and performance analysis.pptx
Sap memory management ,workload and performance analysis.pptxSap memory management ,workload and performance analysis.pptx
Sap memory management ,workload and performance analysis.pptxsweta prakash sahoo
 
Cassandra Tutorial
Cassandra Tutorial Cassandra Tutorial
Cassandra Tutorial Na Zhu
 
Netezza fundamentals-for-developers
Netezza fundamentals-for-developersNetezza fundamentals-for-developers
Netezza fundamentals-for-developersTariq H. Khan
 
Windows server power_efficiency___robben_and_worthington__final
Windows server power_efficiency___robben_and_worthington__finalWindows server power_efficiency___robben_and_worthington__final
Windows server power_efficiency___robben_and_worthington__finalBruce Worthington
 
V mware vcsa host overview performance charts
V mware vcsa host overview performance chartsV mware vcsa host overview performance charts
V mware vcsa host overview performance chartsAdam Alhafid
 
SPL_ALL_EN.pptx
SPL_ALL_EN.pptxSPL_ALL_EN.pptx
SPL_ALL_EN.pptx政宏 张
 
Presentación Oracle Database Migración consideraciones 10g/11g/12c
Presentación Oracle Database Migración consideraciones 10g/11g/12cPresentación Oracle Database Migración consideraciones 10g/11g/12c
Presentación Oracle Database Migración consideraciones 10g/11g/12cRonald Francisco Vargas Quesada
 
PostgreSQL Table Partitioning / Sharding
PostgreSQL Table Partitioning / ShardingPostgreSQL Table Partitioning / Sharding
PostgreSQL Table Partitioning / ShardingAmir Reza Hashemi
 
Oracle Database 12c features for DBA
Oracle Database 12c features for DBAOracle Database 12c features for DBA
Oracle Database 12c features for DBAKaran Kukreja
 
Oracle ebs capacity_analysisusingstatisticalmethods
Oracle ebs capacity_analysisusingstatisticalmethodsOracle ebs capacity_analysisusingstatisticalmethods
Oracle ebs capacity_analysisusingstatisticalmethodsAjith Narayanan
 
Challenges of Building a First Class SQL-on-Hadoop Engine
Challenges of Building a First Class SQL-on-Hadoop EngineChallenges of Building a First Class SQL-on-Hadoop Engine
Challenges of Building a First Class SQL-on-Hadoop EngineNicolas Morales
 
Parallel Algorithms Advantages and Disadvantages
Parallel Algorithms Advantages and DisadvantagesParallel Algorithms Advantages and Disadvantages
Parallel Algorithms Advantages and DisadvantagesMurtadha Alsabbagh
 
Investigate SQL Server Memory Like Sherlock Holmes
Investigate SQL Server Memory Like Sherlock HolmesInvestigate SQL Server Memory Like Sherlock Holmes
Investigate SQL Server Memory Like Sherlock HolmesRichard Douglas
 

Similaire à Whitepaper: Exadata Consolidation Success Story (20)

A PeopleSoft & OBIEE Consolidation Success Story
A PeopleSoft & OBIEE Consolidation Success StoryA PeopleSoft & OBIEE Consolidation Success Story
A PeopleSoft & OBIEE Consolidation Success Story
 
Performance tuning in sql server
Performance tuning in sql serverPerformance tuning in sql server
Performance tuning in sql server
 
Azure Data Factory Data Flow Performance Tuning 101
Azure Data Factory Data Flow Performance Tuning 101Azure Data Factory Data Flow Performance Tuning 101
Azure Data Factory Data Flow Performance Tuning 101
 
Netezza fundamentals for developers
Netezza fundamentals for developersNetezza fundamentals for developers
Netezza fundamentals for developers
 
How should I monitor my idaa
How should I monitor my idaaHow should I monitor my idaa
How should I monitor my idaa
 
Scaling out SSIS with Parallelism, Diving Deep Into The Dataflow Engine
Scaling out SSIS with Parallelism, Diving Deep Into The Dataflow EngineScaling out SSIS with Parallelism, Diving Deep Into The Dataflow Engine
Scaling out SSIS with Parallelism, Diving Deep Into The Dataflow Engine
 
Oracle: Binding versus caging
Oracle: Binding versus cagingOracle: Binding versus caging
Oracle: Binding versus caging
 
Sap memory management ,workload and performance analysis.pptx
Sap memory management ,workload and performance analysis.pptxSap memory management ,workload and performance analysis.pptx
Sap memory management ,workload and performance analysis.pptx
 
Cassandra Tutorial
Cassandra Tutorial Cassandra Tutorial
Cassandra Tutorial
 
Netezza fundamentals-for-developers
Netezza fundamentals-for-developersNetezza fundamentals-for-developers
Netezza fundamentals-for-developers
 
Windows server power_efficiency___robben_and_worthington__final
Windows server power_efficiency___robben_and_worthington__finalWindows server power_efficiency___robben_and_worthington__final
Windows server power_efficiency___robben_and_worthington__final
 
V mware vcsa host overview performance charts
V mware vcsa host overview performance chartsV mware vcsa host overview performance charts
V mware vcsa host overview performance charts
 
SPL_ALL_EN.pptx
SPL_ALL_EN.pptxSPL_ALL_EN.pptx
SPL_ALL_EN.pptx
 
Presentación Oracle Database Migración consideraciones 10g/11g/12c
Presentación Oracle Database Migración consideraciones 10g/11g/12cPresentación Oracle Database Migración consideraciones 10g/11g/12c
Presentación Oracle Database Migración consideraciones 10g/11g/12c
 
PostgreSQL Table Partitioning / Sharding
PostgreSQL Table Partitioning / ShardingPostgreSQL Table Partitioning / Sharding
PostgreSQL Table Partitioning / Sharding
 
Oracle Database 12c features for DBA
Oracle Database 12c features for DBAOracle Database 12c features for DBA
Oracle Database 12c features for DBA
 
Oracle ebs capacity_analysisusingstatisticalmethods
Oracle ebs capacity_analysisusingstatisticalmethodsOracle ebs capacity_analysisusingstatisticalmethods
Oracle ebs capacity_analysisusingstatisticalmethods
 
Challenges of Building a First Class SQL-on-Hadoop Engine
Challenges of Building a First Class SQL-on-Hadoop EngineChallenges of Building a First Class SQL-on-Hadoop Engine
Challenges of Building a First Class SQL-on-Hadoop Engine
 
Parallel Algorithms Advantages and Disadvantages
Parallel Algorithms Advantages and DisadvantagesParallel Algorithms Advantages and Disadvantages
Parallel Algorithms Advantages and Disadvantages
 
Investigate SQL Server Memory Like Sherlock Holmes
Investigate SQL Server Memory Like Sherlock HolmesInvestigate SQL Server Memory Like Sherlock Holmes
Investigate SQL Server Memory Like Sherlock Holmes
 

Plus de Kristofferson A

RMOUG2016 - Resource Management (the critical piece of the consolidation puzzle)
RMOUG2016 - Resource Management (the critical piece of the consolidation puzzle)RMOUG2016 - Resource Management (the critical piece of the consolidation puzzle)
RMOUG2016 - Resource Management (the critical piece of the consolidation puzzle)Kristofferson A
 
The Database Sizing Workflow
The Database Sizing WorkflowThe Database Sizing Workflow
The Database Sizing WorkflowKristofferson A
 
RedGateWebinar - Where did my CPU go?
RedGateWebinar - Where did my CPU go?RedGateWebinar - Where did my CPU go?
RedGateWebinar - Where did my CPU go?Kristofferson A
 
OakTableWorld 2013: Ultimate Exadata IO monitoring – Flash, HardDisk , & Writ...
OakTableWorld 2013: Ultimate Exadata IO monitoring – Flash, HardDisk , & Writ...OakTableWorld 2013: Ultimate Exadata IO monitoring – Flash, HardDisk , & Writ...
OakTableWorld 2013: Ultimate Exadata IO monitoring – Flash, HardDisk , & Writ...Kristofferson A
 
OOW 2013: Where did my CPU go
OOW 2013: Where did my CPU goOOW 2013: Where did my CPU go
OOW 2013: Where did my CPU goKristofferson A
 
RMOUG 2013 - Where did my CPU go?
RMOUG 2013 - Where did my CPU go?RMOUG 2013 - Where did my CPU go?
RMOUG 2013 - Where did my CPU go?Kristofferson A
 
RMOUG 2012 - Mining the AWR
RMOUG 2012 - Mining the AWRRMOUG 2012 - Mining the AWR
RMOUG 2012 - Mining the AWRKristofferson A
 
Performance Scenario: Diagnosing and resolving sudden slow down on two node RAC
Performance Scenario: Diagnosing and resolving sudden slow down on two node RACPerformance Scenario: Diagnosing and resolving sudden slow down on two node RAC
Performance Scenario: Diagnosing and resolving sudden slow down on two node RACKristofferson A
 
OOW Unconference 2010: Mining the AWR repository for Capacity Planning, Visua...
OOW Unconference 2010: Mining the AWR repository for Capacity Planning, Visua...OOW Unconference 2010: Mining the AWR repository for Capacity Planning, Visua...
OOW Unconference 2010: Mining the AWR repository for Capacity Planning, Visua...Kristofferson A
 
Oracle Closed World 2010: Graphing the AAS ala EM + doing some cool linear re...
Oracle Closed World 2010: Graphing the AAS ala EM + doing some cool linear re...Oracle Closed World 2010: Graphing the AAS ala EM + doing some cool linear re...
Oracle Closed World 2010: Graphing the AAS ala EM + doing some cool linear re...Kristofferson A
 

Plus de Kristofferson A (11)

RMOUG2016 - Resource Management (the critical piece of the consolidation puzzle)
RMOUG2016 - Resource Management (the critical piece of the consolidation puzzle)RMOUG2016 - Resource Management (the critical piece of the consolidation puzzle)
RMOUG2016 - Resource Management (the critical piece of the consolidation puzzle)
 
The Database Sizing Workflow
The Database Sizing WorkflowThe Database Sizing Workflow
The Database Sizing Workflow
 
RedGateWebinar - Where did my CPU go?
RedGateWebinar - Where did my CPU go?RedGateWebinar - Where did my CPU go?
RedGateWebinar - Where did my CPU go?
 
OakTableWorld 2013: Ultimate Exadata IO monitoring – Flash, HardDisk , & Writ...
OakTableWorld 2013: Ultimate Exadata IO monitoring – Flash, HardDisk , & Writ...OakTableWorld 2013: Ultimate Exadata IO monitoring – Flash, HardDisk , & Writ...
OakTableWorld 2013: Ultimate Exadata IO monitoring – Flash, HardDisk , & Writ...
 
OOW 2013: Where did my CPU go
OOW 2013: Where did my CPU goOOW 2013: Where did my CPU go
OOW 2013: Where did my CPU go
 
RMOUG 2013 - Where did my CPU go?
RMOUG 2013 - Where did my CPU go?RMOUG 2013 - Where did my CPU go?
RMOUG 2013 - Where did my CPU go?
 
RMOUG 2012 - Mining the AWR
RMOUG 2012 - Mining the AWRRMOUG 2012 - Mining the AWR
RMOUG 2012 - Mining the AWR
 
Performance Scenario: Diagnosing and resolving sudden slow down on two node RAC
Performance Scenario: Diagnosing and resolving sudden slow down on two node RACPerformance Scenario: Diagnosing and resolving sudden slow down on two node RAC
Performance Scenario: Diagnosing and resolving sudden slow down on two node RAC
 
Devcon: Virtualization?
Devcon: Virtualization?Devcon: Virtualization?
Devcon: Virtualization?
 
OOW Unconference 2010: Mining the AWR repository for Capacity Planning, Visua...
OOW Unconference 2010: Mining the AWR repository for Capacity Planning, Visua...OOW Unconference 2010: Mining the AWR repository for Capacity Planning, Visua...
OOW Unconference 2010: Mining the AWR repository for Capacity Planning, Visua...
 
Oracle Closed World 2010: Graphing the AAS ala EM + doing some cool linear re...
Oracle Closed World 2010: Graphing the AAS ala EM + doing some cool linear re...Oracle Closed World 2010: Graphing the AAS ala EM + doing some cool linear re...
Oracle Closed World 2010: Graphing the AAS ala EM + doing some cool linear re...
 

Dernier

"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii SoldatenkoFwdays
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostZilliz
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024Stephanie Beckett
 
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdfHyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdfPrecisely
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity PlanDatabarracks
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Enterprise Knowledge
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsMiki Katsuragi
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionDilum Bandara
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxNavinnSomaal
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfRankYa
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DayH2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DaySri Ambati
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteDianaGray10
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 

Dernier (20)

"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdfHyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An Introduction
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptx
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdf
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DayH2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 

Whitepaper: Exadata Consolidation Success Story

  • 1. 1 EEXADATAXADATA CCONSOLIDATION SUCCESSONSOLIDATION SUCCESS SSTORYTORY Karl Arao, Enkitec ABSTRACT In today’s competitive business climate companies are under constant pressure to reduce costs without sacrificing quality. Many companies see database and server consolidation as the key to meeting this goal. Since its introduction, Exadata has become the obvious choice for database and server consolidation projects. It is the next step in the evolutionary process. But managing highly consolidated environments is difficult, especially for mixed workload environments. If not done properly the quality of service suffers. In this paper we will touch on how to do accurate provisioning and capacity planning and the tools you’ll need to ensure that your consolidation story has a happy ending. TARGET AUDIENCE Target audiences are DBAs, Architects, Performance Engineers, and Capacity Planners Learner will be able to: • Describe the provisioning process and implementation challenges • Apply the tools and methodology to successfully migrate on an Exadata platform • Develop a resource management model (CPU instance caging & IORM) – detailed on the presentation BACKGROUND The whole consolidation workflow relies on the basic capacity planning formula Utilization = Requirements / Capacity Capacity planning plays a very important role to ensure proper resources are available and be able to handle expected and unexpected workloads. And Exadata is not really a different animal when it comes to provisioning, although it has an intelligent storage the database servers still has limited capacity. The primary principle is to ensure the application workload requirements will fit into the available capacity of the database server. A SIMPLE CONSOLIDATION SCENARIO Let’s say we have a half rack Exadata. That would consist of four (4) database servers and seven (7) storage cells. Each database server has a total CPU capacity of 24 Logical CPUs (cat /proc/cpuinfo) and then multiply that to the number of nodes (4) you’ll get the total CPU capacity for the whole cluster (96 CPUs). When we migrate and consolidate these databases on Exadata, each database has a CPU requirement. For now treat the “CPU requirement” as the amount of CPUs it needs to run on Exadata. We would like to have a good balance between the CPU requirements of the databases and the available CPU capacity across the database nodes, and also making sure that we don’t max out the CPU capacity on any of the nodes.
  • 2. 2 Let’s say we have the following databases to migrate on Exadata: On the above diagram each database (A to G) has a requirement of four (4) CPUs and will run on two nodes. The first row will be read as “Database A has a CPU requirement of 4, and will run on nodes 1 and 2”. Each of the databases is essentially a two node RAC that is spread out across four database servers. On a RAC environment the users will be load balanced between the available RAC instances, so if a database running on two node RAC has a CPU requirement of 4 given that the load is equally distributed across the nodes then they get 50/50 percent share of CPU requirement. On the bottom part is the grand total of CPU requirement for all the databases, which is 28 CPUs. We can then say that out of 96 Total CPUs across four nodes, we are only using 29%. Well that is correct, but we also want to know how each server is utilized that is depicted by the red circles across the same node numbers because we may be having a node that’s 80% utilized where the rest of the nodes are on the 10% range and an equal balance of utilization is critical to capacity planning. Here’s another view of the node layout where we distribute the CPUs of the instances based on their node assignments.
  • 3. 3 Each block on the right side is one CPU. That’s a total of 24 Logical CPUs which accounts the number of CPU threads on the server which is based on the CPU_COUNT database parameter and /proc/cpuinfo. We also set or take the number of CPUs from CPU_COUNT whenever we do instance caging and we are using this metric just to be consistent with the monitoring of OEM and AWR. From the image above the cluster level utilization it’s 29.2% while on the per compute node they are still below the 70% threshold which is the ideal utilization we want for the compute nodes. Now what we don’t want to happen is to have an unbalanced utilization if we change the node layout and assign more instances on node 2 and still make use of the same number of CPU requirement across the databases.
  • 4. 4 On the cluster level it will be the same utilization but on the per compute node we end up having node2 with 83% utilization while the rest are pretty much idle. THE PROVISIONING WORKSHEET Enkitec has developed a tool called Provisioning Worksheet that is mainly used for sizing and consolidation of databases. The sections below describes the overall workflow of the tool. General Tour and Workflow The Provisioning Worksheet has four main tabs that also represent the workflow of the whole provisioning project. Below you’ll see the mind map and the overview of the 4-step process: • Data gathering o The “System & DB Detail” tab on the worksheet. It’s the section that we send to the customers to fill out. The sheet will contain all the important information of the databases that will be migrated to the destination (target) server. • Define the target server o The “Exadata Capacity” tab on the worksheet, the default capacity will be a half rack Exadata. But depending on the customer capacity requirements the default values may not be enough so a review of the end resource
utilization is a must. At the end of the provisioning process there is a section where the minimum "Recommended Hardware" capacity is shown, and it should be matched.
• Create a provisioning plan
o The "Exadata Layout" tab on the worksheet. This is where we input the final list of databases that will be migrated to the target server. The instance mapping, where we spread out the instances if they'll be running as two- or four-node RAC, and the failure scenarios are also done here.
• Review resource utilization
o The "Summary & Graphs" tab on the worksheet. Once everything is finalized this section gives you the visualization and summary report of the end utilization of the target server. A red highlight on the utilization number for any of the resources means there's not enough capacity; the "Exadata Capacity" section should be revisited to add more resources.
(Figure: mind map overview of the 4-step provisioning workflow described above.)
Data gathering

The first step in the provisioning process is gathering all the capacity requirements. This section of the worksheet is extracted and saved as a separate Excel file, then sent to the customers for them to fill out. Once the customer sends it back we can start grouping the servers according to platform or workload type and get started with the migration planning with the customer. Ultimately this sheet serves as a scratchpad, and you are free to add more columns to help with the categorization of the environments that will be migrated. Below is a sample output.

The data gathering sheet is divided into four parts:

• Host details
o DB Name
o DB Version
o App Type/Front End
o Workload Type (OLTP/DW/Mix)
o Node count (single instance/RAC)
o Hostname
o Server Make & Model
o OS
• CPU
o CPU Type
o CPU Speed
o # of Logical CPUs
o CPU Utilization Avg/Peak
• Memory
o Physical Memory (GB)
o SGA Size (GB)
o PGA Size (GB)
• Storage
o Storage Make & Model
o Disk RAID Level
o # of IO Channels
o Database Size (GB)
o Backup Size (GB)
o Peak R + W IOPS
o Peak R IOPS
o Peak W IOPS
o Peak R + W MB/s
o Peak R MB/s
o Peak W MB/s
o Peak R/W Ratio

The gathered raw data points will then be fed into the provisioning plan and accounted against the available capacity.
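For teams that want to collect these data points programmatically instead of by hand, the sheet maps naturally onto a flat record per database. The sketch below is just one hypothetical way to model it in Python; the field names are illustrative and are not part of the worksheet.

    from dataclasses import dataclass

    @dataclass
    class DataGatheringRow:
        # Host details
        db_name: str
        db_version: str
        app_type: str
        workload_type: str        # OLTP / DW / Mix
        node_count: int           # 1 for single instance, >1 for RAC
        hostname: str
        server_make_model: str
        os: str
        # CPU
        cpu_type: str
        cpu_speed_ghz: float
        logical_cpus: int
        cpu_util_avg_pct: float
        cpu_util_peak_pct: float
        # Memory
        physical_memory_gb: float
        sga_gb: float
        pga_gb: float
        # Storage
        storage_make_model: str
        raid_level: str
        io_channels: int
        database_size_gb: float
        backup_size_gb: float
        peak_rw_iops: float
        peak_r_iops: float
        peak_w_iops: float
        peak_rw_mbps: float
        peak_r_mbps: float
        peak_w_mbps: float
        peak_rw_ratio: float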
Define the target server

This is the section of the provisioning worksheet where we input the capacity of the Exadata that the customer currently has, or the hardware that they have in mind. What we input here is just the initial hardware capacity against which the requirements will be accounted; at the end of the provisioning process we can come back and add more resources if there are any exceptions (red highlights) in the utilization summary report.

• Node Count
o Enter 2, 4, or 8, which is equivalent to a quarter, half, or full rack of Exadata.
• Exadata Speed (SPEC)
o The SPECint_rate equivalent of the Exadata processor, so we can compare the speed of the Exadata CPUs against the source servers (a short sketch after this list illustrates the idea).
o See the section CPU -> The "Speed SPEC" or the "SPECint_rate2006/core" in Appendix B for details on how to get the value for a particular server.
• Exadata CPUs/node
o The number of logical CPUs, which is equivalent to the CPU_COUNT parameter or /proc/cpuinfo.
• Exadata Mem/node (G)
o Each node has 96GB of memory on an Exadata.
• Exadata Storage (G)
o Disk space depends on the ASM redundancy and the DATA/RECO allocation. Input the raw GB space capacity.
• Backup Factor
o The backup factor is usually set to 1, which means we want space for at least one full backup of the database.
• Table Compression Factor
o The table compression factor lets us gain disk space as we compress the big tables. I usually set this to zero for a conservative sizing.
• Offload Factor
o The offload factor is the amount of CPU work that will be offloaded to the storage cells. I usually set this to zero for a conservative sizing.
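Because the source servers rarely have the same per-core speed as the Exadata CPUs, the SPECint_rate2006/core values are used to normalize CPU requirements between platforms. The exact math is in Appendix B; the sketch below only illustrates the general idea, and the SPEC numbers in the example are made up.

    def exadata_cpu_requirement(source_cpus_used, source_spec_per_core,
                                exadata_spec_per_core, offload_factor=0.0):
        """Scale a source-side CPU requirement to Exadata-equivalent CPUs.

        General idea only; the Provisioning Worksheet's exact formulas are in Appendix B.
        offload_factor is the fraction of CPU work expected to be offloaded to the
        storage cells (set to 0 for conservative sizing, as recommended above).
        """
        scaled = source_cpus_used * (source_spec_per_core / exadata_spec_per_core)
        return scaled * (1.0 - offload_factor)

    # Hypothetical example: 8 CPUs busy on an older server rated at 20 SPECint_rate2006/core,
    # moving to an Exadata node rated at 35/core -> roughly 4.6 Exadata CPUs of requirement.
    print(exadata_cpu_requirement(8, 20.0, 35.0))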
Create a provisioning plan

This is the section where we input the data points from the Data Gathering ("System & DB Detail") tab. Then we play around with the node layout, spreading the instances across the compute nodes according to the customer's preferences and what transpired during the migration planning. There is underlying capacity planning math that processes the data points of each database according to its node layout, and the results are accounted against the overall and node-level capacity. (See Appendix B for the formulas.)

The node layout

There are three values for the node layout:

• P = preferred node (green)
o primary node
o accepts client connections
• F = failover node (red)
o secondary node
o does not accept client connections
o instance is down and does not consume resources
• A = available node (blue)
o secondary node
o client connections are just pre-connected; sessions will fail over only when the preferred node fails or is shut down
o instance is up and running and has provisioned resources

Here's how to interpret the image above:

o The DBFS database runs across all 4 nodes.
o hcm2tst only runs on node1 and has a failover instance on node2.
o bi2tst runs on node3 and has a pre-connect instance on node4.

Above the node layout are node-level utilization reports, which contain the following:

• The number of instances running on that node
• CPU utilization
• Memory utilization
• Recommended minimum Huge Pages allocation
o The value shown has a 10% allowance for SGA growth/resize.
o To convert the GB value to the actual Huge Pages setting, use the formula (HPages GB * 1024) / 2; for example, 20GB converts to (20 * 1024) / 2 = 10,240 huge pages of 2MB each.

Most of the columns come from the Data Gathering ("System & DB Detail") tab, but there is also a Host Speed (SPEC) column which represents the "SPECint_rate2006/core" value of the source server. See the section CPU -> The "Speed SPEC" or the "SPECint_rate2006/core" in Appendix B for details on how to get the value for a particular server.

Node failure scenario

The node layout also depends on the failure scenarios: if one node goes down, the end resource utilization of the remaining nodes should still be in an acceptable range (below 80%). Below you'll see a scenario where, when node1 goes down (changed from P to F), the remaining preferred nodes catch all the failed-over sessions, causing an increase in CPU and memory utilization. Here the CPU reaches a critical level of 120% utilization when this scenario happens. A rough sketch of this failover arithmetic follows.
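This is only back-of-the-envelope math, not the worksheet's formulas: the per-node CPU requirements and failover targets below are hypothetical numbers chosen to show how a layout that looks healthy in steady state can go critical when one node fails.

    cpus_per_node = 24

    # Hypothetical steady-state CPU requirement on each node, and where its sessions
    # fail over to if that node goes down (both are assumptions for illustration).
    preferred_load = {1: 13.0, 2: 16.0, 3: 12.0, 4: 14.0}
    failover_target = {1: 2, 2: 1, 3: 4, 4: 3}

    def utilization_after_failure(failed_node):
        # Move the failed node's CPU requirement onto its failover node, then
        # recompute utilization for the surviving nodes.
        load = dict(preferred_load)
        load[failover_target[failed_node]] += load.pop(failed_node)
        return {n: round(100 * cpu / cpus_per_node, 1) for n, cpu in load.items()}

    print(utilization_after_failure(1))
    # {2: 120.8, 3: 50.0, 4: 58.3} -- node2 goes past 100%, so the layout needs rework.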
The node failure scenario is essential to the availability planning of the whole cluster. This is the part where we do trial and error until we hit the sweet spot for the node layout: we fail each of the nodes in turn and check that the end utilization of the remaining nodes stays in an acceptable range (below 80%).

Review resource utilization

As we change the node layout we can quickly check the "Summary & Graphs" tab to see the effects of the change. The graphs are rendered in a split second, and we can visually check the allocation of resources and quickly spot any imbalance in the provisioning plan in terms of CPU, memory, and storage. On the top section of the sheet are the Overall Utilization and the Recommended Hardware.
Overall Utilization

While the node layout section has the per-node utilization, which quickly alerts us to any resource imbalance between nodes, the Overall Utilization is very useful for monitoring resource allocation at the cluster level. Here are some important points about this summary section:

• It has conditional formatting that puts a red highlight on any resource component that reaches 75% or above.
• A red highlight means revisiting the provisioning plan (removing databases or reducing allocations) or adding more resources to the capacity, which could be any of the following:
o Additional compute nodes
o A memory upgrade
o Additional storage
o All of the above
• The goal is to get rid of the red highlight.

Recommended Hardware

All the data points gathered translate into resource requirements, and then into the amount or size of hardware needed to run smoothly. This section is very helpful for validating whether the hardware we currently have, or plan on buying, is enough to run all the databases that will be migrated or consolidated. Here are some important points about this summary section:

• The Equivalent compute nodes figure has a 35% allowance on the CPUs. If the Total CPU used is 64.15, that translates to 2.67 compute nodes (divide by 24 logical CPUs); with the 35% allowance it becomes 86.6 CPUs, which translates to 3.6 compute nodes (the short sketch at the end of this paper reproduces this arithmetic). The allowance covers workload growth or spikes and any unforeseen CPU workload profile change that could be caused by the migration (plan changes, etc.).
• The Equivalent compute nodes figure only accounts for the CPU.
• In the example below, 3.6 nodes satisfies the minimum number of nodes required to run the 64.15 CPU requirement while staying below 75% overall CPU utilization. But for the memory that is not the case, so we either have to move to more compute nodes or upgrade the memory from 96GB to 128GB per node. These resource capacity values can be modified on the "Exadata Capacity" tab.
(Figure: the "Summary & Graphs" tab showing the Overall Utilization and Recommended Hardware example referenced above.)
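The CPU side of the Recommended Hardware arithmetic quoted above can be reproduced in a few lines. This sketch only mirrors the numbers described in the text (24 logical CPUs per node and a 35% allowance); it is not the full worksheet logic, which also covers memory and storage.

    import math

    cpus_per_node = 24
    cpu_allowance = 0.35          # headroom for growth, spikes, plan changes after migration

    total_cpu_used = 64.15        # total Exadata CPU requirement from the provisioning plan
    raw_nodes = total_cpu_used / cpus_per_node                  # 2.67 compute nodes
    with_allowance = total_cpu_used * (1 + cpu_allowance)       # 86.6 CPUs
    recommended = with_allowance / cpus_per_node                # 3.6 compute nodes

    print(round(raw_nodes, 2), round(with_allowance, 1), round(recommended, 1))
    print("buy at least", math.ceil(recommended), "compute nodes (a half rack) for the CPU requirement")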