AWR1page
Sanity checking time instrumentation in AWR reports

John Beresniewicz

ASHmaster

JB micro-bio
• 80’s: DBA / Data modeler / Data Architect
• 90’s: Startup co-founder / product designer / lead
architect / PLSQL engineer
• 00’s: 11-yr Oracle CMTS doing UX design and
architecture for EM Grid Control and DB Diagnostic
and Tuning Packs
• Recently released from a 2-yr sentence at Company X
• Available for hire or consult to collaborate on cool stuff
Acknowledgements
• Kevin Closson / Connor McDonald

(for the motivating case studies)
• Graham Wood

(for mentorship, advice and insight)
• Lothar Flatz / Wolfgang Breitling / Alberto Dell’Era /
Toon Koppelaars

(for test sample AWR reports)
Outline of talk
• Motivation: confusing AWR reports
• Model and instrumentation for time in Oracle
• When model and measures don’t match
• Concept and design of AWR1page
• Using AWR1page: case studies 1 & 2
• Final thoughts, future directions
Motivation: confounding AWR reports
• OakTable inquiries about AWR reports where
“numbers don’t add up”
• First case: double-counted DB time
• AWR Ambiguity OakTable World 2015
• Second case: AIX CPU reporting issues
• AWR1page OakTable World 2016
• Solving such AWR report puzzles is mentally
excruciating
AWR Ambiguity take-away slide
Symptom → Possible issue:
• DB CPU >> ASH CPU (and significant wait time) → CPU used within wait (this was the issue here)
• ASH CPU >> DB CPU → System CPU-bound (ASH includes run-queue)
• DB Time >> DB CPU + Wait (and not CPU-bound) → Un-instrumented wait (in call, not in wait, not on CPU)
• DB Time >> ASH DB Time → 1. Double-counted DB Time, or 2. ASH dropped samples
Model: Oracle Time = CPU + Wait
• Time spent executing Oracle code by either
background or foreground processes
• Active processes are usually most interesting

Active = (in DB call) && (on CPU or “active” wait)
• Multiple measure sources for each time component
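The "active" definition above can be sketched as a small predicate. This is a minimal illustrative sketch, not code from the tool; the function name and arguments are assumptions:

```python
def is_active(in_db_call, on_cpu, wait_class):
    """Active = (in DB call) && (on CPU or in a non-idle 'active' wait).

    wait_class is None when the session is not waiting; waits whose
    class is 'Idle' do not count as active.
    """
    if not in_db_call:
        return False
    if on_cpu:
        return True
    return wait_class is not None and wait_class != "Idle"
```

For example, a session in a call waiting on a "User I/O" event is active, while a session blocked on an Idle-class event is not.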
[Diagram: Oracle Time = CPU Time + Wait Time]
Oracle time instrumentation
• Wait/Event Model: V$SYSTEM_EVENT

FG/BG active/idle wait time
• Time Model: V$SYS_TIME_MODEL

FG/BG CPU and Elapsed
• OS Timing: V$OSSTAT

CPU sys/usr/IO/wait/load avg
• ASH: V$ACTIVE_SESSION_HISTORY

FG/BG CPU and wait (estimated)
NOTE: Stop using V$SYSSTAT “CPU used by this session”
Time instrumentation essentials
• Run-queue time distorts everything!
• TM elapsed = (call end - call start) - idle wait time
• ASH ON CPU = active session not in wait (derived)
• TM CPU measured = actual instruction time used
• OSSTAT reliability platform-dependent
• Smaller call latencies increase distortion
Normalization: Avg Active Sessions
• Primary Oracle performance metric
• AAS = instrumentation time / elapsed time
• AAS / Cores = core-normalized AAS
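The AAS arithmetic is simple enough to state as code; a minimal sketch (helper names are illustrative):

```python
def aas(busy_sec, elapsed_sec):
    """Average Active Sessions = instrumented time / wall-clock elapsed time."""
    return busy_sec / elapsed_sec

def aas_per_core(busy_sec, elapsed_sec, cores):
    """Core-normalized AAS: AAS / core count."""
    return aas(busy_sec, elapsed_sec) / cores
```

For example, 600 seconds of DB time over a 120-second snapshot is AAS = 5.0; on a 36-core host that is roughly 0.14 AAS per core.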
RMOUG 2007
Consistency checking AWR reports
• AWR reports are massive

(most of it irrelevant for any specific situation)
• Time data scattered about, difficult to compare
• Units vary, breakdowns inconsistent, not
normalized
• Labor intensive and error-prone
Cognitive load of consistency checking is huge
Sources of inconsistency
• CPU saturation: run queue time

First and most important item to check
• CPU under wait event

Time Model Wait << Wait ~ ASH Wait
• OS CPU accounting for multi-threaded cores

IBM AIX issue: Time Model CPU << ASH CPU
• double-counted DB time

10.1 bug with job queue process accounting
Sources of inconsistency (2)
• un-instrumented wait event

ASH CPU >> Time Model CPU (OS CPU)
• active events classified idle

Time Model, Wait and ASH under-reported
• dropped ASH samples*

ASH << Time Model
• cpu-active idle events*

FG Time << FG CPU Time
Check CPU first!
• OSSTAT: Average Active Sessions Busy / Core
• OSSTAT: Load Average / Core
• OSSTAT: OS_CPU_WAIT AAS (processes)
• Time Model: CPU / Core
• CPU Threads / Core = hyper-threading factor
Some consistency checks
? CPU saturation: Core utilization >> 1
? TM AAS ~ ASH AAS
? TM CPU ~ ASH CPU ~ OS CPU
? TM AAS ~ (TM CPU + WT Wait)
? TM Wait ~ ASH Wait ~ WT Wait
KEY:
WT = Event/Wait Model
TM = Time Model
OS = OSSTAT
ASH = ASH
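The checks above can be expressed mechanically as approximate-equality tests. This is a hedged sketch of the idea, not the tool's implementation; the 15% tolerance and all names are arbitrary illustrative choices:

```python
def approx(a, b, rel_tol=0.15):
    """Two time measures 'agree' if within a relative tolerance (15% is
    an arbitrary illustrative threshold)."""
    return abs(a - b) <= rel_tol * max(abs(a), abs(b), 1e-9)

def consistency_checks(tm_aas, tm_cpu, tm_wait, ash_aas, ash_cpu, ash_wait,
                       os_cpu, wt_wait, cores):
    """Named pass/fail results for the consistency checks, all inputs in AAS."""
    return {
        "cpu_saturation": os_cpu / cores > 1.0,        # core utilization >> 1
        "tm_vs_ash_aas": approx(tm_aas, ash_aas),      # TM AAS ~ ASH AAS
        "cpu_sources": approx(tm_cpu, ash_cpu) and approx(tm_cpu, os_cpu),
        "aas_decomposition": approx(tm_aas, tm_cpu + wt_wait),
        "wait_sources": approx(tm_wait, ash_wait) and approx(tm_wait, wt_wait),
    }
```

A failed check points at one of the inconsistency sources listed earlier rather than proving a specific bug.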
Oracle Time << CPU + Wait
• CPU under wait, time is being double-counted



TM AAS < (TM CPU + WT Wait)  &&  ASH Wait ~ WT Wait
[Diagram: CPU consumed under a wait is counted in both CPU Time and Wait Time]
Oracle Time >> CPU + Wait
• Wait under-counted, un-instrumented wait?
  ASH CPU > TM CPU  &&  TM CPU ~ OS CPU  &&  TM Wait > WT Wait
• CPU under-counted, AIX or coarse timers?
  ASH CPU > TM CPU  &&  ASH Wait ~ WT Wait  &&  TM Wait > WT Wait
[Diagram: un-instrumented time inflates Oracle Time beyond CPU Time + Wait Time]
The AIX CPU issue
• AIX with SMT turned on reports CPU strangely
• Semantics of CPU accounting altered
• According to Graham: “it’s just broken”
• OakTable blog references:

Marcin Przepiorowski

Jeremy Schneider
• Graham: Increase reported CPU values by 50% for
SMT=4 for more realistic picture
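That rule of thumb is a one-liner; a sketch of the adjustment (the function name is illustrative, and the 50% factor is the heuristic quoted above, not an exact correction):

```python
def adjust_aix_cpu(reported_cpu_aas, smt):
    """Heuristic from the slide: on AIX with SMT=4, scale reported CPU
    up by 50% for a more realistic picture; otherwise leave it alone."""
    return reported_cpu_aas * 1.5 if smt == 4 else reported_cpu_aas
```

So a Time Model CPU reading of 2.0 AAS on an SMT=4 AIX host would be read as roughly 3.0 AAS.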
Objectives for AWR1page
• Reduce cognitive load of checking time
consistency in AWR reports: Total / CPU / Wait
• Facilitate comparative assessment of system sizes
and busy-ness
• In 1 page, using only AWR text report as input
AWR report time measure sources
• $0 ~ "^Operating System Statistics" {
• $0 ~ "^Time Model Statistics" {
• $0 ~ "^Foreground Wait Class" {
• $0 ~ "^Wait Classes by Total Wait Time" {
• $0 ~ "^Activity Over Time" {
• Also other important report singletons like elapsed
time of AWR period, platform, core count, cpu count
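The awk matchers above amount to splitting the text report into named sections on their heading lines. A minimal Python sketch of the same approach (the function and section names are illustrative; the patterns mirror the awk ones above):

```python
import re

# Section headings in the AWR text report, mirroring the awk matchers.
SECTION_PATTERNS = {
    "osstat": re.compile(r"^Operating System Statistics"),
    "time_model": re.compile(r"^Time Model Statistics"),
    "fg_wait_class": re.compile(r"^Foreground Wait Class"),
    "wait_classes": re.compile(r"^Wait Classes by Total Wait Time"),
    "ash_activity": re.compile(r"^Activity Over Time"),
}

def split_sections(lines):
    """Group report lines under the most recently seen section heading."""
    sections, current = {}, None
    for line in lines:
        for name, pat in SECTION_PATTERNS.items():
            if pat.match(line):
                current = name
                sections[current] = []
                break
        else:
            if current is not None:
                sections[current].append(line)
    return sections
```

Each section's lines can then be parsed for the specific statistics the matrix needs.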
Idea: measure x source matrix
INFO          OSSTAT   TIME MODEL   ASH
HOST CPU
  SYS
  USR
ORCL TOTAL
  CPU
  WAIT
MISC

Compare rows (measures) for equality across data sources;
columns add up hierarchically.
Values from AWR report
[Matrix: availability of each measure from each source. Rows — INFO (cores, cpus, threads/core, memory); CPU (load, busy, i/o wait, scheduler; ORCL FG/BG); ORCL (total, FG, BG); WAIT; MISC. Columns — OSSTAT, TIME MODEL, ASH. X = available, X* = available with caveats, . = not available]
Design mockup
AWR1page cores: nn cpus: nn threads: nn elapsed:nnnn.m
measure OSSTAT TIME MODEL ASH WAIT CLASS
HOST LOAD [nn:mm]
CPU BUSY [nn:mm]
USER nn
SYS nn
IOWAIT [nn:mm]
OS_CPU_WAIT [nn:mm]
ORACLE
AVG ACTIVE SESSIONS (BG & FG) [nn:mm] [nn:mm]
CPU [nn:mm] [nn:mm]
FG nn
BG nn
RES_MGR_CPU_WAIT nn
WAIT [nn:mm] [nn:mm] [nn:mm]
FG nn nn nn
BG nn dd* dd*
Scheduler nn
MISC: CPU per user call n.mmmm n.mmmm n.mmmm
Design notes
• AAS = Average Active [ Sessions | Processes | Threads | Cores ]
• instrumentation time / elapsed time
• All numbers on the sparse matrix are in AAS
• Comparable measures from different sources on same line
• Indentation and vertical position represent decomposition
• Some cells show core-normalized AAS
• [nn:mm] where nn=AAS, mm=AAS/cores
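The [nn:mm] cell format is a direct rendering of AAS and core-normalized AAS; a sketch (helper name is illustrative, the format is the mockup's):

```python
def fmt_cell(aas, cores):
    """Render an AAS value as '[nn:mm]' where nn is AAS and mm is AAS/cores."""
    return f"[{aas:.2f}:{aas / cores:.2f}]"

print(fmt_cell(7.80, 36))  # [7.80:0.22], matching the HOST LOAD cell below
```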
Version 2c: 634 lines of awk
AWR1page
file: awrKC.txt
platform: Linux x86 64-bit cores: 36
RAC: NO cpus: 72
release: 12.1.0.2.0 threads/core: 2
elapsed: 2.02 (min) sessions(end): 56
snaps: 1
========================================================================================================================
measure | OSSTAT | TIME MODEL | ASH | WAIT CLASS
========================================================================================================================
HOST LOAD [7.80:0.22]
CPU BUSY [5.99:0.17]
USER 2.27
SYS 3.72
IOWAIT [3.00:0.08]
OS_CPU_WAIT N/A
........................................................................................................................
ORACLE
AVERAGE ACTIVE SESSIONS (BG+FG) [4.98:0.14] [4.58:0.13]
CPU [2.02:0.06] [0.50:0.01]
FG 2.00
BG 0.01
RSRC_MGR_CPU_WAIT N/A
WAIT [2.96:0.08] [4.08:0.11] [4.32:0.12]
FG 2.96 4.32
BG 0.00 *0.00
IOWAIT 4.32
FG 4.32
Scheduler 0.00
........................................................................................................................
MISC
CPU per call (ms) 3781.98 10.50 312.50
user calls/sec 1.58
core utilization
case studies
Using AWR1page to diagnose timing inconsistencies
case study 1
Benchmarking Oracle I/O (a.k.a. being a “SLOB”)
AWR1page
file: awrKC.txt
platform: Linux x86 64-bit cores: 36
RAC: NO cpus: 72
release: 12.1.0.2.0 threads/core: 2
elapsed: 2.02 (min) sessions(end): 56
snaps: 1
========================================================================================================================
measure | OSSTAT | TIME MODEL | ASH | WAIT CLASS
========================================================================================================================
HOST LOAD [7.80:0.22]
CPU BUSY [5.99:0.17]
USER 2.27
SYS 3.72
IOWAIT [3.00:0.08]
OS_CPU_WAIT N/A
........................................................................................................................
ORACLE
AVERAGE ACTIVE SESSIONS (BG+FG) [4.98:0.14] [4.58:0.13]
CPU [2.02:0.06] [0.50:0.01]
FG 2.00
BG 0.01
RSRC_MGR_CPU_WAIT N/A
WAIT [2.96:0.08] [4.08:0.11] [4.32:0.12]
FG 2.96 4.32
BG 0.00 *0.00
IOWAIT 4.32
FG 4.32
Scheduler 0.00
........................................................................................................................
MISC
CPU per call (ms) 3781.98 10.50 312.50
user calls/sec 1.58
• Decent-sized Linux system running Oracle 12
• Nontrivial IO wait, CPU OK
• IO wait = 1 AAS CPU + 3 AAS wait
• DB Time < DB CPU + Wait => CPU under (IO) wait
case study 2
AskTom question about confusing AWR report
AWR1page
file: QS02FNT_AWR_AskTom.txt
platform: AIX-Based Systems (64-bit) cores: 2
RAC: NO cpus: 8
release: 12.1.0.2.0 threads/core: 4
elapsed: 44.22 (min) sessions(end): 91
snaps: 3
========================================================================================================================
measure | OSSTAT | TIME MODEL | ASH | WAIT CLASS
========================================================================================================================
HOST LOAD [4.12:2.06]
CPU BUSY [1.13:0.57]
USER 0.66
SYS 0.47
IOWAIT [0.07:0.03]
OS_CPU_WAIT [4.06:2.03]
........................................................................................................................
ORACLE
AVERAGE ACTIVE SESSIONS (BG+FG) [0.62:0.31] [0.72:0.36]
CPU [0.13:0.07] [0.66:0.33]
FG 0.13
BG 0.01
RSRC_MGR_CPU_WAIT 0.00
WAIT [0.48:0.24] [0.06:0.03] [0.05:0.03]
FG 0.42 0.01
BG 0.07 *0.05
IOWAIT 0.03
FG 0.00
Scheduler 0.00
........................................................................................................................
MISC
CPU per call (ms) 115.47 0.01 60.02
user calls/sec 9.80
• AIX = CPU red flag; small system, SMT = 4
• CPU bound? Maybe
• TM ~ ASH, ASH ~ Wait
• DB Time > DB CPU + Wait => CPU under-reported or run-queue
Final thoughts, future directions
• IT WORKS! Significant reduction of cognitive load
to consistency check AWR report
• Rewrite in Python; awk too fragile
• Accept html version of AWR report
• AWS Lambda service: AWR IN -> 1page OUT
• SQL version: check AWR directly

(should be in DB regression tests)
contact info
• LinkedIn: https://www.linkedin.com/in/john-beresniewicz-986b0
• Gmail: john.beresniewicz@gmail.com
thank you
questions?

 (for the motivating case studies) • Graham Wood
 (for mentorship, advice and insight) • Lothar Flatz / Wolfgang Breitling / Alberto Dell’Era / Toon Koppelaars
 (for test sample AWR reports)
  • 4. Outline of talk • Motivation: confusing AWR reports • Model and instrumentation for time in Oracle • When model and measures don’t match • Concept and design of AWR1page • Using AWR1page: case studies 1 & 2 • Final thoughts, future directions
  • 5. Motivation: confounding AWR reports • OakTable inquiries about AWR reports where “numbers don’t add up” • First case: double-counted DB time • AWR Ambiguity OakTable World 2015 • Second case: AIX CPU reporting issues • AWR1page OakTable World 2016 • Solving such AWR report puzzles is mentally excruciating
  • 6. AWR Ambiguity take-away slide Symptom Possible issue DB CPU >> ASH CPU (and significant wait time) CPU used within wait (this was the issue here) ASH CPU >> DB CPU System CPU-bound (ASH includes run-queue) DB Time >> DB CPU + Wait (and not CPU-bound) Un-instrumented wait (in call, not in wait, not on CPU) DB Time >> ASH DB Time 1. Double-counted DB Time 2. ASH dropped samples
  • 7. Model: Oracle Time = CPU + Wait • Time spent executing Oracle code by either background or foreground processes • Active processes are usually most interesting
 Active = (in DB call) && (on CPU or “active” wait) • Multiple measure sources for each time component Oracle Time CPU Time Wait Time
  • 8. Oracle time instrumentation • Wait/Event Model: V$SYSTEM_EVENT
 FG/BG active/idle wait time • Time Model: V$SYS_TIME_MODEL
 FG/BG CPU and Elapsed • OS Timing: V$OSSTAT
 CPU sys/usr/IO/wait/load avg • ASH: V$ACTIVE_SESSION_HISTORY
 FG/BG CPU and wait (estimated) NOTE: Stop using V$SYSSTAT “CPU used by this session”
  • 9. Time instrumentation essentials • Run-queue time distorts everything! • TM elapsed = (call end - call start) - idle wait time • ASH ON CPU = active session not in wait (derived) • TM CPU measured = actual instruction time used • OSSTAT reliability platform-dependent • Smaller call latencies increase distortion
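The "ASH ON CPU = derived" point deserves emphasis: a sampled active session that is not in a wait is presumed on CPU, which silently includes run-queue time. A minimal sketch with hypothetical sample rows (the `SESSION_STATE` / `EVENT` columns do exist in V$ACTIVE_SESSION_HISTORY; the rows here are invented):

```python
# ASH "ON CPU" is derived, not measured: an active sampled session that is
# not in a wait is presumed on CPU -- which silently includes run-queue time.
samples = [
    {"session_state": "WAITING", "event": "db file sequential read"},
    {"session_state": "ON CPU",  "event": None},
    {"session_state": "ON CPU",  "event": None},
]

cpu_samples = sum(1 for s in samples if s["session_state"] == "ON CPU")
wait_samples = len(samples) - cpu_samples

# Each ~1-second sample counts as ~1 second of DB time, so over an elapsed
# period: ASH CPU AAS ~= cpu_samples / elapsed_seconds.
print(cpu_samples, wait_samples)  # 2 1
```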
  • 10. Normalization: Avg Active Sessions • Primary Oracle performance metric • AAS = instrumentation time / elapsed time • AAS / Cores = core-normalized AAS RMOUG 2007
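The AAS arithmetic is trivial but worth pinning down; a sketch with made-up numbers (a 60-minute snapshot on a 16-core host):

```python
# Made-up numbers: a 60-minute AWR snapshot on a 16-core host.
elapsed_s = 60 * 60        # snapshot elapsed time, seconds
db_time_s = 14_400         # Time Model DB time over the interval, seconds
cores = 16

aas = db_time_s / elapsed_s      # Average Active Sessions
aas_per_core = aas / cores       # core-normalized AAS

print(f"AAS = {aas:.2f}, AAS/core = {aas_per_core:.2f}")  # AAS = 4.00, AAS/core = 0.25
```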
  • 11. Consistency checking AWR reports • AWR reports are massive
 (most of it irrelevant for any specific situation) • Time data scattered about, difficult to compare • Units vary, breakdowns inconsistent, not normalized • Labor intensive and error-prone Cognitive load of consistency checking is huge
  • 12. Sources of inconsistency • CPU saturation: run queue time
 First and most important item to check • CPU under wait event
 Time Model Wait << Wait ~ ASH Wait • OS CPU accounting for multi-threaded cores
 IBM AIX issue: Time Model CPU << ASH CPU • double-counted DB time
 10.1 bug with job queue process accounting
  • 13. Sources of inconsistency (2) • un-instrumented wait event
 ASH CPU >> Time Model CPU (OS CPU) • active events classified idle
 Time Model, Wait and ASH under-reported • dropped ASH samples*
 ASH << Time Model • cpu-active idle events*
 FG Time << FG CPU Time
  • 14. Check CPU first! • OSSTAT: Average Active Sessions Busy / Core • OSSTAT: Load Average / Core • OSSTAT: OS_CPU_WAIT AAS (processes) • Time Model: CPU / Core • CPU Threads / Core = hyper-threading factor
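These first-look ratios are simple divisions; a sketch using the OSSTAT figures from the sample Linux report later in the deck:

```python
# OSSTAT-derived figures from the sample Linux report (cores/cpus from its header).
cores, cpus = 36, 72
busy_aas, load_avg = 5.99, 7.80

threads_per_core = cpus // cores              # hyper-threading factor
print(f"busy/core = {busy_aas / cores:.2f}")  # well under 1.0: not CPU-bound
print(f"load/core = {load_avg / cores:.2f}")
```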
  • 15. Some consistency checks ? CPU saturation: Core utilization >> 1 ? TM AAS ~ ASH AAS ? TM CPU ~ ASH CPU ~ OS CPU ? TM AAS ~ (TM CPU + WT Wait) ? TM Wait ~ ASH Wait ~ WT Wait KEY: WT = Event/Wait Model TM = Time Model OS = OSSTAT ASH = ASH
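These checks can be mechanized; a sketch in Python (the planned rewrite language) using the AAS figures from case study 1 and an arbitrary 20% tolerance — the failing checks reproduce the inconsistency that case study flags:

```python
def roughly_equal(a, b, tol=0.2):
    """True when two AAS measures agree within a relative tolerance."""
    return abs(a - b) <= tol * max(abs(a), abs(b), 1e-9)

# AAS figures as they appear in the case study 1 report.
tm_aas, ash_aas = 4.98, 4.58      # Time Model vs ASH total
tm_cpu, ash_cpu = 2.02, 0.50      # Time Model vs ASH CPU
wt_wait, ash_wait = 4.32, 4.08    # Wait/Event model vs ASH wait

checks = {
    "TM AAS ~ ASH AAS":          roughly_equal(tm_aas, ash_aas),
    "TM CPU ~ ASH CPU":          roughly_equal(tm_cpu, ash_cpu),
    "TM Wait ~ WT Wait":         roughly_equal(tm_aas - tm_cpu, wt_wait),
    "TM AAS ~ TM CPU + WT Wait": roughly_equal(tm_aas, tm_cpu + wt_wait),
}
for name, ok in checks.items():
    print(("OK  " if ok else "??  ") + name)
```

The failing `TM CPU ~ ASH CPU` and `TM AAS ~ TM CPU + WT Wait` checks are exactly the pattern the next slides decode.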
  • 16. Oracle Time << CPU + Wait • CPU under wait, time is being double-counted
 
 TM AAS < (TM CPU + WT Wait) && 
 ASH Wait ~ WT Wait Oracle Time CPU Time Wait Time Oracle Time CPU Time Wait TimeCPU under wait
  • 17. Oracle Time >> CPU + Wait • Wait under-counted, un-instrumented wait?
 
 ASH CPU > TM CPU && TM CPU ~ OS CPU && 
 TM Wait > WT Wait • CPU under-counted, AIX or coarse timers?
 
 ASH CPU > TM CPU && 
 ASH Wait ~ WT Wait && TM Wait > WT Wait Oracle Time CPU Time Wait Time
  • 18. The AIX CPU issue • AIX with SMT turned on reports CPU strangely • Semantics of CPU accounting altered • According to Graham: “it’s just broken” • OakTable blog references:
 Marcin Przepiorowski
 Jeremy Schneider • Graham: Increase reported CPU values by 50% for SMT=4 for more realistic picture
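The rule-of-thumb correction is simple arithmetic; applied to the CPU figures from the AIX case study later in the deck (assumed mapping of values):

```python
tm_cpu, ash_cpu = 0.13, 0.66   # CPU AAS figures from the AIX case study report
tm_cpu_adj = tm_cpu * 1.5      # Graham's SMT=4 rule of thumb: +50%

print(f"TM CPU {tm_cpu} -> {tm_cpu_adj:.3f} adjusted, vs ASH CPU {ash_cpu}")
# Even adjusted, TM CPU << ASH CPU here -- the remaining gap suggests
# run-queue time, which ASH samples as ON CPU but the OS never charges.
```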
  • 19. Objectives for AWR1page • Reduce cognitive load of checking time consistency in AWR reports: Total / CPU / Wait • Facilitate comparative assessment of system sizes and busy-ness • In 1 page, using only AWR text report as input
  • 20. AWR report time measure sources • $0 ~”^Operating System Statistics” { • $0 ~ "^Time Model Statistics" { • $0 ~ "^Foreground Wait Class" { • $0 ~ "^Wait Classes by Total Wait Time" { • $0 ~ "^Activity Over Time" { • Also other important report singletons like elapsed time of AWR period, platform, core count, cpu count
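The awk patterns above anchor on section headers in the text report; the same scan is easy to sketch in Python (the planned rewrite language). Section names are taken from the slide; parsing of each section's rows is left out:

```python
import re

# Section headers the awk version anchors on, per the slide.
SECTIONS = [
    "Operating System Statistics",
    "Time Model Statistics",
    "Foreground Wait Class",
    "Wait Classes by Total Wait Time",
    "Activity Over Time",
]
HEADER = re.compile("^(" + "|".join(re.escape(s) for s in SECTIONS) + ")")

def split_sections(report_text):
    """Bucket report lines under whichever known section header matched last."""
    sections, current = {}, None
    for line in report_text.splitlines():
        m = HEADER.match(line)
        if m:
            current = m.group(1)
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return sections
```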
  • 21. Idea: measure x source matrix INFO OSSTAT TIME MODEL ASH HOST CPU SYS USR ORCL TOTAL CPU WAIT MISC Compare rows for equality measures data sources Columns add up hierarchically
  • 22. Values from AWR report INFO * cores cpus threads/core memory OSSTAT X X X* X TIME MODEL ASH ORCL . total . FG BG . . . .
 X X X . X X X* CPU load . . busy i/o wait scheduler ORCL . FG BG X X X X . . . . . . . X X X . . . . X X X* WAIT MISC
  • 23. Design mockup AWR1page cores: nn cpus: nn threads: nn elapsed:nnnn.m measure OSSTAT TIME MODEL ASH WAIT CLASS HOST LOAD [nn:mm] CPU BUSY [nn:mm] USER nn SYS nn IOWAIT [nn:mm] OS_CPU_WAIT [nn:mm] ORACLE AVG ACTIVE SESSIONS (BG & FG) [nn:mm] [nn:mm] CPU [nn:mm] [nn:mm] FG nn BG nn RES_MGR_CPU_WAIT nn WAIT [nn:mm] [nn:mm] [nn:mm] FG nn nn nn BG nn dd* dd* Scheduler nn MISC: CPU per user call n.mmmm n.mmmm n.mmmm
  • 24. Design notes • AAS = Average Active [ Sessions | Processes | Threads| Cores ] • instrumentation time / elapsed time • All numbers on the sparse matrix are in AAS • Comparable measures from different sources on same line • Indentation and vertical position represent decomposition • Some cells show core-normalized AAS • [nn:mm] where nn=AAS, mm=AAS/cores
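The `[nn:mm]` cell format is easy to pin down; a one-line formatter matching the values visible in the sample output (e.g. HOST LOAD 7.80 on 36 cores renders as `[7.80:0.22]`):

```python
def cell(aas, cores):
    """Render an AWR1page matrix cell as [AAS:AAS/cores] (core-normalized)."""
    return f"[{aas:.2f}:{aas / cores:.2f}]"

print(cell(7.80, 36))  # [7.80:0.22]
```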
  • 25. Version 2c: 634 lines of awk AWR1page file: awrKC.txt platform: Linux x86 64-bit cores: 36 RAC: NO cpus: 72 release: 12.1.0.2.0 threads/core: 2 elapsed: 2.02 (min) sessions(end): 56 snaps: 1 ======================================================================================================================== measure | OSSSTAT | TIME MODEL | ASH | WAIT CLASS ======================================================================================================================== HOST LOAD [7.80:0.22] CPU BUSY [5.99:0.17] USER 2.27 SYS 3.72 IOWAIT [3.00:0.08] OS_CPU_WAIT N/A ........................................................................................................................ ORACLE AVERAGE ACTIVE SESSIONS (BG+FG) [4.98:0.14] [4.58:0.13] CPU [2.02:0.06] [0.50:0.01] FG 2.00 BG 0.01 RSRC_MGR_CPU_WAIT N/A WAIT [2.96:0.08] [4.08:0.11] [4.32:0.12] FG 2.96 4.32 BG 0.00 *0.00 IOWAIT 4.32 FG 4.32 Scheduler 0.00 ........................................................................................................................ MISC CPU per call (ms) 3781.98 10.50 312.50 user calls/sec 1.58 core utilization
  • 26. case studies Using AWR1page to diagnose timing inconsistencies
  • 27. case study 1 Benchmarking Oracle I/O (a.k.a. being a “SLOB”)
  • 28. AWR1page file: awrKC.txt platform: Linux x86 64-bit cores: 36 RAC: NO cpus: 72 release: 12.1.0.2.0 threads/core: 2 elapsed: 2.02 (min) sessions(end): 56 snaps: 1 ======================================================================================================================== measure | OSSSTAT | TIME MODEL | ASH | WAIT CLASS ======================================================================================================================== HOST LOAD [7.80:0.22] CPU BUSY [5.99:0.17] USER 2.27 SYS 3.72 IOWAIT [3.00:0.08] OS_CPU_WAIT N/A ........................................................................................................................ ORACLE AVERAGE ACTIVE SESSIONS (BG+FG) [4.98:0.14] [4.58:0.13] CPU [2.02:0.06] [0.50:0.01] FG 2.00 BG 0.01 RSRC_MGR_CPU_WAIT N/A WAIT [2.96:0.08] [4.08:0.11] [4.32:0.12] FG 2.96 4.32 BG 0.00 *0.00 IOWAIT 4.32 FG 4.32 Scheduler 0.00 ........................................................................................................................ MISC CPU per call (ms) 3781.98 10.50 312.50 user calls/sec 1.58
  • 29. AWR1page file: awrKC.txt platform: Linux x86 64-bit cores: 36 RAC: NO cpus: 72 release: 12.1.0.2.0 threads/core: 2 elapsed: 2.02 (min) sessions(end): 56 snaps: 1 ======================================================================================================================== measure | OSSSTAT | TIME MODEL | ASH | WAIT CLASS ======================================================================================================================== HOST LOAD [7.80:0.22] CPU BUSY [5.99:0.17] USER 2.27 SYS 3.72 IOWAIT [3.00:0.08] OS_CPU_WAIT N/A ........................................................................................................................ ORACLE AVERAGE ACTIVE SESSIONS (BG+FG) [4.98:0.14] [4.58:0.13] CPU [2.02:0.06] [0.50:0.01] FG 2.00 BG 0.01 RSRC_MGR_CPU_WAIT N/A WAIT [2.96:0.08] [4.08:0.11] [4.32:0.12] FG 2.96 4.32 BG 0.00 *0.00 IOWAIT 4.32 FG 4.32 Scheduler 0.00 ........................................................................................................................ MISC CPU per call (ms) 3781.98 10.50 312.50 user calls/sec 1.58 Decent-sized Linux system running Oracle 12 DB Time < DB CPU + Wait => CPU under (IO) wait nontrivial IO wait CPU ok bedrock IO wait = 1 aas CPU + 3 aas wait
  • 30. case study 2 AskTom question about confusing AWR report
  • 31. AWR1page file: QS02FNT_AWR_AskTom.txt platform: AIX-Based Systems (64-bit) cores: 2 RAC: NO cpus: 8 release: 12.1.0.2.0 threads/core: 4 elapsed: 44.22 (min) sessions(end): 91 snaps: 3 ======================================================================================================================== measure | OSSSTAT | TIME MODEL | ASH | WAIT CLASS ======================================================================================================================== HOST LOAD [4.12:2.06] CPU BUSY [1.13:0.57] USER 0.66 SYS 0.47 IOWAIT [0.07:0.03] OS_CPU_WAIT [4.06:2.03] ........................................................................................................................ ORACLE AVERAGE ACTIVE SESSIONS (BG+FG) [0.62:0.31] [0.72:0.36] CPU [0.13:0.07] [0.66:0.33] FG 0.13 BG 0.01 RSRC_MGR_CPU_WAIT 0.00 WAIT [0.48:0.24] [0.06:0.03] [0.05:0.03] FG 0.42 0.01 BG 0.07 *0.05 IOWAIT 0.03 FG 0.00 Scheduler 0.00 ........................................................................................................................ MISC CPU per call (ms) 115.47 0.01 60.02 user calls/sec 9.80
  • 32. AWR1page file: QS02FNT_AWR_AskTom.txt platform: AIX-Based Systems (64-bit) cores: 2 RAC: NO cpus: 8 release: 12.1.0.2.0 threads/core: 4 elapsed: 44.22 (min) sessions(end): 91 snaps: 3 ======================================================================================================================== measure | OSSSTAT | TIME MODEL | ASH | WAIT CLASS ======================================================================================================================== HOST LOAD [4.12:2.06] CPU BUSY [1.13:0.57] USER 0.66 SYS 0.47 IOWAIT [0.07:0.03] OS_CPU_WAIT [4.06:2.03] ........................................................................................................................ ORACLE AVERAGE ACTIVE SESSIONS (BG+FG) [0.62:0.31] [0.72:0.36] CPU [0.13:0.07] [0.66:0.33] FG 0.13 BG 0.01 RSRC_MGR_CPU_WAIT 0.00 WAIT [0.48:0.24] [0.06:0.03] [0.05:0.03] FG 0.42 0.01 BG 0.07 *0.05 IOWAIT 0.03 FG 0.00 Scheduler 0.00 ........................................................................................................................ MISC CPU per call (ms) 115.47 0.01 60.02 user calls/sec 9.80 AIX = CPU red flag CPU bound? maybe small system
 SMT = 4 TM ~ ASH ASH ~ Wait DB Time > DB CPU + Wait => CPU under-report or run-queue 0.56 0.06 ??
  • 33. Final thoughts, future directions • IT WORKS! Significant reduction of cognitive load to consistency check AWR report • Re-write into Python, awk too fragile • Accept html version of AWR report • AWS Lambda service: AWR IN -> 1page OUT • SQL version: check AWR directly
 (should be in DB regression tests)