This document summarizes a presentation given by Jeff McQuigg of KPI Partners on running OBI QA cycles more effectively. The presentation covered:
1) Why QA plans differ for each project and the importance of builds in the QA process.
2) The resources and automation needed for effective QA and how to test the OBI metadata.
3) Breaking the QA process into layers like the UI, reports, security model, and OBI model to test independently.
4) Ensuring the OBI stack is tested for data validation, user functionality, security, performance and infrastructure reliability.
2. Senior Architect at KPI Partners
• 10 years OBIEE consulting experience, 20+ years overall
• Personally involved with 45+ OBI projects in every capacity (BI Architect, Data Modeling, RPD Metadata, Business Analyst, Report Developer, ETL Architect/Developer, Project Manager, Pre-Sales)
• Oracle Ace thought leader for BI & OBI:
  • Blogging on OBI best practices since 2007 at GreatOBI.WordPress.com
  • Frequent Oracle Open World speaker
• Personal: My collection of 3,000+ beer bottles is on display at Brewpalace.com
3. www.kpipartners.com
Transform Data Into Insight
Strategic Consulting | Systems Implementation | Training
• Staff built from Oracle/Siebel/Hyperion engineering teams
• On-site, off-shore, and blended-shore delivery models
• Exclusive pre-built solutions for Oracle BI, E-Business Suite, Hyperion, Endeca, and Exalytics: Depot Repair Analytics, Student Info Analytics, Fixed Asset Analytics, Subledger (SLA) Analytics, Manufacturing Analytics, Salesforce.com Analytics, and more
The Leader In Oracle BI & EPM
4. 1. Why QA plans are always different
2. The importance of Builds
3. What resources you will need
4. The benefits of automation
5. How to test OBI Metadata
Goal: Build better QA plans!
5. Break the problem down into its layers
Each layer can be tested independently for the most part
• Some assumptions are needed
End-to-End and End-to-Mid testing ensures proper handoff between layers
The layers, top to bottom: UI, Alerts, Reports, Security Model, OBI Model (Ad-hoc), Loads, Extracts
6. Test the OBI stack for the following:
1. Data Validation – Is the data accurate?
2. User Functionality – Does the UI work properly?
3. Security – Are the appropriate visibility rules applied?
4. Performance – Do the loads and reports run fast enough?
5. Infrastructure – Is the infrastructure reliable and robust?
7. 1. Resources always differ
• Consultants, IT BI and/or DW team, source system SMEs, internal QA teams, power users
2. Project execution always differs
• Agile or iterative vs. traditional waterfall
• Document robustness
• QA team members’ involvement during the project
• Different corporate toll gates/methodologies
3. Legacy reports / re-platforming
• May or may not have something to compare to
4. QA source application environments differ
• Dedicated vs. shared, controlled vs. no control
5. Technical stack differences (less so on BI Apps)
• E.g.: real-time layer, ODS, DW, DM, UI and security integrations, large user volumes
“QA Plans are like a box of chocolates” – Forrest McQuigg
No two QA plans are ever the same!
8. A BI system is different than compiling code modules for a new build
It’s about the data transformation: applying code to data to make different data
ETL loads drive the QA plan, as they take time (12-48 hours typically)
ETL has a high degree of integration (“the ETL machine”):
• Facts depend on Dimensions
• Extract, Load, Post-Load Process, Aggregate
Several iterations of ETL builds are needed to get it right:
• Full load for base logic
• Incremental loads for subsequent loads
In-line ETL testing during development can lower QA risk, but builds are still needed
9. Putting the Build cycles together yields a staggered plan
• Focus on the Full Load first, then on Incremental Loads
• Staggering the loads can help if two environments are available
• More than 2 Full Loads may be needed before QA begins
10. Consider two levels of QA test cases:
1. Code-to-Spec
• Identify transformational logic and develop test cases based on the design spec
2. Spec-to-Business Objective
• Ensure the spec was written correctly. Examples:
  • All widgets should be assigned to a customer
  • If an owner doesn’t exist in System A, then it should use the record from System B
• Perhaps 2-5 goals per spec
• More applicable for custom-coded solutions and less so for OOB BI Apps extensions
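A spec-to-business check like “all widgets should be assigned to a customer” reduces to an orphan-row query. A minimal sketch using Python's sqlite3 module; the table and column names are hypothetical, for illustration only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Hypothetical warehouse tables standing in for the real schema
cur.executescript("""
CREATE TABLE customer (customer_id INTEGER PRIMARY KEY);
CREATE TABLE widget (widget_id INTEGER, customer_id INTEGER);
INSERT INTO customer VALUES (1), (2);
INSERT INTO widget VALUES (10, 1), (11, 2), (12, NULL);
""")

# Business objective: every widget must be assigned to a known customer.
cur.execute("""
SELECT COUNT(*) FROM widget w
LEFT JOIN customer c ON w.customer_id = c.customer_id
WHERE c.customer_id IS NULL
""")
orphans = cur.fetchone()[0]
# Widget 12 is deliberately left unassigned, so the check finds 1 orphan
print(f"Widgets with no customer: {orphans}")
```

A passing run against real data would report zero orphans; anything else is a spec-to-business failure worth raising with the analysts.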
11. QA Script Development should occur alongside ETL development, using the same spec
• Keep QA and Dev resources separated
• From the Design Spec, Object Code Dev and Object QA Script Dev proceed in parallel and meet at QA Execution
12. Pipeline as much as your team and environment can handle
• Offshore capability helps tremendously
• Weekend builds are important
• Babysitting loads takes time & effort (Build + Test + Fix + Solve)
Multiple DW environments are a must:
• Run QA Pass #1 in Environment #1 while Load #2 is ongoing in Environment #2
• Flexibility is key; use the Pre-Production server if available
• Greater complexity when SIT and UAT are run in parallel
• Post-Release 1 deployments are even more complicated
Plan on at least 2 QA iterations for the Full Load and at least 3 for the Incremental Load
Keep in mind any special loads, such as weekly or monthly jobs
The OBI RPD and Reports can be layered easily on any of these environments
13. Now is not the time to be optimistic!
Add sufficient buffers to the schedule
• Problems are 100% guaranteed
Dependencies: a bug in a dimension may require a full reload to retest
QA cycles can typically run for several months on a complex system
Plan your source system support approach for Incremental Loads early:
• Multiple Prod snapshots to manage – or –
• Manual creation of test cases in a QA system
14. There are a variety of tests to run for ETL:
• Table row counts
• Allocations & summations (e.g., Total $ by month)
• Attribute ranges (min and max values)
• Specific transformation logic (If-Then-Else)
• Slowly Changing Dimensions, Snapshot Facts
• Metadata columns (minimal on BI Apps)
• Aggregates sync with base tables
• 10 Random Records tracing
• Engineered Records (great for incrementals and special test cases)
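Several of these checks reduce to comparing two aggregate queries. A minimal sketch of the aggregates-sync-with-base-tables test, using sqlite3 and hypothetical table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Hypothetical base fact table and its monthly aggregate, for illustration
cur.executescript("""
CREATE TABLE fact_sales (month TEXT, amount REAL);
CREATE TABLE agg_sales_month (month TEXT, total_amount REAL);
INSERT INTO fact_sales VALUES ('2013-01', 100.0), ('2013-01', 50.0), ('2013-02', 75.0);
INSERT INTO agg_sales_month VALUES ('2013-01', 150.0), ('2013-02', 75.0);
""")

# The aggregate must reproduce the base table's totals month by month.
cur.execute("""
SELECT b.month, b.total, a.total_amount
FROM (SELECT month, SUM(amount) AS total FROM fact_sales GROUP BY month) b
JOIN agg_sales_month a ON a.month = b.month
WHERE ABS(a.total_amount - b.total) > 0.01
""")
mismatches = cur.fetchall()
print(f"Out-of-sync months: {len(mismatches)}")
```

A fuller version would also flag months present on only one side; the inner-join form here only catches value drift.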
15. Use SQL script files to automate. For example:

Select ApplLogic(Status_CD), Count(*)
from Source.Table
Group by Status_CD order by Status_CD
Minus
Select Status, Count(*)
from Target.Table
Group by Status order by Status;

Both queries return the same counts by status (Open 4,129; Closed 65,536; Rejected 80,085; UnSpecified 1,024), so the MINUS returns no rows: source and target match.

For Extracts, consider an external file comparison tool
Source system technical SMEs write the source scripts
• Challenging to replicate business transformation logic – get your top experts
Compare source SQL with metric totals in OBI Answers (Anchor Metrics)
Perform both single-hop (Extracts vs. Loads) and multi-hop (Source vs. DW Target) tests
Leverage database constraints (NOT NULL, FK) during DEV and QA to assist
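The MINUS pattern above can be scripted end to end. A minimal sketch in Python with sqlite3 (which spells MINUS as EXCEPT); the tables are hypothetical, and appl_logic is a stand-in for the slide's ApplLogic() business transformation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Hypothetical source and target tables for illustration
cur.executescript("""
CREATE TABLE source_table (status_cd TEXT);
CREATE TABLE target_table (status TEXT);
INSERT INTO source_table VALUES ('O'), ('O'), ('C'), (NULL);
INSERT INTO target_table VALUES ('Open'), ('Open'), ('Closed'), ('Unspecified');
""")

# Stand-in for the ApplLogic() transformation from the slide
def appl_logic(code):
    return {"O": "Open", "C": "Closed"}.get(code, "Unspecified")
conn.create_function("ApplLogic", 1, appl_logic)

# Source counts MINUS target counts; SQLite calls MINUS "EXCEPT".
# An empty result means source and target agree status by status.
cur.execute("""
SELECT ApplLogic(status_cd), COUNT(*) FROM source_table GROUP BY ApplLogic(status_cd)
EXCEPT
SELECT status, COUNT(*) FROM target_table GROUP BY status
""")
diffs = cur.fetchall()
print(f"Mismatched rows: {len(diffs)}")
```

Note the source side groups by the transformed value rather than the raw code, so codes that map to the same status are counted together before the comparison.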
16. Ad-hoc testing of the OBI Model is skipped too frequently
Even if reports are accurate, what about ad-hoc queries?
Reports are built on top of the ad-hoc subject area (stack: Reports, OBI Model, Database)
OBI thinks and generates SQL – does it do so correctly?
There are tradeoffs between QA effort and confidence: confidence rises with the number of tests, but 100% confidence is not possible
17. Ensure proper SQL generation and consistent results
Can be done on a buggy DW (within reason):
• The ETL QA team ensures the raw numbers in the tables are accurate
• OBI tests are relative to those numbers, even if incorrect
Automation of OBI testing: build test reports in OBI alongside reports in DEV
• Place them in a separate IT-only dashboard
• They can be run in any environment at any time
• An excellent automation technique
• Great for fast diagnosis of problems – catching unintended consequences
18. 1. Skeleton accuracy (Tables & Joins):
• Does OBI generate the proper SQL? (BI Architect)
• Do the metric values remain constant for each dimension? (Tests join paths and aggregates)
2. Derived metric accuracy:
• Check OBI derived metrics against Anchor metrics
3. Dimensional attribute accuracy:
• Is descriptive data coming over from the source properly?
4. Data security model accuracy:
• Are filters applied properly to existing queries?
• Is the generated SQL still correct, and are the results consistent?
5. Drill paths:
• Confirm that dimensional drill paths are correct
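The “metric values remain constant” skeleton check can be automated: the grand total of an anchor metric should not change when it is broken out by any single dimension, since a bad join path drops or duplicates rows. A minimal sketch with sqlite3 and hypothetical star-schema tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Hypothetical star schema for illustration
cur.executescript("""
CREATE TABLE dim_region (region_id INTEGER, region TEXT);
CREATE TABLE fact_orders (region_id INTEGER, amount REAL);
INSERT INTO dim_region VALUES (1, 'East'), (2, 'West');
INSERT INTO fact_orders VALUES (1, 100.0), (2, 200.0), (1, 50.0);
""")

# Grand total of the anchor metric, straight off the fact table
cur.execute("SELECT SUM(amount) FROM fact_orders")
grand_total = cur.fetchone()[0]

# Same metric broken out by a dimension; the join must not drop or duplicate rows
cur.execute("""
SELECT SUM(amount) FROM (
  SELECT d.region, SUM(f.amount) AS amount
  FROM fact_orders f JOIN dim_region d ON f.region_id = d.region_id
  GROUP BY d.region)
""")
broken_out_total = cur.fetchone()[0]
print(grand_total == broken_out_total)  # True when the join path is sound
```

Against OBI, the same idea applies with the two numbers coming from the generated queries of an unbroken-out and a broken-out Answers request.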
19. “Tree Top” Tests
• Break an Anchor metric out by the tops of each dimension
• Make sure the correct SQL and correct tables are used (Architect)
• Serves as a unit test
• Also try multiple dimensions at the same time if possible
20. Raw OBI mappings to base Fact table fields are “Base” or “Anchor” metrics
• E.g.: Count(Headcount_Ind) or Sum(Total_Amt)
• There can be filtering logic in the RPD
• ETL QA verifies that these are correct, especially for BI Apps projects
Derived metrics are those calculated in OBI:
• Filtered metrics: Headcount vs. Employee Headcount
• Time Series: Prior Year Employee Headcount
• Rates & Ratios: # Cases per Employee
• Complex Metrics: Rolling 12 Avg. Headcount
Anchor metrics that are incorrect due to ETL do not matter for these tests:
• Prior Year $ should match even if the TOTAL_AMT_USD field is wrong in the database
• Data fields are variables, just like algebra: Prior Year(x)
21. Hint: Capture definitions leveraging other defined fields
Reuse BI definitions as opposed to always mapping to raw tables:
1. Order.Status: IfNull(ORDER_TABLE.STATUS_CD, ‘Unspecified’)
2. # Orders: Count(ORDER_TABLE.ORDER_NUM)
3. # Open Orders: # Orders where Order.Status = ‘Open’
Reuse business terminology as much as possible
Three benefits:
1) Makes creating test cases much easier
2) Communicates the definitions better to business users
3) Helps developers reuse logic when building the RPD
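The same layering can be mirrored in test code: define each metric in terms of the previous one, so a test of # Open Orders only has to trust # Orders and Order.Status. A minimal sketch over hypothetical order data:

```python
# Layered metric definitions mirroring the slide's 1-2-3 structure
orders = [
    {"order_num": 1, "status_cd": "Open"},
    {"order_num": 2, "status_cd": "Closed"},
    {"order_num": 3, "status_cd": None},
    {"order_num": 4, "status_cd": "Open"},
]

def order_status(row):            # 1. Order.Status, with the IfNull default
    return row["status_cd"] if row["status_cd"] is not None else "Unspecified"

def num_orders(rows):             # 2. # Orders
    return sum(1 for r in rows if r["order_num"] is not None)

def num_open_orders(rows):        # 3. # Open Orders, built on 1 and 2
    return num_orders([r for r in rows if order_status(r) == "Open"])

print(num_orders(orders), num_open_orders(orders))  # 4 2
```

If definition 1 changes, definitions 2 and 3 (and their test cases) pick up the change for free, which is exactly the reuse benefit the slide describes.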
22. Make reports that confirm the Derived metrics using their Anchor metrics
• Use color coding to assist
• Use report calculations to demonstrate
• Creative solutions are a must!
23. Try to avoid downloads to Excel if possible
• They hinder automation
Use two reports if needed – be creative!
Provide some instructions:
• These reports should be used for a very long time
(The example report shown used RSUM() and RCOUNT() report calculations)
24. Again, be creative!
This test verifies Prior Year Ship $ is accurate
Solution: Run the report for both 2012 and 2011
• Compare 2011’s CY Ship $ to 2012’s PY Ship $
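This consistency check is easy to automate once the report output is available as data. A minimal sketch, assuming hypothetical report rows with current-year (cy_ship) and prior-year (py_ship) columns:

```python
# Hypothetical report output: one row per year, with CY and PY Ship $
report_rows = {
    2011: {"cy_ship": 1200.0, "py_ship": 900.0},
    2012: {"cy_ship": 1500.0, "py_ship": 1200.0},
}

# The 2012 row's Prior Year Ship $ must equal the 2011 row's CY Ship $.
# This holds even if the underlying ETL amount is wrong: Prior Year(x)
# is tested as a function of x, "just like algebra".
consistent = report_rows[2012]["py_ship"] == report_rows[2011]["cy_ship"]
print(consistent)  # True
```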
25. Extensive OBI ad-hoc testing may take too long
Leverage your reports as surrogates for much of the OBI ad-hoc tests:
• They will include multiple dimensions (skeleton test)
• Various versions of the metrics appear within the many report structures
• “Hit it from multiple angles”
More reports per topic means greater confidence
26. Can be done by an internal QA group not familiar with all the details
Must have a decent report spec to use
• Difficult if an iterative report design approach is taken – minimal specs to use
Displaying the proper data set – filters (e.g., the report shows Open Orders only)
All columns relatively match (e.g., % Deviation = 100 * (Plan – Actual) / Actual)
• Can be done without OBI & ETL testing (algebra again!)
Drill-downs and navigations work
• For navigations, the key test is making sure the numbers remain the same!
Interactions with prompts
UI items: labeling, formatting, colors, conditional formats, UI standards, Help links, etc.
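Column relationships like the % Deviation formula can be re-derived from the report's own columns, without touching OBI or the ETL. A minimal sketch over hypothetical report rows:

```python
# Hypothetical report rows: Plan, Actual, and the report's % Deviation column
rows = [
    {"plan": 110.0, "actual": 100.0, "pct_dev": 10.0},
    {"plan": 90.0,  "actual": 100.0, "pct_dev": -10.0},
]

# Recompute % Deviation = 100 * (Plan - Actual) / Actual and compare
bad = [r for r in rows
       if abs(100 * (r["plan"] - r["actual"]) / r["actual"] - r["pct_dev"]) > 1e-6]
print(f"Rows failing the % Deviation check: {len(bad)}")
```

Because the check only uses the report's own columns, it works even before the underlying data has been validated, which is the "algebra again" point above.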
27. 3 aspects of security testing:
• Visibility – see the correct dashboards, subject areas, and folders
• Capability – Answers access, create iBots, etc.
• Data Access – correct dataset; data filtering is happening
Some security testing should be done during the OBI tests (stack: Reports, Security, OBI Model, Database):
• Are the basics working?
• Will it work for ad-hoc?
Creation of test accounts flows easily from the security model design
The final layer is to run reports as test users and verify data set accuracy:
• For this user, for this report, are the numbers what they are supposed to be?
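That final data-access layer can be sketched as a per-user expected-value test: apply each user's security filter to the same query and compare against expectations. A minimal sqlite3 sketch with hypothetical users and regions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Hypothetical fact table and a user-to-region security mapping
cur.executescript("""
CREATE TABLE fact_orders (region TEXT, amount REAL);
CREATE TABLE user_region (username TEXT, region TEXT);
INSERT INTO fact_orders VALUES ('East', 100.0), ('West', 200.0), ('East', 50.0);
INSERT INTO user_region VALUES ('alice', 'East'), ('bob', 'West');
""")

def total_for_user(username):
    # Mimics the row-level security filter the RPD would apply for this user
    cur.execute("""
        SELECT COALESCE(SUM(f.amount), 0) FROM fact_orders f
        JOIN user_region u ON u.region = f.region
        WHERE u.username = ?""", (username,))
    return cur.fetchone()[0]

# For this user, for this report, are the numbers what they should be?
expected = {"alice": 150.0, "bob": 200.0}
results = {u: total_for_user(u) for u in expected}
print(results == expected)  # True
```

In practice the "query" side would be the report run under a test account, and the expected values would come from the source scripts described earlier.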
28. QA roles by layer:
ETL – Strong SQL, source system knowledge, source system access for entering records (Systems Analysts, Developers, Source SMEs)
OBI Model – OBI Answers, knowledge of the data model, metric definitions guide (Systems Analysts, QA Team, OBI Developers, OBI Architect)
Reports – Ability to independently confirm data from the source; general analyst or business user (Systems Analysts, Business Analysts, End Users, QA Team)
Security – General analyst or power-user skillset, Answers (Systems Analysts, Business Analysts, End Users, QA Team)
Infrastructure – Deep technical skills, typically those who set up the infrastructure (Infrastructure Admins)
29. 1. Perform QA as early in the process as possible
2. Design a QA plan with your team’s skill sets in mind
3. Plan QA cycles around the Build
4. Automate as much as possible
5. Don’t ignore OBI ad-hoc tests
31. Contact Us
facebook.com/kpipartners
linkedin.com/company/kpipartners
twitter.com/kpipartners
youtube.com/kpipartners
32. Contact Us
Email: info@kpipartners.com
Web: kpipartners.com/contact
KPI World Headquarters: 39899 Balentine Drive, Suite #375, Newark, CA 94560. Phone: (510) 818-9480
North America Offices: New York, NY; Minneapolis, MN; Chicago, IL; San Diego, CA; Boston, MA; Greensboro, NC
Global Offices: Bangalore, India; Hyderabad, India