In today's fast-paced IT world, companies follow the "best" testing trends and practices on the assumption that applying these methodologies will improve product quality. But that does not always happen. Why? Liana Gevorgyan questions and defines, in the language of metrics, exactly what is expected to change or improve, and how to implement those improvements. While your project is in progress, choosing the right metrics and watching their trends helps you understand what must change to improve your methodology. Metrics such as customer satisfaction, the critical/blocking issues ratio with trends for each iteration, gap analysis results and improvement metrics, automation scripts, and test case coverage are prioritized by assigning each a weight based on current project size, process model, technology, time, and goal. With a long list of metrics and measurement techniques, learn to drill down to what really makes sense in your organization. Develop a model that meets your needs and evaluates changes more effectively.
2. 4/27/2015
2
1999
Mars Climate Orbiter Crash
Instead of using the specified metric units for navigation, the contractor carried out measurements in imperial units, and the spacecraft crashed into Mars.
COST: $135 Million
1996
Ariane 5 Failure
The Ariane 5 rocket exploded 36.7 seconds after takeoff. Its engine was much faster than that of previous models, but it had a software bug that went unnoticed.
COST: >$370 Million
2003
EDS Fails Child Support
EDS created an IT system for the Child Support Agency in the UK that had many software incompatibility errors.
COST: $1.1 Billion
2013
NASDAQ Trading Shutdown
On August 22, 2013, the NASDAQ stock market shut down trading for three hours because of a computer error.
COST: $2 Billion
1985-1987
Therac-25 Medical Accelerator
A software failure caused wrong dosages of X-rays. These dosages were hundreds or thousands of times greater than normal, resulting in death or serious injury.
COST: 5 Human Lives
Technology In Our Daily Life
Average usage of electronic systems in developed countries:
One PC or desktop in each home
80% of people use mobile phones
40% of people have cars with various electronic systems
People travel by train or plane on average once a year
Dozens of other embedded systems in our homes
Dozens of software programs in our workplaces and service systems
The quality of all these systems is equal to the quality of life!
SECTION 2) DEFINING THE “WHAT”
Known QA Metrics & Trends
Defining “What”
Metrics and Trends
Measure to Understand
Understand to Control
Control to Improve
Several Known QA Metrics and Trends
Manual & automation time ratio during the regression cycle
Script maintenance time during the delivery iteration
Daily manual test case execution
Automation effectiveness for issue identification
Issues found per area during regression
Areas impacted after new feature integration
Issue identification behavior after major refactoring
Software process timetable metrics
Delivery process productivity metric
Software system availability metrics
Test case coverage
Automation coverage
Issues identified through gap analysis
Ambiguities per requirement
Identified issues by criticality
Identified issues by area separation
Issue resolution turnaround time
Backlog growth speed
Release patching tendency and costs
Customer escalations by Blocker/Critical/Major issues per release
QA engineer performance
Continuous integration efficiency
Metrics Classification
PRODUCT METRICS
PROCESS METRICS
QA METRICS
Metrics Examples by Classification
PROCESS METRICS
Delivery process productivity metrics
Continuous integration efficiency
Release patching tendency and costs
Backlog growth speed
QA engineer performance
Software process timetable metrics

PRODUCT METRICS
Software system stability metrics
Identified issues by criticality
Identified issues by area separation
Customer escalations by Blocker/Critical/Major issues per release
Ambiguities per requirement
Backlog growth speed
Sample Metrics Visual
[Figure: Automation Coverage (pie chart): Automated UI and BE 55%, Automated UI 20%, In Progress 15%, Pending Automation 7%, Not Feasible 3%. Bugs by Severity (bar chart): Blocker 3, Critical 5, High 12, Medium 34, Low 45.]
Visual Depiction Of Sample Trends
[Figure: Issue escalations by criticality (Blocker/Critical/High/Medium/Low), monthly trend. Rejected bugs % per week, over 6 weeks.]
Expectations
Smooth releases
Predefined risks with mitigation plans
Positive feedback and appreciation
Top-notch, innovative products
Real Life
Delivery is not always ideal
We all know what patching a release means
Lack of process tracking data for analysis
Experimental delivery models are not exactly best-practice models
SECTION 3) DELIVERY PROCESSES & METRICS
Waterfall & Agile
Agile Process Metrics
SCRUM TEAM SPRINT METRICS
Scrum team's understanding of sprint scope and goal
Scrum team's adherence to Scrum rules & engineering practices
Scrum team's communication
Retrospective process improvement
Team enthusiasm
Quality delivered to customer
Team velocity
Technical debt management
Actual stories completed vs. planned
Processes Are Not Always Best Practices
A unique flavor of Agile
Transition from Waterfall to Agile
Transition from Agile to Kanban
Metrics Set Definition for Your Project
Process
Technology
Iterations
Project/Team Size
Goal
SECTION 4) WEIGHT-BASED ANALYSIS FOR QA METRICS & MEASUREMENTS
Mapping With Graph Theory
Metric and Trends for your Project
You are watching a set of metrics and trends: are they the right ones?
Trends are in an acceptable range, but product quality is not improving?
Trying to improve one metric while another goes down?
How do you analyze and fix it?
Mapping QA Metrics Into Graph Theory
Process metrics: A = Metric 1, B = Metric 2, …
Product metrics: C = Metric 3, D = Metric 4, …
Actions/data sets that affect metrics: A1, A2, …
Edges represent metric dependencies on specific actions and on other metrics
Preconditions & Definitions for the Metrics & Actions Mapped Model
A node's initial weight is predefined, with a value from 1 to 10
An edge's weight is predefined, with a value from 1 to 10
Connections between nodes are defined based on the dependencies of metrics on each other and on actions
All actions have a fixed weight of 1
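The preconditions above map naturally onto a small data model. The sketch below is ours, not from the talk (class and method names are assumptions); it only enforces the stated constraints.

```python
# Illustrative data model for the metrics/actions graph described above:
# metric nodes carry a user-assigned weight from 1 to 10, action nodes a fixed
# weight of 1, and undirected weighted edges (1 to 10) encode dependencies.
class MetricsGraph:
    def __init__(self):
        self.weights = {}  # node name -> node weight
        self.edges = {}    # node name -> weights of incident edges

    def add_metric(self, name, weight):
        assert 1 <= weight <= 10, "node weight must be 1-10"
        self.weights[name] = weight
        self.edges.setdefault(name, [])

    def add_action(self, name):
        self.weights[name] = 1  # all actions have a fixed weight of 1
        self.edges.setdefault(name, [])

    def connect(self, a, b, weight):
        assert 1 <= weight <= 10, "edge weight must be 1-10"
        self.edges[a].append(weight)
        self.edges[b].append(weight)

g = MetricsGraph()
g.add_metric("M1", 5)
g.add_action("A1")
g.connect("M1", "A1", 2)
```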
Initial Metrics Model & Dependencies
Assume the current metric set is:
2 process metrics: M1, M2
2 product metrics: M3, M4
Where:
M1 depends on M3
M1 depends on M4
M2 depends on M3
There are 3 actions or data sets that affect some of the metrics: A1, A2, A3
Where:
M2 depends on A1 and A2
M4 depends on A3
Initial Priority
Initial priority based on best practices:
W(M1) = 5
W(M2) = 4
W(M3) = 3
W(M4) = 2
Metrics Visualization via Graph
[Figure: undirected graph with metric nodes M1 (weight 5), M2 (weight 4), M3 (weight 3), M4 (weight 2) and action nodes A1, A2, A3.]
Process metrics: M1, M2
Product metrics: M3, M4
Actions/data sets that affect metrics: A1, A2, A3
Edges represent metric dependencies on actions and on other metrics
Weight Assignment on the Undirected Graph
[Figure: the same graph with weights assigned. Node weights: M1 = 5, M2 = 4, M3 = 3, M4 = 2; all action nodes have weight 1. Edge weights: M1-M3 = 3, M1-M4 = 5, M2-M3 = 2, M2-A1 = 1, M2-A2 = 1, plus one edge of weight 6 incident to M3.]
Process metrics: M1, M2
Product metrics: M3, M4
Actions/data sets that affect metrics: A1, A2, A3
Edges represent metric dependencies on actions and on other metrics
Calculation Formula for the Metrics' New Priority
The priority of a node is calculated as follows:
P(M) = W(M) × Σ W(e), over all edges e incident to the node M
where
W(M) is the node weight assigned by the user
Σ W(e) is the cumulative weight of the node's edges
New Priority Calculations for One Metric
[Figure: subgraph around M2 (weight 4), with edges M2-A1 = 1, M2-A2 = 1, M2-M3 = 2.]
Initial priority:
M2 = W(M2) = 4
New priority:
M2 = W(M2) × (W(M2-A1) + W(M2-A2) + W(M2-M3))
M2 = 4 × (1 + 1 + 2) = 4 × 4 = 16
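The single-metric calculation above can be checked in a couple of lines (an illustrative sketch; the variable names are ours):

```python
# New priority of M2 = node weight times the sum of its incident edge weights.
w_m2 = 4              # W(M2), the user-assigned node weight
incident = [1, 1, 2]  # W(M2-A1), W(M2-A2), W(M2-M3)
new_priority = w_m2 * sum(incident)
print(new_priority)  # 16
```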
New Priority Calculations for the Graph

Node (weight) | Incident edge weights | New priority
M1 (5)        | M3: 3, M4: 5          | 5 × 8 = 40
M2 (4)        | M3: 2, A1: 1, A2: 1   | 4 × 4 = 16
M3 (3)        | M1: 3, M2: 2, A3: 6   | 3 × 11 = 33
M4 (2)        | M1: 5                 | 2 × 5 = 10

Metrics by initial priority: M1, M2, M3, M4
Metrics by new priority: M1, M3, M2, M4
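The whole calculation reduces to a few lines of code. This is an illustrative sketch (ours, not from the talk); node weights come from the initial-priority slide and the incident edge weights from the calculation table, where the endpoint of M3's weight-6 edge is taken to be A3.

```python
# New priority per node = node weight x sum of its incident edge weights.
node_weight = {"M1": 5, "M2": 4, "M3": 3, "M4": 2}
incident = {
    "M1": [3, 5],     # edges to M3, M4
    "M2": [2, 1, 1],  # edges to M3, A1, A2
    "M3": [3, 2, 6],  # edges to M1, M2, A3
    "M4": [5],        # edge to M1
}
new_priority = {m: node_weight[m] * sum(incident[m]) for m in node_weight}
ranking = sorted(new_priority, key=new_priority.get, reverse=True)
print(new_priority)  # {'M1': 40, 'M2': 16, 'M3': 33, 'M4': 10}
print(ranking)       # ['M1', 'M3', 'M2', 'M4']
```

Re-ranking by the calculated priorities reproduces the new order shown on the next slide: M3 overtakes M2 once its dependencies are weighted in.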
Metrics Priorities: Current vs. Calculated
INITIAL PRIORITY (based on best practices): M1, M2, M3, M4
NEW PRIORITY (project-dependent, calculated): M1, M3, M2, M4
SECTION 5) METRICS WEIGHT-BASED ANALYSIS IN PRACTICE
Defining “How”
Metrics Definition For Test Project
Process – Agile with area ownership
Technology – SaaS-based enterprise web & mobile app
Iteration – 2 weeks
Project size – 5 Scrum teams
Goal – customer satisfaction; no Blocker or Critical issues escalated by customers
Key Metrics and Dependencies
Metrics
M1 – Customer escalations per defect severity (product metric)
M2 – Opened valid defects per area (product metric)
M3 – Rejected defects (process metric)
M4 – Test case coverage (process metric)
M5 – Automation coverage (process metric)
M6 – Defect fixes per criticality (product metric)
Actions and data sets
A1 – Customer types per investment and escalations per severity
A2 – Most bug-prone areas
Metrics Initial Priority: Weight Assignment and Dependency Analysis

Predefined node weights, in order of initial priority:
M1 – Customer escalations per defect severity: weight 8
M6 – Defect fixes per criticality per team: weight 6
M2 – Opened valid defects per area: weight 5
M3 – Rejected defects: weight 4
M4 – Test case coverage: weight 3
M5 – Automation coverage: weight 2

Incident edge weights per node:
M1 (8): 2, 6, 4, 5, 2
M2 (5): 2, 1, 3
M3 (4): 6, 3, 3, 2
M4 (3): 1, 3
M5 (2): 2
M6 (6): 4, 2
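The case-study re-ranking can be reproduced with the same priority formula. This sketch is ours; the node weights and incident edge weights are reconstructed from the weight-assignment table on this slide (the exact edge endpoints are not needed for the calculation).

```python
# Priority = node weight x sum of incident edge weights, then rank descending.
node_weight = {"M1": 8, "M2": 5, "M3": 4, "M4": 3, "M5": 2, "M6": 6}
incident = {
    "M1": [2, 6, 4, 5, 2],
    "M2": [2, 1, 3],
    "M3": [6, 3, 3, 2],
    "M4": [1, 3],
    "M5": [2],
    "M6": [4, 2],
}
priority = {m: node_weight[m] * sum(incident[m]) for m in node_weight}
ranked = sorted(priority, key=priority.get, reverse=True)
print(ranked)  # ['M1', 'M3', 'M6', 'M2', 'M4', 'M5']
```

Under these weights the calculated order matches the "Metrics by Calculated Priority" list on the following slide.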
Key Metric Changes & Improvement Plans

Metrics by calculated priority:
M1 – Customer escalations per defect severity
M3 – Rejected defects
M6 – Defect fixes per criticality per team
M2 – Opened valid defects per area
M4 – Test case coverage
M5 – Automation coverage

Improvement plans:
Group defects by severity and by customer investment to understand the real picture: 1,000 minor issues can cost more than one high-severity issue.
Run trainings to lower defect rejection, so developers do not spend time analyzing invalid issues.
Make sure defect fixes proceed in parallel with new feature development in each sprint.
Continuously update test cases after each new issue, to make sure you keep good coverage.
Automate as much as possible to cut costs and increase coverage.
Monitoring of Trend-Based Priority Metrics Based on Process Changes
[Figure: monthly trend, January through April, of the top-priority metrics after process changes. M1: 60, 70, 68, 75; M3: 20, 25, 29, 34; M6: 70, 80, 82, 78.]
Let the Challenge Begin & Have FUN
Thank You
Global Footprint
About Us
A leading provider of next-gen mobile application lifecycle services, ranging from design and development to testing and sustenance.
Locations
Corporate HQ: Silicon Valley
Offices: Conshohocken (PA), Ahmedabad (India), Pune (India), London (UK)
InfoStretch Corporation