2. Introduction
• What is test reporting to non-testers?
• Why is it important?
• How will we look at this?
Simon Morley Iqnite Nordic 2010
3. Approach
• Real reports from within the last 12 months
• Product & project names have been removed
• We’ll look at what the reports are apparently saying and
other interpretations
• Summary
4. Test Reporting Chain
[Diagram: the tester’s report flows, via unclear (“?”) links, between Tester, Team Lead, Project Lead, Line, Project Steer, Technical Coordinator, Test Coordinator, and Peers.]
5. Background: Test Environments
[Diagram: three test environments for the SUT, ranging from many simulated interfaces (mostly simulated NEs), through some simulated interfaces, to few simulated interfaces (real NEs and user equipment).]
NE: Network Element
SUT: System Under Test
6. Report #1 & #2 – Test Case Focus
Background
• Iterative development – reports used in ship/no-ship decisions.
• Meetings include team members, project managers, system integrators, and technical and test coordinators.
• The incremental development teams only report the faults that are outstanding at the time of delivery.
8. Report #1 – Headline Figures
Test Case Stats
Pass          903   95.86%
Fail           39    4.14%
Not Started     0    0.00%
Trouble Reports
Prio A   0
Prio B   3 (2 in test suite, 1 in SUT)
Prio C   0
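The headline percentages can be recomputed from the raw counts; a quick sketch (counts taken from the table above):

```python
# Recompute Report #1's pass-rate percentages from the raw counts.
counts = {"Pass": 903, "Fail": 39, "Not Started": 0}
total = sum(counts.values())  # 942 test cases in all

for outcome, n in counts.items():
    print(f"{outcome}: {n} ({100 * n / total:.2f}%)")
# Pass: 903 (95.86%)
# Fail: 39 (4.14%)
# Not Started: 0 (0.00%)
```

A 95.86% pass rate sounds reassuring, but the raw share says nothing about where the 39 failures cluster.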
9. Report #1 – Summary
• All new test cases passed.
• Failures were in one particular area.
• Regression had a 99% pass rate.
10. Report #1 – Impressions
• High pass rate.
• Low number of outstanding fault reports.
• Failures all limited to one area.
Q: What is working/isn’t working in that area?
Q: Is it widespread in that functional area?
Q: Will extensive retest of the new functionality be needed when the fix is produced?
12. Report #2 – Headline Figures
Test Case Stats
Pass          1020   100.00%
Fail             0     0.00%
Not Started      0     0.00%
Trouble Reports
Prio A   0
Prio B   0
Prio C   0
13. Report #2 – Summary
• All cases passed – new and regression.
• Only one problem/concern observed:
“We have to cope with >30% regression failures with each run…
that means 3-4 re-runs at least before we are having ‘stable’ results…
that takes at least 24 hours per regression run…”
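Putting the quoted figures against the headline numbers makes the hidden cost concrete (a rough sketch; the rerun count of 3 is the quoted lower bound, and treating the “>30%” as a per-run case-failure share is my reading of the quote):

```python
# What ">30% regression failures", "3-4 re-runs" and "24 hours per run"
# mean against Report #2's own headline of 1020 cases, all passing.
total_cases = 1020     # from the headline figures
failure_rate = 0.30    # ">30% regression failures with each run"
hours_per_run = 24     # "at least 24 hours per regression run"
min_reruns = 3         # "3-4 re-runs at least"

failing_per_run = round(total_cases * failure_rate)
total_hours = (1 + min_reruns) * hours_per_run  # first run plus re-runs
print(f"~{failing_per_run}+ cases fail on a typical single run")  # ~306+
print(f"a 'stable' result costs at least {total_hours} hours")    # 96 hours
```

So behind the clean 100% headline sit hundreds of per-run failures and roughly four days of elapsed time per “green” regression.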
14. Report #2 – Impressions
• The numbers (test cases & trouble reports) say all is well.
• The summary says that there is a certain instability in the “system”.
Q: If there is some instability, do I trust the headline figures?
This summary came as an afterthought – its importance was nearly hidden!
16. Report #3: Test Environment
[Diagram repeated from slide 5: three test environments for the SUT, from many simulated interfaces to few simulated interfaces (real NEs and user equipment).]
NE: Network Element
SUT: System Under Test
17. Report #3 – Background
• The activity was to address a perceived gap in test coverage.
• The testing was performed in a large network environment (expensive).
• The activity was analysed afterwards to assess whether it was cost-beneficial – the test reports were one input into this assessment.
19. Report #3 – Headline Findings
13 test areas covered:
• 8 working
• 3 working with limitations
• 2 working with severe limitations
Summary:
• “Approx 70% of fault reports lead to improvement in the product”
• “This is a good activity and it should be continued (repeated)”
20. Report #3 – First Impressions
• Good testing – lots of problems found – some parts of the system are not working so well.
• High-level view showing which areas are working OK, working with some limitations, and working with severe limitations.
• But does the claim “this is a good activity and should be continued” follow from the data? Let’s look…
21. Report #3 – Data Breakdown
• 137 fault reports from the test activity (both phases)
• 73 faults in one part of the system (53.3%)
The message translated from the report was:
“There’s a problem with this part of the system…”
What about the silent evidence?
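The clustering figure is simple division (numbers from the slide):

```python
# Share of all fault reports landing in a single part of the system.
total_trs = 137   # fault reports from both phases of the activity
one_area = 73     # TRs against one part of the system
share = 100 * one_area / total_trs
print(f"{share:.1f}% of all TRs hit one area")  # 53.3%
```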
23. Report #3 – Two Views of Trouble Reports
[Diagram: a TR’s severity when it is written feeds the test report; its severity when it is later analysed may differ – which view does the report show?]
24. Report #3 – Silent Evidence
• What is silent evidence?
• Let’s look at the part of the system with the most fault reports…
• Of the 73 TRs, the distribution of severity was:
TR prio   # of TRs   % of total
A          0          0.00%
B         47         64.38%
C         26         35.62%
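The distribution percentages follow directly from the counts; a quick check:

```python
# Severity split of the 73 TRs in the worst-hit part of the system.
by_severity = {"A": 0, "B": 47, "C": 26}
total = sum(by_severity.values())  # 73
for prio, n in by_severity.items():
    print(f"Prio {prio}: {n:2d} ({100 * n / total:.2f}%)")
# Prio A:  0 (0.00%)
# Prio B: 47 (64.38%)
# Prio C: 26 (35.62%)
```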
25. Report #3 – Analysis of the TRs
Of the 73 TRs, 70 had been analysed, showing:
• 18 (25.71%) did not lead to a new fix/correction
• 52 (74.29%) were deemed to be “fixable”
Of the 52 that were fixable:
• 32 (45.7% of the 70 analysed) were B severity
• 20 (28.6% of the 70 analysed) were C severity (implicitly non-troubling)
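Note that all of these percentages are shares of the 70 analysed TRs, not of the 52 fixable ones – a quick consistency check:

```python
# Consistency check on the analysis figures: every percentage on the
# slide is a share of the 70 analysed TRs.
analysed = 70
no_fix, fixable = 18, 52          # did not / did lead to a fix
fixable_b, fixable_c = 32, 20     # severity split of the fixable TRs

assert no_fix + fixable == analysed
assert fixable_b + fixable_c == fixable
for label, n in [("no fix", no_fix), ("fixable", fixable),
                 ("fixable, prio B", fixable_b), ("fixable, prio C", fixable_c)]:
    print(f"{label}: {n} = {100 * n / analysed:.2f}% of analysed")
```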
26. Report #3 – Examining the B’s
Of the 32 B severity faults:
Group    #   % of total (/70)
1       12   17.14%
2       12   17.14%
3        4    5.71%
4        4    5.71%
27. Silent Evidence?
• Little consideration of the end-user/customer
• Some testing was run out of phase
• The existing fault database wasn’t always examined
• Too much (wrong) weighting given in some of the reporting – some of the “working with (severe) limitations” assessments were incorrect
29. Report #3 – TRs in Test Reports
[Diagram repeated from slide 23: severity when the TR is written vs. severity when the TR is analysed – which view does the test report show?]
30. Reporting Style Problems
• Statistics-focused (numbers with a short story)
• Forgetting about the silent evidence in the testing
• Reporting at too high a level of abstraction – information can get lost
31. Alternative Reporting Style
• Report on the areas covered
• Report on the areas not covered
• Report on assumptions – simulations, emulations – in product component, data, behaviour, scenario
32. Improving Communication
• Communication starts early
• No surprises – everybody knows the situation throughout the project
• The report is just formal documentation and a round-up of many of the aspects communicated already
33. Other Reflections
• Inherent problems with numbers – not all test cases are equal, not all fault reports are equal
• Numbers say nothing on their own – the story about the product should be up front; the numbers become footnotes to the story
• It’s important to tell a good story – well-rounded, with no undue bias
• Don’t assume – don’t assume your report receiver knows the whole story
34. Thank You for Listening!
Simon Morley
Blog: http://testers-headache.blogspot.com/
Twitter: @YorkyAbroad