TB
Full Day Tutorial
10/14/2014 8:30:00 AM
"Successful Test Automation: A
Manager’s View"
Presented by:
Mark Fewster
Grove Consultants
Brought to you by:
340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
Mark Fewster
Grove Software Testing Ltd.
Mark Fewster has more than thirty years of experience in software testing ranging from test
management to test techniques and test automation. For the past two decades, Mark has
provided consultancy and training in software testing, published papers, and co-authored
Software Test Automation and Experiences of Test Automation with Dorothy Graham.
A popular speaker at conferences worldwide, Mark has won the Mercury BTO Innovation in
Quality Award. He is currently helping ISTQB define the expert level certification for test
automation.
Speaker Presentations
Written by
Grove Software Testing Ltd.
www.grove.co.uk
Version 3_1
© Grove Software Testing, 2014
StarWest 2014
Managing Successful Test
Automation
A one-day tutorial
Managing Successful Test Automation
Contents
Session 0: Introduction to the tutorial
Objectives, What we cover (and don’t cover) today
Session 1: Planning and Managing Test Automation
Test automation objectives (and exercise)
Responsibilities
Pilot project
Measures for automation
Return on Investment (ROI) (and exercise)
Session 2: Testware Architecture
Importance of a testware architecture
What needs to be organised
Session 3: Pre- and Post-Processing
Automating more than tests
Test status
Session 4: Scripting Techniques
Objectives of scripting techniques
Different types of scripts
Domain specific test language
Session 5: Automated Comparison
Automated test verification
Test sensitivity
Comparison example
Session 6: Final Advice, Q&A and Direction
Strategy exercise
Final advice
Questions and Answers
Abstract
Many organisations have invested a lot of time and effort into test automation but they
have not achieved the significant returns that they had expected. Some blame the
tool that they use while others conclude test automation doesn't work well for their
situation. The truth is often very different. These organisations are typically doing
many of the right things but they are not addressing key issues that are vital to long
term success with test automation.
Mark Fewster describes the most important issues that you must address, and helps
you understand and choose the best approaches for your organization—no matter
which automation tools you use. Management issues including responsibilities,
automation objectives and return on investment are covered along with technical
issues such as testware architecture, pre- and post-processing and automated
comparison techniques.
The target audience for this tutorial is people involved with managing test automation
who need to understand the key issues in making test automation successful.
Technical issues are covered at a high level of understanding; there are no tool
demos!
Biography
Mark has over 30 years of industrial experience in software testing ranging from test
management to test techniques and test automation. In the last two decades Mark
has provided consultancy and training in software testing, published papers and co-
authored two books with Dorothy Graham, "Software Test Automation” and
“Experiences of Test Automation”. He is a popular speaker at national and
international conferences and seminars, and has won the Mercury BTO Innovation in
Quality Award.
Mark has served on the committee of the British Computer Society's Specialist
Interest Group in Software Testing (BCS SIGiST) and is currently helping ISTQB in
defining the expert level certification for test automation.
Managing Successful Test Automation
Prepared by
Dorothy Graham
info@dorothygraham.co.uk
www.DorothyGraham.co.uk
and
Mark Fewster
Grove Software Testing Ltd.
mark@grove.co.uk
www.grove.co.uk
© Mark Fewster and Dorothy Graham 2014
0.1
Objectives of this tutorial
● help you achieve better success in automation
■ independent of any particular tool
● mainly management and some technical issues
■ objectives for automation
■ showing Return on Investment (ROI)
■ importance of testware architecture
■ practical tips for a few technical issues
■ what works in practice (case studies)
● help you plan an effective automation strategy
0.2
Tutorial contents
1) planning & managing test automation
2) testware architecture
3) pre and post processing
4) scripting techniques
5) automated comparison
6) final advice, Q&A and direction
0.3
Shameless commercial plug
Part 1: How to do automation - still relevant today, though we plan to update it at some point!
Latest book (2012)
testautomationpatterns.org
0.4
What is today about? (and not about)
● test execution automation (not other tools)
● We will NOT cover:
■ demos of tools (time, which one, expo)
■ comparative tool info (expo, web)
■ selecting a tool*
● at the end of the day
■ understand technical and non-technical issues
■ have your own automation objectives
■ plan your own automation strategy
* Mark will email you Chapter 10 of the STA book on request – mark@grove.co.uk
0.5
Planning & Managing Test Automation
Managing Successful Test Automation
1 Managing 2 Architecture 3 Pre- and Post 4 Scripting 5 Comparison 6 Advice
1.1
Managing Successful Test Automation
Contents
Managing
Test automation objectives
Responsibilities
Automation in agile
Pilot project
Measures for automation
Return on Investment (ROI)
1.2
An automation effort
● is a project (getting started or major changes)
■ with goals, responsibilities, and monitoring
■ but not just a project – ongoing effort is needed
● not just one effort – continuing
■ when acquiring a tool – pilot project
■ when anticipated benefits have not materialized
■ different projects at different times
► with different objectives
● objectives are important for automation efforts
■ where are we going? are we getting there?
1.3
Efficiency and effectiveness
[chart: a 2 x 2 grid of effectiveness (low to high) against efficiency (manual testing to automated)]
                      Manual                  Automated
Low effectiveness     poor slow testing       poor fast testing
                      (worst)                 (not good but common)
High effectiveness    good slow testing       good fast testing
                      (better)                (greatest benefit)
1.4
Common test automation objectives
● choose your most important objective:
1. faster testing
2. run more tests
3. reduce testing costs
4. automate x% of tests
5. find more bugs
6. other: _______________________________
(typical phrasings: reduce elapsed time, shorten schedule, time to market; more testing, more often; increase test coverage; cheaper testing, less effort, fewer testers; automate 100% of testing; better testing, improve software quality)
Exercise
1.5
Faster testing?
[diagram: effort for the activities edit tests (maintenance), set-up, execute, analyse failures and clear-up, compared across manual testing, the same tests automated, and more mature automation]
1.6
Run more tests?
● which is better:
■ 100 1-minute tests or 1 60-minute test?
● not the only aspect:
■ failure analysis may be much longer for the 60-minute test!
[diagram: where the time goes in each case]
100 1-min tests, each 25% setup, 25% common, 25% unique, 25% teardown:
unique testing = 15 sec x 100 = 25 mins
1 60-min test, 25% setup, 50% unique, 25% teardown:
unique testing = 50% = 30 mins
1.7
Run more tests?
● 3 sets of tests
■ A: 100 tests, easy to automate
■ B: 60 tests, moderately difficult to automate
■ C: 30 tests, hard to automate
● what if the next release leaves A unchanged but has
major changes to B and C?
“Good automation is not found in the
number of tests run but in the
value of the tests that are run”
1.8
Reduce testing costs?
Yes, but not to zero!
[chart: cost over time (months / years); adding the automation effort, the total effort with automation falls below the manual testing effort without automation - but not to zero]
1.9
Automate x% of the manual tests?
[diagram: of the manual tests, some are automated (x% manual), some are not worth automating, and some are not automated yet; automation also adds tests (and verification) not possible to do manually, and exploratory test automation sits outside the manual set]
What is automated?
[diagram: a spectrum from regression tests to exploratory testing; the likelihood of finding bugs rises towards exploratory testing, yet regression tests are the most often automated]
1.11
Find more bugs?
● tests find bugs, not automation
● automation is a mechanism for running tests
● the bug-finding ability of a test is not affected by the
manner in which it is executed
● this can be a dangerous objective
■ especially for regression automation!
Bugs found, by test type: Automated tests 9.3%, Manual Scripted 24.0%, Exploratory 58.2%, Fix Verification 8.4%
Experiences of Test Automation, Ch 27, p 503, Ed Allen & Brian Newman
1.12
When is “find more bugs” a good objective for
automation?
● objective is “fewer regression bugs missed”
● when the first run of a given test is automated
■ MBT, Exploratory test automation, automated test design
■ keyword-driven (e.g. users populate spreadsheet)
● find bugs in parts we wouldn’t have tested?
■ indirect! (direct result of running more tests)
1.13
Good objectives for test automation
● realistic and achievable
● short and long term
● regularly re-visited and revised
● measurable
● should be different objectives for testing and for
automation
● automation should support testing activities
Pattern: SET CLEAR GOALS
1.14
Trying to get started: Tessa Benzie
■ consultancy to start automation effort
► project, needs a champion – hired someone
► training first, something else next, etc.
■ contract test manager – more consultancy
► bought a tool – now used by a couple contractors
► TM moved on, new QA manager has other priorities
■ just wanting to do it isn’t enough
► needs dedicated effort
► now have “football teams” of manual testers
Chapter 29, pp 535, Experiences of Test Automation 1.15
Managing Successful Test Automation
Contents
Managing
Test automation objectives
Responsibilities
Automation in agile
Pilot project
Measures for automation
Return on Investment (ROI)
1.16
What is an automated test?
● a test!
■ designed by a tester for a purpose
● test is executed
■ implemented / constructed to run automatically using a tool
■ or run manually
● who decides which tests to run?
● who decides how a test is run?
1.17
Test manager’s dilemma
● who should undertake automation work
■ not all testers can automate (well)
■ not all testers want to automate
■ not all automators want to test!
● conflict of responsibilities
■ (if you are both tester and automator)
■ should I automate tests or run tests manually?
● get additional resources as automators?
■ contractors? borrow a developer? tool vendor?
1.18
Relationships
[diagram: a car analogy - the test tool is the engine, the test infrastructure is the car, the test cases are the passengers, the tester is the driver and the test automator is the mechanic]
1.19
Responsibilities
Testers
● test the software
■ design tests
■ select tests for automation
► requires planning / negotiation
● execute automated tests
■ should not need detailed technical expertise
● analyse failed automated tests
■ report bugs found by tests
■ problems with the tests may need help from the automation team
Automators
● automate tests (requested by testers)
● support automated testing
■ allow testers to execute tests
■ help testers debug failed tests
■ provide additional tools
● predict
■ maintenance effort for software changes
■ cost of automating new tests
● improve the automation
■ more benefits, less cost
1.20
Testing versus automation
Good testing
● is effective
■ finds most of the faults (90+%)
■ gives confidence
● is efficient
■ uses a few small test cases
● is flexible
■ can use different subsets of test cases for different test objectives
Good automation
● easy to use
■ flexible: supports different requirements for automation
■ responsive: quick changes when needed
● cheap to use
■ build & maintain automated tests
■ failure analysis
● improve over time
1.21
Roles for automation
● Testware architect
■ designs the overall structure for the automation
● Champion
■ “sells” automation to managers and testers
● Tool specialist / toolsmith
■ technical aspects, licensing, updates to the tool
● Automated script developers
■ write new scripts as needed (e.g. keyword)
■ debug automation problems
1.22
Managing Successful Test Automation
Contents
Managing
Test automation objectives
Responsibilities
Automation in agile
Pilot project
Measures for automation
Return on Investment (ROI)
1.23
Agile automation: Lisa Crispin
■ starting point: buggy code, new functionality needed,
whole team regression tests manually
■ testable architecture: (open source tools)
► want unit tests automated (TDD), start with new code
► start with GUI smoke tests - regression
► business logic in middle level with FitNesse
■ 100% regression tests automated in one year
► selected set of smoke tests for coverage of stories
■ every 6 months, an engineering sprint on the automation
■ key success factors
► management support & communication
► whole team approach, celebration & refactoring
1.24
Chapter 1, pp 17-32, Experiences of Test Automation
Automation and agile
● agile automation: apply agile principles to automation
■ multidisciplinary team
■ automation sprints
■ refactor when needed
● fitting automation into agile development
■ ideal: automation is part of “done” for each sprint
► Test-Driven Design = write and automate tests first
■ alternative: automation in the following sprint ->
► may be better for system level tests
See www.satisfice.com/articles/agileauto-paper.pdf (James Bach)
1.25
Automation in agile/iterative development
1.26
[diagram: across releases A, B, C, D, E, F: testers manually test the current release; automators automate the best tests from earlier releases for regression testing; testers run the automated tests on later releases]
Requirements for agile test framework
● support manual and automated testing
■ using the same test construction process
● support fully manual execution at any time
■ requires good naming convention for components
● support manual + automated execution
■ so test can be used before it is 100% automated
● implement reusable objects
● allow “stubbing” objects before GUI available
Source: Dave Martin, LDSChurch.org, email
1.27
A tale of two projects: Ane Clausen
■ Project 1: 5 people part-time, within test group
► no objectives, no standards, no experience, unstable
► after 6 months was closed down
■ Project 2: 3 people full time, 3-month pilot
► worked on two (easy) insurance products, end to end
► 1st month: learn and plan, 2nd & 3rd months: implement
► started with simple, stable, positive tests, easy to do
► close cooperation with business, developers, delivery
► weekly delivery of automated Business Process Tests
■ after 6 months, automated all insurance products
Chapter 6, pp 105-128, Experiences of Test Automation
1.28
Managing Successful Test Automation
Contents
Managing
Test automation objectives
Responsibilities
Automation in agile
Pilot project
Measures for automation
Return on Investment (ROI)
1.29
Pilot project
● reasons
■ you’re unique
■ many variables / unknowns /
options
■ learn how to start / improve
● benefits
■ find the best way for you
■ solve problems once
■ establish confidence (based
on experience)
■ set realistic targets
● objectives
■ demonstrate tool value
■ gain experience / skills in the
use of the tool
■ identify changes to existing
test process
■ set internal standards and
conventions
■ refine assessment of costs and
achievable benefits
1.30
Characteristics of a pilot project
PILOT:
Planned: resourced, targets, contingency
Important: full time work, worthwhile tests
Learning: informative, useful, revealing
Objective: quantified, not subjective
Timely: short term, focused
1.31
What to explore in the pilot
● build / implement automated tests (architecture)
■ different ways to build stable tests (e.g. 10 – 20)
● maintenance
■ different versions of the application
■ reduce maintenance for most likely changes
● failure analysis
■ support for identifying bugs
■ coping with common bugs affecting many automated tests
1.32
Also: naming conventions, reporting results, measurement
After the pilot…
● having processes & standards is only the start
■ 30% on new process
■ 70% on deployment
► marketing, training, coaching
► feedback, focus groups, sharing what’s been done
● the (psychological) Change Equation
■ change only happens if (x + y + z) > w
1.33
Source: Eric Van Veenendaal,
successful test process improvement
Managing Successful Test Automation
Contents
Managing
Test automation objectives
Responsibilities
Automation in agile
Pilot project
Measures for automation
Return on Investment (ROI)
1.34
Why measure automation?
● to justify and confirm starting automation
■ business case for purchase/investment decision, to confirm ROI
has been achieved e.g. after pilot
■ both compare manual vs automated testing
● to monitor on-going automation “health”
■ for increased efficiency, continuous improvement
■ build time, maintenance time, failure analysis time, refactoring
time
► on-going costs – what are the benefits?
■ monitor your automation objectives
1.35
Useful measures
● a useful measure:
“supports effective analysis and decision making, and that
can be obtained relatively easily.”
Bill Hetzel, “Making Software
Measurement Work”, QED, 1993.
● easy measures may be more useful even though less
accurate (e.g. car fuel economy)
● ‘useful’ depends on objectives, i.e. what you want to
know
1.36
EMTE – what is it?
● Equivalent Manual Test Effort
■ given a set of automated tests,
■ EMTE is how much effort it would take to run those tests manually
● note
■ you would not actually run these tests manually
■ EMTE is the effort you would have spent if you had run the tests manually
■ EMTE can be used to show some test automation benefit
1.37
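A worked example with illustrative figures (not from the tutorial): if a set of automated tests would take 3 days of manual effort to run, and it is run 20 times during a release, its EMTE for that release is 3 x 20 = 60 days of manual testing, even if the automated runs took only a few hours of machine time.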
Monitoring test automation health
● important to do (often neglected)
■ need to distinguish between test automation progress and test
automation health
● progress examples
■ number of tests automated
■ coverage achieved by automated tests
■ number of test cycles executed / release
● health examples
■ benefits
■ build cost
■ analysis cost
■ maintenance cost 1.38
ROI = (benefit - cost) / cost
Internal objectives for test automation
● provide automation services
■ efficient building, use and maintenance of automated tests
■ decrease automation costs over time
■ increase automation benefits
► savings, ease of use, flexibility
● measured ROI (appropriate for test objectives)
■ Equivalent Manual Test Effort (EMTE)
► additional test hours
► hours of unattended tests performed
■ proportion of unattended testing
■ increased coverage
■ reduce elapsed time 1.39
Measure benefit
● equivalent manual test effort (EMTE)
■ hours of additional testing
■ hours of unattended testing
● number of tests
■ tests executed
■ additional (new) tests
■ repeated tests
● number of test cycles
■ additional cycles
● increased coverage
1.40
Suggestion: relate target to total cost of automation, e.g. benefit 10 times total cost
Benefit: the degree to which automation has supported testers in achieving their objectives
An example comparative benefits chart
[bar chart comparing manual and automated testing: execution speed 14 x faster, tests run 5 x more often, 4 x more data variety, 12 x less tester effort]
ROI spreadsheet - email me for a copy
1.41
Measure build effort
● time taken to automate tests
■ hours to add new or existing manual tests
■ average across different test types
● proportion of equivalent manual test effort
■ e.g. 1 hour to automate 30 minute manual test
= 2 times equivalent manual test effort
1.42
Suggestion (put your own, more appropriate, figures here):
Target: < 2 times
Trend: decreasing 10% per year
Measure failure analysis effort
● analysis effort for each test
■ captured in defect report
■ effort from first recognition through to resumption of test
execution
■ average hours (or minutes) per failed test case
► needs comparison of same with manual testing
► must also monitor defect reporting effectiveness
– e.g. how many ‘non reproducible’ reports
1.43
Suggestion (put your own, more appropriate, figure here):
Target: X minutes?
Trend: stable
Measure maintenance effort
● maintenance effort of automated tests
■ percentage of test cases requiring maintenance
■ average effort per test case
■ percentage of equivalent manual test effort
1.44
Suggestion (put your own, more appropriate, figure here):
Target: < 10%
Trend: stable or decreasing
Recommendations
● don’t measure everything!
● choose three or four measures
■ applicable to your most important objectives
● monitor for a few months
■ see what you learn
● change measures if they don’t give useful information
1.45
Managing Successful Test Automation
Contents
Managing
Test automation objectives
Responsibilities
Automation in agile
Pilot project
Measures for automation
Return on Investment (ROI)
1.46
Is this Return on Investment (ROI)?
● tests are run more often
● tests take less time to run
● it takes less human effort to run tests
● we can test (cover) more of the system
● we can run the equivalent of days / weeks of manual
testing in a few minutes / hours
● faster time to market
1.47
these are (good) benefits but are not ROI
ROI = (benefit - cost) / cost
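A worked example with illustrative figures (not from the tutorial): if automation costs 200 hours to build and maintain over a release, and delivers benefits equivalent to 700 hours of manual testing in the same period, ROI = (700 - 200) / 200 = 2.5, i.e. 250%.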
Examples of ROI achieved
● Michael Snyman, S African bank (Ch 29.13)
■ US$4m on testing project, automation $850K
■ savings $8m, ROI 900%
● Henri van de Scheur, Database testing (Ch 2)
■ results: 2400 times more efficient
● Stefan Mohacsi, Armin Beer: European Space Agency (Ch
9)
■ MBT, break even after four test cycles
1.48
from: Experiences of Test Automation book
How important is ROI?
● ROI can be dangerous
■ easiest way to measure: tester time
■ may give impression that tools replace people
● “automation is an enabler for success, not a cost
reduction tool”
► Yoram Mizrachi, "Planning a mobile test automation strategy that works", ATI magazine, July 2012
● many achieve lasting success without measuring ROI
(depends on your context)
■ need to measure benefits (and publicize them)
1.49
Managing Successful Test Automation
Managing
Note in your Summary Sheet the key
points for you from this session
Summary: key points
1.50
• Assign responsibility for automation (and testing)
• Use a pilot project to explore the good methods
• Know your automation objectives
• Measure what’s important to you
• Show ROI from automation
Now complete
the exercise!
Testware Architecture
Managing Successful Test Automation
1 Managing 2 Architecture 3 Pre- and Post 4 Scripting 5 Comparison 6 Advice
Ref. Chapter 5: Testware Architecture, "Software Test Automation"
2.1
Managing Successful Test Automation
Contents
Architecture
Importance of a testware architecture
What needs to be organised
2.2
Testware architecture
● organisation of, and relationship between, artefacts
■ scripts, input, test data, test descriptions, expected results,
actual results, log files, etc.
● if you create an automated test:
■ do you know what already exists that you can use?
► scripts, data, tools
■ do you know where to put the artefacts?
► script(s), data, expected results, etc.
■ do you know what names to use?
● if you execute an automated test:
■ do you know where to find all the test results?
■ do you know how to analyse a test failure?
2.3
Testware architecture
[diagram: testers write tests (in a DSTL) as high-level keywords; structured scripts implement the keywords; the test execution tool runs the scripts; test automators build the structured testware (test framework). Abstraction at the keyword level makes automated tests easier to write, so automation is widely used; abstraction between scripts and tool makes it easier to change tools and to maintain, so the testware has a long life]
2.4
Architecture – abstraction levels
● most critical factor for success
■ worst: close ties between scripts, tool & tester
● separate testers’ view from technical aspects
■ so testers don’t need tool knowledge
► for widespread use of automation
► scripting techniques address this
● separate tests from the tool – modular design
■ likely changes confined to one / few module(s)
■ re-use of automation functions
■ for minimal maintenance and long-lived automation
2.5
Localised regimes
● “everyone will do the sensible thing”
■ most will do something sensible, but different
● “use the tool however it best suits you”
■ ignores cost of learning how best to automate
● problems include:
■ effort wasted repeatedly solving the same problems in different
ways
■ no re-use between teams
■ multiple learning curves
2.6
Easy way out: use the tool’s architecture
● tool will have its own way of organising tests
■ where to put things (for the convenience of the tool!)
■ will “lock you in” to that tool – good for vendors!
● a better way (gives independence from tools)
■ organise your tests to suit you
■ as part of pre-processing, copy files to where the tool needs
(expects) to find them
■ as part of post-processing, copy back to where you want things
to live
2.7
Tool-specific vs generic scripts
[diagram: with tool-specific scripts, each tool (Tool A, Tool B) needs its own full set of scripts; with generic scripts (G), most scripts are tool-independent and only a thin layer is specific to Tool A or Tool B, making a move to a new environment much cheaper]
2.8
Test-specific vs reused scripts
[diagram: with test-specific scripts, each test (Test 1, Test 2) has its own full set of scripts; with reused scripts, tests share common scripts (R) and each test reduces to a test definition]
2.9
Test definition
TESTNAME: <name of test>
PURPOSE: <single sentence explaining test purpose>
MATERIALS:
<a list of the artefacts used by this test>
RESULTS:
<a list of the artefacts produced by this test>
SETUP:
<sequence of keywords implementing setup actions>
TEARDOWN:
<sequence of keywords implementing teardown actions>
EXECUTION:
<sequence of keywords implementing the test actions>
2.10
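As an illustration only, a filled-in definition for the Scribble example used later in this section might look like this (the keyword names are hypothetical, not from any particular tool):
TESTNAME: ScribbleCountries
PURPOSE: Edit the countries document and save it under a new name
MATERIALS:
countries.dcm, countries.scp, open.scp, saveas.scp
RESULTS:
countries2.dcm, log.txt, diffs.txt
SETUP:
OpenApplication Scribble; OpenDocument countries.dcm
TEARDOWN:
CloseApplication Scribble
EXECUTION:
EditDocument countries.dcm; SaveAs countries2.dcm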
Test definition additional info
● measures
■ expected run time (of the automated test)
■ EMTE (equivalent manual test effort)
■ others?
● attributes, such as test selector tags
■ tag specific sets of tests so they can be selected to be run
■ examples: smoke tests, short tests, bug fix tests, long tests, specific environment tests
2.11
General control script
For each test to be executed
    Read keyword definition
    Verify keyword definition
    Check specified materials exist
    Execute setup
    Execute test
    Perform post-execution comparison
    Execute teardown
    Check specified results exist
    Report results
EndFor
2.12
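A minimal sketch of such a control script in Python; the test definition format, helper functions and status strings are illustrative assumptions, not part of any particular tool:

import os

KEYWORDS = {}  # filled in by automators, e.g. KEYWORDS["OpenApplication"] = open_app

def execute(steps):
    # each step is a (keyword, args) pair taken from the test definition
    for keyword, args in steps:
        KEYWORDS[keyword](*args)

def run_suite(definitions, compare, report):
    # definitions: {test name: dict with materials/setup/execution/teardown/results}
    for name, d in definitions.items():
        missing = [m for m in d.get("materials", []) if not os.path.exists(m)]
        if missing:
            report(name, "BLOCKED: missing materials: " + ", ".join(missing))
            continue
        execute(d.get("setup", []))
        execute(d.get("execution", []))
        status = "PASS" if compare(d) else "FAIL"  # post-execution comparison
        execute(d.get("teardown", []))
        if not all(os.path.exists(r) for r in d.get("results", [])):
            status = "UNKNOWN: expected result files missing"
        report(name, status)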
Learning is incremental: Molly Mahai
■ book learning – knew about investment, not replace people,
don’t automate everything, etc.
■ set up good architecture? books not enough
■ picked something to get started
► after a while, realised limitations
► too many projects, library cumbersome
■ re-designed architecture, moved things around
■ didn’t know what we needed till we experienced the problems
for ourselves
► like trying to educate a teenager
2.13
Chapter 29, pp 527-528, Experiences of Test Automation
Managing Successful Test Automation
Contents
Architecture
Importance of a testware architecture
What needs to be organised
2.14
A test for you
● show me one of your automated tests running
■ how long will it take before it runs?
● typical problems
■ fails: forgot a file, couldn’t find a called script
■ can’t do it (yet):
► Joe knows how but he’s out,
► environment not right,
► haven’t run in a while,
► don’t know what files need to be set up for this script
● why not: run up your framework, select test, GO
2.15
Key issues
● scale
■ the number of scripts, data files, results files, benchmark files,
etc. will be large and growing
● shared scripts and data
■ efficient automation demands reuse of scripts and data through
sharing, not multiple copies
● multiple versions
■ as the software changes so too will some tests but the old tests
may still be required
● multiple environments / platforms
2.16
Terms - Testware artefacts
Testware
● Products - Test Materials: scripts, data, inputs, expected results, doc (specifications), env, utilities
● By-Products - Test Results: actual results, logs, status, summary, differences
2.17
Testware for example test case
[diagram, Scribble example: the test script countries.scp (test input) uses the shared scripts open.scp and saveas.scp; the initial document countries.dcm is edited and saved as countries2.dcm; the edited document is compared with the expected output countries2.dcm, producing a log (log.txt) and differences (diffs.txt); the test is described by a test specification (testspec.txt)]
2.18
Testware by type
Testware
● Products - Test Materials: countries.scp, open.scp, saveas.scp, countries.dcm, countries2.dcm (expected), testdef.txt
● By-Products - Test Results: countries2.dcm (actual), log.txt, diff.txt, status.txt
2.19
Benefits of standard approach
● tools can assume knowledge (architecture)
■ they need less information; are easier to use; fewer errors will
be made
● can automate many tasks
■ checking (completeness, interdependencies); documentation
(summaries, reports); browsing
● portability of tests
■ between people, projects, organisations, etc.
● shorter learning curve
2.20
Testware Sets
● Testware Set:
■ a collection of testware artifacts
■ four types:
►Test Set - one or more test cases
►Script Set - scripts used by two or more Test Sets
►Data Set - data files used by two or more Test Sets
►Utility Set - utilities used by two or more Test Sets
■ good software practice: look for what is common, and keep it in
only one place!
■ Keep your testware DRY!
2.21
Testware library
● repository of master versions
of all Testware Sets
■ “uncategorised scripts are
worse than no scripts”
► Onaral & Turkmen
● CM is critical
■ “If it takes too long to update
your test library, automation
introduces delay instead of
adding efficiency”
► Linda Hayes, AST magazine,
Sept 2010
2.22
d_ScribbleTypical v1
d_ScribbleTypical v2
d_ScribbleVolume v1
s_Logging v1
s_ScribbleDocument v1
s_ScribbleDocument v2
s_ScribbleDocument v3
s_ScribbleNavigate v1
t_ScribbleBreadth v1
t_ScribbleCheck v1
t_ScribbleFormat v1
t_ScribbleList v1
t_ScribbleList v2
t_ScribbleList v3
t_ScribblePrint v1
t_ScribbleSave v1
t_ScribbleTextEdit v1
t_ScribbleTextEdit v2
u_ScribbleFilters v1
u_GeneralCompare v1
Version
numbers
Separate test results
[diagram: a single test suite may be used on subtly different versions of the software under test, and produce different sets of test results that we wish to keep]
2.23
Incremental approach: Ursula Friede
■ large insurance application
■ first attempt failed
► no structure (architecture), data in scripts
■ four phases (unplanned)
► parameterized (dates, claim numbers, etc)
► parameters stored in single database for all scripts
► improved error handling (non-fatal unexpected events)
► automatic system restart
■ benefits: saved 200 man-days per test cycle
► €120,000!
2.24
Chapter 23, pp 437-445, Experiences of Test Automation
Managing Successful Test Automation
Architecture
Summary: key points
2.25
• Structure your automation testware to suit you
• Testware comprises many files, etc. which need to be
given a home
• Use good software development standards
Note in your Summary Sheet the key
points for you from this session
Pre- and Post-Processing
Managing Successful Test Automation
1 Managing 2 Architecture 3 Pre- and Post 4 Scripting 5 Comparison 6 Advice
Ref. Chapter 6: Automating Pre- and Post-Processing, "Software Test Automation"
3.1
Managing Successful Test Automation
Contents
Pre and Post
Automating more than tests
Test status
3.2
What is pre- and post-processing?
● Pre-processing
■ automation of setup tasks necessary to fulfil test case
prerequisites
● Post-processing
■ automation of post-execution tasks necessary to complete verification and housekeeping
● These terms are useful because:
■ there are lots of tasks, they come in packs, many are the same,
and they can be easily automated
3.3
Automated tests/automated testing
Automated tests (within a manual process):
Select / identify test cases to run
Set-up test environment: create test environment, load test data
Repeat for each test case:
• set-up test pre-requisites
• execute
• compare results
• log results
• analyse test failures
• report defect(s)
• clear-up after test case
Clear-up test environment: delete unwanted data, save important data
Summarise results
Automated testing (an automated process):
Select / identify test cases to run
Set-up test environment: create test environment, load test data
Repeat for each test case:
• set-up test pre-requisites
• execute
• compare results
• log results
• clear-up after test case
Clear-up test environment: delete unwanted data, save important data
Summarise results
Analyse test failures
Report defects
3.4
Examples
● pre-processing
■ copy scripts from a common
script set (e.g. open, saveas)
■ delete files that shouldn’t
exist when test starts
■ set up data in files
■ copy files to where the tool
expects to find them
■ save normal default file and
rename the test file to the
default (for this test)
● post-processing
■ copy results to where the
comparison process expects
to find them
■ delete actual results if they
match expected (or archive if
required)
■ rename a file back to its
normal default
3.5
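A minimal sketch of pre- and post-processing around a single test, in Python; the directory layout and archiving policy are illustrative assumptions:

import filecmp, os, shutil

def pre_process(test_dir, tool_work_dir):
    # copy testware to where the tool expects to find it, removing leftovers first
    if os.path.exists(tool_work_dir):
        shutil.rmtree(tool_work_dir)  # files that shouldn't exist when the test starts
    shutil.copytree(test_dir, tool_work_dir)

def post_process(expected, actual, archive_dir):
    # compare results where the comparison process expects to find them;
    # keep the actual results only if they differ from expected
    if filecmp.cmp(expected, actual, shallow=False):
        os.remove(actual)  # matched: delete (or archive if required)
        return "match"
    os.makedirs(archive_dir, exist_ok=True)
    shutil.copy(actual, archive_dir)  # keep for failure analysis
    return "no match"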
Outside the box: Jonathan Kohl
■ task automation (throw-away scripts)
► entering data sets to 2 browsers (verify by watching)
► install builds, copy test data
■ support manual exploratory testing
■ testing under the GUI to the database (“side door”)
■ don’t believe everything you see
► 1000s of automated tests pass too quickly
► monitoring tools to see what was happening
► “if there’s no error message, it must be ok”
– defects didn’t make it to the test harness
– overloaded system ignored data that was wrong
Chapter 19, pp 355-373, Experiences of Test Automation 3.6
Automation +
[diagram: beyond traditional test automation (execution and comparison): DSTL, structured testware architecture, loosening your oracles, exploratory test automation (ETA) and monkeys, and support for manual testing]
3.7
Managing Successful Test Automation
Contents
Pre and Post
Automating more than tests
Test status
3.8
Test status – pass or fail?
● tool cannot judge pass or fail
■ only “match” or “no match”
■ assumption: expected results are correct
● when a test fails (i.e. the software fails)
■ need to analyse the failure
► true failure? write up bug report
► test fault? fix the test (e.g. expected result)
► known bug or failure affecting many automated tests?
– this can eat a lot of time in automated testing
– solution: additional test statuses
3.9
Test statuses for automation
• other possible additional test statuses
– test blocked
– environment problem (e.g. network down, timeouts)
– set-up problems (files missing)
– test needs to be changed but not done yet
Compare to               No differences found    Differences found
(true) expected outcome  Pass                    Fail
expected fail outcome    Expected Fail           Unknown
don't know / missing     Unknown                 Unknown
3.10
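One way to implement the extra statuses, sketched in Python; the status names follow the table above, the function inputs are assumptions:

from enum import Enum

class Status(Enum):
    PASS = "Pass"
    FAIL = "Fail"
    EXPECTED_FAIL = "Expected Fail"  # differences match a known bug's outcome
    UNKNOWN = "Unknown"

def test_status(differences_found, outcome_kind):
    # outcome_kind: "expected", "expected_fail" or "missing" (see table above)
    if outcome_kind == "expected":
        return Status.FAIL if differences_found else Status.PASS
    if outcome_kind == "expected_fail":
        return Status.UNKNOWN if differences_found else Status.EXPECTED_FAIL
    return Status.UNKNOWN  # don't know / missing expected outcome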
Managing Successful Test Automation
Note in your Summary Sheet the key
points for you from this session
Pre and Post
Summary: key points
• Pre- and post-processing to automate setup and clear-up tasks
• Test status is more than just pass / fail
3.11
Scripting Techniques
Managing Successful Test Automation
Ref. Chapter 3: Scripting Techniques
“Software Test Automation”
1 Managing 2 Architecture 3 Pre- and Post 4 Scripting 5 Comparison 6 Advice
4.1
Managing Successful Test Automation
Contents
Scripting
Objectives of scripting techniques
Different types of scripts
Domain specific test language
4.2
Scripting
● the backbone of automation
● four broad approaches
■ linear
■ structured
■ data-driven
■ keyword-driven
● issues
■ more code means more maintenance
■ programmers like programming
► may write new code even when unnecessary
● do you know your script to test ratio?
(linear and structured scripts need test-specific programming; data-driven and keyword-driven approaches use a framework that supports abstraction)
4.3
Objectives of scripting techniques
● implement your testware
architecture
● to reduce costs
■ make it easier to build
automated tests
► avoid duplication
■ avoid excessive maintenance
costs
► greater reuse of functional,
modular scripting
● greater return on investment
■ better testing support
■ greater portability
► environments & hardware
platforms
● enhance capabilities
■ achieve more testing for same
(or less) effort
► testing beyond traditional
manual approaches
probably best achieved by data-driven or keyword-driven
4.4
Progression of automation implementation
[diagram: implementations progress from low-level instructions, to structured code, to data driven, to keyword driven, with maintenance getting easier along the way; re-usable modules in a script library are built by the Technical Test Analyst; the Test Analyst works with manual test scripts, input data and test definitions such as "get quote - motor, age, ..." and "create policy - motor, age, ..."]
Source: Mark Fewster, Grove Consultants (grove.co.uk)
4.5
Managing Successful Test Automation
Contents
Scripting
Objectives of scripting techniques
Different types of scripts
Domain specific test language
4.6
Example application
[screen: a Tax Calculator with input fields Name and Earnings for January, February and March, and calculated outputs earnings total, Tax band and Tax due; e.g. Fred, £100, £80, £110 gives £290, tax band B, tax due £45]
Test 1:
1. open application
2. enter name & earnings
3. save results
4. close application
4.7
Structured scripting
[diagram: from the (manual) test procedures, testers create test scripts of high-level instructions and test data; automators create a script library of low-level "how to" instructions; the test tool runs the scripts against the software under test]
4.8
About structured scripting
● script library for re-used scripts
■ part of testware architecture implementation
► shared scripts interface to software under test
► all other scripts interface to shared scripts
● reduced costs
■ maintenance
► fewer scripts affected by software changes
■ build
► individual control scripts are smaller and easier to read (a ‘higher’
level language is used)
4.9
Example structured scripts
Main test script:
Sub Test1()
    Call OpenApplication("TaxCalculator")
    Call CalculateTax("Pooh", 100, 150, 125)
    Call SaveAsAndClose("Test 1 Results")
End Sub
Supporting scripts:
Sub OpenApplication(Application)
    Workbooks.Open Filename:= _
        "ATT " & Application & ".xls"
End Sub
Sub CalculateTax(Name, M1, M2, M3)
    ActiveCell.FormulaR1C1 = Name
    Range("C6").Select
    ActiveCell.FormulaR1C1 = M1
    Range("C7").Select
    ActiveCell.FormulaR1C1 = M2
    Range("C8").Select
    ActiveCell.FormulaR1C1 = M3
End Sub
Sub SaveAsAndClose(Filename)
    ActiveWorkbook.SaveAs Filename:= _
        Filename & ".xls", FileFormat:= _
        xlNormal, Password:=""
    ActiveWorkbook.Close
End Sub
additional tests can be created more easily
4.10
Usable (re-usable) scripts
● to re-use a script, need to know:
■ what does this script do?
■ what does it need?
■ what does it deliver?
■ what state when it starts?
■ what state when it finishes?
● information in a standard place for every script
■ can search for the answers to these questions
4.11
Data driven
[diagram: from the (manual) test procedures, testers create data files of test data; automators create control scripts of high-level instructions; the script library holds low-level "how to" instructions; the test tool runs against the software under test]
4.12
About data driven
● test data extracted from scripts
■ placed into separate data files
● control script reads data from data file
■ one script implements several tests by reading different data
files (reduces script maintenance per test)
● reduced build cost
■ faster and easier to automate similar test procedures
■ many test variations using different data
● multiple control scripts required
■ one for each type of test (with varying data)
4.13
Example data-driven script
data file: TaxCalcData.csv
Name    Earn1   Earn2   Earn3
Pooh    100     150     125
Piglet  75      90      80
Roo     120     110     65
main script (runs all tests in the data file):
Sub RunTests()
    Open TaxCalcData.csv
    For each Row in file
        Read Name, Earn1, Earn2, Earn3
        Call OpenApplication("TaxCalculator")
        Call CalculateTax(Name, Earn1, Earn2, Earn3)
        Call SaveAsAndClose(Name & " Results")
    Next Row
End Sub
4.14
Keywords (basic)
[diagram: from the (manual) test procedures, testers create test definitions of high-level instructions and test data; automators create a single control script - an "interpreter" / ITE - plus a script library of low-level "how to" instructions and keyword scripts; the test tool runs against the software under test]
4.15
About keywords
● single control script (Interactive Test Environment)
■ improvements to this benefit all tests (ROI)
■ extracts high-level instructions from scripts
● ‘test definition’
■ independent of tool scripting language
■ a language tailored to testers’ requirements
► software design
► application domain
► business processes
● more tests, fewer scripts
Unit test: calculate
one interest payment
System test:
summarise interest
for one customer
Acceptance test:
end of day run, all
interest payments 4.16
Example keyword-driven scripts
Keyword file (option 1):
CalculateTax Pooh, 100, 150, 125
CalculateTax Piglet, 75, 90, 80
CalculateTax Roo, 120, 110, 65
Keyword file (option 2):
CalculateTax
Pooh, 100, 150, 125
Piglet, 75, 90, 80
Roo, 120, 110, 65
Keyword file (Robot Framework):
Test Case   Action         Arg1    Arg2   Arg3   Arg4
Check Tax   Calculate Tax  Pooh    100    150    125
            Calculate Tax  Piglet  75     90     80
            Calculate Tax  Roo     120    110    65
Keyword file (option 3):
CalculateTax Datafile.txt
Datafile.txt:
Name    Earn1   Earn2   Earn3
Pooh    100     150     125
Piglet  75      90      80
Roo     120     110     65
4.17
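A minimal keyword interpreter in Python to show the mechanism behind option 1 above; the CalculateTax implementation is a stub and the parsing rules are illustrative assumptions:

def calculate_tax(name, earn1, earn2, earn3):
    # a real keyword script would drive the application under test here
    print("CalculateTax:", name, earn1, earn2, earn3)

KEYWORDS = {"CalculateTax": calculate_tax}

def run_keyword_file(path):
    # reads lines such as:  CalculateTax Pooh, 100, 150, 125
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            keyword, _, rest = line.partition(" ")
            args = [a.strip() for a in rest.split(",")]
            KEYWORDS[keyword](*args)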
Minimising automation code
● more code means more maintenance
■ better to reuse existing code than write new code
► that does the same thing
■ achieved by
► clear objectives
– e.g. automate for minimal maintenance
► use of appropriate scripting approach
– abstraction
► careful design
– consider sets of tests, not individual tests
► consistency (standards)
4.18
Managing Successful Test Automation
Contents
Scripting
Objectives of scripting techniques
Different types of scripts
Domain specific test language
4.19
Merged test procedure/test definition
[diagram: the test procedures and test definitions merge into one document of high-level instructions and test data - a language for testers; automators create a single control script (an "interpreter" / ITE) and a script library of low-level "how to" instructions and keyword scripts; the test tool runs against the software under test]
4.20
Domain Specific Test Language (DSTL)
● test procedures and test definitions similar
■ both describe sequences of test cases
► giving test inputs and expected results
● combine into one document
■ can include all test information
■ avoids extra ‘translation’ step
■ testers specify tests the same way, whether manual or automated
■ automators implement required keywords
4.21
Keywords in the test definition language
● multiple levels of keywords possible
■ high level for business functionality
■ low level for component testing
● composite keywords
■ define keywords as sequence of other keywords
■ gives greater flexibility (testers can define composite
keywords) but risk of chaos
● format
■ freeform, structured, or standard notation
► (e.g. XML)
4.22
Example use of keywords
Create a new account, order 2 items and check out:
Create Account   Firstname: Edward   Surname: Brown   Email address: ebrown@gmail.com   Password: apssowdr
Order Item       Item Num: 1579      Items: 3         Check Price for Items: 15.30
Order Item       Item Num: 2598                       Check Price for Items: 12.99
Checkout         Total: 28.29
4.23
Documenting keywords
Name: Name for this keyword
Purpose: What this keyword does
Parameters: Any inputs needed, outputs produced
Pre-conditions: What needs to be true before using it, where valid
Post-conditions: What will be true after it finishes
Error conditions: What errors it copes with, what is returned
Example: An example of the use of the keyword
Source: Martin Gijsen. See also Hans Buwalda book & articles
4.24
Example keyword Create account (*mandatory)
Name: Create account
Purpose: Creates a new account
Parameters: *First name: 2 to 32 characters; *Last name: 2 to 32 characters; *Email address: also serves as account id; *Password: 4 - 32 characters
Pre-conditions: Account doesn't exist for this person
Post-conditions: Account created (including email confirmation); Order screen displayed
Error conditions: Account already exists
Example: (see example)
4.25
Example keyword Order item (*mandatory)
Name: Order item
Purpose: Order one or more of a specific item
Parameters: *Item number: 1000 to 9999, in catalogue; Number of items wanted: 1 to Max-for-item, if blank assumes 1
Pre-conditions: Valid account logged in; Item in stock (sufficient for the order); Prices available for item (including discounts for many)
Post-conditions: Item(s) appear in shopping basket; Number of available items decreased by number ordered
Error conditions: Insufficient items in stock
Example: (see example)
4.26
Implementing keywords
● ways to implement keywords
■ scripting language (of a tool)
■ programming language (e.g. Java)
■ use what your developers are most familiar with!
● ways of supporting a DSTL
■ commercial, open source or home-grown framework
■ spreadsheet or database for test descriptions
4.27
Frameworks
● commercial tools
■ ATRT, ATSM, Axe, Certify, eCATT, FASTBoX, GUIdancer,
Liberation, Ranorex, te52, TestComplete, TestDrive,
TestingWhiz, Tosca Testsuite, zest
● open source
■ Cacique, FitNesse, Helium, Jameleon, JET, JSystem, Jubula,
Maveryx, Open2Test, Power Tools, QAliber Test Builder, Rasta,
Robot Framework, SAFS, SpecFlow, STAF, TAF, TAF Core,
TestMaker, Xebium
● I can email you my Tool List
■ test execution and framework tools
■ info@dorothygraham.co.uk
4.28
Execution-tool-independent framework
[diagram: test procedures/definitions are written in a tool-independent scripting language and run through a framework; tool-dependent script libraries connect the framework to different test tools and their software under test; some tests are run manually]
4.29
Managing Successful Test Automation
Note in your Summary Sheet the key
points for you from this session
Scripting
Summary
● objectives of good scripting
■ reduce costs, enhance
capabilities
● many types of scripting
■ structured, data-driven,
keyword
● keyword/DSTL the most
sophisticated
■ yields significant benefits
● increased productivity
■ customised front end,
tool independence
4.30
Automated Comparison
Managing Successful Test Automation
Ref. Chapter 4: Automated Comparison
“Software Test Automation”
1 Managing 2 Architecture 3 Pre- and Post 4 Scripting 5 Comparison 6 Advice
5.1
Managing Successful Test Automation
Contents
Comparison
Automated test verification
Test sensitivity
5.2
Perverse persistence: Michael Williamson
■ testing Webmaster tools at Google (new to testing)
■ QA used Eggplant (image processing tool)
■ new UI broke existing automation
■ automated only 4 or 5 functions
■ comparing bitmap images – inaccurate and slow
■ testers had to do automation maintenance
► not worth developers learning tool’s language
■ after 6 months, went for more appropriate tools
■ QA didn’t use the automation, tested manually!
► tool was just running in the background
Chapter 17, pp 321-338, Experiences of Test Automation
5.3
Checking versus testing
■ checking confirms that things are as we think
► e.g. check that the code still works as before
■ testing is a process of exploration, discovery, investigation and
learning
► e.g. what are the threats to value to stakeholders, give information
■ checks are machine-decidable
► if it’s automated, it’s probably a check
■ tests require sapience
► including “are the checks good enough”
Source: Michael Bolton, www.developsense.com/blog/2009/08/testing-vs-checking/
5.4
General comparison guidelines
● keep as simple as possible
● well documented
● standardise as much as possible
● avoid bit-map comparison
● poor comparisons destroy good tests
● divide and conquer:
■ use a multi-pass strategy
■ compare different aspects in each pass
5.5
Two types of comparison
● dynamic comparison
■ done during test execution
■ performed by the test tool
■ can be used to direct the
progress of the test
► e.g. if this fails, do that instead
■ fail information written to test
log (usually)
● post-execution comparison
■ done after the test execution
has completed
■ good for comparing files or
databases
■ can be separated from test
execution
■ can have different levels of
comparison
► e.g. compare in detail if all high
level comparisons pass
5.6
Comparison types compared
[diagram, Scribble example: the test script scribble1.scp contains test inputs and comparison instructions; dynamic comparison checks during the run, e.g. is an error message as expected; starting from the initial document countries.dcm, the edited document countries2.dcm is compared after the run with the expected output countries2.dcm (post-execution comparison), producing a log (log.txt) and differences (diffs.txt)]
5.7
Comparison process
● few tools for post-execution comparison
● simple comparators come with operating systems but do
not have pattern matching
■ e.g. Unix 'diff', Windows 'UltraCompare'
● text manipulation tools widely available
■ sed, awk, grep, egrep, Perl, Tcl, Python
● use pattern matching tools with a simple comparator to
make a ‘comparison process’
● use masks and filters for efficiency
5.8
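A sketch of such a comparison process in Python: regular-expression filters mask the fields we choose to ignore, then a simple line-by-line comparison is applied; the patterns are illustrative assumptions:

import re

FILTERS = [
    (re.compile(r"\d{1,2} \w{3} \d{4}"), "<date>"),       # e.g. "16 Aug 2011"
    (re.compile(r"Order No \w+"), "Order No <orderno>"),
]

def mask(lines):
    # replace ignored fields with placeholders before comparing
    masked = []
    for line in lines:
        for pattern, replacement in FILTERS:
            line = pattern.sub(replacement, line)
        masked.append(line)
    return masked

def compare(expected_lines, actual_lines):
    # returns match / no match - the tool cannot judge pass or fail
    return mask(expected_lines) == mask(actual_lines)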
Managing Successful Test Automation
Contents
Comparison
Automated test verification
Test sensitivity
5.9
Test sensitivity
● the more data there is available:
■ the easier it is to analyse faults and debug
● the more data that is compared:
■ the more sensitive the test
● the more sensitive a test:
■ the more likely it is to fail
■ (this can be both a good and bad thing)
5.10
Sensitive versus specific (robust) test
[diagram: a test is supposed to change only one field of the outcome, and an unexpected change occurs elsewhere; a specific test verifies only the intended field and misses the change, while a sensitive test verifies the entire outcome and catches it]
5.11
Too much sensitivity = redundancy
[diagram: three tests each change a different field, and an unexpected change occurs for every test; if all tests are sensitive, they all show the same unexpected change (redundant failures); if all tests are specific, the unexpected change is missed]
5.12
Using test sensitivity
● sensitive tests:
■ few, at high level
■ breadth / sanity checking tests
■ good for regression / maintenance
● specific/robust tests:
■ many, at detailed level
■ focus on specific aspects
■ good for development
A good test automation strategy will plan a combination of sensitive and specific tests
5.13
Managing Successful Test Automation
Note in your Summary Sheet the key
points for you from this session
Comparison
Summary: key points
• Balance dynamic and post-execution comparison
• Balance sensitive and specific tests
• Use masking and filters to adjust sensitivity
5.14
Comparison example
● Expected output
■ Date 11 May 2011
■ Order No X43578
■ Login J Smith
■ Add 1 mouse
■ Add 1 mountain bike
■ Add_to_total $15.99
■ Add_to_total $249.99
■ Total_due $265.98
■ Logout J Smith
● Actual output
■ Login M Jones
■ Order No X54965
■ Add 1 mountain bike
■ Add_to_total $249.99
■ Add 1 toaster
■ Add_to_total $35.45
■ Logout J Smith
■ Total_due $285.34
■ Date 16 Aug 2011
Has this test passed?
5.15
Simple automated comparison
A naive line-by-line comparison fails on every line:
Expected output           Actual output             Result
Date 11 May 2011          Login M Jones             Fail
Order No X43578           Order No X54965           Fail
Login J Smith             Add 1 mountain bike       Fail
Add 1 mouse               Add_to_total $249.99      Fail
Add 1 mountain bike       Add 1 toaster             Fail
Add_to_total $15.99       Add_to_total $35.45       Fail
Add_to_total $249.99      Logout J Smith            Fail
Total_due $265.98         Total_due $285.34         Fail
Logout J Smith            Date 16 Aug 2011          Fail
5.16
Filter: alphabetical order
● Expected output
■ Add 1 mountain bike
■ Add 1 mouse
■ Add_to_total $15.99
■ Add_to_total $249.99
■ Login J Smith
■ Logout J Smith
■ Date 11 May 2011
■ Order No X43578
■ Total_due $265.98
● Actual output
■ Add 1 mountain bike
■ Add 1 toaster
■ Add_to_total $249.99
■ Add_to_total $35.45
■ Login M Jones
■ Logout J Smith
■ Date 16 Aug 2011
■ Order No X54965
■ Total_due $285.34
Results, line by line: Pass, Fail, Fail, Fail, Fail, Pass, Fail, Fail, Fail
The 1st Pass (Add 1 mountain bike) is a coincidence; the 2nd Pass (Logout J Smith) is actually a bug – M Jones logged in, yet J Smith logged out!
5.17
Filters: replacement by object type
● Expected output
■ Add <item>
■ Add <item>
■ Add_to_total $15.99
■ Add_to_total $249.99
■ Login J Smith
■ Logout J Smith
■ Date <date>
■ Order No <orderno>
■ Total_due $265.98
● Actual output
■ Add <item>
■ Add <item>
■ Add_to_total $249.99
■ Add_to_total $35.45
■ Login M Jones
■ Logout J Smith
■ Date <date>
■ Order No <orderno>
■ Total_due $285.34
Results, line by line: Pass, Pass, Fail, Fail, Fail, Pass, Pass, Pass, Fail
Replacing dates, order numbers and item names has helped eliminate things we aren’t interested in.
5.18
Filters: replacement for everything?
● Expected output
■ Add <item>
■ Add <item>
■ Add_to_total $<amt>
■ Add_to_total $<amt>
■ Login <name>
■ Logout <name>
■ Date <date>
■ Order No <orderno>
■ Total_due $<amt>
● Actual output
■ Add <item>
■ Add <item>
■ Add_to_total $<amt>
■ Add_to_total $<amt>
■ Login <name>
■ Logout <name>
■ Date <date>
■ Order No <orderno>
■ Total_due $<amt>
Results, line by line: Pass ×9
Everything passed – isn’t this great?
Actually, no – with names and amounts masked we are no longer checking the login name or the totals, so real bugs slip through. 5.19
Variables – what needs to be done
● Expected output
■ Add 1 <item>
■ Add 1 <item>
■ Add_to_total T1=$15.99
■ Add_to_total T2=$249.99
■ Login NAME=J Smith
■ Logout J Smith=NAME?
■ Date <date>
■ Order <orderno>
■ Total_due $265.98 =T1+T2?
● Actual output
■ Add 1 <item>
■ Add 1 <item>
■ Add_to_total T1=$249.99
■ Add_to_total T2=$35.45
■ Login NAME=M Jones
■ Logout J Smith =NAME?
■ Date <date>
■ Order <orderno>
■ Total_due $285.34 =T1+T2?
“Variable=” implemented as “store but ignore in comparison”
“=Variable/expression” implemented as “check it is equal to”
Results, line by line: P, P, P, P, P, F, P, P, F
The two failures are real bugs: the Logout name (J Smith) does not match the stored NAME (M Jones), and Total_due $285.34 does not equal T1+T2 ($249.99 + $35.45 = $285.44).
5.20
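A minimal sketch of how these annotations could be implemented (the helper names and record handling are assumptions, not the tutorial’s code): store implements “Variable=”, and the check functions implement “=Variable?” and “=T1+T2?”:

```python
# Annotation semantics (helper names are invented):
#   "Variable="  : store the value seen, but do not compare it
#   "=Variable?" : check that a later value equals the stored one
#   "=T1+T2?"    : check that a total equals the sum of stored amounts
stored = {}

def money(s):
    """Convert '$249.99' to a number for arithmetic checks."""
    return round(float(s.lstrip("$")), 2)

def store(name, value):
    stored[name] = value
    return True                # storing never fails a comparison

def check_equals(name, value):
    return stored[name] == value

def check_sum(names, total):
    return round(sum(money(stored[n]) for n in names), 2) == money(total)

# Walking the actual output from the slide:
store("T1", "$249.99")
store("T2", "$35.45")
store("NAME", "M Jones")
print(check_equals("NAME", "J Smith"))      # False: Logout name mismatch
print(check_sum(["T1", "T2"], "$285.34"))   # False: T1 + T2 is $285.44
```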
Final Advice and Direction
Managing Successful Test Automation
1 Managing 2 Architecture 3 Pre- and Post
4 Scripting 5 Comparison 6 Advice
6.1
What next?
● we have looked at a number of ideas about test
automation today
● what is your situation?
■ what are the most important things for you now?
■ where do you want to go?
■ how will you get there?
● make a start on your test automation strategy now
■ adapt it to your own situation tomorrow
6.2
Strategy exercise
● your automation strategy / action plan
■ review the exercises
► automation objectives, 3rd page
► responsibility
► measurement
► architecture
● your strategy
■ identify the top 3 changes you want to make to your automation
■ note your plans now
6.3
Dealing with high-level management
● management support
■ building good automation takes time and effort
■ set realistic expectations
● benefits and ROI
■ make benefits visible (charts on the walls)
■ metrics for automation
► to justify it, compare to manual test costs over iterations
► automation health: on-going continuous improvement
– build cost, maintenance cost, failure analysis cost
– coverage of system tested
6.4
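As an illustration of comparing automation to manual test costs over iterations (all effort figures below are invented placeholders; substitute your own):

```python
# Hypothetical effort figures in person-hours; substitute your own.
BUILD_COST = 400    # one-off cost to build the automation
CYCLE_COST = 20     # automation cost per test cycle (maintenance, analysis)
MANUAL_COST = 120   # cost to run the same tests manually, per cycle

for cycle in range(1, 9):
    automated = BUILD_COST + CYCLE_COST * cycle
    manual = MANUAL_COST * cycle
    note = "  <- break-even" if automated <= manual else ""
    print(f"cycle {cycle}: automated {automated}h, manual {manual}h{note}")
# With these figures the automation pays for itself from the 4th cycle on.
```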
Dealing with developers
● critical aspect for successful automation
■ automation is development
► may need help from developers
► automation needs development standards to work
– testability is critical for automatability
– why should they work to new standards if there is “nothing in it for
them”?
■ seek ways to cooperate and help each other
► run tests for them
– in different environments
– rapid feedback from smoke tests
► help them design better tests?
6.5
Standards and technical factors
● standards for the testware architecture
■ where to put things
■ what to name things
■ how to do things
► but allow exceptions if needed
● new technology can be great
■ but only if the context is appropriate for it (e.g. Model-Based
Testing)
● use automation “outside the box”
6.6
On-going automation
● you are never finished
■ don’t “stand still” - schedule regular review and refactoring of the automation
■ change tools, hardware when needed
■ re-structure if your current approach is causing problems
● regular “pruning” of tests
■ don’t have “tenured” test suites
► check for overlap, removed features
► each test should earn its place
6.7
Information and web sites
■ www.TestAutomationPatterns.org
■ www.AutomatedTestingInstitute.com
► TestKIT Conference, Washington DC
■ tool information
► commercial and open source: http://testertools.com
► open source tools
– www.opensourcetesting.org
– http://sourceforge.net
– http://riceconsulting.com (search on “cheap and free tools”)
► LinkedIn group: QA Automation Architect
■ www.ISTQB.org
► Expert level in Test Automation (in progress)
6.8
Managing Successful Test Automation – Advice
Note in your Summary Sheet the key points for you from this session
Summary: successful test automation
• assigned responsibility for automation tasks
• realistic, measured objectives (testing ≠ automation)
• technical factors – architecture, levels of abstraction, DSTL, scripting, comparison, pre- and post-processing
• management support, ROI, continuous improvement
Free book!
6.9
and now …
● any final questions / comments?
● please evaluate this tutorial (vs its objectives)
■ high mark – thanks very much!
■ low mark – please explain (so we can improve)
► (see Session 0 for tutorial objectives, what is covered and what is
intentionally excluded)
● email Dot for ROI spreadsheet / tool list
(info@DorothyGraham.co.uk)
● email Mark for automation consultancy & questions
(mark@grove.co.uk)
6.10
Any more questions?
Please email me!
mark@grove.co.uk
Thank you for coming today
I hope this will be useful for you
All the best with your automation!
Managing Successful Test Automation
6.11
Contenu connexe

Tendances

Agile Testing Strategy
Agile Testing StrategyAgile Testing Strategy
Agile Testing Strategy
tharindakasun
 
Test Automation - Keytorc Approach
Test Automation - Keytorc Approach Test Automation - Keytorc Approach
Test Automation - Keytorc Approach
Keytorc Software Testing Services
 
Role Of Qa And Testing In Agile 1225221397167302 8
Role Of Qa And Testing In Agile 1225221397167302 8Role Of Qa And Testing In Agile 1225221397167302 8
Role Of Qa And Testing In Agile 1225221397167302 8
a34sharm
 

Tendances (20)

Test Automation
Test AutomationTest Automation
Test Automation
 
Agile Testing Strategy
Agile Testing StrategyAgile Testing Strategy
Agile Testing Strategy
 
Cypress Automation
Cypress  AutomationCypress  Automation
Cypress Automation
 
Test automation
Test automationTest automation
Test automation
 
Cypress - Best Practices
Cypress - Best PracticesCypress - Best Practices
Cypress - Best Practices
 
Agile Testing by Example
Agile Testing by ExampleAgile Testing by Example
Agile Testing by Example
 
Test Automation Architecture
Test Automation ArchitectureTest Automation Architecture
Test Automation Architecture
 
Test Automation - Keytorc Approach
Test Automation - Keytorc Approach Test Automation - Keytorc Approach
Test Automation - Keytorc Approach
 
Automation testing introduction for FujiNet
Automation testing introduction for FujiNetAutomation testing introduction for FujiNet
Automation testing introduction for FujiNet
 
Scrum Testing Methodology
Scrum Testing MethodologyScrum Testing Methodology
Scrum Testing Methodology
 
Selenium test automation
Selenium test automationSelenium test automation
Selenium test automation
 
Test Automation in Agile
Test Automation in AgileTest Automation in Agile
Test Automation in Agile
 
Role Of Qa And Testing In Agile 1225221397167302 8
Role Of Qa And Testing In Agile 1225221397167302 8Role Of Qa And Testing In Agile 1225221397167302 8
Role Of Qa And Testing In Agile 1225221397167302 8
 
How to Get Started with Cypress
How to Get Started with CypressHow to Get Started with Cypress
How to Get Started with Cypress
 
Software Testing Process, Testing Automation and Software Testing Trends
Software Testing Process, Testing Automation and Software Testing TrendsSoftware Testing Process, Testing Automation and Software Testing Trends
Software Testing Process, Testing Automation and Software Testing Trends
 
6 Traits of a Successful Test Automation Architecture
6 Traits of a Successful Test Automation Architecture6 Traits of a Successful Test Automation Architecture
6 Traits of a Successful Test Automation Architecture
 
e2e testing with cypress
e2e testing with cypresse2e testing with cypress
e2e testing with cypress
 
What is sanity testing
What is sanity testingWhat is sanity testing
What is sanity testing
 
Sap test center of excellence
Sap test center of excellenceSap test center of excellence
Sap test center of excellence
 
Regression testing
Regression testingRegression testing
Regression testing
 

En vedette

En vedette (15)

Building on Existing Infrastructure for Mobile Applications
Building on Existing Infrastructure for Mobile ApplicationsBuilding on Existing Infrastructure for Mobile Applications
Building on Existing Infrastructure for Mobile Applications
 
Mindmaps: Lightweight Documentation for Testing
Mindmaps: Lightweight Documentation for TestingMindmaps: Lightweight Documentation for Testing
Mindmaps: Lightweight Documentation for Testing
 
Implement an Enterprise Performance Test Process
Implement an Enterprise Performance Test ProcessImplement an Enterprise Performance Test Process
Implement an Enterprise Performance Test Process
 
The Power of an Individual Tester: The HealthCare.gov Experience
The Power of an Individual Tester: The HealthCare.gov ExperienceThe Power of an Individual Tester: The HealthCare.gov Experience
The Power of an Individual Tester: The HealthCare.gov Experience
 
Testing the New Disney World Website
Testing the New Disney World WebsiteTesting the New Disney World Website
Testing the New Disney World Website
 
The Internet of Things and You
The Internet of Things and YouThe Internet of Things and You
The Internet of Things and You
 
Innovation for Existing Software Product: An R&D Approach
Innovation for Existing Software Product: An R&D ApproachInnovation for Existing Software Product: An R&D Approach
Innovation for Existing Software Product: An R&D Approach
 
Why Agile Fails in Large Enterprises—and What to Do about It
Why Agile Fails in Large Enterprises—and What to Do about ItWhy Agile Fails in Large Enterprises—and What to Do about It
Why Agile Fails in Large Enterprises—and What to Do about It
 
Crafting Smaller User Stories: Examples and Exercises
Crafting Smaller User Stories: Examples and ExercisesCrafting Smaller User Stories: Examples and Exercises
Crafting Smaller User Stories: Examples and Exercises
 
Risk-Based Testing for Agile Projects
Risk-Based Testing for Agile ProjectsRisk-Based Testing for Agile Projects
Risk-Based Testing for Agile Projects
 
Survival Guide: Taming the Data Quality Beast
Survival Guide: Taming the Data Quality BeastSurvival Guide: Taming the Data Quality Beast
Survival Guide: Taming the Data Quality Beast
 
Mobile App Testing: The Good, the Bad, and the Ugly
Mobile App Testing: The Good, the Bad, and the UglyMobile App Testing: The Good, the Bad, and the Ugly
Mobile App Testing: The Good, the Bad, and the Ugly
 
Essential Test Management and Planning
Essential Test Management and PlanningEssential Test Management and Planning
Essential Test Management and Planning
 
Metrics Program Implementation: Pitfalls and Successes
Metrics Program Implementation: Pitfalls and SuccessesMetrics Program Implementation: Pitfalls and Successes
Metrics Program Implementation: Pitfalls and Successes
 
Quality Index: A Composite Metric for the Voice of Testing
Quality Index: A Composite Metric for the Voice of TestingQuality Index: A Composite Metric for the Voice of Testing
Quality Index: A Composite Metric for the Voice of Testing
 

Similaire à Successful Test Automation: A Manager’s View

Top 5 Pitfalls of Test Automation and How To Avoid Them
Top 5 Pitfalls of Test Automation and How To Avoid ThemTop 5 Pitfalls of Test Automation and How To Avoid Them
Top 5 Pitfalls of Test Automation and How To Avoid Them
Sundar Sritharan
 

Similaire à Successful Test Automation: A Manager’s View (20)

MeManagement Issues in Test Automation
MeManagement Issues in Test AutomationMeManagement Issues in Test Automation
MeManagement Issues in Test Automation
 
Why Automation Fails—in Theory and Practice
Why Automation Fails—in Theory and PracticeWhy Automation Fails—in Theory and Practice
Why Automation Fails—in Theory and Practice
 
Management Issues in Test Automation
Management Issues in Test AutomationManagement Issues in Test Automation
Management Issues in Test Automation
 
Management Issues in Test Automation
Management Issues in Test AutomationManagement Issues in Test Automation
Management Issues in Test Automation
 
How to make Automation an asset for Organization
How to make Automation an asset for OrganizationHow to make Automation an asset for Organization
How to make Automation an asset for Organization
 
Top 5 Pitfalls of Test Automation and How To Avoid Them
Top 5 Pitfalls of Test Automation and How To Avoid ThemTop 5 Pitfalls of Test Automation and How To Avoid Them
Top 5 Pitfalls of Test Automation and How To Avoid Them
 
Management Issues in Test Automation
Management Issues in Test AutomationManagement Issues in Test Automation
Management Issues in Test Automation
 
The Leaders Guide to Getting Started with Automated Testing
The Leaders Guide to Getting Started with Automated TestingThe Leaders Guide to Getting Started with Automated Testing
The Leaders Guide to Getting Started with Automated Testing
 
Managing Successful Test Automation
Managing Successful Test AutomationManaging Successful Test Automation
Managing Successful Test Automation
 
Unit 5 st ppt
Unit 5 st pptUnit 5 st ppt
Unit 5 st ppt
 
Presentation1
Presentation1Presentation1
Presentation1
 
Automated vs.pdf
Automated vs.pdfAutomated vs.pdf
Automated vs.pdf
 
Intelligent Mistakes in Test Automation
Intelligent Mistakes in Test AutomationIntelligent Mistakes in Test Automation
Intelligent Mistakes in Test Automation
 
Quality Assurance and Testing services
Quality Assurance and Testing servicesQuality Assurance and Testing services
Quality Assurance and Testing services
 
Why Test Automation Fails
Why Test Automation FailsWhy Test Automation Fails
Why Test Automation Fails
 
The Survey Says: Testers Spend Their Time Doing...
The Survey Says: Testers Spend Their Time Doing...The Survey Says: Testers Spend Their Time Doing...
The Survey Says: Testers Spend Their Time Doing...
 
Best practices for test automation
Best practices for test automationBest practices for test automation
Best practices for test automation
 
Use Automation to Assist—Not Replace—Manual Testing
Use Automation to Assist—Not Replace—Manual TestingUse Automation to Assist—Not Replace—Manual Testing
Use Automation to Assist—Not Replace—Manual Testing
 
Software Testing Presentation
Software Testing PresentationSoftware Testing Presentation
Software Testing Presentation
 
The productivity of testing in software development life cycle
The productivity of testing in software development life cycleThe productivity of testing in software development life cycle
The productivity of testing in software development life cycle
 

Plus de TechWell

Plus de TechWell (20)

Failing and Recovering
Failing and RecoveringFailing and Recovering
Failing and Recovering
 
Instill a DevOps Testing Culture in Your Team and Organization
Instill a DevOps Testing Culture in Your Team and Organization Instill a DevOps Testing Culture in Your Team and Organization
Instill a DevOps Testing Culture in Your Team and Organization
 
Test Design for Fully Automated Build Architecture
Test Design for Fully Automated Build ArchitectureTest Design for Fully Automated Build Architecture
Test Design for Fully Automated Build Architecture
 
System-Level Test Automation: Ensuring a Good Start
System-Level Test Automation: Ensuring a Good StartSystem-Level Test Automation: Ensuring a Good Start
System-Level Test Automation: Ensuring a Good Start
 
Build Your Mobile App Quality and Test Strategy
Build Your Mobile App Quality and Test StrategyBuild Your Mobile App Quality and Test Strategy
Build Your Mobile App Quality and Test Strategy
 
Testing Transformation: The Art and Science for Success
Testing Transformation: The Art and Science for SuccessTesting Transformation: The Art and Science for Success
Testing Transformation: The Art and Science for Success
 
Implement BDD with Cucumber and SpecFlow
Implement BDD with Cucumber and SpecFlowImplement BDD with Cucumber and SpecFlow
Implement BDD with Cucumber and SpecFlow
 
Develop WebDriver Automated Tests—and Keep Your Sanity
Develop WebDriver Automated Tests—and Keep Your SanityDevelop WebDriver Automated Tests—and Keep Your Sanity
Develop WebDriver Automated Tests—and Keep Your Sanity
 
Ma 15
Ma 15Ma 15
Ma 15
 
Eliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps StrategyEliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps Strategy
 
Transform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOpsTransform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOps
 
The Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—LeadershipThe Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—Leadership
 
Resolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile TeamsResolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile Teams
 
Pin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile GamePin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile Game
 
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile TeamsAgile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
 
A Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps ImplementationA Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps Implementation
 
Databases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery ProcessDatabases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery Process
 
Mobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to AutomateMobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to Automate
 
Cultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for SuccessCultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for Success
 
Turn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile TransformationTurn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile Transformation
 

Dernier

Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
panagenda
 
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Victor Rentea
 
Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native Applications
WSO2
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Safe Software
 

Dernier (20)

Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of Terraform
 
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdfRising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
 
ICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesICT role in 21st century education and its challenges
ICT role in 21st century education and its challenges
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
 
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
 
CNIC Information System with Pakdata Cf In Pakistan
CNIC Information System with Pakdata Cf In PakistanCNIC Information System with Pakdata Cf In Pakistan
CNIC Information System with Pakdata Cf In Pakistan
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
 
[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdf[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdf
 
Six Myths about Ontologies: The Basics of Formal Ontology
Six Myths about Ontologies: The Basics of Formal OntologySix Myths about Ontologies: The Basics of Formal Ontology
Six Myths about Ontologies: The Basics of Formal Ontology
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...
 
Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native Applications
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 
Vector Search -An Introduction in Oracle Database 23ai.pptx
Vector Search -An Introduction in Oracle Database 23ai.pptxVector Search -An Introduction in Oracle Database 23ai.pptx
Vector Search -An Introduction in Oracle Database 23ai.pptx
 
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWEREMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
 
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
 
WSO2's API Vision: Unifying Control, Empowering Developers
WSO2's API Vision: Unifying Control, Empowering DevelopersWSO2's API Vision: Unifying Control, Empowering Developers
WSO2's API Vision: Unifying Control, Empowering Developers
 

Successful Test Automation: A Manager’s View

  • 1. TB Full Day Tutorial 10/14/2014 8:30:00 AM "Successful Test Automation: A Manager’s View" Presented by: Mark Fewster Grove Consultants Brought to you by: 340 Corporate Way, Suite 300, Orange Park, FL 32073 888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
  • 2. Mark Fewster Grove Software Testing Ltd. Mark Fewster has more than thirty years of experience in software testing ranging from test management to test techniques and test automation. For the past two decades, Mark has provided consultancy and training in software testing, published papers, and co- authoredSoftware Test Automation and Experiences of Test Automation with Dorothy Graham. A popular speaker at conferences worldwide, Mark has won the Mercury BTO Innovation in Quality Award. He is currently helping ISTQB define the expert level certification for test automation. Speaker Presentations
  • 3. Written by Grove Software Testing Ltd. www.grove.co.uk Version 3_1 © Grove Software Testing, 2014 StarWest 2014 Managing Successful Test Automation A one-day tutorial
  • 4.
  • 5. Managing Successful Test Automation Contents Session 0: Introduction to the tutorial Objectives, What we cover (and don’t cover) today Session 1: Planning and Managing Test Automation Test automation objectives (and exercise) Responsibilities Pilot project Measures for automation Return on Investment (ROI) (and exercise) Session 2: Testware Architecture Importance of a testware architecture What needs to be organised Session 3: Pre- and Post-Processing Automating more than tests Test status Session 4: Scripting Techniques Objectives of scripting techniques Different types of scripts Domain specific test language Session 5: Automated Comparison Automated test verification Test sensitivity Comparison example Session 6: Final Advice, Q&A and Direction Strategy exercise Final advice Questions and Answers
  • 6.
  • 7. Abstract Many organisations have invested a lot of time and effort into test automation but they have not achieved the significant returns that they had expected. Some blame the tool that they use while others conclude test automation doesn't work well for their situation. The truth is often very different. These organisations are typically doing many of the right things but they are not addressing key issues that are vital to long term success with test automation. Mark Fewster describes the most important issues that you must address, and helps you understand and choose the best approaches for your organization—no matter which automation tools you use. Management issues including responsibilities, automation objectives and return on investment are covered along with technical issues such as testware architecture, pre- and post-processing and automated comparison techniques. The target audience for this tutorial is people involved with managing test automation who need to understand the key issues in making test automation successful. Technical issues are covered at a high level of understanding; there are no tool demos! Biography Mark has over 30 years of industrial experience in software testing ranging from test management to test techniques and test automation. In the last two decades Mark has provided consultancy and training in software testing, published papers and co- authored two books with Dorothy Graham, "Software Test Automation” and “Experiences of Test Automation”. He is a popular speaker at national and international conferences and seminars, and has won the Mercury BTO Innovation in Quality Award. Mark has served on the committee of the British Computer Society's Specialist Interest Group in Software Testing (BCS SIGiST) and is currently helping ISTQB in defining the expert level certification for test automation.
  • 8.
  • 9. presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 0-1 Managing Successful Test Automation Prepared by Dorothy Graham info@dorothygraham.co.uk www.DorothyGraham.co.uk © Mark Fewster and Dorothy Graham 2014 Mark Fewster Grove Software Testing Ltd. mark@grove.co.uk www.grove.co.uk and 0.1 Objectives of this tutorial ● help you achieve better success in automation ■ independent of any particular tool ● mainly management and some technical issues ■ objectives for automation ■ showing Return on Investment (ROI) ■ importance of testware architecture ■ practical tips for a few technical issues ■ what works in practice (case studies) ● help you plan an effective automation strategy 0.2
  • 10. presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 0-2 Tutorial contents 1) planning & managing test automation 2) testware architecture 3) pre and post processing 4) scripting techniques 5) automated comparison 6) final advice, Q&A and direction 0.3 Shameless commercial plug Part 1: How to do automation - still relevant today, though we plan to update it at some point! Latest book (2012) testautomationpatterns.org 0.4
  • 11. presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 0-3 What is today about? (and not about) ● test execution automation (not other tools) ● We will NOT cover: ■ demos of tools (time, which one, expo) ■ comparative tool info (expo, web) ■ selecting a tool* ● at the end of the day ■ understand technical and non-technical issues ■ have your own automation objectives ■ plan your own automation strategy * Mark will email you Chapter 10 of the STA book on request – mark@grove.co.uk 0.5
  • 12.
  • 13. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Planning & Managing Test Automation Managing Successful Test Automation 1 Managing 2 Architecture 3 Pre- and Post 4 Scripting 6 Advice5 Comparison 1.1 Managing Successful Test Automation Contents Managing 1 2 3 4 5 6 Test automation objectives Responsibilities Automation in agile Pilot project Measures for automation Return on Investment (ROI) 1.2
  • 14. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 An automation effort ● is a project (getting started or major changes) ■ with goals, responsibilities, and monitoring ■ but not just a project – ongoing effort is needed ● not just one effort – continuing ■ when acquiring a tool – pilot project ■ when anticipated benefits have not materialized ■ different projects at different times ► with different objectives ● objectives are important for automation efforts ■ where are we going? are we getting there? 1.3 fast testing slow testing Effectiveness Low High EfficiencyManual testing Automated Efficiency and effectiveness poor fast testing poor slow testing goodgood greatest benefit not good but common worst better 1.4
  • 15. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Common test automation objectives ● choose your most important objective: 1. faster testing 2. run more tests 3. reduce testing costs 4. automate x% of tests 5. find more bugs 6. other: _______________________________ cheaper testing, less effort, fewer testers more testing, more often increase test coverage reduce elapsed time, shorten schedule, time to market better testing, improve software quality automate 100% of testing Exercise 1.5 Same tests automated edit tests (maintenance) set-up execute analyse failures clear-up Manual testing More mature automation Faster testing? 1.6
  • 16. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Run more tests? ● which is better: ■ 100 1-minute tests or 1 60-minute test? ● not the only aspect: ■ failure analysis may be much longer for the 60-minute test! 25% 25% 25% 25% Setup Com Uniq TD 100 1-min tests: unique testing = 15 sec x 100 = 25 mins 25% 50% 25% Setup Unique TD 1 60-min test: unique testing = 50%= 30 mins + 1.7 Run more tests? ● 3 sets of tests ■ A: 100 tests, easy to automate ■ B: 60 tests, moderately difficult to automate ■ C: 30 tests, hard to automate ● what if the next release leaves A unchanged but has major changes to B and C? “Good automation is not found in the number of tests run but in the value of the tests that are run” 1.8
  • 17. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Reduce testing costs? Cost Time (months / years) Automation effort Total effort Yes, but not to zero! 1.9 Manual testing effort (without automation) Manual testing effort (with automation) Automate x% of the manual tests? manual tests automated tests tests not worth automating exploratory test automation manual tests automated (% manual) tests (& verification) not possible to do manually tests not automated yet
  • 18. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 What is automated? regression tests exploratory testing likelihood of finding bugs most often automated 1.11 Find more bugs? ● tests find bugs, not automation ● automation is a mechanism for running tests ● the bug-finding ability of a test is not affected by the manner in which it is executed ● this can be a dangerous objective ■ especially for regression automation! Automated tests Manual Scripted Exploratory Fix Verification 9.3% 24.0% 58.2% 8.4% Experiences of Test Automation, Ch 27, p 503, Ed Allen & Brian Newman 1.12
  • 19. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 When is “find more bugs” a good objective for automation? ● objective is “fewer regression bugs missed” ● when the first run of a given test is automated ■ MBT, Exploratory test automation, automated test design ■ keyword-driven (e.g. users populate spreadsheet) ● find bugs in parts we wouldn’t have tested? ■ indirect! (direct result of running more tests) 1.13 Good objectives for test automation ● realistic and achievable ● short and long term ● regularly re-visited and revised ● measurable ● should be different objectives for testing and for automation ● automation should support testing activities Pattern: SET CLEAR GOALS 1.14
  • 20. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Trying to get started: Tessa Benzie ■ consultancy to start automation effort ► project, needs a champion – hired someone ► training first, something else next, etc. ■ contract test manager – more consultancy ► bought a tool – now used by a couple contractors ► TM moved on, new QA manager has other priorities ■ just wanting to do it isn’t enough ► needs dedicated effort ► now have “football teams” of manual testers Chapter 29, pp 535, Experiences of Test Automation 1.15 Managing Successful Test Automation Contents Managing 1 2 3 4 5 6 Test automation objectives Responsibilities Automation in agile Pilot project Measures for automation Return on Investment (ROI) 1.16
  • 21. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 What is an automated test? ● a test! ■ designed by a tester for a purpose ● test is executed ■ implemented / constructed to run automatically using a tool ■ or run manually ● who decides which tests to run? ● who decides how a test is run? 1.17 Test manager’s dilemma ● who should undertake automation work ■ not all testers can automate (well) ■ not all testers want to automate ■ not all automators want to test! ● conflict of responsibilities ■ (if you are both tester and automator) ■ should I automate tests or run tests manually? ● get additional resources as automators? ■ contractors? borrow a developer? tool vendor? 1.18
  • 22. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Relationships 19 engine: test tool passengers: test cases car: test infrastructure driver: tester mechanic: test automator Testers ● test the software ■ design tests ■ select tests for automation ► requires planning / negotiation ● execute automated tests ■ should not need detailed technical expertise ● analyse failed automated tests ■ report bugs found by tests ■ problems with the tests may need help from the automation team Automators ● automate tests (requested by testers) ● support automated testing ■ allow testers to execute tests ■ help testers debug failed tests ■ provide additional tools ● predict ■ maintenance effort for software changes ■ cost of automating new tests ● improve the automation ■ more benefits, less cost Responsibilities 1.20
  • 23. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Good testing ● is effective ■ finds most of the faults (90+%) ■ gives confidence ● is efficient ■ uses a few small test cases ● is flexible ■ can use different subsets of test cases for different test objectives Good automation ● easy to use ■ flexible: supports different requirements for automation ■ responsive: quick changes when needed ● cheap to use ■ build & maintain automated tests ■ failure analysis ● improve over time 1.21 Testing versus automation Roles for automation ● Testware architect ■ designs the overall structure for the automation ● Champion ■ “sells” automation to managers and testers ● Tool specialist / toolsmith ■ technical aspects, licensing, updates to the tool ● Automated script developers ■ write new scripts as needed (e.g. keyword) ■ debug automation problems 1.22
  • 24. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Managing Successful Test Automation Contents Managing 1 2 3 4 5 6 Test automation objectives Responsibilities Automation in agile Pilot project Measures for automation Return on Investment (ROI) 1.23 Agile automation: Lisa Crispin ■ starting point: buggy code, new functionality needed, whole team regression tests manually ■ testable architecture: (open source tools) ► want unit tests automated (TDD), start with new code ► start with GUI smoke tests - regression ► business logic in middle level with FitNesse ■ 100% regression tests automated in one year ► selected set of smoke tests for coverage of stories ■ every 6 mos, engineering sprint on the automation ■ key success factors ► management support & communication ► whole team approach, celebration & refactoring 1.24 Chapter 1, pp 17-32, Experiences of Test Automation
  • 25. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Automation and agile ● agile automation: apply agile principles to automation ■ multidisciplinary team ■ automation sprints ■ refactor when needed ● fitting automation into agile development ■ ideal: automation is part of “done” for each sprint ► Test-Driven Design = write and automate tests first ■ alternative: automation in the following sprint -> ► may be better for system level tests 1.25See www.satisfice.com/articles/agileauto-paper.pdf (James Bach) Automation in agile/iterative development 1.26 A manual testing of this release (testers) A B B CA FEDCBA regression testing (automators automate the best tests) run automated tests (testers)
  • 26. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Requirements for agile test framework ● support manual and automated testing ■ using the same test construction process ● support fully manual execution at any time ■ requires good naming convention for components ● support manual + automated execution ■ so test can be used before it is 100% automated ● implement reusable objects ● allow “stubbing” objects before GUI available 1.27Source: Dave Martin, LDSChurch.org, email A tale of two projects: Ane Clausen ■ Project 1: 5 people part-time, within test group ► no objectives, no standards, no experience, unstable ► after 6 months was closed down ■ Project 2: 3 people full time, 3-month pilot ► worked on two (easy) insurance products, end to end ► 1st month: learn and plan, 2nd & 3rd months: implement ► started with simple, stable, positive tests, easy to do ► close cooperation with business, developers, delivery ► weekly delivery of automated Business Process Tests ■ after 6 months, automated all insurance products 1.28Chapter 6, pp 105-128, Experiences of Test Automation
  • 27. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Managing Successful Test Automation Contents Managing 1 2 3 4 5 6 Test automation objectives Responsibilities Automation in agile Pilot project Measures for automation Return on Investment (ROI) 1.29 Pilot project ● reasons ■ you’re unique ■ many variables / unknowns / options ■ learn how to start / improve ● benefits ■ find the best way for you ■ solve problems once ■ establish confidence (based on experience) ■ set realistic targets ● objectives ■ demonstrate tool value ■ gain experience / skills in the use of the tool ■ identify changes to existing test process ■ set internal standards and conventions ■ refine assessment of costs and achievable benefits 1.30
  • 28. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Characteristics of a pilot project 1.31 Planned Important Learning Objective Timely resourced, targets, contingency full time work, worthwhile tests informative, useful, revealing quantified, not subjective short term, focused P I L O T What to explore in the pilot ● build / implement automated tests (architecture) ■ different ways to build stable tests (e.g. 10 – 20) ● maintenance ■ different versions of the application ■ reduce maintenance for most likely changes ● failure analysis ■ support for identifying bugs ■ coping with common bugs affecting many automated tests 1.32 Also: naming conventions, reporting results, measurement
  • 29. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 After the pilot… ● having processes & standards is only the start ■ 30% on new process ■ 70% on deployment ► marketing, training, coaching ► feedback, focus groups, sharing what’s been done ● the (psychological) Change Equation ■ change only happens if (x + y + z) > w 1.33 Source: Eric Van Veenendaal, successful test process improvement Managing Successful Test Automation Contents Managing 1 2 3 4 5 6 Test automation objectives Responsibilities Automation in agile Pilot project Measures for automation Return on Investment (ROI) 1.34
  • 30. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Why measure automation? ● to justify and confirm starting automation ■ business case for purchase/investment decision, to confirm ROI has been achieved e.g. after pilot ■ both compare manual vs automated testing ● to monitor on-going automation “health” ■ for increased efficiency, continuous improvement ■ build time, maintenance time, failure analysis time, refactoring time ► on-going costs – what are the benefits? ■ monitor your automation objectives 1.35 Useful measures ● a useful measure: “supports effective analysis and decision making, and that can be obtained relatively easily.” Bill Hetzel, “Making Software Measurement Work”, QED, 1993. ● easy measures may be more useful even though less accurate (e.g. car fuel economy) ● ‘useful’ depends on objectives, i.e. what you want to know 1.36
  • 31. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 EMTE – what is it? ● Equivalent Manual Test Effort ■ given a set of automated tests, ■ EMTE is how much effort would it take ► to run those tests manually ● note ■ you would not actually run these tests manually ■ EMTE = is the effort you would have spent if you had run the tests manually ■ EMTE can be used to show some test automation benefit 1.37 Monitoring test automation health ● important to do (often neglected) ■ need to distinguish between test automation progress and test automation health ● progress examples ■ number of tests automated ■ coverage achieved by automated tests ■ number of test cycles executed / release ● health examples ■ benefits ■ build cost ■ analysis cost ■ maintenance cost 1.38 ROI = benefit – cost cost
  • 32. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Internal objectives for test automation ● provide automation services ■ efficient building, use and maintenance of automated tests ■ decrease automation costs over time ■ increase automation benefits ► savings, ease of use, flexibility ● measured ROI (appropriate for test objectives) ■ Equivalent Manual Test Effort (EMTE) ► additional test hours ► hours of unattended tests performed ■ proportion of unattended testing ■ increased coverage ■ reduce elapsed time 1.39 Measure benefit ● equivalent manual test effort (EMTE) ■ hours of additional testing ■ hours of unattended testing ● number of tests ■ tests executed ■ additional (new) tests ■ repeated tests ● number of test cycles ■ additional cycles ● increased coverage 1.40 Relate target to total cost of automation, e.g. benefit 10 times total cost Suggestion the degree to which automation has supported testers in achieving their objectives
  • 33. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 An example comparative benefits chart 1.41 0 10 20 30 40 50 60 70 80 exec speed times run data variety tester work man aut ROI spreadsheet – email me for a copy 14 x faster 5 x more often 4 x more data 12 x less effort Measure build effort ● time taken to automate tests ■ hours to add new or existing manual tests ■ average across different test types ● proportion of equivalent manual test effort ■ e.g. 1 hour to automate 30 minute manual test = 2 times equivalent manual test effort 1.42 Target: < 2 times :decreasing 10% per year Suggestion put your own (more appropriate) figures here
  • 34. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Measure failure analysis effort ● analysis effort for each test ■ captured in defect report ■ effort from first recognition through to resumption of test execution ■ average hours (or minutes) per failed test case ► needs comparison of same with manual testing ► must also monitor defect reporting effectiveness – e.g. how many ‘non reproducible’ reports 1.43 Target: X minutes? Trend:stable Suggestion put your own (more appropriate) figure here Measure maintenance effort ● maintenance effort of automated tests ■ percentage of test cases requiring maintenance ■ average effort per test case ■ percentage of equivalent manual test effort 1.44 Target: < 10% Trend:stable or decreasing Suggestion put your own ( appropriate) figure here
  • 35. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Recommendations ● don’t measure everything! ● choose three or four measures ■ applicable to your most important objectives ● monitor for a few months ■ see what you learn ● change measures if they don’t give useful information 1.45 Managing Successful Test Automation Contents Managing 1 2 3 4 5 6 Test automation objectives Responsibilities Automation in agile Pilot project Measures for automation Return on Investment (ROI) 1.46
  • 36. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Is this Return on Investment (ROI)? ● tests are run more often ● tests take less time to run ● it takes less human effort to run tests ● we can test (cover) more of the system ● we can run the equivalent of days / weeks of manual testing in a few minutes / hours ● faster time to market 1.47 these are (good) benefits but are not ROI ROI = (benefit – cost) cost Examples of ROI achieved ● Michael Snyman, S African bank (Ch 29.13) ■ US$4m on testing project, automation $850K ■ savings $8m, ROI 900% ● Henri van de Scheur, Database testing (Ch 2) ■ results: 2400 times more efficient ● Stefan Mohacsi, Armin Beer: European Space Agency (Ch 9) ■ MBT, break even after four test cycles 1.48 from: Experiences of Test Automation book
  • 37. 1-Managing presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 How important is ROI? ● ROI can be dangerous ■ easiest way to measure: tester time ■ may give impression that tools replace people ● “automation is an enabler for success, not a cost reduction tool” ► Yoram Mizrachi, “Planning a mobile test automation strategy that works, ATI magazine, July 2012 ● many achieve lasting success without measuring ROI (depends on your context) ■ need to be measure benefits (and publicize them) 1.49 Managing Successful Test Automation Managing 1 2 3 4 5 6 Note in your Summary Sheet the key points for you from this session Summary: key points 1.50 • Assign responsibility for automation (and testing) • Use a pilot project to explore the good methods • Know your automation objectives • Measure what’s important to you • Show ROI from automation Now complete the exercise!
  • 38.
  • 39. 2-Architecture presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Testware Architecture Managing Successful Test Automation 1 Managing 2 Architecture 3 Pre- and Post Ref. Chapter 5: Testware Architecture “Software Test Automation” 4 Scripting 6 Advice5 Comparison 2.1 Managing Successful Test Automation Contents Architecture 1 2 3 4 5 6 Importance of a testware architecture What needs to be organised 2.2
  • 40. 2-Architecture presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Testware architecture ● organisation of, and relationship between, artefacts ■ scripts, input, test data, test descriptions, expected results, actual results, log files, etc. ● if you create an automated test: ■ do you know what already exists that you can use? ► scripts, data, tools ■ do you know where to put the artefacts? ► script(s), data, expected results, etc. ■ do you know what names to use? ● if you execute an automated test: ■ do you know where to find all the test results? ■ do you know how to analyse a test failure? 2.3 Testware architecture testwarearchitecture Testers Test Execution Tool High Level Keywords Structured Scripts structured testware Test Automator(s) write tests (in DSTL) runs scripts abstraction here = easier to change tools and maintain = long life abstraction here = easier to write automated tests = widely used (testframework) 2.4
  • 41. 2-Architecture presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Architecture – abstraction levels ● most critical factor for success ■ worst: close ties between scripts, tool & tester ● separate testers’ view from technical aspects ■ so testers don’t need tool knowledge ► for widespread use of automation ► scripting techniques address this ● separate tests from the tool – modular design ■ likely changes confined to one / few module(s) ■ re-use of automation functions ■ for minimal maintenance and long-lived automation 2.5 Localised regimes ● “everyone will do the sensible thing” ■ most will do something sensible, but different ● “use the tool however it best suits you” ■ ignores cost of learning how best to automate ● problems include: ■ effort wasted repeatedly solving the same problems in different ways ■ no re-use between teams ■ multiple learning curves 2.6
  • 42. 2-Architecture presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Easy way out: use the tool’s architecture ● tool will have its own way of organising tests ■ where to put things (for the convenience of the tool!) ■ will “lock you in” to that tool – good for vendors! ● a better way (gives independence from tools) ■ organise your tests to suit you ■ as part of pre-processing, copy files to where the tool needs (expects) to find them ■ as part of post-processing, copy back to where you want things to live 2.7 Tool-specific vs generic scripts A Tool A A A A B Tool B B B B Tool A A G G G Tool B B New env 2.8
  • 43. 2-Architecture presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Test-specific vs reused scripts T1 Test 1 T1 T1 T2 Test 2 T2 T2 T1 Test 1 R R T2 Test 2 Test 1 Test 2 R R Test Definition 2.9 TESTNAME: <name of test> PURPOSE: <single sentence explaining test purpose> MATERIALS: <a list of the artefacts used by this test> RESULTS: <a list of the artefacts produced by this test> SETUP: <sequence of keywords implementing setup actions> TEARDOWN: <sequence of keywords implementing teardown actions> EXECUTION: <sequence of keywords implementing the test actions> Test definition 2.10
  • 44. 2-Architecture presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Test definition additional info ● measures ■ expected run time (of the automated test) ■ EMTE (equivalent manual test effort) ■ others? ● attributes, such as test selector tags ■ tag specific sets of tests so they can be selected to be run ■ examples: smoke tests, short tests, bug fix tests. long tests, specific environment tests 2.11 General control script For each test to be executed Read keyword definition Verify keyword definition Check specified materials exist Execute setup Execute test Perform post-execution comparison Execute teardown Check specified results exist Report results EndFor 2.12
  • 45. 2-Architecture presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 Learning is incremental: Molly Mahai ■ book learning – knew about investment, not replace people, don’t automate everything, etc. ■ set up good architecture? books not enough ■ picked something to get started ► after a while, realised limitations ► too many projects, library cumbersome ■ re-designed architecture, moved things around ■ didn’t know what we needed till we experienced the problems for ourselves ► like trying to educate a teenager 2.13 Chapter 29, pp 527-528, Experiences of Test Automation Managing Successful Test Automation Contents Architecture 1 2 3 4 5 6 Importance of a testware architecture What needs to be organised 2.14
  • 46. 2-Architecture presented by Mark Fewster mark@grove.co.uk © Mark Fewster & Dorothy Graham 2014 A test for you ● show me one of your automated tests running ■ how long will it take before it runs? ● typical problems ■ fails: forgot a file, couldn’t find a called script ■ can’t do it (yet): ► Joe knows how but he’s out, ► environment not right, ► haven’t run in a while, ► don’t know what files need to be set up for this script ● why not: run up your framework, select test, GO 2.15 Key issues ● scale ■ the number of scripts, data files, results files, benchmark files, etc. will be large and growing ● shared scripts and data ■ efficient automation demands reuse of scripts and data through sharing, not multiple copies ● multiple versions ■ as the software changes so too will some tests but the old tests may still be required ● multiple environments / platforms 2.16
Terms – testware artefacts
● Testware divides into Test Materials and Test Results
  ■ Test Materials (products): scripts, data, inputs, expected results, documentation (specifications), environment utilities
  ■ Test Results (by-products): logs, status, summary, differences, actual results

Testware for an example test case (Scribble)
(diagram: the test script countries.scp – the test input – uses the shared scripts open.scp and saveas.scp to open the initial document countries.dcm and save the edited document countries2.dcm; the edited document is compared with the expected output countries2.dcm, producing the differences file diffs.txt and the log log.txt; the test is described by the test specification testspec.txt)
Testware by type (Scribble example)
● Test Materials: open.scp, saveas.scp, countries.scp, countries.dcm, countries2.dcm (expected), testdef.txt
● Test Results: countries2.dcm (actual), log.txt, diff.txt, status.txt

Benefits of a standard approach
● tools can assume knowledge (architecture)
  ■ they need less information; are easier to use; fewer errors will be made
● many tasks can be automated
  ■ checking (completeness, interdependencies); documentation (summaries, reports); browsing
● portability of tests
  ■ between people, projects, organisations, etc.
● shorter learning curve
Testware Sets
● a Testware Set is a collection of testware artefacts
● four types:
  ► Test Set – one or more test cases
  ► Script Set – scripts used by two or more Test Sets
  ► Data Set – data files used by two or more Test Sets
  ► Utility Set – utilities used by two or more Test Sets
● good software practice: look for what is common, and keep it in only one place!
● keep your testware DRY!

Testware library
● a repository of the master versions of all Testware Sets
  ■ "uncategorised scripts are worse than no scripts" – Onaral & Turkmen
● configuration management (CM) is critical
  ■ "If it takes too long to update your test library, automation introduces delay instead of adding efficiency" – Linda Hayes, AST magazine, Sept 2010
● example library contents, with version numbers:
  d_ScribbleTypical v1, v2; d_ScribbleVolume v1; s_Logging v1; s_ScribbleDocument v1, v2, v3; s_ScribbleNavigate v1; t_ScribbleBreadth v1; t_ScribbleCheck v1; t_ScribbleFormat v1; t_ScribbleList v1, v2, v3; t_ScribblePrint v1; t_ScribbleSave v1; t_ScribbleTextEdit v1, v2; u_ScribbleFilters v1; u_GeneralCompare v1
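On disk, a library following these naming conventions might be organised like this; the layout below is one possible arrangement, not a prescription:

testware_library/
    t_ScribbleList/        # Test Set: test definitions, inputs, expected results
        v1/  v2/  v3/
    s_ScribbleDocument/    # Script Set: scripts shared by two or more Test Sets
        v1/  v2/  v3/
    d_ScribbleTypical/     # Data Set: data files shared by two or more Test Sets
        v1/  v2/
    u_GeneralCompare/      # Utility Set: e.g. shared comparison utilities
        v1/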
Separate test results
● a single test suite may be used on subtly different versions of the software under test, producing different sets of test results that we wish to keep

Incremental approach: Ursula Friede
■ large insurance application
■ first attempt failed
  ► no structure (architecture), data embedded in scripts
■ four phases (unplanned):
  ► parameterized (dates, claim numbers, etc.)
  ► parameters stored in a single database for all scripts
  ► improved error handling (non-fatal unexpected events)
  ► automatic system restart
■ benefits: saved 200 man-days per test cycle
  ► €120,000!
Chapter 23, pp 437-445, Experiences of Test Automation
Architecture – summary: key points
• Structure your automation testware to suit you
• Testware comprises many files, etc. which need to be given a home
• Use good software development standards
Note in your Summary Sheet the key points for you from this session
Session 3: Pre- and Post-Processing
Ref. Chapter 6: Automating Pre- and Post-Processing, "Software Test Automation"
Contents: Automating more than tests; Test status
What is pre- and post-processing?
● pre-processing
  ■ automation of the setup tasks necessary to fulfil test case prerequisites
● post-processing
  ■ automation of the post-execution tasks necessary to complete verification and housework
● these terms are useful because:
  ■ there are lots of tasks, they come in packs, many are the same, and they can be easily automated

Automated tests vs automated testing
● the full test process: select / identify test cases to run; set up the test environment (create the environment, load test data); then for each test case: set up test prerequisites, execute, compare results, log results, analyse test failures, report defect(s), clear up after the test case; finally clear up the test environment (delete unwanted data, save important data) and summarise results
● with "automated tests", only execution, comparison, logging and clear-up are automated – analysing test failures and reporting defects remain a manual process
● with "automated testing", those surrounding tasks are automated too
Examples
● pre-processing
  ■ copy scripts from a common script set (e.g. open, saveas)
  ■ delete files that shouldn't exist when the test starts
  ■ set up data in files
  ■ copy files to where the tool expects to find them
  ■ save the normal default file and rename the test's file to the default (for this test)
● post-processing
  ■ copy results to where the comparison process expects to find them
  ■ delete actual results if they match the expected results (or archive them if required)
  ■ rename a file back to its normal default
(a sketch of a few of these tasks appears after the Kohl example below)

Outside the box: Jonathan Kohl
■ task automation (throw-away scripts)
  ► entering data sets into 2 browsers (verify by watching)
  ► installing builds, copying test data
■ supporting manual exploratory testing
■ testing under the GUI to the database (the "side door")
■ don't believe everything you see
  ► 1000s of automated tests passed too quickly
  ► monitoring tools showed what was really happening
  ► "if there's no error message, it must be ok" – defects didn't make it to the test harness – the overloaded system ignored data that was wrong
Chapter 19, pp 355-373, Experiences of Test Automation
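A Python sketch of a few of the pre- and post-processing tasks listed above; the file names (settings.cfg, *.out) are hypothetical stand-ins for whatever your application and tool actually use:

import shutil
from pathlib import Path

def pre_process(test_dir: Path, work_dir: Path) -> None:
    # delete files that shouldn't exist when the test starts
    for stale in work_dir.glob("*.out"):
        stale.unlink()
    # save the normal default file and put this test's version in its place
    default = work_dir / "settings.cfg"
    if default.exists():
        shutil.move(default, work_dir / "settings.cfg.saved")
    shutil.copy2(test_dir / "settings.cfg", default)

def post_process(test_dir: Path, work_dir: Path, matched: bool) -> None:
    # delete actual results if they matched expected, otherwise keep them for analysis
    keep_dir = test_dir / "results"
    keep_dir.mkdir(exist_ok=True)
    for actual in work_dir.glob("*.out"):
        if not matched:
            shutil.copy2(actual, keep_dir / actual.name)
        actual.unlink()
    # rename the saved file back to its normal default
    saved = work_dir / "settings.cfg.saved"
    if saved.exists():
        shutil.move(saved, work_dir / "settings.cfg")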
Automation + execution comparison
(diagram: a spectrum of approaches – traditional test automation, DSTL, structured testware architecture, loosened oracles, ETA and monkeys, manual testing)

Contents: Automating more than tests; Test status
Test status – pass or fail?
● a tool cannot judge pass or fail
  ■ only "match" or "no match"
  ■ assumption: the expected results are correct
● when a test fails (i.e. the software fails)
  ■ the failure needs to be analysed
    ► true failure? write up a bug report
    ► test fault? fix the test (e.g. the expected result)
    ► known bug, or a failure affecting many automated tests? – this can eat a lot of time in automated testing – solution: additional test statuses

Test statuses for automation

Compare to                 | No differences found | Differences found
(true) expected outcome    | Pass                 | Fail
expected fail outcome      | Expected Fail        | Unknown
don't know / missing       | Unknown              | Unknown

• other possible additional test statuses
  – test blocked
  – environment problem (e.g. network down, timeouts)
  – set-up problems (files missing)
  – test needs to be changed but not done yet
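The table above translates directly into a small lookup; a minimal Python rendering, with the status names as on the slide:

def test_status(outcome_kind: str, differences_found: bool) -> str:
    # outcome_kind is 'expected', 'expected_fail' or 'unknown' (missing)
    table = {
        ("expected", False): "Pass",
        ("expected", True): "Fail",
        ("expected_fail", False): "Expected Fail",
        ("expected_fail", True): "Unknown",
        ("unknown", False): "Unknown",
        ("unknown", True): "Unknown",
    }
    return table[(outcome_kind, differences_found)]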
Pre- and Post-Processing – summary: key points
• Pre- and post-processing automate setup and clear-up tasks
• Test status is more than just pass / fail
Note in your Summary Sheet the key points for you from this session
Session 4: Scripting Techniques
Ref. Chapter 3: Scripting Techniques, "Software Test Automation"
Contents: Objectives of scripting techniques; Different types of scripts; Domain specific test language
Scripting
● the backbone of automation
● four broad approaches (ranging from "test-specific programming required" to "framework supports abstraction"):
  ■ linear
  ■ structured
  ■ data-driven
  ■ keyword-driven
● issues
  ■ more code means more maintenance
  ■ programmers like programming
    ► they may write new code even when it is unnecessary
● do you know your script-to-test ratio?

Objectives of scripting techniques
● implement your testware architecture
● reduce costs
  ■ make it easier to build automated tests
    ► avoid duplication
  ■ avoid excessive maintenance costs
    ► greater reuse through functional, modular scripting
● greater return on investment
  ■ better testing support
  ■ greater portability
    ► environments & hardware platforms
● enhance capabilities
  ■ achieve more testing for the same (or less) effort
    ► testing beyond traditional manual approaches – probably best achieved by data-driven or keyword-driven scripting
Progression of automation implementation
(diagram: manual test scripts are first automated as low-level instructions, then as structured code drawing on re-usable modules in a script library, then data-driven with separate input data, then keyword-driven with test definitions such as "get quote – motor, age, …" and "create policy – motor, age, …"; Technical Test Analysts work on the script library, Test Analysts on the data and test definitions; maintenance becomes easier along the progression)
Source: Mark Fewster, Grove Consultants (grove.co.uk)

Contents: Objectives of scripting techniques; Different types of scripts; Domain specific test language
Example application: Tax Calculator
● input fields: Name, and Earnings for January, February and March
● calculated outputs: Tax band and Tax due (e.g. Fred, earning £100, £80 and £110 – £290 in total – gets tax band B and tax due £45)
● Test 1:
  1. open the application
  2. enter name & earnings
  3. save the results
  4. close the application

Structured scripting
(diagram: testers create test scripts containing high-level instructions and test data, derived from manual test procedures; automators create a script library of low-level "how to" instructions; the test tool runs both against the software under test)
About structured scripting
● a script library holds the re-used scripts
  ■ part of the testware architecture implementation
    ► shared scripts interface to the software under test
    ► all other scripts interface to the shared scripts
● reduced costs
  ■ maintenance
    ► fewer scripts are affected by software changes
  ■ build
    ► individual control scripts are smaller and easier to read (a 'higher' level language is used)

Example structured scripts

Main test script:
Sub Test1()
    Call OpenApplication("TaxCalculator")
    Call CalculateTax("Pooh", 100, 150, 125)
    Call SaveAsAndClose("Test 1 Results")
End Sub

Supporting scripts:
Sub OpenApplication(Application)
    Workbooks.Open Filename:= _
        "ATT " & Application & ".xls"
End Sub

Sub CalculateTax(Name, M1, M2, M3)
    ActiveCell.FormulaR1C1 = Name
    Range("C6").Select
    ActiveCell.FormulaR1C1 = M1
    Range("C7").Select
    ActiveCell.FormulaR1C1 = M2
    Range("C8").Select
    ActiveCell.FormulaR1C1 = M3
End Sub

Sub SaveAsAndClose(Filename)
    ActiveWorkbook.SaveAs Filename:= _
        Filename & ".xls", FileFormat:= _
        xlNormal, Password:=""
    ActiveWorkbook.Close
End Sub

additional tests can be created more easily
Usable (re-usable) scripts
● to re-use a script, you need to know:
  ■ what does this script do?
  ■ what does it need?
  ■ what does it deliver?
  ■ what state is assumed when it starts?
  ■ what state is left when it finishes?
● keep this information in a standard place for every script (a suggested header layout appears after the next slide)
  ■ so you can search for the answers to these questions

Data driven
(diagram: testers create data files of test data from manual test procedures; automators create control scripts of high-level instructions and a script library of low-level "how to" instructions; the test tool runs these against the software under test)
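One way to keep that information in a standard, searchable place is a fixed header block at the top of every script. The layout below is only a suggestion, shown with '#' comments (use whatever comment syntax your scripting language has); the details for saveas.scp are guessed from its name:

# Script:       saveas.scp
# Does:         saves the current document under a new name and closes it
# Needs:        an open, active document; the target filename
# Delivers:     the saved document file
# Start state:  application open, document active
# End state:    application open, no document active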
About data-driven scripting
● test data is extracted from the scripts
  ■ and placed into separate data files
● a control script reads data from a data file
  ■ one script implements several tests by reading different data files (reducing script maintenance per test)
● reduced build cost
  ■ faster and easier to automate similar test procedures
  ■ many test variations using different data
● multiple control scripts required
  ■ one for each type of test (with varying data)

Example data-driven script

data file TaxCalcData.csv:
Name, Earn1, Earn2, Earn3
Pooh, 100, 150, 125
Piglet, 75, 90, 80
Roo, 120, 110, 65

main script (runs all tests in the data file):
Sub RunTests()
    Open TaxCalcData.csv
    For each Row in file
        Read Name, Earn1, Earn2, Earn3
        Call OpenApplication("TaxCalculator")
        Call CalculateTax(Name, Earn1, Earn2, Earn3)
        Call SaveAsAndClose(Test & " Results")
    Next Row
End Sub
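The same control script in runnable Python, reading the CSV above; calculate_tax here just prints, standing in for the tool-driven CalculateTax keyword:

import csv

def calculate_tax(name: str, e1: int, e2: int, e3: int) -> None:
    # stand-in for driving the Tax Calculator through the test tool
    print(f"test: {name} earns {e1}, {e2}, {e3}")

def run_tests(datafile: str = "TaxCalcData.csv") -> None:
    with open(datafile, newline="") as f:
        reader = csv.DictReader(f, skipinitialspace=True)
        for row in reader:  # one test per data row
            calculate_tax(row["Name"], int(row["Earn1"]),
                          int(row["Earn2"]), int(row["Earn3"]))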
Keywords (basic)
(diagram: testers create test definitions – high-level instructions and test data – from manual test procedures; automators maintain a script library of low-level "how to" instructions and keyword implementations; a single control script, the "interpreter" / ITE, reads the test definitions and drives the test tool against the software under test)

About keywords
● a single control script (the Interactive Test Environment)
  ■ improvements to this benefit all tests (ROI)
  ■ extracts the high-level instructions from the scripts
● the 'test definition'
  ■ independent of the tool's scripting language
  ■ a language tailored to testers' requirements
    ► software design
    ► application domain
    ► business processes
● more tests, fewer scripts
  ■ e.g. unit test: calculate one interest payment; system test: summarise interest for one customer; acceptance test: end-of-day run, all interest payments
Example keyword-driven scripts

Keyword file (option 1):
CalculateTax Pooh, 100, 150, 125
CalculateTax Piglet, 75, 90, 80
CalculateTax Roo, 120, 110, 65

Keyword file (option 2):
CalculateTax
Pooh, 100, 150, 125
Piglet, 75, 90, 80
Roo, 120, 110, 65

Keyword file (option 3):
CalculateTax Datafile.txt
– where Datafile.txt contains:
Name, Earn1, Earn2, Earn3
Pooh, 100, 150, 125
Piglet, 75, 90, 80
Roo, 120, 110, 65

Keyword file (Robot Framework):
Test Case    Action         Arg1    Arg2  Arg3  Arg4
Check Tax    Calculate Tax  Pooh    100   150   125
             Calculate Tax  Piglet  75    90    80
             Calculate Tax  Roo     120   110   65

(see the interpreter sketch below)

Minimising automation code
● more code means more maintenance
  ■ it is better to reuse existing code than to write new code that does the same thing
  ■ achieved by:
    ► clear objectives – e.g. automate for minimal maintenance
    ► use of an appropriate scripting approach – abstraction
    ► careful design – consider sets of tests, not individual tests
    ► consistency (standards)
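A minimal interpreter for a keyword file in option 1's format might look like this in Python; the KEYWORDS mapping is the part the automators maintain, and the print is a stand-in for the real implementing script:

KEYWORDS = {
    "CalculateTax": lambda name, e1, e2, e3: print("CalculateTax:", name, e1, e2, e3),
}

def run_keyword_file(path: str) -> None:
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            keyword, _, rest = line.partition(" ")      # e.g. "CalculateTax Pooh, 100, 150, 125"
            args = [a.strip() for a in rest.split(",")]
            KEYWORDS[keyword](*args)                    # dispatch to the implementing script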
Contents: Objectives of scripting techniques; Different types of scripts; Domain specific test language

Merged test procedure / test definition
(diagram: the test procedures/definitions become a single document of high-level instructions and test data – a language for testers; a single control script, the "interpreter" / ITE, reads it and drives the test tool, via a script library of low-level "how to" instructions and keyword scripts, against the software under test)
Domain Specific Test Language (DSTL)
● test procedures and test definitions are similar
  ■ both describe sequences of test cases
    ► giving test inputs and expected results
● combine them into one document
  ■ can include all the test information
  ■ avoids an extra 'translation' step
  ■ testers specify tests the same way, regardless of whether they are manual or automated
  ■ automators implement the required keywords

Keywords in the test definition language
● multiple levels of keywords are possible
  ■ high level for business functionality
  ■ low level for component testing
● composite keywords
  ■ define keywords as a sequence of other keywords
  ■ gives greater flexibility (testers can define composite keywords) but risks chaos
● format
  ■ freeform, structured, or a standard notation
    ► (e.g. XML)
Example use of keywords
Create a new account, order 2 items and check out:

Create Account | Firstname: Edward | Surname: Brown | Email address: ebrown@gmail.com | Password: apssowdr
Order Item     | Item Num: 1579 | Items: 3 | Check Price for Items: 15.30
Order Item     | Item Num: 2598 | Items: (blank = 1) | Check Price for Items: 12.99
Checkout       | Total: 28.29

Documenting keywords
Name – the name for this keyword
Purpose – what this keyword does
Parameters – any inputs needed, outputs produced
Pre-conditions – what needs to be true before using it, where valid
Post-conditions – what will be true after it finishes
Error conditions – what errors it copes with, what is returned
Example – an example of the use of the keyword
Source: Martin Gijsen. See also Hans Buwalda's book & articles
Example keyword: Create account (* = mandatory)
Name: Create account
Purpose: creates a new account
Parameters: *First name: 2 to 32 characters; *Last name: 2 to 32 characters; *Email address: also serves as the account id; *Password: 4 to 32 characters
Pre-conditions: an account doesn't already exist for this person
Post-conditions: account created (including email confirmation); order screen displayed
Error conditions: account already exists
Example: (see example above)

Example keyword: Order item (* = mandatory)
Name: Order item
Purpose: order one or more of a specific item
Parameters: *Item number: 1000 to 9999, in catalogue; Number of items wanted: 1 to Max-for-item (if blank, assumes 1)
Pre-conditions: valid account logged in; item in stock (sufficient for the order); prices available for the item (including discounts for quantity)
Post-conditions: item(s) appear in the shopping basket; number of available items decreased by the number ordered
Error conditions: insufficient items in stock
Example: (see example above)
Implementing keywords
● ways to implement keywords
  ■ the scripting language (of a tool)
  ■ a programming language (e.g. Java)
  ■ use whatever your developers are most familiar with!
● ways of supporting a DSTL
  ■ a commercial, open source or home-grown framework
  ■ a spreadsheet or database for the test descriptions

Frameworks
● commercial tools
  ■ ATRT, ATSM, Axe, Certify, eCATT, FASTBoX, GUIdancer, Liberation, Ranorex, te52, TestComplete, TestDrive, TestingWhiz, Tosca Testsuite, zest
● open source
  ■ Cacique, FitNesse, Helium, Jameleon, JET, JSystem, Jubula, Maveryx, Open2Test, Power Tools, QAliber Test Builder, Rasta, Robot Framework, SAFS, SpecFlow, STAF, TAF, TAF Core, TestMaker, Xebium
● I can email you my Tool List
  ■ test execution and framework tools
  ■ info@dorothygraham.co.uk
Execution-tool-independent framework
(diagram: test procedures/definitions are written in a tool-independent scripting language; the framework maps them onto tool-dependent script libraries for one test tool or another against the software under test – and some tests can still be run manually)

Scripting – summary
● objectives of good scripting
  ■ reduce costs, enhance capabilities
● many types of scripting
  ■ structured, data-driven, keyword-driven
● keyword / DSTL is the most sophisticated
  ■ yields significant benefits
● increased productivity
  ■ customised front end, tool independence
Note in your Summary Sheet the key points for you from this session
Session 5: Automated Comparison
Ref. Chapter 4: Automated Comparison, "Software Test Automation"
Contents: Automated test verification; Test sensitivity
Perverse persistence: Michael Williamson
■ testing Webmaster Tools at Google (new to testing)
■ QA used Eggplant (an image processing tool)
■ a new UI broke the existing automation
■ automated 4 or 5 functions
■ comparing bitmap images – inaccurate and slow
■ testers had to do the automation maintenance
  ► not worth developers learning the tool's language
■ after 6 months, went for more appropriate tools
■ QA didn't use the automation – they tested manually!
  ► the tool was just running in the background
Chapter 17, pp 321-338, Experiences of Test Automation

Checking versus testing
■ checking confirms that things are as we think
  ► e.g. check that the code still works as before
■ testing is a process of exploration, discovery, investigation and learning
  ► e.g. what are the threats to value to stakeholders; give information
■ checks are machine-decidable
  ► if it's automated, it's probably a check
■ tests require sapience
  ► including "are the checks good enough?"
Source: Michael Bolton, www.developsense.com/blog/2009/08/testing-vs-checking/
General comparison guidelines
● keep it as simple as possible
● well documented
● standardise as much as possible
● avoid bit-map comparison
● poor comparisons destroy good tests
● divide and conquer:
  ■ use a multi-pass strategy
  ■ compare different aspects in each pass

Two types of comparison
● dynamic comparison
  ■ done during test execution
  ■ performed by the test tool
  ■ can be used to direct the progress of the test
    ► e.g. if this fails, do that instead
  ■ fail information written to the test log (usually)
● post-execution comparison
  ■ done after the test execution has completed
  ■ good for comparing files or databases
  ■ can be separated from test execution
  ■ can have different levels of comparison
    ► e.g. compare in detail only if all high-level comparisons pass
Comparison types compared (Scribble example)
(diagram: dynamic comparison happens within the test script scribble1.scp – e.g. "error message as expected?" – and is recorded in the log log.txt; post-execution comparison compares the edited document countries2.dcm with the expected output countries2.dcm, producing the differences file diffs.txt)

Comparison process
● few tools exist for post-execution comparison
● simple comparators come with operating systems but do not have pattern matching
  ■ e.g. Unix 'diff', Windows 'fc'
● text manipulation tools are widely available
  ■ sed, awk, grep, egrep, Perl, Tcl, Python
● use pattern matching tools with a simple comparator to make a 'comparison process'
● use masks and filters for efficiency
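Such a comparison process can be very small. A Python sketch combining regular-expression masks (the pattern-matching filter) with a simple equality comparator; the mask patterns shown are examples only, to be tuned to your own outputs:

import re

MASKS = [
    (re.compile(r"\d{1,2} \w+ \d{4}"), "<date>"),         # e.g. "11 May 2011"
    (re.compile(r"Order No \w+"), "Order No <orderno>"),  # mask order numbers
]

def apply_masks(lines):
    masked = []
    for line in lines:
        for pattern, replacement in MASKS:
            line = pattern.sub(replacement, line)
        masked.append(line)
    return masked

def compare(expected, actual):
    # the simple comparator: equality after the pattern-matching filter pass
    return apply_masks(expected) == apply_masks(actual)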
Contents: Automated test verification; Test sensitivity

Test sensitivity
● the more data there is available:
  ■ the easier it is to analyse faults and debug
● the more data that is compared:
  ■ the more sensitive the test
● the more sensitive a test:
  ■ the more likely it is to fail
  ■ (this can be both a good and a bad thing)
Sensitive versus specific (robust) tests
(diagram: a test is supposed to change only one field of the outcome, but an unexpected change occurs elsewhere; a sensitive test verifies the entire outcome and catches it, while a specific test verifies only the intended field and misses it)

Too much sensitivity = redundancy
(diagram: three tests each change a different field, and the same unexpected change occurs for every test; if all tests are sensitive, they all report the unexpected change – redundantly – while if all tests are specific, the unexpected change is missed)
Using test sensitivity
● sensitive tests:
  ■ few, at a high level
  ■ breadth / sanity-checking tests
  ■ good for regression / maintenance
● specific / robust tests:
  ■ many, at a detailed level
  ■ focus on specific aspects
  ■ good for development
● a good test automation strategy will plan a combination of sensitive and specific tests

Comparison – summary: key points
• Balance dynamic and post-execution comparison
• Balance sensitive and specific tests
• Use masking and filters to adjust sensitivity
Note in your Summary Sheet the key points for you from this session
Comparison example
● Expected output
  ■ Date 11 May 2011
  ■ Order No X43578
  ■ Login J Smith
  ■ Add 1 mouse
  ■ Add 1 mountain bike
  ■ Add_to_total $15.99
  ■ Add_to_total $249.99
  ■ Total_due $265.98
  ■ Logout J Smith
● Actual output
  ■ Login M Jones
  ■ Order No X54965
  ■ Add 1 mountain bike
  ■ Add_to_total $249.99
  ■ Add 1 toaster
  ■ Add_to_total $35.45
  ■ Logout J Smith
  ■ Total_due $285.34
  ■ Date 16 Aug 2011
Has this test passed?

Simple automated comparison
● a naive line-by-line comparison marks every one of the nine lines as Fail – the outputs are in a different order, so even the lines that are correct don't match
Filter: alphabetical order (sort both outputs before comparing)
■ Add 1 mountain bike / Add 1 mountain bike → Pass
■ Add 1 mouse / Add 1 toaster → Fail
■ Add_to_total $15.99 / Add_to_total $249.99 → Fail
■ Add_to_total $249.99 / Add_to_total $35.45 → Fail
■ Login J Smith / Login M Jones → Fail
■ Logout J Smith / Logout J Smith → Pass
■ Date 11 May 2011 / Date 16 Aug 2011 → Fail
■ Order No X43578 / Order No X54965 → Fail
■ Total_due $265.98 / Total_due $285.34 → Fail
The 1st Pass is a coincidence; the 2nd Pass is actually a bug!

Filters: replacement by object type (mask items, dates and order numbers)
■ Add <item> / Add <item> → Pass
■ Add <item> / Add <item> → Pass
■ Add_to_total $15.99 / Add_to_total $249.99 → Fail
■ Add_to_total $249.99 / Add_to_total $35.45 → Fail
■ Login J Smith / Login M Jones → Fail
■ Logout J Smith / Logout J Smith → Pass
■ Date <date> / Date <date> → Pass
■ Order No <orderno> / Order No <orderno> → Pass
■ Total_due $265.98 / Total_due $285.34 → Fail
This has helped eliminate the things we aren't interested in
Filters: replacement for everything?
■ masking every field – <item>, $<amt>, <name>, <date>, <orderno> – makes every line Pass
■ everything passed – isn't this great?
■ actually, no – we are no longer checking the name or the total

Variables – what needs to be done
■ "Variable=" is implemented as "store the actual value but ignore it in the comparison"
■ "=Variable/expression" is implemented as "check it is equal to"
■ Add 1 <item> → P
■ Add 1 <item> → P
■ Add_to_total T1=$249.99 → P (stored)
■ Add_to_total T2=$35.45 → P (stored)
■ Login NAME=M Jones → P (stored)
■ Logout J Smith =NAME? → F (the logout name differs from the stored login name)
■ Date <date> → P
■ Order <orderno> → P
■ Total_due $285.34 =T1+T2? → F ($249.99 + $35.45 = $285.44, not $285.34)
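A sketch of the store-and-check mechanism in Python, using the Login/Logout example from the slide; the interface (explicit store/check calls rather than a parsed "Variable=" notation) is simplified for illustration:

def make_variable_checker():
    stored = {}
    def store(name, value):
        # "Variable=": remember the actual value, but don't compare it
        stored[name] = value
        return True
    def check(name, value):
        # "=Variable": compare the actual value with the remembered one
        return stored.get(name) == value
    return store, check

store, check = make_variable_checker()
store("NAME", "M Jones")          # Login NAME=M Jones  -> stored, not compared
print(check("NAME", "J Smith"))   # Logout ... =NAME?   -> False: logout differs from login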
Session 6: Final Advice and Direction

What next?
● we have looked at a number of ideas about test automation today
● what is your situation?
  ■ what are the most important things for you now?
  ■ where do you want to go?
  ■ how will you get there?
● make a start on your test automation strategy now
  ■ adapt it to your own situation tomorrow
Strategy exercise
● your automation strategy / action plan
  ■ review the exercises
    ► automation objectives (3rd page)
    ► responsibility
    ► measurement
    ► architecture
● your strategy
  ■ identify the top 3 changes you want to make to your automation
  ■ note your plans now

Dealing with high-level management
● management support
  ■ building good automation takes time and effort
  ■ set realistic expectations
● benefits and ROI
  ■ make benefits visible (charts on the walls)
  ■ metrics for automation
    ► to justify it, compare with manual test costs over iterations
    ► automation health: on-going continuous improvement – build cost, maintenance cost, failure analysis cost – coverage of the system tested
Dealing with developers
● a critical aspect for successful automation
  ■ automation is development
    ► you may need help from developers
    ► automation needs development standards to work – testability is critical for automatability – why should they work to new standards if there is "nothing in it for them"?
  ■ seek ways to cooperate and help each other
    ► run tests for them – in different environments – rapid feedback from smoke tests
    ► help them design better tests?

Standards and technical factors
● standards for the testware architecture
  ■ where to put things
  ■ what to name things
  ■ how to do things
    ► but allow exceptions if needed
● new technology can be great
  ■ but only if the context is appropriate for it (e.g. Model-Based Testing)
● use automation "outside the box"
On-going automation
● you are never finished
  ■ don't "stand still" – schedule regular review and re-factoring of the automation
  ■ change tools and hardware when needed
  ■ re-structure if your current approach is causing problems
● regular "pruning" of tests
  ■ don't have "tenured" test suites
    ► check for overlap and removed features
    ► each test should earn its place

Information and web sites
■ www.TestAutomationPatterns.org
■ www.AutomatedTestingInstitute.com
  ► TestKIT Conference, Washington DC
■ tool information
  ► commercial and open source: http://testertools.com
  ► open source tools – www.opensourcetesting.org – http://sourceforge.net – http://riceconsulting.com (search on "cheap and free tools")
  ► LinkedIn group: QA Automation Architect
■ www.ISTQB.org
  ► Expert level in Test Automation (in progress)
Summary: successful test automation
• assigned responsibility for automation tasks
• realistic, measured objectives (testing ≠ automation)
• technical factors – architecture, levels of abstraction, DSTL, scripting, comparison, pre- and post-processing
• management support, ROI, continuous improvement
Free book!
Note in your Summary Sheet the key points for you from this session

and now …
● any final questions / comments?
● please evaluate this tutorial (against its objectives)
  ■ high mark – thanks very much!
  ■ low mark – please explain (so we can improve)
    ► (see Session 0 for the tutorial objectives, what is covered and what is intentionally excluded)
● email Dot for the ROI spreadsheet / tool list (info@DorothyGraham.co.uk)
● email Mark for automation consultancy & questions (mark@grove.co.uk)
Any more questions? Please email me! mark@grove.co.uk
Thank you for coming today – I hope this will be useful for you. All the best with your automation!