8. Manual v. automated
• Automation is cool and gets respect … but …
– Automation is code
• like any code it takes time to write, has bugs, causes
false positives and requires maintenance
• it will have an imperfect oracle
• Manual testing is tedious … but …
– It finds over 90% of the bugs in the system
9. Manual testing
• Scripted manual testing
– From rigid (click this button, enter this value) to
flexible (achieve this result)
– Requires preparatory work
– Scripts act as their own documentation
• Exploratory testing
– Minimizes preparatory work
– Documentation of testing is extra work
10. Manual testing
• We need to get better at it … A LOT better at it
• What is the right prep time v. test time?
• What are the right parts of the app to focus
on?
– Can we pinpoint problematic functionality?
– Can we focus on changed code?
– Should we focus on buggy parts?
• Where is the method in the madness?
12. Exploratory testing
• Freestyle exploratory testing
– Ad hoc exploration of an application’s features
– No rules; no need to account for coverage,
past tests, etc.
– Good for:
• familiarizing yourself with an application to prepare for
more sophisticated methods
• quick smoke tests, sometimes finds bugs
13. Exploratory testing
• Scenario-based exploratory testing
– An extension of traditional scenario testing
– Start with:
• user stories
• e2e scenarios
• test scenarios
– Then widen the scope and inject variation as you
execute the scenarios
14. Exploratory testing
• Strategy-based exploratory testing
– Freestyle testing combined with known bug
finding techniques
• documented strategies like boundary values and
combinatorial testing
• intuitive instinct from experience or familiarity with the
application
– Focus is on learning tips and tricks and recognizing
when to apply them
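The documented strategies named above can be mechanized. A minimal sketch, assuming a hypothetical integer-range input and a toy three-parameter app; note that the exhaustive product below trivially achieves all-pairs coverage, whereas a real pairwise tool would generate a much smaller covering set:

```python
from itertools import combinations, product

def boundary_values(lo, hi):
    # Classic boundary-value picks for an integer range [lo, hi]:
    # just outside, on, and just inside each boundary.
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def all_pairs_covered(cases, params):
    # True if `cases` covers every pair of values drawn from two
    # different parameters -- the coverage goal of pairwise testing.
    flat = [(i, v) for i, vs in enumerate(params) for v in vs]
    needed = {(a, b) for a, b in combinations(flat, 2) if a[0] != b[0]}
    covered = set()
    for case in cases:
        covered.update(combinations(enumerate(case), 2))
    return needed <= covered

# Hypothetical parameters for an app under test.
params = [["admin", "guest"], ["on", "off"], ["fast", "slow"]]
cases = list(product(*params))   # exhaustive, so all pairs are covered
```

A tester can use `boundary_values` to pick inputs for any numeric field and `all_pairs_covered` to check whether an exploratory session has, perhaps accidentally, achieved pairwise coverage.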
15. Exploratory testing
• Feedback-based exploratory testing
– Starts out freestyle until a test history is built up
– That feedback then guides future exploration
• Test history
• Coverage
• Churn
• …
– Tools are often an integral part of this type of
testing as they ‘remember’ history for us
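A sketch of how a tool might turn churn feedback into guidance, assuming a hypothetical mapping from tests to the files they exercise and per-file churn counts pulled from version control; both data sources are assumptions for illustration:

```python
def prioritize_tests(test_files, churn):
    # Rank tests by the total recent churn of the files they touch,
    # so exploration starts where the code has changed the most.
    def score(test):
        return sum(churn.get(f, 0) for f in test_files[test])
    return sorted(test_files, key=score, reverse=True)

# Hypothetical history a feedback tool could 'remember' for us.
churn = {"cart.py": 9, "auth.py": 4, "about.py": 0}
test_files = {
    "test_checkout": ["cart.py", "auth.py"],
    "test_login":    ["auth.py"],
    "test_about":    ["about.py"],
}
ranked = prioritize_tests(test_files, churn)
```

The highest-churn test (`test_checkout`, touching two recently changed files) comes first, which is exactly the "use the history to guide future exploration" idea from this slide.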
16. The tourist metaphor
• You’re visiting Hyderabad for the first time
– You can go it alone
– You can buy a guidebook
– You can hire a guide
– You can take the bus tour
• On return visits you can
– Repeat the best tours
– Choose to visit new places
17. The tourist metaphor
• You’re testing an application
– What do you use as a guide?
– Don’t just wander the app
• Always have a goal in mind
• When presented with a choice, allow the goal to guide
you
• Track which goals work for you
– Coverage of code, features, UI
– Find important bugs
– Force the software to best exhibit its functionality
18. The guidebook tour
• Guidebooks help tourists navigate choices
– Boil down the possibilities to a ‘best of’ list
– Many tourists follow this advice exclusively
• Use
– User manual, follow its advice
– Online help (the F1 tour)
– Third party advice (the blogger’s tour)
– Detractors (the pundit’s tour)
– Competing products (the competitor’s tour)
19. The antisocial tour
• Tours have rules, and some places are off limits
• Antisocial tourists ignore those rules
– Enter illegal input (the crime spree tour)
– Enter the least likely input (the opposite tour)
– Choose an input, change your mind, use undo, use
cancel, then do it all over again
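The crime-spree tour can be scripted against any input handler. A minimal sketch, assuming a toy `parse_age` validator (hypothetical, not from the slides) that should reject every illegal input with a clean error rather than accept it or crash:

```python
def parse_age(text):
    # Toy input handler under test (hypothetical).
    value = int(text)              # raises ValueError on non-numeric text
    if not 0 <= value <= 130:
        raise ValueError(f"age out of range: {value}")
    return value

# The crime-spree tour: feed the least legal input we can think of.
illegal = ["", "-1", "999", "DROP TABLE users", "\x00", "2e3", "∞"]

failures = []
for attempt in illegal:
    try:
        parse_age(attempt)
        failures.append(attempt)   # accepted illegal input: a bug
    except ValueError:
        pass                       # rejected cleanly, as it should be
```

An empty `failures` list means every illegal input was turned away; any entry in it is a bug report waiting to be written.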
20. The back alley tour
• Explore the back alleys of your app
• Do you track feature usage?
– Tour the least popular features
– Mix and match popular and unpopular features
(mixed destination tour)
21. The couch potato tour
• Being a nonparticipant on a tour is a waste of
money
• For a tester, though, it sometimes finds bugs
– Do as little as possible on every screen
– Leave fields blank, enter as little info as possible,
always take the shortest path
• Why?
– Default values have to be set and used
– Decisions have to be made by the software
– Just because you aren’t working hard doesn’t mean
the software isn’t either
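A couch-potato test in code: call an API supplying as little as possible and check that the defaults still produce something well formed. The `create_invoice` function is a hypothetical stand-in for any feature with optional inputs:

```python
def create_invoice(customer, currency="USD", lines=None, notes=""):
    # Toy API (hypothetical): every argument but one left at its default.
    lines = lines if lines is not None else []
    total = sum(qty * price for qty, price in lines)
    return {"customer": customer, "currency": currency,
            "lines": lines, "notes": notes, "total": total}

# Do as little as possible and see what the defaults produce.
# Even with no line items, the invoice must be complete and consistent.
invoice = create_invoice("acme")
```

The software did all the work: it chose the currency, built the empty line list, and computed a zero total. Each of those decisions is a place a default can be wrong.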
22. The obsessive compulsive tour
• OCD tours would likely be quite unpopular for
real tourists
– Avoiding sidewalk cracks
– Walking the same street over and over obsessively
• But testers can use repetition to their
advantage
– Developers often code based on an expected path
– Repeating inputs individually and in sequence can
often lead to bugs
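Repetition is easy to automate. A minimal sketch with a toy stateful `Counter` (hypothetical): repeat the same input a thousand times, unwind it the same way, and check that the state comes back exactly where it started:

```python
class Counter:
    # Toy stateful component (hypothetical) to hammer with repetition.
    def __init__(self):
        self.value = 0

    def add(self, n):
        self.value += n

    def undo_add(self, n):
        self.value -= n

c = Counter()
# The obsessive-compulsive tour: same input, over and over,
# then the matching undo, over and over.
for _ in range(1000):
    c.add(7)
for _ in range(1000):
    c.undo_add(7)
```

Developers code for one pass through the expected path; a thousand identical passes expose accumulation errors, overflow, and undo logic that doesn't quite invert the original operation.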
23. The all nighter tour
• The ‘clubbing tour’ is about staying out all
night, a real test of the constitution
• Software should stay out all night too
– Keep it working, never shut it down
– Keep files open without saving or closing them
• This tests time-out functionality and finds
memory problems and leaks that are masked
by shutdown and restart
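Leaks that shutdown-and-restart would mask can be surfaced in a long run by watching memory while the app "stays out all night." A sketch using Python's standard `tracemalloc`, with a toy `process_request` handler that deliberately leaks by caching every payload (the handler and its leak are assumptions for illustration):

```python
import tracemalloc

def process_request(cache, payload):
    # Toy handler (hypothetical) that caches every payload it sees --
    # exactly the kind of slow leak an all-nighter run exposes.
    cache.append(payload * 100)

cache = []
tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
for i in range(10_000):           # keep it working, never shut it down
    process_request(cache, f"req-{i}")
after, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()
grew = after - before             # memory that never came back
```

A healthy handler's footprint flattens out; here `grew` climbs linearly with requests, which is the signature a tester is hunting for on this tour.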
24. The TOGOF tour
• Test-One-Get-One-Free
• Run multiple copies of the app
• Put them through their paces
• Try to open the same files, access the same
resources … can they coexist?
• Why TOGOF? Because if you find a bug in one
copy, you’ve broken them all
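A TOGOF check in miniature: two copies of a toy editor contend for the same document. The lock-file scheme below is an assumption, a common convention (a `.lock` sidecar created with `O_CREAT | O_EXCL`, which is atomic) rather than any particular app's design:

```python
import os
import tempfile

class Editor:
    # Toy single-document editor (hypothetical) using a .lock sidecar
    # so two running copies can't clobber each other's saves.
    def __init__(self, path):
        self.path = path
        self.lock = path + ".lock"

    def open_exclusive(self):
        try:
            fd = os.open(self.lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return True            # this copy now owns the document
        except FileExistsError:
            return False           # another copy already owns it

    def close(self):
        os.remove(self.lock)

path = os.path.join(tempfile.mkdtemp(), "doc.txt")
copy_a, copy_b = Editor(path), Editor(path)
first = copy_a.open_exclusive()    # first copy gets the file
second = copy_b.open_exclusive()   # second copy must be refused
copy_a.close()
retry = copy_b.open_exclusive()    # succeeds once the first lets go
copy_b.close()
```

If the second copy had also been granted the file, both would be poised to overwrite each other's work: find that bug in one copy and you've broken them all.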
25. The saboteur
• The sabotage tour
• Ask the software to do something
– Access a resource
– Open a file
– Perform a task
• Understand the resources it uses
– Memory
– Files
– Network
• Then prevent it from doing so
– Take away memory
– Delete/rename files
– Turn off the network
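Sabotage scripted: let the software start a task, then yank the resource away and see whether it degrades gracefully or crashes. The `load_config` loader is a toy stand-in (hypothetical) for any file-reading feature:

```python
import os
import tempfile

def load_config(path):
    # Toy loader under test (hypothetical): when its file is sabotaged
    # out from under it, it must fail loudly but gracefully -- a None
    # return here, never an unhandled traceback.
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return None

path = os.path.join(tempfile.mkdtemp(), "settings.ini")
with open(path, "w") as f:
    f.write("theme=dark")

assert load_config(path) == "theme=dark"   # normal operation works
os.remove(path)                            # the saboteur strikes
result = load_config(path)                 # should degrade, not explode
```

The same pattern applies to the other sabotage targets on this slide: exhaust memory before an allocation, drop the network mid-request, and watch what the error path actually does.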
26. The collector’s tour
• Some people need to do it all
– Pictures of landmarks, samples at a wine tasting,
signatures from the Disney characters
– They don’t want to miss anything
• Testers can learn from this
– Force the software to produce every output
• Buy items from every department
• Put every special character in an email
• Fill a document with every possible graphic
• …
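The collector's checklist can be automated: enumerate the outputs the software is capable of, then drive inputs until every one has been observed. A minimal sketch with a toy `classify` function (hypothetical) whose full output set is known:

```python
def classify(n):
    # Toy function (hypothetical) with a small, known set of outputs.
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "odd" if n % 2 else "even"

# The collector's tour: don't stop until every output has been seen.
wanted = {"negative", "zero", "odd", "even"}
seen = {classify(n) for n in range(-2, 5)}
missing = wanted - seen            # outputs the tour failed to collect
```

A non-empty `missing` set tells the tester exactly which souvenirs are still uncollected, turning "don't miss anything" into a checkable condition.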
27. The supermodel tour
• This tour goes only skin deep
– Ignore the function and meaning and look only at
the interface
– It’s about first impressions
• Watch the interface elements
– Do they look good and render properly?
– Are interface elements consistent across the app?
– As you make changes does the UI refresh properly?
28. Types of exploratory testing
• Scenario
– Start with a scenario
– Then inject variation
• Strategy
– Start with accumulated testing knowledge
– Freestyle guidance
– Tour metaphors
• Feedback
– Test for a while
– Build up a history
– Use the history to guide additional testing
29. Obstacles to exploratory testing
• Where’s the accountability?
• How do we track completeness?
• What metrics are important?
WE NEED TOOLS!
Perhaps this is a case of feature conflict… it's the 'pub crawl across England starting in Newcastle' feature conflicting with the 'don't drive through Amsterdam when the wife is in the car' feature, resulting in a 1,600-mile unwanted detour.
Someone actually wrote this error message…
Bug bashes, the first round of bugs in any app, the initial parts of development.
Finds the most important set of bugs – those that users will hit in mainline scenarios. UAT is largely composed of such testing.
Some people are just better testers: a nose for defects, and tricks and tips that help them find bugs in even the most-tested software.
Different teams have different metrics that matter. Coverage could be one. Churn is a smart metric, but no tools yet can tell us which tests are impacted by the churned code.