The Limits of Testing
and How to Exceed
Them
Craig Stuntz
https://speakerdeck.com/craigstuntz
?
I have a couple of questions for you.

How many of you are QAs? Developers? (I’m a developer)

Consider the product you’re working on right now.
What kind of bugs do
you want in your final,
QA approved product?
“None?”
People want to say “none,” but that’s setting a high bar to clear.
https://www.flickr.com/photos/filipbossuyt/21409291292/
Not impossible, though! Jumping over a bar 2 meters in the air isn’t easy, but it can be done if you’re prepared to work at it. Most people (product owners?) will be
unwilling to pay the price.

So if you want no defects, I’ll tell you how to do that. Cut most of your features. Then do it again.
80/20
80/20 rule for software: If you cut 80% of the features, maybe 20% of users will notice.
Most software has far too many features.

This is the bottom of the third page of fifth tab of the options dialog for the Java plug-in. If you select the highlighted option, this instructs Java to not install malware on
your machine during security updates. Naturally, it’s de-selected by default.

So there is plenty of room to eliminate features.
What kind of bugs do
you want in your final,
QA approved product?
https://www.flickr.com/photos/10159247@N04/4335602802/
In ancient times, programmer life was simple. Dinosaurs roamed the Earth, we didn’t write unit tests, and we employed people to slowly and painstakingly find bugs for
us.
<BEEP!>
<BEEP!>
And then we decided testing was good.

And then people said we should test all the …. time.

And from then on our software was perfectly reliable and secure. We can all go home.

That’s the end of my presentation, thanks for coming….
Now it turns out that fuzzing software makes security bugs jump out at you in a way that tests never will.
Now it turns out that static analysis makes resource leak bugs jump out at you in a way that tests never will.

Now it turns out that…

Wait. This is getting complicated.

What to do?
Agenda
• Whole project quality
• Goal Driven
• Realistic
I’m interested in building correct software. Sometimes people start by writing this off as impossible.

It’s easier to dismiss something as impossible than to ask if you can bite off a big chunk of it.

Whole project quality - not just individual pieces of testing piled on top of each other

Goal driven - Other techniques complement testing to find errors that unit tests can’t find

Realistic - These methods are useful on real-world software, today
https://www.flickr.com/photos/taylor-mcbride/3732682242/
It turns out the QA landscape is huge and there are some beautiful techniques available that you can combine to implement a realistic plan for achieving a desired level of
quality.

The biggest danger that will stop you from getting there is looking to just one technique to solve all your problems. Focus on the goal, not the mechanism.
Immediate
Digression
Manual Testing
Really useful, but doesn’t fit the theme of the rest of the talk.

Still, really useful, so let’s talk anyway!

How is manual testing fundamentally different than unit tests, automated tests, etc.?
Sometimes we think of manual testing as poking weird values into inputs. And hey, it works sometimes: Both Android and iPhone lock screens broken by “boredom
testing.” But computers can do this faster.

The best application for manual testing: What is something that computers can never do by themselves today?
https://medium.com/backchannel/how-technology-led-a-hospital-to-give-a-patient-38-times-his-dosage-ded7b3688558
This is life or death. This is an alert screen.

Epic EMR. One of 17,000 alerts the UCSF physicians received in that month alone.

Contains the number “160” in 14 different places.

Nurse clicked through this and patient received 38 1/2 tablets of an antibiotic. He’s fortunate to have survived.

Use human testing for things only people can do!
https://lobste.rs/s/fdmbn5
For the rest of this presentation I’m going to talk about tests performed by a computer.

For many people, unit tests are both a design methodology and the first line of defense against bugs.
Let’s Write a parseInt!
let parseInt (value: string) : int = ???
Because I’m a NIH (Not Invented Here) developer, and because it’s a really simple example to play with, I’ll write my own parseInt. It’s simple, right? Maybe too simple to say anything worthwhile?
Test First!
[<TestMethod>]
member this.``Should parse 0``() =
    let actual = parseInt "0"
    actual |> should equal 0
But I believe in test first and TDD, so… What sort of tests do I need for parseInt?

This looks like a good start. Of course, this test does not pass yet, because I haven't implemented the method. That failure is an important piece of information! If I can’t
parse 0, my parseInt isn’t very good.

So let's say that I go and implement some parseInt code. At least enough to make the test pass. Now, this test tells me very little about the correctness of the method.
That's interesting! Implementing the method removed information from the system! That seems really weird, but…
Test First!
[<TestMethod>]
member this.``Should parse 0``() =
    let actual = parseInt "0"
    actual |> should equal 0

[<TestMethod>]
member this.``Should parse 1``() =
    let actual = parseInt "1"
    actual |> should equal 1
Maybe I should add another test.

Am I missing anything?
Test First!
[<TestMethod>]
member this.``Should parse -1``() =
    let actual = parseInt "-1"
    actual |> should equal -1

[<TestMethod>]
member this.``Should parse with whitespace``() =
    let actual = parseInt " 123 "
    actual |> should equal 123
Test First!
[<TestMethod>]
member this.``Should parse +1 with whitespace``() =
    let actual = parseInt " +1 "
    actual |> should equal 1

[<TestMethod>]
member this.``Should do ??? with freeform prose``() =
    let actual = parseInt "A QA engineer walks into a bar…"
    actual |> should equal ???
Anything else? null, MaxInt+1, non-space whitespace (tabs, newlines), MaxInt, MinInt, 1729?

I’m starting to realize I have more questions than answers!
More Questions
• Is this for trusted or non-trusted input?
• Can I trust that my function will be invoked
correctly?
• What is the culture of the input?
1) Trusted = exception; untrusted = fail gracefully.

2) For a private method, maybe. For a library function, no! Need tests per invocation?

3) Culture-specific grouping separators (“,”), currency symbols (“$”), etc.?

It sounds like we might need a lot of tests. How many? Does it seem weird that we’re talking more about corner cases than “success?” Does this teeny little helper
method really need to be perfect? I just wanna parse 123!
Getting one digit wrong really can get your company into the headlines.

Also, what about security-sensitive code? Hashes, RNGs.

Does it seem like test case suggestions focused on error cases? Even if 90% of the time we get expected input, I’m far more interested in the reasons which explain 90%
of the failures.
Bad Error
Handling Kills
“Almost all catastrophic
failures (92%) are
the result of incorrect
handling of non-fatal
errors explicitly
signaled in software.”
https://www.usenix.org/conference/osdi14/technical-sessions/presentation/yuan
The study only tested software designed for high reliability (Cassandra, HDFS, Hadoop…).

“But it does suggest that top-down testing, say, using input and error injection techniques, will be challenged by the large input and state space.”
Simple Testing Can Prevent Most Critical Failures, Yuan et al.
92% of the time the catastrophe was caused not by the error itself but rather the combination of the error and then handling it incorrectly!
How Can I Be
Completely Confident
in a Simple Function?
(Or at least do the right thing when it fails)
(And also ensure it’s always called correctly)
(Every. Single. Time)
Let’s face it, this is the bare minimum first step for trusting an application, right?

You might ask, “Why is this idiot rambling on about parseInt? I have 10 million lines of code to test.” I think it’s sometimes informative to start with the simplest thing
which could possibly work.
Unit Tests Are Great At:
• Helping you think through bottom-up designs
• Preventing regressions
• Getting you to the point where at least something works

Unit Tests Are Not So Helpful At:
• Showing overall design consistency (top-down)
• Finding security holes
• Proving correctness or totality of implementation
We can use techniques like strong typing, fuzzing, and formal methods to complement testing and gain more control over code correctness. You will still need tests, but you’ll get much more “coverage” with fewer tests.

Looking at the lists here, a theme emerges. To write a test, you needed a mental model of how your function should work. Having written the tests, however, you have
thrown away the model. All that's left are the examples.
When My Test Fails:
I know I’ve found a bug
(useful!)

When My Test Passes:
I know my function works for at least one input out of billions
(maybe useless?)
Does this make sense to everyone? Do you agree that a passing test doesn’t tell you much about the overall quality of the system?

Is there a way to ensure we always get correct output for any input? 

Yes, but before we even get there, there’s a bigger problem we haven’t talked about yet.
How Can I Be Completely
Confident When
Composing Two Functions?
(Composing two correct functions should produce
the correct result.)
(Every. Single. Time)
Let’s face it, this is the bare minimum second step for trusting an application, right?

More generally, I would like to be able to build complete, correct programs from a foundation of correct functions. Now verifying my 10 million lines of code is easy; start
with correct functions, then combine them correctly!
parseIntAndReturnAbsoluteValue = abs ∘ parseInt
If I have two good functions, like abs and parseInt, I would like to be able to combine them in order to produce a correct program.

But there’s a problem: parseInt, as written, isn’t total — that is, it doesn’t produce a valid result for every possible input. I can call it with strings which aren’t integers, and it’s really hard to use tests to ensure I call it correctly 100% of the time. How do I know it will always return something useful?
let parseInt (str) =
    // implementation
One thing I need to do is ensure that people call my function passing a string as the argument, and that the thing it returns is actually an integer, in every case.
let parseInt (value: string) : int =
    // implementation
That’s not too hard. I can prove this with the type system.

As long as I don’t do anything which subverts the type system (unsafe casts, unchecked exceptions, null — or use a language which won’t allow it!), I can at least be sure
I’m in the right ballpark.

But how do I ensure I’m only passed a string representing an integer? Or should I? Can I force the caller to “do the right thing” and handle the error if they happen to
pass a bad value.
public static bool TryParse(
    string s,
    out int result
)
{
    // ...
}
Again, you can do it with the type system! I’m showing a C# example here, since the idiomatic F# solution is different.
public static bool TryParse(
    string s,
    out int result
)
{
    // ...
}

// appropriate when input is “trusted”
int betterBeAnInt = ParseInt(input);

// appropriate for untrusted input
int couldBeAnInt;
if (TryParse(input, out couldBeAnInt))
{
    // ...
It is now difficult to invoke the function without testing success. You have to go out of your way. This probably eliminates the need to use tests to ensure that every case
in which this function is invoked checks the success of the function.

Consider input validation. Bad input is in the contract. Exceptions inappropriate. Instead of returning an integer, return an integer and a Boolean.
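The talk’s example is C#; as an illustrative sketch of the same try-parse idiom in Python (which has no out parameters), we can return a success flag alongside the value, so callers are pushed to check before using the result. The helper name `try_parse_int` is my own, not from the talk.

```python
# Sketch of the TryParse idiom in Python (hypothetical helper):
# instead of raising on bad input, return a (success, value) pair
# so the caller must test success before using the result.
def try_parse_int(s: str) -> tuple[bool, int]:
    try:
        return True, int(s.strip())
    except ValueError:
        return False, 0

# Untrusted input: the call site is forced to check the flag.
ok, n = try_parse_int(" +123 ")
if ok:
    print(n)  # 123
```

Because bad input is part of the contract for untrusted callers, the failure path is an ordinary return value rather than an exception.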
But There’s Still The
Matter of That String
Argument
We can prove that we do the right thing when our parseInt correctly classifies a given input value as a legitimate integer and parses it, or rejects it as invalid, but how can
we show that we do that correctly? Aren’t we back at square one? Types are super neat because you get this confidence essentially for free, and it never fails, but even
the F# type system can’t make sure I return the right integer.
State Space
[Diagram: a black box with possible inputs {0, 1} and possible outputs {A, B}]
In principle, your app, or your function, is a black box. Same input, same output. Easy to test, right?

This application should have only two possible states!

To be totally confident in your system you need to test, by some means, the entire state space (LoC discussion).
State Space
[Diagram: the same black box, now with string inputs {“Hello”, “World”} and outputs {A, B}, plus a die (randomness) and a clock (time) as hidden inputs]
It gets harder quickly. If my inputs are two strings instead of two bits, I now have considerably more possible test cases!

(Click) In the real world, you have additional “inputs” like time and randomness, and whatever is in your database.
Formal Methods
Using formal methods means the design is driven by a mathematical formalism. By definition, this is not test driven development, although you will probably still write
tests. Formal methods are sometimes considered controversial in the software development community, because they acknowledge the existence and utility of math.
____ + 1234 ____
[ \t]*[+-]?[0-9]+[ \t]*
It’s easier to use formal methods if there’s an off-the-shelf formalism you can use. For the problem of parsing, these exist!

One way to reduce the input domain of the parseInt function from an untestably large number of potential states is to use a regular expression. This is not the sort of regular expression you might encounter in Perl or Ruby; it is a much more restricted syntax typically used on the front end of a compiler. The important point here is that we can reduce the number of potential states of the function to a number that you can count on your fingers.
[State machine diagram: states 0–4, with transitions on [ \t], [+-], and [0-9]; states 3 and 4 are accepting]
Regular expressions convert to finite state machines.

States 3 and 4 are accepting states.

Four or five states, two of them accepting — far fewer than “any possible string”!
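The deck doesn’t show the compiled machine’s code, so as a sketch of the idea in Python: the regular expression `[ \t]*[+-]?[0-9]+[ \t]*` can be hand-compiled into a DFA with a handful of states. The state numbering below is my own and may not match the slide’s diagram.

```python
# A minimal hand-compiled DFA (assumed encoding) recognizing
# [ \t]*[+-]?[0-9]+[ \t]* — four states, two of them accepting.
def is_int_literal(s: str) -> bool:
    WS, SIGN, DIGITS = " \t", "+-", "0123456789"
    state = 0           # 0: leading whitespace, 1: after sign
    accepting = {2, 3}  # 2: digits, 3: trailing whitespace
    for ch in s:
        if state == 0 and ch in WS:            state = 0
        elif state == 0 and ch in SIGN:        state = 1
        elif state in (0, 1) and ch in DIGITS: state = 2
        elif state == 2 and ch in DIGITS:      state = 2
        elif state == 2 and ch in WS:          state = 3
        elif state == 3 and ch in WS:          state = 3
        else:
            return False  # no transition defined: reject
    return state in accepting

print(is_int_literal(" +123 "))  # True
print(is_int_literal("12a"))     # False
```

Once the input has been accepted by the machine, the parsing code only ever sees one of these few states, instead of “any possible string.”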
Totality checking. Breaking my vow to avoid showing implementations.

Lots of code here, but the important word is at the top.

I’ve hesitated about showing implementations until now, but I can’t avoid it here, because…

The proof is built into the implementation
When My Test Fails:
I know I’ve found a bug
(useful!)

When My Type Checker Fails:
I might have a bug
(sometimes useful, sometimes frustrating)

When My Test Passes:
I know my function works for at least one input out of billions
(maybe useless?)

When My Type Checker Passes:
There is a class of bugs which cannot exist
(awesome!)
We can expand this chart now.

Tests and types are not opponents; they complement each other.

Where one succeeds, the other fails, and vice versa.
Property Based
Testing
Still, there are cases where it’s hard to use formal methods.

Not every problem has an off-the-shelf formalism ready to use. 

But we don’t have to just give up and accept unit tests as the best we can do!
let parsedIntEqualsOriginalNumber =
    fun (number: int) ->
        number = parseInt (number.ToString())
> open FsCheck;;
> Check.Quick parsedIntEqualsOriginalNumber;;
Falsifiable, after 1 test (1 shrink) (StdGen
(1764712907,296066647)):
Original:
-2
Shrunk:
-1
val it : unit = ()
>
Can you state things about your system which will always be true?

What must be true for my system to work?

Looks like I have to do some work on my implementation here!

Important: I didn’t have to specify the failing case, as I would with a unit test. FsCheck found it for me. In unit testing, you start with a mental model of the specification,
and write your own tests. With property based testing, you write down the specification, and the tests are generated for you.
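To make the mechanism concrete, here is a deliberately minimal, hand-rolled sketch of the same idea in Python. Real property-based testing tools like FsCheck or Hypothesis add shrinking and much smarter generators; this only shows the core loop of generating random inputs and checking a stated invariant against all of them.

```python
# Hand-rolled property check (illustrative only): state an invariant,
# then throw many random inputs at it instead of a few hand-picked ones.
import random

def parsed_int_equals_original(number: int) -> bool:
    # Round-trip property: parsing the printed form returns the number.
    return int(str(number)) == number

random.seed(0)  # deterministic run for reproducibility
for _ in range(100):
    n = random.randint(-10**6, 10**6)
    assert parsed_int_equals_original(n), f"Falsifiable: {n}"
print("100 random cases passed")
```

Note that, as in the FsCheck example, you write down the property once; the failing cases (if any) are found for you.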
PBT: Great for helping to find bugs in specific routines.

Fuzzing: Great for finding unhandled errors in entire systems.
http://colin-scott.github.io/blog/2015/10/07/fuzzing-raft-for-fun-and-profit/
It often makes sense to write a custom fuzzer. It’s not hard, and the return is huge. This example is closer to property-based testing, since it uses the stated invariants from the Raft specification to test an implementation.

(Policy story)
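As a toy illustration (far cruder than real fuzzers, and all names below are my own): the basic fuzzing loop is to feed random inputs to a target and flag anything that fails in an undocumented way.

```python
# Toy fuzzer sketch: throw short random strings at a parser and report
# any input that raises something other than the documented error type.
import random
import string

def fuzz_parse(trials: int = 1000) -> list[str]:
    random.seed(42)  # reproducible fuzzing run
    surprises = []
    for _ in range(trials):
        s = "".join(random.choice(string.printable)
                    for _ in range(random.randint(0, 8)))
        try:
            int(s)
        except ValueError:
            pass                  # documented failure mode: fine
        except Exception:
            surprises.append(s)   # unexpected crash: a bug to investigate
    return surprises

print(len(fuzz_parse()))  # 0 for int(), which rejects bad input cleanly
```

A real fuzzer would mutate known-good inputs and use coverage feedback, but even this crude loop demonstrates why fuzzing surfaces unhandled error paths that hand-written tests rarely reach.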
Runtime Validation
Sometimes the most important value to test is the only one that matters to you at runtime.

Assertions are a little under-used, because we tend to think of them as checking trivial things.

But using the techniques of property-based testing, we can do end to end validation of our system.
let input = " +123 "
let number = parseInt input      // 123
let test = number.ToString()     // "123"
if test <> input                 // true!
then
    let testNumber = parseInt test   // 123
    if number <> testNumber          // false (yay!)
    then failwith "Uh oh!"
// We’re safe now! Use number…
Similar to property based testing
http://lefthandedgoat.github.io/canopy/
Integration testing should always be automated. Deals with coupling between systems not covered by type safety (DB, DOM, etc.)

Use Canopy

Also: write integration test method.
The Quality Landscape
• Manual testing
• Integration tests
• Unit tests
• Runtime validation
• Property based testing
• Fuzzing
• Formal methods
• Static analysis
• Type systems
• Totality checking
The long and the short of it: Think big! Don’t “test all the ___ing time” because somebody told you to. Keep your eyes on the prize of software correctness.

Ask yourself which things are most important to the overall quality of your system. Pick the tool(s) which give you the biggest return. 

Synopsis of each.
Craig Stuntz
@craigstuntz
Craig.Stuntz@Improving.com
http://blogs.teamb.com/craigstuntz
http://www.meetup.com/Papers-We-Love-Columbus/

 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 
Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...
Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...
Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
 
Spring Boot vs Quarkus the ultimate battle - DevoxxUK
Spring Boot vs Quarkus the ultimate battle - DevoxxUKSpring Boot vs Quarkus the ultimate battle - DevoxxUK
Spring Boot vs Quarkus the ultimate battle - DevoxxUK
 
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
 
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
 

The limits of unit testing by Craig Stuntz

  • 1. The Limits of Testing and How to Exceed Them Craig Stuntz https://speakerdeck.com/craigstuntz
  • 2. ? I have a couple of questions for you. How many of you are QAs? Developers? (I’m a developer) Consider the product you’re working on right now.
  • 3. What kind of bugs do you want in your final, QA approved product?
  • 4. “None?” People want to say “none,” but that’s setting a high bar to clear.
  • 5. https://www.flickr.com/photos/filipbossuyt/21409291292/ Not impossible, though! Jumping over a bar 2 meters in the air isn’t easy, but it can be done if you’re prepared to work at it. Most people (product owners?) will be unwilling to pay the price. So if you want no defects, I’ll tell you how to do that. Cut most of your features. Then do it again.
  • 6. 80/20 80/20 rule for software: If you cut 80% of the features, maybe 20% of users will notice.
  • 7. Most software has far too many features. This is the bottom of the third page of fifth tab of the options dialog for the Java plug-in. If you select the highlighted option, this instructs Java to not install malware on your machine during security updates. Naturally, it’s de-selected by default. So there is plenty of room to eliminate features.
  • 8. What kind of bugs do you want in your final, QA approved product?
  • 9. https://www.flickr.com/photos/10159247@N04/4335602802/ In ancient times, programmer life was simple. Dinosaurs roamed the Earth, we didn’t write unit tests, and we employed people to slowly and painstakingly find bugs for us.
  • 10. <BEEP!> <BEEP!> And then we decided testing was good. And then people said we should test all the …. time. And from then on our software was perfectly reliable and secure. We can all go home. That’s the end of my presentation, thanks for coming….
  • 11. Now it turns out that fuzzing software makes security bugs jump out at you in a way that tests never will.
  • 12. Now it turns out that static analysis makes resource leak bugs jump out at you in a way that tests never will. Now it turns out that… Wait. This is getting complicated. What to do?
  • 13. Agenda • Whole project quality • Goal Driven • Realistic I’m interested in building correct software. Sometimes people start by writing this off as impossible. It’s easier to dismiss something as impossible than to ask if you can bite off a big chunk of it. Whole project quality - not just individual pieces of testing piled on top of each other Goal driven - Other techniques complement testing to find errors that unit tests can’t find Realistic - These methods are useful on real-world software, today
  • 14. https://www.flickr.com/photos/taylor-mcbride/3732682242/ It turns out the QA landscape is huge and there are some beautiful techniques available that you can combine to implement a realistic plan for achieving a desired level of quality. The biggest danger that will stop you from getting there is looking to just one technique to solve all your problems. Focus on the goal, not the mechanism.
  • 15. Immediate Digression Manual Testing Really useful, but doesn’t fit the theme of the rest of the talk. Still, really useful, so let’s talk anyway! How is manual testing fundamentally different than unit tests, automated tests, etc.?
  • 16. Sometimes we think of manual testing as poking weird values into inputs. And hey, it works sometimes: Both Android and iPhone lock screens broken by “boredom testing.” But computers can do this faster. The best application for manual testing: What is something that computers can never do by themselves today?
  • 17. https://medium.com/backchannel/how-technology-led-a-hospital-to-give-a-patient-38-times- his-dosage-ded7b3688558 This is life or death. This is an alert screen. Epic EMR. One of 17,000 alerts the UCSF physicians received in that month alone. Contains the number “160” in 14 different places. Nurse clicked through this and patient received 38 1/2 tablets of an antibiotic. He’s fortunate to have survived. Use human testing for things only people can do!
  • 18. https://lobste.rs/s/fdmbn5 For the rest of this presentation I’m going to talk about tests performed by a computer. For many people, unit tests are both a design methodology and the first line of defense against bugs.
  • 19. Let’s Write a parseInt! let parseInt (value: string) : int = ??? Because I’m a NIH developer, and because it’s a really simple example to play with, I’ll write my own parseInt. It’s simple, right? Maybe too simple to say anything worthwhile?
  • 20. Test First!

    [<TestMethod>]
    member this.``Should parse 0``() =
        let actual = parseInt "0"
        actual |> should equal 0

    But I believe in test first and TDD, so… What sort of tests do I need for parseInt? This looks like a good start. Of course, this test does not pass yet, because I haven't implemented the method. That failure is an important piece of information! If I can’t parse 0, my parseInt isn’t very good. So let's say that I go and implement some parseInt code. At least enough to make the test pass. Now, this test tells me very little about the correctness of the method. That's interesting! Implementing the method removed information from the system! That seems really weird, but…
  • 21. Test First!

    [<TestMethod>]
    member this.``Should parse 0``() =
        let actual = parseInt "0"
        actual |> should equal 0

    [<TestMethod>]
    member this.``Should parse 1``() =
        let actual = parseInt "1"
        actual |> should equal 1

    Maybe I should add another test. Am I missing anything?
  • 22. Test First!

    [<TestMethod>]
    member this.``Should parse -1``() =
        let actual = parseInt "-1"
        actual |> should equal -1

    [<TestMethod>]
    member this.``Should parse with whitespace``() =
        let actual = parseInt " 123 "
        actual |> should equal 123
  • 23. Test First!

    [<TestMethod>]
    member this.``Should parse +1 with whitespace``() =
        let actual = parseInt " +1 "
        actual |> should equal 1

    [<TestMethod>]
    member this.``Should do ??? with freeform prose``() =
        let actual = parseInt "A QA engineer walks into a bar…"
        actual |> should equal ???

    Anything else? null, MaxInt+1, non-%20 whitespace, MaxInt, MinInt, 1729? I’m starting to realize I have more questions than answers!
  • 24. More Questions • Is this for trusted or non-trusted input? • Can I trust that my function will be invoked correctly? • What is the culture of the input? 1) Trusted = exception; untrusted = fail gracefully. 2) For a private method, maybe. For a library function, no! Need tests per invocation? 3) , $, etc.? It sounds like we might need a lot of tests. How many? Does it seem weird that we’re talking more about corner cases than “success?” Does this teeny little helper method really need to be perfect? I just wanna parse 123!
  • 25. Getting one digit wrong really can get your company into the headlines. Also, what about security-sensitive code? Hashes, RNGs. Does it seem like the test case suggestions focused on error cases? Even if 90% of the time we get expected input, I’m far more interested in the reasons which explain 90% of the failures.
  • 26. Bad Error Handling Kills “Almost all catastrophic failures (92%) are the result of incorrect handling of non-fatal errors explicitly signaled in software.” https://www.usenix.org/conference/osdi14/technical-sessions/presentation/yuan Only tested software designed for high reliability. (Cassandra, HDFS, Hadoop…) “But it does suggest that top-down testing, say, using input and error injection techniques, will be challenged by the large input and state space.”
  • 27. Simple Testing Can Prevent Most Critical Failures, Yuan et al. 92% of the time the catastrophe was caused not by the error itself but rather by the combination of the error and then handling it incorrectly!
  • 28. How Can I Be Completely Confident in a Simple Function? (Or at least do the right thing when it fails) (And also ensure it’s always called correctly) (Every. Single. Time) Let’s face it, this is the bare minimum first step for trusting an application, right? You might ask, “Why is this idiot rambling on about parseInt? I have 10 million lines of code to test.” I think it’s sometimes informative to start with the simplest thing which could possibly work.
  • 29. Unit Tests Are Great for:
    • Helping you think through bottom-up designs
    • Preventing regressions
    • Getting you to the point where at least something works
    Not So Helpful for:
    • Showing overall design consistency (top-down)
    • Finding security holes
    • Proving correctness or totality of implementation
    We can use techniques like strong typing, fuzzing, and formal methods to complement testing to give more control over code correctness. You will still need tests, but you’ll get much more “coverage” with fewer tests. Looking at the lists here, a theme emerges. To write a test, you needed a mental model of how your function should work. Having written the tests, however, you have thrown away the model. All that's left are the examples.
  • 30. When My Test…
    Fails: I know I’ve found a bug (useful!)
    Passes: I know my function works for at least one input out of billions (maybe useless?)
    Does this make sense to everyone? Do you agree that a passing test doesn’t tell you much about the overall quality of the system? Is there a way to ensure we always get correct output for any input? Yes, but before we even get there, there’s a bigger problem we haven’t talked about yet.
  • 31. How Can I Be Completely Confident When Composing Two Functions? (Composing two correct functions should produce the correct result.) (Every. Single. Time) Let’s face it, this is the bare minimum second step for trusting an application, right? More generally, I would like to be able to build complete, correct programs from a foundation of correct functions. Now verifying my 10 million lines of code is easy; start with correct functions, then combine them correctly!
  • 32. parseIntAndReturnAbsoluteValue = abs ∘ parseInt If I have two good functions, like abs and parseInt, I would like to be able to combine them in order to produce a correct program. But there’s a problem: parseInt, as written, isn’t total (define). I can call it with strings which aren’t integers, and it’s really hard to use tests to ensure I call it correctly 100% of the time. How do I know it will always return something useful?
  • 33. let parseInt (str) = // implementation One thing I need to do is ensure that people call my function passing a string as the argument, and that the thing it returns is actually an integer, in every case.
  • 34. let parseInt (value: string) : int = // implementation That’s not too hard. I can prove this with the type system. As long as I don’t do anything which subverts the type system (unsafe casts, unchecked exceptions, null — or use a language which won’t allow it!), I can at least be sure I’m in the right ballpark. But how do I ensure I’m only passed a string representing an integer? Or should I? Can I force the caller to “do the right thing” and handle the error if they happen to pass a bad value?
  • 35. public static bool TryParse( string s, out int result ) { … } Again, you can do it with the type system! I’m showing a C# example here, since the idiomatic F# solution is different.
  • 36. public static bool TryParse( string s, out int result ) { … }

    // appropriate when input is “trusted”
    int betterBeAnInt = ParseInt(input);

    // appropriate for untrusted input
    int couldBeAnInt;
    if (TryParse(input, out couldBeAnInt)) { // …

    It is now difficult to invoke the function without testing success. You have to go out of your way. This probably eliminates the need to use tests to ensure that every case in which this function is invoked checks the success of the function. Consider input validation. Bad input is in the contract. Exceptions inappropriate. Instead of returning an integer, return an integer and a Boolean.
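The Try-pattern is not C#-specific. As a rough illustration only (this helper is invented here, not from the talk), the same shape in Python returns a success flag together with the value, so the caller is nudged to check before trusting the result:

```python
def try_parse_int(s):
    """TryParse-style sketch: return (ok, value) instead of raising,
    so the caller has to look at ok before using value."""
    try:
        # Python's int() already tolerates surrounding whitespace and a sign.
        return True, int(s)
    except (ValueError, TypeError):
        # Bad input is part of the contract: signal failure, don't throw.
        return False, 0

# Untrusted input: destructuring the tuple puts the success flag
# right next to the value.
ok, n = try_parse_int(" +123 ")
if ok:
    print(n)  # prints 123
```

Idiomatic Python would more likely return `None` on failure; the tuple shape is kept here only to mirror the C# `bool` + `out` parameter.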
  • 37. But There’s Still The Matter of That String Argument We can prove that we do the right thing when our parseInt correctly classifies a given input value as a legitimate integer and parses it, or rejects it as invalid, but how can we show that we do that correctly? Aren’t we back at square one? Types are super neat because you get this confidence essentially for free, and it never fails, but even the F# type system can’t make sure I return the right integer.
  • 38. State Space [diagram: inputs { 0, 1 } map to outputs { A, B }] In principle, your app, or your function, is a black box. Same input, same output. Easy to test, right? This application should have only two possible states! To be totally confident in your system you need to test, by some means, the entire state space (LoC discussion).
  • 39. State Space [diagram: inputs { “Hello”, “World” } map to outputs { A, B }, plus a die (randomness) and a clock (time)] It gets harder quickly. If my inputs are two strings instead of two bits, I now have considerably more possible test cases! (Click) In the real world, you have additional “inputs” like time and randomness, and whatever is in your database.
  • 40. Formal Methods Using formal methods means the design is driven by a mathematical formalism. By definition, this is not test driven development, although you will probably still write tests. Formal methods are sometimes considered controversial in the software development community, because they acknowledge the existence and utility of math.
  • 41. ____ + 1234 ____ [ \t]*[+-]?[0-9]+[ \t]* It’s easier to use formal methods if there’s an off-the-shelf formalism you can use. For the problem of parsing, these exist! One way to reduce the input domain of the parseInt function from an untestably large number of potential states is to use a regular expression. This is not the sort of regular expression you might encounter in Perl or Ruby; it is a much more restricted syntax typically used on the front end of a compiler. The important point, here, is that we can reduce the number of potential states of the function to a number that you can count on your fingers.
  • 42. [diagram: finite state machine with states 0–4; transitions on [ \t], [+-], and [0-9]; states 3 and 4 accepting] REs convert to FSMs. 3+4 are accepting states. 4-5 states, 2 of them accepting, well less than “any possible string!”
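To make the state machine concrete, here is one possible hand-rolled encoding of a recognizer for `[ \t]*[+-]?[0-9]+[ \t]*` (a sketch, not the speaker's implementation; state numbering is one choice among several):

```python
def accepts_int(s):
    """Return True iff s matches [ \t]*[+-]?[0-9]+[ \t]*.
    States: 0 = leading whitespace, 1 = after sign,
            2 = digits (accepting), 3 = trailing whitespace (accepting)."""
    state = 0
    for ch in s:
        if state == 0:
            if ch in " \t":
                state = 0
            elif ch in "+-":
                state = 1
            elif ch in "0123456789":
                state = 2
            else:
                return False
        elif state == 1:
            if ch in "0123456789":
                state = 2
            else:
                return False
        elif state == 2:
            if ch in "0123456789":
                state = 2
            elif ch in " \t":
                state = 3
            else:
                return False
        else:  # state 3: only whitespace may follow
            if ch in " \t":
                state = 3
            else:
                return False
    return state in (2, 3)  # must end in an accepting state
```

However large the set of all possible strings is, this function only ever occupies one of four states, which is what makes exhaustive reasoning about it feasible.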
  • 43. Totality checking. Breaking my vow to avoid showing implementations. Lots of code here, but the important word is at the top. I’ve hesitated about showing implementations until now, but I can’t avoid it here, because… The proof is built into the implementation
  • 44. When My Test / Type Checker…
    Fails: (test) I know I’ve found a bug (useful!) / (type checker) I might have a bug (sometimes useful, sometimes frustrating)
    Passes: (test) I know my function works for at least one input out of billions (maybe useless?) / (type checker) there is a class of bugs which cannot exist (awesome!)
    We can expand this chart now. Tests and types are not opponents; they complement each other. Where one succeeds, the other fails, and vice versa.
  • 45. Property Based Testing Still, there are cases where it’s hard to use formal methods. Not every problem has an off-the-shelf formalism ready to use. But we don’t have to just give up and accept unit tests as the best we can do!
  • 46. let parsedIntEqualsOriginalNumber =
        fun (number: int) -> number = parseInt (number.ToString())

    > open FsCheck;;
    > Check.Quick parsedIntEqualsOriginalNumber;;
    Falsifiable, after 1 test (1 shrink) (StdGen (1764712907,296066647)):
    Original: -2
    Shrunk: -1
    val it : unit = ()
    >

    Can you state things about your system which will always be true? What must be true for my system to work? Looks like I have to do some work on my implementation here! Important: I didn’t have to specify the failing case, as I would with a unit test. FsCheck found it for me. In unit testing, you start with a mental model of the specification, and write your own tests. With property based testing, you write down the specification, and the tests are generated for you.
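The core of the round-trip property fits in a few lines even without FsCheck. The helper below is a toy stand-in (Python, names invented here): it draws random integers and hunts for a counterexample. Real property-based testing libraries such as FsCheck, or Hypothesis for Python, add generation strategies and shrinking on top of this idea.

```python
import random

def check_round_trip(parse, trials=1000, seed=0):
    """Toy property check: for random ints n, parse(str(n)) should equal n.
    Returns a counterexample, or None if the property held every time."""
    rng = random.Random(seed)
    for _ in range(trials):
        n = rng.randint(-2**31, 2**31 - 1)
        if parse(str(n)) != n:
            return n  # counterexample found
    return None

# A parser that drops the sign fails the property on some negative input,
# much like the Falsifiable run on the slide; the built-in int passes.
assert check_round_trip(lambda s: abs(int(s))) is not None
assert check_round_trip(int) is None
```

Unlike a unit test, nothing here names a specific failing input; the generator finds one.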
  • 47. PBT: Great for helping to find bugs in specific routines. Fuzzing: Great for finding unhandled errors in entire systems.
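A minimal fuzzer is only a few lines. This sketch (illustrative, not from the talk) hammers a parser with random printable strings and records anything that escapes the contract, which here means any exception other than a clean ValueError:

```python
import random
import string

def fuzz(parse, trials=500, seed=42):
    """Feed random printable strings to parse; collect unhandled errors.
    Raising ValueError on bad input is allowed by the contract;
    anything else is a fuzzing find."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        s = "".join(rng.choice(string.printable)
                    for _ in range(rng.randint(0, 20)))
        try:
            parse(s)
        except ValueError:
            pass  # rejecting bad input is part of the contract
        except Exception as exc:
            crashes.append((s, exc))  # unhandled error escaped
    return crashes

# Python's built-in int keeps its contract on printable-string input.
assert fuzz(int) == []
```

Production fuzzers (AFL, libFuzzer, and friends) add coverage guidance and input mutation, but the feedback loop is the same: random input in, unexpected failure out.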
  • 48. http://colin-scott.github.io/blog/2015/10/07/fuzzing-raft-for-fun-and-profit/ It often makes sense to write a custom fuzzer. It’s not hard, and the return is huge. This example more similar to property based testing, since it uses the stated invariants from the Raft specification to test an implementation. (Policy story)
  • 49. Runtime Validation Sometimes the most important value to test is the only one that matters to you at runtime. Assertions are a little under-used, because we tend to think of them as checking trivial things. But using the techniques of property-based testing, we can do end to end validation of our system.
  • 50. let input = " +123 "
    let number = parseInt input        // 123
    let test = number.ToString()       // "123"
    if test <> input                   // true!
    then
        let testNumber = parseInt test // 123
        if number <> testNumber        // false (yay!)
        then failwith "Uh oh!"
    // We’re safe now! Use number…

    Similar to property based testing
  • 51. http://lefthandedgoat.github.io/canopy/ Integration testing should always be automated. Deals with coupling between systems not covered by type safety (DB, DOM, etc.) Use Canopy Also: write integration test method.
  • 52. The Quality Landscape • Manual testing • Integration tests • Unit tests • Runtime validation • Property based testing • Fuzzing • Formal methods • Static analysis • Type systems • Totality checking The long and the short of it: Think big! Don’t “test all the ___ing time” because somebody told you to. Keep your eyes on the prize of software correctness. Ask yourself which things are most important to the overall quality of your system. Pick the tool(s) which give you the biggest return. Synopsis of each.