4. Software Testing
According to ANSI/IEEE 1059 standard:
“A process of analyzing a software item to detect the differences between existing
and required conditions (that is, defects/errors/bugs) and to evaluate the features of
the software item.”
10. Testworthy
A test is testworthy if we think the information we can get from it is worth
the time we spend on it, regardless of risks, requirements, or test approaches.
A decision tree can be a useful tool for test triage [Sabourin, 2004].
22. Traceability Matrix
A document, usually in the form of a table, used to assist in determining the
completeness of a relationship by correlating any two baselined documents
using a many-to-many relationship comparison. [Gotel et al. 2012]
Each requirement is linked to its associated test cases so that testing can be
carried out against the stated requirements. Bug IDs are also included and
linked to their associated requirements and test cases.
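As a sketch, the many-to-many links of a traceability matrix can be modeled with plain mappings; all requirement, test-case, and bug IDs below are hypothetical illustrations, not from any real project:

```python
# Hypothetical requirement and test-case IDs for illustration only.
requirements = {
    "REQ-001": "User can log in with valid credentials",
    "REQ-002": "User is locked out after three failed attempts",
}

# Many-to-many links: one requirement may need several test cases,
# and one test case may cover several requirements.
req_to_tests = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-102", "TC-103"],
}

# Bugs linked back to the requirement and test case that exposed them.
bugs = {
    "BUG-501": {"requirement": "REQ-002", "test_case": "TC-103"},
}

def untested(requirements, req_to_tests):
    """A requirement with no linked test case is a coverage gap."""
    return [r for r in requirements if not req_to_tests.get(r)]

assert untested(requirements, req_to_tests) == []  # no gaps in this matrix
```

Querying the matrix in both directions (requirement → tests, bug → requirement) is what makes completeness checks mechanical rather than manual.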
24. Automated Testing
Automated testing techniques:
● Program structural coverage test
● Model-based test
● Combinatorial test
● Adaptive random test
● Search-based test
● Fuzzing
● Mutation testing
Test automation frameworks:
● Module based
● Library architecture
● Data driven
● Keyword driven
● Hybrid testing
● Behavior-driven development
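To make the framework names above concrete, here is a minimal sketch of a keyword-driven framework: the test case is a table of data (keyword plus arguments), and a dispatcher maps keywords to actions. The `FakeApp` class and its keywords are hypothetical stand-ins, not any real tool's API:

```python
class FakeApp:
    """Hypothetical stand-in for the system under test."""
    def __init__(self):
        self.logged_in = False

    def login(self, user, password):
        self.logged_in = (user == "admin" and password == "secret")

    def logout(self):
        self.logged_in = False

def run_keyword_test(app, steps):
    """Execute a table of (keyword, *args) rows against the app."""
    def check_logged_in(expected):
        assert app.logged_in == expected, "login state mismatch"

    # The keyword table: plain data decides which actions run.
    keywords = {
        "login": app.login,
        "logout": app.logout,
        "check_logged_in": check_logged_in,
    }
    for keyword, *args in steps:
        keywords[keyword](*args)

# The test case itself is data, editable without touching framework code.
app = FakeApp()
run_keyword_test(app, [
    ("login", "admin", "secret"),
    ("check_logged_in", True),
    ("logout",),
    ("check_logged_in", False),
])
```

A data-driven framework is the same idea with the arguments varied over a dataset; a hybrid framework combines both.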
27. Infinite Monkey Theorem
“The infinite monkey theorem states that a monkey
hitting keys at random on a typewriter keyboard for
an infinite amount of time will almost surely type a
given text, such as the complete works of William
Shakespeare.” [Borel E. 1913]
Given infinite time, random input will almost surely produce every possible
output. The infinite monkey theorem translates to the idea that any problem
can be solved given sufficient resources and time.
28. Fuzzing
Input: invalid, unexpected, or random data.
Monitored for: exceptions such as crashes, failing built-in code assertions,
or potential memory leaks.
Types:
● Generation-based or mutation-based: whether inputs are generated from
scratch or by modifying existing inputs.
● Dumb (“monkey”) or smart: whether the fuzzer is aware of the input structure.
● White-, grey-, or black-box: whether the fuzzer is aware of the program
structure.
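A dumb, mutation-based fuzzer of the kind described above fits in a few lines of Python. The `parse_record` target and its length-byte bug are hypothetical, invented to give the fuzzer something to find; the monitoring distinguishes graceful rejection (`ValueError`) from a crash (`IndexError`):

```python
import random

def parse_record(data: bytes) -> dict:
    """Toy parser (hypothetical): a record is MAGIC, a length byte, payload."""
    if len(data) < 2 or data[0] != 0x7F:
        raise ValueError("bad magic")        # graceful rejection
    length = data[1]
    payload = data[2:2 + length]
    # Bug: trusts the declared length without checking the real payload size.
    checksum = payload[length - 1]           # IndexError on short payloads
    return {"length": length, "checksum": checksum}

def mutate(seed: bytes) -> bytes:
    """Blind mutation: overwrite one to three random bytes of a valid seed."""
    buf = bytearray(seed)
    for _ in range(random.randint(1, 3)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def dumb_fuzz(seed: bytes, rounds: int = 10000):
    """Feed mutated inputs and monitor for unexpected exceptions (crashes)."""
    random.seed(1)
    for _ in range(rounds):
        data = mutate(seed)
        try:
            parse_record(data)
        except ValueError:
            pass                             # expected: invalid input rejected
        except IndexError:
            return data                      # unexpected: a crashing input
    return None

crasher = dumb_fuzz(b"\x7f\x03abc")          # a valid record as the seed
```

The fuzzer is "dumb" because it knows nothing about the record format; it still finds the crash quickly because the format is tiny.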
29. What Can be Fuzzed?
Anything that consumes complex (Structured) inputs:
● Parsers of any kind (xml, json, asn.1, pdf, truetype)
● Media codecs (audio, video, raster & vector images)
● Network protocols (HTTP, RPC, SMTP, MIME)
● Crypto (boringssl, openssl)
● Compression (zip, gzip, bzip2, brotli)
● Formatted output (sprintf, template engines)
● Compilers and interpreters (JavaScript, PHP, Perl, Python, Go, Clang)
● Regular expression matchers (PCRE, RE2, libc’s regcomp)
● Text/UTF processing (icu)
● Databases (SQLite)
● Browsers, text editors/processors (Chrome, vim, OpenOffice)
● OS Kernels (Linux), drivers, supervisors and VMs
A must-have for everything that consumes untrusted input, is open to the
internet, or is otherwise security sensitive.
30. Modern Fuzzers
Grammar-based generation
- Generate random inputs according to grammar rules
- Peach, packetdrill, csmith, gosmith, syzkaller
Blind mutation
- Requires a corpus of representative inputs, apply random mutations to them
- ZZUF, Radamsa
Grammar reverse-engineering
- Learn the grammar from existing inputs using algorithmic or machine-learning approaches
- Sequitur algorithm, go-fuzz
Symbolic execution + SAT solver
- Synthesize inputs with maximum coverage using black magic
- KLEE
Coverage-guided fuzzers
- Genetic algorithm that strives to maximize code coverage
- libFuzzer, AFL, honggfuzz, syzkaller
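The coverage-guided loop used by libFuzzer and AFL can be sketched in pure Python. This is a simplified illustration, not how those tools are implemented: the toy `target` is hypothetical, `sys.settrace` stands in for real edge-coverage instrumentation, and the "genetic algorithm" is reduced to "keep any mutant that reaches new lines":

```python
import random
import sys

def target(data: bytes) -> None:
    """Toy target (hypothetical): crashes only on inputs starting with 'FUZ'."""
    if len(data) > 0 and data[0] == ord("F"):
        if len(data) > 1 and data[1] == ord("U"):
            if len(data) > 2 and data[2] == ord("Z"):
                raise RuntimeError("crash")

def trace_lines(data: bytes) -> set:
    """Run target() and record which source lines executed (the 'coverage')."""
    covered = set()
    def tracer(frame, event, arg):
        if event == "line":
            covered.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        target(data)
    finally:
        sys.settrace(None)
    return covered

def mutate(data: bytes) -> bytes:
    """Overwrite one random byte."""
    buf = bytearray(data)
    buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(max_iters: int = 200_000, seed: int = 0):
    """Keep any mutant that reaches new lines; stop when the target crashes."""
    random.seed(seed)
    corpus = [b"AAAA"]
    seen = trace_lines(corpus[0])
    for _ in range(max_iters):
        candidate = mutate(random.choice(corpus))
        try:
            covered = trace_lines(candidate)
        except RuntimeError:
            return candidate           # crashing input found
        if not covered <= seen:        # new coverage: promote into the corpus
            seen |= covered
            corpus.append(candidate)
    return None
```

Blind mutation alone would need on the order of 256³ tries to guess "FUZ"; keeping the intermediate inputs "F..." and "FU.." in the corpus turns the search into three cheap stages, which is exactly why coverage feedback matters.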
32. Mutation Testing
The premise in mutation testing is that small changes are made in a module and
then the original and mutant modules are compared. [R. Lipton 1971]
Basic assumptions of mutation testing:
1) Competent programmer hypothesis: the program to be tested has been
written by a competent programmer.
2) The coupling effect: test data that distinguishes all programs differing
from a correct one only by simple errors is so sensitive that it also
implicitly distinguishes more complex errors.
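The premise can be sketched in a few lines of Python with a hand-made operator mutant of a hypothetical `max_of` function (real mutation tools generate such mutants automatically):

```python
def max_of(a, b):
    """Original program: return the larger of two numbers."""
    return a if a > b else b

def max_of_mutant(a, b):
    """Mutant: the relational operator '>' is replaced by '<'."""
    return a if a < b else b

def run_tests(impl) -> bool:
    """The unit-test suite, parameterized over the implementation under test."""
    try:
        assert impl(2, 1) == 2
        assert impl(1, 2) == 2
        assert impl(3, 3) == 3
        return True
    except AssertionError:
        return False

# The suite must pass on the original; a mutant that also passes
# "survives" and exposes a weakness in the test suite.
assert run_tests(max_of)              # original passes
assert not run_tests(max_of_mutant)   # mutant is killed
```

Note that mutating `>` to `>=` instead would produce an *equivalent* mutant (both return the same value when `a == b`), which no test can kill; detecting such mutants is one of the costs discussed below.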
37. Mutation Testing…
[Figure: a mutation changing the original source code “270E” on line 10 into
“41FC”; one mutant is killed by a single unit test, another by many unit tests.]
39. Mutation Testing in Practice
Advantages of mutation testing:
• Attains high coverage of the source program.
• Brings a good level of error detection.
• Uncovers ambiguities in the source code.
Disadvantages of mutation testing:
• Extremely costly and time-consuming, because each of the program's
mutants must be generated and run separately.
• Totally dependent on automation.
• Since the method changes the source code, it is not applicable to
black-box testing.
Mutation testing as a part of the TDD workflow.
41. THE QA NEWS
Software Design – CAS (703) Enamul Haque
“Make something really stronger so that nothing can break it!”
7th December 2017
If you have ever stood in line at the grocery store, you can appreciate the
need for process improvement. In this case, the "process" is called the
check-out process, and its purpose is to pay for and bag your groceries. The
process begins with you stepping into line, and ends with you receiving your
receipt and leaving the store. You are the customer (you have the money and
you have come to buy food), and the store is the supplier.
Improving business processes is paramount for businesses to stay competitive
in today's marketplace. Some random text is also included to make this more
paper-like; in fact, more newspaper-like.
Over the last 10 to 15 years companies have been forced to improve their
business processes because we, as customers, are demanding better and better
products and services. And if we do not receive what we want from one
supplier, we have many others to choose from (hence the competitive issue for
businesses). Many companies began business process improvement with a
continuous improvement model. This model attempts to understand and measure
the current process, and make performance improvements accordingly. This one
seems OK, but a little bit of tuning will make it better.
42. References
1) R. Lipton, “Fault Diagnosis of Computer Programs,” Student Report,
Carnegie Mellon University, 1971.
2) Richard A. DeMillo, Richard J. Lipton, and Fred G. Sayward, “Hints on
Test Data Selection: Help for the Practicing Programmer,” IEEE Computer,
11(4):34–41, Apr 1978.
3) Adactin, Image: “Performance Testing”. n.d. Web 5 Dec 2017.
http://www.adactin.com/performance-testing/
4) Enhops, Image: “Security Testing”. n.d. Web 5 Dec 2017.
http://www.enhops.com/security-testing
5) Guru99, Image: “Usability Testing”. n.d. Web 5 Dec 2017.
https://www.guru99.com/usability-testing-tutorial.html
6) Gotel, Orlena; Cleland-Huang, Jane; Hayes, Jane Huffman; Zisman, Andrea;
Egyed, Alexander; Grünbacher, Paul; Dekhtyar, Alex; Antoniol, Giuliano;
Maletic, Jonathan. In Cleland-Huang, Jane; Gotel, Orlena; Zisman, Andrea,
eds., Software and Systems Traceability. Springer London, pp. 3–22, Jan 2012.
43. References
7) Saswat Anand, Edmund K. Burke, Tsong Yueh Chen, John Clark, Myra B.
Cohen, Wolfgang Grieskamp, Mark Harman, Mary Jean Harrold, and Phil McMinn,
“An Orchestrated Survey of Methodologies for Automated Software Test Case
Generation,” Journal of Systems and Software, 86(8):1978–2001, Aug 2013.
8) Robert Sabourin, “What Not to Test,” Better Software magazine,
November/December 2004. Web.
http://www.amibugshare.com/articles/Article_What_Not_To_Test.zip
9) Software Testing Help, “Test Automation Frameworks”, October 27, 2017.
Web 5 Dec 2017.
http://www.softwaretestinghelp.com/test-automation-frameworks-selenium-tutorial-20/
10) Filip van Laenen, “Mutation Testing: Better code by making bugs”,
Leanpub, 13 April 2016. Web ebook 5 Dec 2017.
http://leanpub.com/mutationtesting
11) Christian Holler, “ADBFuzz – A Fuzz Testing Harness for Firefox Mobile”,
Mozilla, 9 Mar 2012. Web blog 5 Dec 2017.
https://blog.mozilla.org/security/2012/03/09/adbfuzz-a-fuzz-testing-harness-for-firefox-mobile/