3. www.evologicsoftech.com
How Important is Software
Testing?
• In March 1992, a man received a bill for his as
yet unused credit card stating that he owed
$0.00. He ignored it and threw it away. In April,
he received another and threw that one away
too.
• The following month, the credit card company
sent him a very nasty note stating they were
going to cancel his card if he didn't send them
$0.00 by return of post. He called them and
talked to them; they said it was a computer error
and told him they'd take care of it.
• ….
4. www.evologicsoftech.com
How Important is Software
Testing?
                     Cost of Inadequate   Potential Cost Reduction
                     Software Testing     from Feasible Improvements
Financial Services   $3.3 billion         $1.5 billion
Total U.S. Economy   $59.5 billion        $22.2 billion
* NIST Report: The Economic Impact of Inadequate Infrastructure for Software Testing, 2002.
8. www.evologicsoftech.com
Competition is Only a Click
Away
Source: Jupiter/NFO Consumer Survey
53% did not leave
24% returned, only after going to competitor’s site
13% did not return after completing session
9% left site, never returned
1% gave no answer
11. www.evologicsoftech.com
Significance in Terms of Effort?
• Depending on the risk and complexity of
the application under test,
– Testing comprises about 30-80% of total
SDLC time and sometimes even more
– Testing is about 50% of the total application
development cost
– Testing can account for 70% of costs during
the application life
(All numbers from Mercury Interactive)
17. www.evologicsoftech.com
What is Quality ?
• Quality from the
– Customer’s viewpoint: fitness for use, or meeting other
customer needs
– Producer’s viewpoint: meeting the requirements
18. www.evologicsoftech.com
Quality Function
• Software quality includes activities related
to both
– process,
– and to the product
• Quality Assurance is about the work
process
• Quality Control is about the product
19. www.evologicsoftech.com
What is Quality Assurance?
• Quality assurance activities are work process oriented.
• They measure the process, identify deficiencies, and
suggest improvements.
• The direct results of these activities are changes to the
process.
• These changes can range from better compliance with
the process to entirely new processes.
• The output of quality control activities is often the input to
quality assurance activities.
• Audits are an example of a QA activity which looks at
whether and how the process is being followed. The end
result may be suggested improvements or better
compliance with the process.
20. www.evologicsoftech.com
What is Quality Control?
• Quality control activities are work product oriented.
• They measure the product, identify deficiencies, and
suggest improvements.
• The direct results of these activities are changes to the
product.
• These can range from single-line code changes to
completely reworking a product from design.
• They evaluate the product, identify weaknesses and
suggest improvements.
• Testing and reviews are examples of QC activities since
they usually result in changes to the product, not the
process.
• QC activities are often the starting point for quality
assurance (QA) activities.
22. www.evologicsoftech.com
QA and QC Summary
• QA
• Assurance
• Process
• Preventive
• Quality Audit
• QC
• Control
• Product
• Detective
• Testing
23. www.evologicsoftech.com
Quality... it’s all about the
End-User
• Does this software product work as
advertised?
– Functionality, Performance, System & User
Acceptance ... testing
• Will the users be able to do their jobs using
this product?
– Installability, Compatibility, Load/Stress ...
testing
• Can they bet their business on this
software product?
– Reliability, Security, Scalability ... testing
27. www.evologicsoftech.com
What is Testing?
• Testing is the process of demonstrating
that errors are not present
• The purpose of testing is to show that a
program performs its intended functions
correctly
• Testing is the process of establishing
confidence that a program does what it is
supposed to do
• These definitions are incorrect in that they describe
almost the opposite of what testing should be
viewed as
28. www.evologicsoftech.com
What is the Objective of Testing?
• Objective of testing is to find all possible bugs (defects)
in a work product
• Testing should intentionally attempt to make things go
wrong to determine if things happen when they shouldn't
or things don't happen when they should.
29. www.evologicsoftech.com
What is Testing?
• DEFINITION: Testing is the process of trying to discover
every conceivable fault or weakness in a work product.
• Testing is a process of executing a program with the
intent of finding an error.
• A good test is one that has a high probability of finding
an as yet undiscovered error.
• A successful test is one that uncovers an as yet
undiscovered error
30. www.evologicsoftech.com
The purpose of finding defects
is to get them fixed
• The prime benefit of testing is that it
results in improved quality. Bugs get fixed.
• We take a destructive attitude toward the
program when we test, but in a larger
context our work is constructive.
• We are beating up the program in the
service of making it stronger.
31. www.evologicsoftech.com
Secondary benefits include
• Demonstrate that software functions appear to be
working according to specification.
• That performance requirements appear to have been
met.
• Data collected during testing provides a good indication
of software reliability and some indication of software
quality.
32. www.evologicsoftech.com
What is Testing Summary
• Identify defects
– when the software doesn’t work
• Verify that it satisfies specified
requirements
– verify that the software works
33. www.evologicsoftech.com
Mature view of software testing
• A mature view of software testing is to see
it as a process of reducing the risk of
software failure in the field to an
acceptable level [Beizer 90].
34. www.evologicsoftech.com
What exactly Does a Software
Tester Do?
• The Goal of a software tester is to find
defects,
• and find them as early as possible,
• and make sure they get fixed.
35. www.evologicsoftech.com
What does testing mean to testers?
• Testers hunt errors
– Detected errors are celebrated - for the good of the
work product
• Testers are destructive - but creatively so
– Testing is a positive and creative effort of destruction
• Testers pursue errors, not people
– Errors are in the work product, not in the person who
made the mistake
• Testers add value
– by discovering errors as early as possible
36. www.evologicsoftech.com
How testers do it?
• By examining the user’s requirements, internal
structure and design, functional user interface
etc
• By executing the code, application software
executable etc
47. www.evologicsoftech.com
Test Life Cycle Model
Testing happens throughout the software life cycle
[V-model diagram: development activities on the left, V&V activities on the right]

Development activity    V&V activity
Requirements            Requirements review; Test plan → Acceptance test
Functional specs.       Specs review; Revised test plan → System test
Design                  Design review → Integration test
Coding                  Code review → Unit test
Build software → Build system → Release for use

The reviews are static testing; the test levels (unit, integration,
system, acceptance) are dynamic testing.
48. www.evologicsoftech.com
What is Verification?
• Verification is the process of confirming whether software meets its
specifications
• The process of reviewing/ inspecting deliverables throughout
the life cycle.
• Inspections, walkthroughs and reviews are examples of
verification techniques.
• Verification is the process of examining a product to discover its
defects.
• Verification is usually performed by STATIC testing, or
inspecting without execution on a computer.
• Verification is examining product requirements, specifications,
design, and code for large fundamental problems, oversights, and
omissions.
• Verification is a “human” examination or review of the work
product.
• Verification
– determining if the phase was completed correctly
• “are we building the product right?”
49. www.evologicsoftech.com
What is Validation?
• Validation is the process of confirming whether software
meets user requirements
• The process of executing something to see how it
behaves.
• Unit, integration, system, and acceptance testing are
examples of validation techniques.
• Validation is the process of executing a product to
expose its defects.
• Validation is usually performed by DYNAMIC testing,
or testing with execution on a computer.
• E.g. if we use software A on hardware B and do C,
then D should happen (expected result)
• Validation
– determining if the product as a whole satisfies the
requirements
• “are we building the right product?”
50. www.evologicsoftech.com
Some test techniques
• Static: Reviews, Walkthroughs, Inspection
• Dynamic
– Structural: Statement, Branch/Decision, Branch Condition,
Data Flow, Control Flow, etc.
– Behavioural
• Functional: Equivalence Partitioning, Boundary Value
Analysis, etc.
• Non-functional: Usability, Performance, etc.
54. www.evologicsoftech.com
Walkthrough
• Semi-formal meeting, where participants come to the meeting and
the author gives the presentation.
• Objective:
– To detect defects and become familiar with the material
• Elements:
– A planned meeting where only the presenter must prepare
– A team of 2-7 people, led by the author
– Author usually the presenter.
• Inputs:
– Element under examination, objectives for the walkthrough,
applicable standards.
• Output:
– Defect report
55. www.evologicsoftech.com
Inspection
• Formal meeting, characterized by individual preparation by all participants prior to the
meeting.
• Objectives:
– To detect defects and collect data.
– To communicate important work product information.
• Elements:
– A planned, structured meeting requiring individual preparation by
all participants.
– A team of people, led by an impartial moderator who assures that
the rules are being followed and the review is effective.
– The presenter is a “reader” other than the author.
– Other participants are inspectors, who review the work product.
– A recorder records the defects identified in the work product.
56. www.evologicsoftech.com
Checklists : the verification tool
• An important tool, especially in formal meetings
like inspections
• They provide maximum leverage on
verification
• There are generic checklists that can be applied
at a high level and maintained for each type of
inspection
• There are checklists for requirements, functional
design specifications, internal design
specifications, and for code
59. www.evologicsoftech.com
White-box testing
• White box tests require knowledge of the
internal program structure and are derived
from the internal design specification or the
code.
• They will not detect missing functions (i.e.
those described in the functional design
specification but not supported by the
internal specification or code)
61. www.evologicsoftech.com
Black-box testing
• Are derived from functional design
specification, without regard to the internal
program structure.
• Tests the product against the end user,
external specifications.
• Is done without any internal knowledge of
the product
• It will not test hidden functions (i.e.
functions implemented but not described in
the functional design specification), and
errors associated with them will not be
found in black-box testing.
66. www.evologicsoftech.com
Unit Testing
• Unit testing is the process of testing the
individual components of a program
• The purpose is to discover discrepancies
between the module’s interface
specification and its actual behavior
67. www.evologicsoftech.com
Unit Testing
• Carried out at earliest stage
• Focuses on the smallest testable units/
components of the software
• Typically used to verify the control flow
and data flow
• It requires the knowledge of the code
hence performed by the developers
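As a sketch of what such a developer-written unit test can look like, the Python below tests a hypothetical monthly-repayment function in isolation; the function, its name, and the expected values are illustrative, not taken from the slides:

```python
def monthly_repayment(principal, annual_rate, years):
    """Hypothetical unit under test: fixed-rate monthly loan repayment."""
    months = years * 12
    r = annual_rate / 12
    if r == 0:
        return principal / months
    # Standard annuity formula for a positive monthly rate.
    return principal * r / (1 - (1 + r) ** -months)

# Unit tests exercise the smallest testable component on its own.
def test_zero_interest():
    # 1200 over 12 months at 0% must be exactly 100 per month.
    assert abs(monthly_repayment(1200, 0.0, 1) - 100.0) < 1e-9

def test_interest_increases_repayment():
    assert monthly_repayment(1200, 0.05, 1) > monthly_repayment(1200, 0.0, 1)

test_zero_interest()
test_interest_increases_repayment()
```

Each test checks one property of the unit, which keeps failures easy to localise.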
68. www.evologicsoftech.com
Unit testing
• Testing a given module (X) in isolation
may require:
1) a driver module which transmits test cases in the form of input
arguments to X and either prints or interprets the result produced by
X
2) zero or more “stub” modules, each of which simulates the function of
a module called by X. One is required for each module that is directly
subordinate to X in the execution hierarchy. If X is a terminal module
(i.e. it calls no other modules), then no stubs are required
69. www.evologicsoftech.com
Build scaffolding for incomplete programs
• Stubs and drivers are code that are (temporarily)
written in order to unit test a program
• Driver – code that is executed to accept test case
data, pass it to the component being tested, obtain
result (or pass/fail).
– main () {
      foo(1,1,1,1,1);
      foo(1,2,1,2,1);
  }
• Stub is a dummy subprogram . . . May do minimal
data manipulation, print verification of component
entry, return
– assignTA(prof, course) {
      print “You successfully called assignTA for,” prof, course
  }
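The same scaffolding can be sketched in Python. Here `assign_ta` stands in for module X, and the not-yet-written subordinate lookup module is simulated by a stub that is passed in; all names and the canned data are illustrative:

```python
# Hypothetical module X under test: it depends on a subordinate
# lookup module that does not exist yet.
def assign_ta(prof, course, lookup):
    record = lookup(prof)  # call into the subordinate module
    if course in record["courses"]:
        return f"TA assigned to {prof} for {course}"
    return f"{prof} does not teach {course}"

# Stub: simulates the subordinate module with canned data.
def lookup_stub(prof):
    print(f"stub: lookup called for {prof}")
    return {"courses": ["CS101"]}

# Driver: feeds test cases to X and collects the results.
def driver():
    results = []
    for prof, course in [("Smith", "CS101"), ("Smith", "CS999")]:
        results.append(assign_ta(prof, course, lookup_stub))
    return results
```

Both stub and driver are throwaway code: they exist only so X can be executed before the rest of the system is built.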
70. www.evologicsoftech.com
Integration Testing
• Integration testing is the process of
combining and testing multiple
components together
• To assure that the software units/
components operate properly when
combined together
• To discover errors in the interface
between the components, Verify
Communication Between Units
• It is done by developers/ QA teams
71. www.evologicsoftech.com
Types of Integration Problems
• Wrong call orders
• Wrong parameters
• Missing functions
• Overlapping functions
• Resource problems (memory etc)
• Configuration/ version control
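A minimal sketch of an integration test that targets exactly this kind of interface problem: two invented units (a parser and a formatter) are combined, and the test checks that the formatter accepts precisely what the parser emits:

```python
# Unit 1: parse a currency string into a number.
def parse_amount(text):
    return round(float(text.strip().lstrip("$")), 2)

# Unit 2: format a number back into a currency string.
def format_amount(value):
    return f"${value:,.2f}"

def test_parse_then_format_round_trip():
    # Interface check: the two units must agree on types and precision.
    assert format_amount(parse_amount(" $1234.5 ")) == "$1,234.50"

test_parse_then_format_round_trip()
```

A wrong parameter type or a mismatched precision convention between the two units would make this test fail even though each unit passes its own unit tests.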
72. www.evologicsoftech.com
Function Testing
• Function testing is the process of
attempting to detect discrepancies
between a program’s functional
specification and its actual behavior
• It verifies that the software provides
expected services
• Includes positive and negative scenarios
i.e. valid inputs and invalid inputs
73. www.evologicsoftech.com
System Testing
• System testing is the process of attempting to
demonstrate that a program or system does not
meet its original requirements and objectives, as
stated in the requirements specification
• It tests business functions and performance
goals when the application works as a whole
• It verifies software operation from the
perspective of the end user, with different
configurations/setups
74. www.evologicsoftech.com
System Testing
• Is performed by a testing group before the product is
made available to customers. It can begin whenever the
product has sufficient functionality to execute
some of the tests, or after unit and integration testing are
completed.
• It can be conducted in parallel with function testing.
However, because the tests usually depend on functional
interfaces, it may be wise to delay system testing until
function testing has demonstrated some pre-defined
level of reliability, e.g. until 40% of function testing is
complete.
75. www.evologicsoftech.com
The steps of system testing are:
• Decompose and analyze the requirements
specification
• Partition the requirements into logical
categories and, for each component,
make a list of the detailed requirements
• For each type of system testing:
• For each relevant requirement,
determine inputs and outputs
• Develop the requirements test cases
76. www.evologicsoftech.com
The steps of system testing are:
• Develop a requirements coverage matrix
which is simply a table in which an entry
describes a specific subtest that adds
value to the requirements coverage, the
priority of that subtest, the specific test
cases in which that subtest appears
• Execute the test cases and measure logic
coverage
• Develop additional tests, as indicated by
the combined coverage information
79. www.evologicsoftech.com
Usability Testing
• Usability testing is the process of
attempting to identify discrepancies
between the user interface of a product
and the human engineering requirements
of its potential users.
• Usability testing collects information on
specific issues from the intended users.
• It often involves evaluation of a product’s
presentation rather than its functionality.
80. www.evologicsoftech.com
Usability Testing
• Usability testing involves having the users
work with the products and observing their
responses to it.
• Unlike Beta testing, which also involves
the user, it should be done as early as
possible in the development cycle.
• The real customer is involved as early as
possible, even at the stage when only
screens drawn on paper are available.
85. www.evologicsoftech.com
Performance testing
• To determine whether the program meets
its performance requirements
• The IEEE standard 610.12-1990 (Software
Engineering Terminology) defines
performance testing: Testing conducted to
evaluate the compliance of a system or
component with specified performance
requirements.
86. www.evologicsoftech.com
Performance testing
• Many programs have specific
performance efficiency objectives
• Performance testing focuses on
performance parameters such as
– transaction response time,
– throughput etc.
• e.g. in database systems the response
time relates to the time to obtain a report
after clicking on a specific button
88. www.evologicsoftech.com
Stress Testing
• The IEEE standard 610.12-1990 (Software
Engineering Terminology) defines
• stress testing: Testing conducted to
evaluate a system or component at or
beyond the limits of its specified
requirements.
89. www.evologicsoftech.com
Stress Testing
• Stress testing is subjecting a system to an unreasonable load
while denying it the resources (e.g., RAM, Disc, CPU, network
bandwidth etc.) needed to process that load.
• The idea is to stress a system to the breaking point in order to
find bugs that will make that break potentially harmful.
• The system is not expected to process the overload without
adequate resources, but to behave (e.g., fail) in a decent
manner (e.g., not corrupting or losing data).
• Bugs and failure modes discovered under stress testing may
or may not be repaired depending on the application, the
failure mode, consequences, etc. The load (incoming
transaction stream) in stress testing is often deliberately
distorted so as to force the system into resource depletion.
90. www.evologicsoftech.com
Configuration testing
• To determine whether the program operates
properly when the software or hardware is
configured in a required manner
• It is the process of checking the operation of the
software with all various types of hardware.
• e.g. for applications to run on Windows-based
PC used in homes and businesses.
• PCs from different manufacturers such as
Compaq, Dell, Hewlett-Packard, IBM, and others
• Components: disk drives, video, sound,
modem, and network cards
• Options and memory
• Device drivers
91. www.evologicsoftech.com
Compatibility testing
• Testing whether the system is compatible with
other systems with which it should communicate
• It means checking that your software interacts
with and shares information correctly with other
software
• e.g. What other software (operating systems,
web browsers etc.) is your software designed
to be compatible with?
• Upgrading to a new database program and
having all your existing databases still load correctly
92. www.evologicsoftech.com
Security testing
• It attempts to verify that protection
mechanisms built into a system will protect it
from improper penetration
• Security is a primary concern when
communicating and conducting business
especially business critical transactions
• Checking
– How does the website authenticate users?
– How does the website encrypt data?
– How safe is credit card or user information?
– How does the website handle access rights?
93. www.evologicsoftech.com
Installability
• To identify the ways in which the
installation procedures lead to incorrect
results
• Installation options
– New
– Upgrade
– Customized/Complete
– Under normal & abnormal conditions
• Important - Makes first impression on
the end user
96. www.evologicsoftech.com
Acceptance testing
• Acceptance testing is the process of
comparing the end product to the current
needs of its end users.
• It is usually performed by the customer or
end user after the testing group has
satisfactorily completed usability, function,
and system testing
• It usually involves running and operating
the software in production mode for a pre-
specified period
97. www.evologicsoftech.com
Acceptance testing
• If the software is developed under
contract, acceptance testing is performed
by the contracting customer. Acceptance
criteria are defined in the contract.
• If product is not developed under contract,
the developing organization can arrange
for alternative forms of acceptance testing
– ALPHA
– BETA
98. www.evologicsoftech.com
Acceptance Testing
• ALPHA and BETA testing are each employed as a
form of acceptance testing
• Often both are used, in which case BETA follows
ALPHA
• Both involve running and operating the software in
production mode for a pre-specified period
• The ALPHA test is usually performed by end users
inside the development organization. The BETA test
is usually performed by a selected subset of actual
customers outside the company, before the software
is made available to all customers
99. www.evologicsoftech.com
Alpha testing
• At developer site by Customer
• Developer
– Looking over the shoulder
– Recording errors, usage problems
• Controlled environment
100. www.evologicsoftech.com
Beta testing
• At one/ more customer sites by end user
• Developer not present
• “live” situation, developer not in control
• Customer records problems (real,
imagined) and reports to developer
102. www.evologicsoftech.com
Retesting
• Why retest?
– Because any software product that is actively
used and supported must be changed from
time to time, and every new version of a
product should be retested
104. www.evologicsoftech.com
Regression testing
• Regression testing is not another testing
activity
• It is a re-execution of some or all of the
tests developed for a specific testing
activity for each build of the application
• Verify that changes or fixes have not
introduced new problems
• It may be performed for each activity (e.g.
unit test, function test, system test etc)
106. www.evologicsoftech.com
Smoke Testing
• Smoke testing determines whether the
system is sufficiently stable and
functional to warrant the cost of
further, more rigorous testing.
• Smoke testing is also called Sanity
testing.
111. www.evologicsoftech.com
Equivalence Partitioning
• An equivalence class is a subset of data
that is representative of a larger class.
• Equivalence partitioning is a technique for
testing equivalence classes rather than
undertaking exhaustive testing of each
value of the larger class.
112. www.evologicsoftech.com
Equivalence Partitioning
• If we expect the same result from two
tests, we consider them equivalent. A
group of tests forms an equivalence class
if:
– They all test the same thing
– If one test catches a bug, the others probably
will too
– If one test doesn’t catch a bug, the others
probably won’t either
113. www.evologicsoftech.com
Equivalence Partitioning
• For example, a program which edits credit
limits within a given range ($10,000-
$15,000) would have three equivalence
classes:
– Less than $10,000 (invalid)
– Between $10,000 and $15,000 (valid)
– Greater than $15,000 (invalid)
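The three classes for this credit-limit example can be sketched as a small classifier with one representative test per partition; the function name and return strings are illustrative:

```python
def classify_credit_limit(amount):
    """Classify a credit limit against the $10,000-$15,000 valid range."""
    if amount < 10_000:
        return "invalid: below range"
    if amount > 15_000:
        return "invalid: above range"
    return "valid"

# Equivalence partitioning: one representative value per class
# stands in for exhaustive testing of every possible amount.
assert classify_credit_limit(5_000) == "invalid: below range"
assert classify_credit_limit(12_500) == "valid"
assert classify_credit_limit(20_000) == "invalid: above range"
```

If 12,500 passes, any other value inside the valid class would probably pass too, which is exactly the assumption equivalence partitioning makes.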
114. www.evologicsoftech.com
Equivalence Partitioning
• Partitioning system inputs and outputs into
‘equivalence sets’
– If the input is a 5-digit integer between 10,000
and 99,999, the equivalence partitions are
<10,000, 10,000-99,999 and >99,999
• The aim is to minimize the number of test
cases required to cover these input
conditions
115. www.evologicsoftech.com
Equivalence Partitioning
• Equivalence classes may be defined
according to the following guidelines:
– If an input condition specifies a range, one
valid and two invalid equivalence classes are
defined.
– If an input condition requires a specific value,
then one valid and two invalid equivalence
classes are defined.
– If an input condition is boolean, then one valid
and one invalid equivalence class are defined.
116. www.evologicsoftech.com
Equivalence Partitioning
Summary
• Divide the input domain into classes of data for which test cases can
be generated.
• Attempting to uncover classes of errors.
• Based on equivalence classes for input conditions.
• An equivalence class represents a set of valid or invalid states
• An input condition is either a specific numeric value, range of
values, a set of related values, or a Boolean condition.
• Equivalence classes can be defined by:
• If an input condition specifies a range or a specific value, one valid
and two invalid equivalence classes are defined.
• If an input condition specifies a Boolean or a member of a set, one
valid and one invalid equivalence class are defined.
• Test cases for each input domain data item are developed and executed.
118. www.evologicsoftech.com
Boundary value analysis
• A technique that consists of developing
test cases and data that focus on the input
and output boundaries of a given function.
• In same credit limit example, boundary
analysis would test:
– Low boundary plus or minus one ($9,999 and
$10,001)
– On the boundary ($10,000 and $15,000)
– Upper boundary plus or minus one ($14,999
and $15,001)
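The boundary probes listed above can be sketched directly in Python; `is_valid_limit` is an illustrative stand-in for the credit-limit check:

```python
def is_valid_limit(amount):
    """Illustrative check for the $10,000-$15,000 credit-limit range."""
    return 10_000 <= amount <= 15_000

# Boundary value analysis: probe each boundary and its neighbours,
# where off-by-one errors (< vs <=) most often hide.
for amount, expected in [
    (9_999, False), (10_000, True), (10_001, True),   # lower boundary
    (14_999, True), (15_000, True), (15_001, False),  # upper boundary
]:
    assert is_valid_limit(amount) is expected
```

A `<` accidentally written instead of `<=` would pass every mid-range partition test but fail immediately at 10,000 or 15,000.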
119. www.evologicsoftech.com
Boundary value analysis
• Large number of errors tend to occur at boundaries of the
input domain
• BVA leads to selection of test cases that exercise boundary
values
• BVA complements equivalence partitioning. Rather than
select any element in an equivalence class, select those at
the ‘edge’ of the class
• Examples:
1. For a range of values bounded by a and b, test (a-1), a,
(a+1), (b-1), b, (b+1)
2. If input conditions specify a number of values n, test with
(n-1), n and (n+1) input values
3. Apply guidelines 1 and 2 to output conditions (e.g., generate a
table of minimum and maximum size)
120. www.evologicsoftech.com
Example: Loan application
Field                   Validation rule
Customer name           2-64 characters
Account number          6 digits, 1st non-zero
Loan amount requested   £500 to £9000
Term of loan            1 to 30 years
Monthly repayment       Minimum £10

Outputs: Term, Repayment, Interest rate, Total paid back
121. www.evologicsoftech.com
Account number: number of digits 5 (invalid), 6 (valid), 7 (invalid);
first character zero (invalid), non-zero (valid)

Conditions       Valid partitions   Invalid partitions   Valid boundaries   Invalid boundaries
Account number   6 digits,          < 6 digits,          100000,            5 digits,
                 1st non-zero       > 6 digits,          999999             7 digits,
                                    1st digit = 0,                          0 digits
                                    non-digit
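The account-number rule (6 digits, first digit non-zero) and its partitions and boundaries can be sketched as a validator plus one test per table cell; the function name is illustrative:

```python
import re

def valid_account_number(text):
    """6 digits, first digit non-zero (the rule from the loan example)."""
    return re.fullmatch(r"[1-9]\d{5}", text) is not None

# Valid boundaries from the table:
assert valid_account_number("100000")
assert valid_account_number("999999")
# Invalid partitions and boundaries:
assert not valid_account_number("12345")    # 5 digits
assert not valid_account_number("1234567")  # 7 digits
assert not valid_account_number("")         # 0 digits
assert not valid_account_number("012345")   # first digit zero
assert not valid_account_number("12a456")   # non-digit
```

Each assertion corresponds to one partition or boundary from the table, so the table doubles as a coverage checklist for the test set.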
122. www.evologicsoftech.com
Error Guessing
• Based on the theory that test cases can be developed
based upon the intuition and experience of the Test
Engineer
• For example, where one of the inputs is a date, a test
engineer might try February 29, 2000, or 9/9/99
126. www.evologicsoftech.com
White-box methods for internal-
based tests
• There are three basic forms of logic
coverage:
– statement coverage
– decision (branch) coverage
– condition coverage
127. www.evologicsoftech.com
Statement coverage
Execute all statements at least once
• This is the weakest coverage criterion.
• It requires execution of every line of code at least once.
• Many lines check the value(s) of some variable(s) and
make decisions based on this. To check each of the
decision-making functions of the line, the programmer has
to supply different values, to trigger different decisions. As
an example consider the following:
IF (A < B AND C = 5)
    THEN DO SOMETHING
SET D = 5
128. www.evologicsoftech.com
Statement coverage
IF (A < B AND C = 5)
    THEN DO SOMETHING
SET D = 5
• To test these lines we should explore the
following cases:
a) A<B and C=5 (SOMETHING is done, then D is set to 5)
b) A<B and C!=5 (SOMETHING is not done, D is set to 5)
c) A>=B and C=5 (SOMETHING is not done, D is set to 5)
d) A>=B and C!=5 (SOMETHING is not done, D is set to 5)
129. www.evologicsoftech.com
Decision (branch) coverage
Execute each decision direction at least once
• For branch coverage, the program can
use case (a) and any one of the other
three.
• At a branching point a program does one
thing if a condition (such as A<B and C=5)
is true, and something else if the condition
is false. To test a branch, the programmer
must test once when the condition is true
and once when it’s false.
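The statement/branch distinction above can be sketched in Python; `update` is an illustrative stand-in for the pseudocode, returning the actions taken so a test can observe them:

```python
def update(a, b, c):
    actions = []
    if a < b and c == 5:
        actions.append("something")  # runs only when the condition holds
    d = 5                            # always executed
    actions.append(f"d={d}")
    return actions

# Case (a) alone gives statement coverage: every line executes once.
assert update(1, 2, 5) == ["something", "d=5"]

# Branch coverage additionally needs the condition to evaluate false;
# any one of cases (b)-(d) will do.
assert update(1, 2, 4) == ["d=5"]
```

Two well-chosen inputs achieve branch coverage here, while full condition coverage of `a < b and c == 5` would need the remaining combinations as well.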
134. www.evologicsoftech.com
Test Planning
• It is the process of defining a testing
project such that it can be properly
measured and controlled
• It includes test plan, test strategy, test
requirements and testing resources
135. www.evologicsoftech.com
Test Design
• It is the process of defining test
procedures and test cases that verify that
the test requirements are met
• Specify the test procedures and test cases
that are easy to implement, easy to
maintain and effectively verify the test
requirements.
136. www.evologicsoftech.com
Test Development
• It is the process of creating test
procedures and test cases that verify
the test requirements
• Automated Testing using Tools
• Manual Testing
137. www.evologicsoftech.com
Test basis
• The basis of a test is the source material (of the product
under test) that provides the stimulus for the test. In other
words, it is the area targeted as the potential source of
an error:
• Requirements-based tests are based on the
requirements document
• Function-based tests are based on the functional design
specification
• Internal-based tests are based on the internal design
specification or code.
• Function-based and internal-based tests will fail to detect
situations where requirements are not met. Internal-
based tests will fail to detect errors in functionality
138. www.evologicsoftech.com
Test
• An activity in which a system or
component is executed under specified
conditions, the result are observed or
recorded, and an evaluation is made of
some aspect of the system or component.
• A set of one or more test cases.
139. www.evologicsoftech.com
Two basic requirements for all
validation tests
• Definition of result
– A necessary part of a test case is a definition
of the expected output or result.
• Repeatability
140. www.evologicsoftech.com
Test Case
• A set of test inputs, execution conditions, and
expected results developed for a particular objective
• The smallest entity that is always executed as a unit,
from beginning to end
• A test case is a document that describes an input,
action, or event and an expected response, to
determine if a feature of an application is working
correctly
• A test case should contain particulars such as test
case identifier, test case name, objective, test
conditions/setup, input data requirements, steps,
and expected results
• Test case may also include prerequisites
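The particulars listed above can be sketched as a simple record type; the field names are illustrative, and the sample values anticipate the ATM scenario used later in this deck:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Fields follow the particulars listed on the slide.
    identifier: str
    name: str
    objective: str
    setup: str
    input_data: dict
    steps: list
    expected_result: str
    prerequisites: list = field(default_factory=list)

tc = TestCase(
    identifier="ATM-001",
    name="Withdraw beyond cleared funds",
    objective="Verify error 63 on insufficient cleared funds",
    setup="Account balance $150; cheque for $1000 deposited",
    input_data={"withdrawal": 200},
    steps=["Deposit cheque", "Attempt withdrawal of $200"],
    expected_result="Error message 63 displayed",
)
assert tc.identifier == "ATM-001"
```

Keeping test cases in a structured form like this makes them easy to store, filter, and cross-reference against defect reports.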
141. www.evologicsoftech.com
Test Case
• A "test case" is another name for "scenario". It is a
particular situation to be tested, the objective or goal of
the test script. For example, in an ATM banking
application, a typical scenario would be: "Customer
deposits cheque for $1000 to an account that has a
balance of $150, and then attempts to withdraw $200".
• Every Test Case has a goal; that is, the function to be
tested. Quite often the goal will begin with the words "To
verify that ..." and the rest of the goal is a straight copy of
the functional test defined in the Traceability Matrix. In
the banking example above, the goal might be worded
as "To verify that error message 63 ('Insufficient cleared
funds available') is displayed when the customer
deposits a cheque for $1000 to an account with a
balance of $150, and then attempts to withdraw $200".
142. www.evologicsoftech.com
Test Case Components
• The structure of test cases is one of the things that stays remarkably the
same regardless of the technology being tested. The conditions to be tested
may differ greatly from one technology to the next, but you still need to know
three basic things about what you plan to test:
• ID #: This is a unique identifier for the test case. The identifier does not
imply a sequential order of test execution in most cases. The test case ID
can also be intelligent. For example, the test case ID of ORD001 could
indicate a test case for the ordering process on the first web page.
• Condition: This is an event that should produce an observable result. For
example, in an e-commerce application, if the user selects an overnight
shipping option, the correct charge should be added to the total of the
transaction. A test designer would want to test all shipping options, with
each option giving a different amount added to the transaction total.
143. www.evologicsoftech.com
Test Case Components
• Procedure: This is the process a tester needs to perform to invoke
the condition and observe the results. A test case procedure should
be limited to the steps needed to perform a single test case.
• Expected Result: This is the observable result from invoking a test
condition. If you can’t observe a result, you can’t determine if a test
passes or fails. In the previous example of an e-commerce shipping
option, the expected results would be specifically defined according
to the type of shipping the user selects.
• Pass/Fail: This is where the tester indicates the outcome of the test
case. For the purpose of space, I typically use the same column to
indicate both "pass" (P) and "fail" (F). In some situations, such as
the regulated environment, simply indicating pass or fail is not
enough information about the outcome of a test case to provide
adequate documentation. For this reason, some people choose to
also add a column for "Observed Results."
• Defect Number Cross-reference: If you identify a defect in the
execution of a test case, this component of the test case gives you a
way to link the test case to a specific defect report.
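The five components above map naturally onto a simple record type. This is a minimal sketch, not a prescribed format; the field names mirror the slides and the sample values are illustrative.

```python
from dataclasses import dataclass, field

# One way to capture the test case components in code.
@dataclass
class TestCase:
    case_id: str            # unique, possibly "intelligent" (e.g. ORD001)
    condition: str          # the event that should produce a result
    procedure: list         # steps needed to invoke the condition
    expected_result: str    # the observable outcome
    outcome: str = ""       # "P" or "F", filled in at execution time
    defect_refs: list = field(default_factory=list)  # defect cross-references

tc = TestCase(
    case_id="ORD001",
    condition="User selects the overnight shipping option",
    procedure=["Add item to cart", "Choose 'Overnight' at checkout"],
    expected_result="Overnight charge is added to the transaction total",
)
tc.outcome = "P"            # recorded by the tester after execution
```

In a regulated environment you would also populate an "Observed Results" field rather than relying on the pass/fail flag alone, as the slide notes.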
144. www.evologicsoftech.com
• Sample Business Rule
• A customer may select one of the following options for
shipping when ordering products. The shipping cost will
be based on product price before sales tax and the
method of shipment according to the table below.
• If no shipping method is selected, the customer receives
an error message, "Please select a shipping option." The
ordering process cannot continue until the shipping
option has been selected and confirmed.
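The business rule above can be sketched as a small function. The rate table is invented for illustration, since the slide's actual table is not reproduced here; only the two behaviours stated in the rule (cost based on pre-tax price and method, error when no method is selected) are taken from the text.

```python
# Minimal sketch of the shipping business rule. The rates are assumed
# values; the slide's real table is not available.
SHIPPING_RATES = {            # method -> fraction of pre-tax price (assumed)
    "standard": 0.05,
    "overnight": 0.15,
}

def shipping_cost(pre_tax_price, method):
    if method is None:
        # Ordering cannot continue without a selected method.
        raise ValueError("Please select a shipping option.")
    return round(pre_tax_price * SHIPPING_RATES[method], 2)

# A test designer would exercise every shipping option plus the error path:
assert shipping_cost(100.00, "standard") == 5.00
assert shipping_cost(100.00, "overnight") == 15.00
```

Each shipping option becomes its own test condition, with a distinct expected result, exactly as described in the "Condition" component above.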
145. www.evologicsoftech.com
Characteristics of a good test
• An excellent test case satisfies the
following criteria:
– It has a reasonable probability of catching an
error
– It is not redundant
– It’s the best of its breed
– It is neither too simple nor too complex
146. www.evologicsoftech.com
Test Execution
• It is the process of running a set of
test procedures against a target
software build of the application under
test and logging the results.
• Automation
148. www.evologicsoftech.com
What should our overall
validation strategies be?
• Requirements-based tests should employ the
black-box strategy. User requirements can be
tested without knowledge of the internal design
specification or the code.
• Function-based tests should employ the black-
box strategy. Using the functional-design
specification to design function-based tests is
both necessary and sufficient.
• Internal-based tests must necessarily employ
the white-box strategy. Tests are formulated
using the internal design specification and the code.
153. www.evologicsoftech.com
How many testers does it take
to change a light bulb?
• None. Testers just noticed that the room
was dark. Testers don't fix the problems,
they just find them
154. www.evologicsoftech.com
What is Testing?
• Objective of testing is to find all possible
bugs (defects) in a work product
• Testing is the process of trying to discover
every conceivable fault or weakness in a
work product
155. www.evologicsoftech.com
The purpose of finding defects
• The purpose of finding defects is to get
them fixed
• The prime benefit of testing is that it
results in improved quality. Bugs get fixed
156. www.evologicsoftech.com
What exactly Does a Software
Tester Do?
• The Goal of a software tester is to find
defects,
• and find them as early as possible,
• and make sure they get fixed.
• The best tester isn’t the one who finds the
most bugs or who embarrasses the most
programmers. The best tester is the one
who gets the most bugs fixed.
157. www.evologicsoftech.com
What Do You Do When You
Find a defect?
• Report a defect
• The point of writing Problem Reports is to
get bugs fixed.
- Testing Computer Software
158. www.evologicsoftech.com
What is definition of defect?
• A flaw in a system or system component
that causes the system or component to
fail to perform its required function.
- SEI
- A defect, if encountered during execution,
may cause a failure of the system.
159. www.evologicsoftech.com
Some typical defect report fields
• Summary
• Date reported
• Detailed description
• Assigned to
• Severity
• Detected in Version
• Priority
• System Info
• Status
• Reproducible
• Detected by
• Screen prints, logs,
etc.
162. www.evologicsoftech.com
Who reads our defect reports?
• Project Manager
• Executives
• Development
• Customer Support
• Marketing
• Quality Assurance
• Any member of the Project Team
165. www.evologicsoftech.com
Downstream effects of a poorly
written subject line
# | ID | Status | Build | Severity | Priority | Subject | Entered By
1 | 310103 | Open | 6.6.00 | 2 - High | 2 - High | Login box problem |
2 | 310174 | Open | 6.5.20 | 3 - Medium | 3 - Medium | Result Pane |
3 | 310154 | Open | 6.5.20 | 2 - High | 1 - High | Admin modiule - Cannot creat a new employee record |
166. www.evologicsoftech.com
What may happen to these
defects?
• What if your Manager or Team Lead is in a
meeting where they are looking at
hundreds of defects for possible deferral?
• The product team cannot make the right
decisions about this software error
169. www.evologicsoftech.com
Who reads the description part
of the defect report?
• Development
• Project Manager
• Customer Support
• Quality Assurance
• Any member of the Project Team
172. www.evologicsoftech.com
What’s missing in these
descriptions?
• The product build that the error was found
on.
• Operating System that it was tested on.
• Steps to reproduce.
174. www.evologicsoftech.com
How serious is the defect?
Sev. | Description | Criteria
1 | Show Stopper | Core dumps; inability to install/uninstall the
product; product will not start; product hangs or Operating
System freezes; no workaround is available; data corruption;
product abnormally terminates
2 | High | Workaround is available; function is not working
according to specifications; severe performance degradation;
critical to a customer
3 | Medium | Incorrect error messages; incorrect data;
noticeable performance inefficiencies
4 | Low | Misspellings; grammatical errors; enhancement
requests; cosmetic flaws
175. www.evologicsoftech.com
How to decide priority?
Priority | Description | Criteria
1 | Show Stopper | Immediate fix; blocks further testing; very visible
2 | High | Must fix before the product is released
3 | Medium | Should fix if time permits
4 | Low | Would like to fix, but can be released as is
177. www.evologicsoftech.com
An example of a bug:
• Jane Doe is typing a letter using a word
processor program. The program is
version 2.01 and runs only on Windows
NT. Jane had typed something in
incorrectly and pressed Escape. When
she pressed Escape the program exited.
Jane opened the program again, only to
find that ALL of her work was gone.
Annoyed with what happened Jane got up
& walked away.
178. www.evologicsoftech.com
Investigate the bug further?
• Repeat the scenario & see if the user is
prompted to save their work?
• What happens if you open a file & make
changes and then press Escape? Are you
prompted to save?
• What if you are typing text in and then you open
a file on top? Does your work get replaced with
the file that was just opened?
179. www.evologicsoftech.com
What is the bug?
• Jane was not prompted to save her work.
When she pressed Escape, all of it was
lost when the program exited.
181. www.evologicsoftech.com
What part is the Failure?
• The user is not prompted to save
changes, nor are they automatically
saved. All of the changes are lost.
182. www.evologicsoftech.com
Every bug has a condition and a
failure
• Condition – is usually the circumstances
or the steps that were executed prior to
the bug appearing.
• Failure – is the bug itself.
183. www.evologicsoftech.com
What we need to write in the
defect report
• What are the conditions and the Failure?
• Use the Failure info. for your Subject line
• Steps to reproduce the Software Error
• Determine the Severity & Priority
• Include the OS & build number
• Information on other conditions tested
186. Change Request Life Cycle
• Submitted – change request submitted to the database
• Assigned – change request is assigned to an engineer; the
request is open (or remains UnAssigned)
• Resolved – change request is resolved in some approved way:
Fix, Reject, Postponed, or Duplicated
• Validated – outcome of the change request is verified; the
request is closed
191. www.evologicsoftech.com
Test Plan Objectives
• To identify the items that are subject to testing
• To communicate, at a high level, the extent of
testing
• To define the roles and responsibilities for test
activities
• To provide an accurate estimate of effort
required to complete the testing defined in the
plan
• To define the infrastructure and support
required.
192. www.evologicsoftech.com
Why Planning and not the Plan
• The test plan is simply a by-product of
the detailed planning process that’s
undertaken to create it. It’s the planning
process that matters, not the resulting
document.
• The ultimate goal of the test planning
process is communicating the software
test team’s intent, its expectations, and
its understanding of the testing that’s to
be performed.
193. www.evologicsoftech.com
Why Planning and not the Plan
• The result of the test planning process must
be a clear, concise, agreed-on definition of
the product’s quality and reliability goals
• Defines the scope and general directions for
a testing project
• Describe and justify test strategy
• The Software test plan is the primary means
by which software testers communicate to
the product development team what they
intend to do
194. www.evologicsoftech.com
Test plan - Outline
• Test-plan identifier
• Introduction
• Test items
• Features to be tested
• Features not to be
tested
• Approach
• Item pass/fail criteria
• Suspension criteria
and resumption
• Test deliverables
• Testing tasks
• Environmental needs
• Responsibilities
• Staffing and training
needs
• Schedule
• Risks and
contingencies
• Approvals
Source: ANSI/IEEE Std 829-1998, Test Documentation
195. www.evologicsoftech.com
Test Plan Details
1. Test-Plan identifier
– Specify unique identifier
2. Introduction
– Objectives, background, scope, and
references are included.
– Summarize the software items and software
features to be tested. The need for each item
and its history may be included.
196. www.evologicsoftech.com
Test Plan Details
3. Test items
- Identify the test items including their
version/revision level.
- Supply references to the following item
documentation, e.g.,
requirements specification, design
specification, user guide, operations guide,
installation guide.
- Reference any incident reports relating to
the test items.
197. www.evologicsoftech.com
Test Plan Details
4. Features to be tested
- Identify all software features and
combinations of software features to be tested.
- Identify the test-design specification
associated with each feature.
5. Features not to be tested
- Identify features and significant combinations
of software features which will not be tested,
and the reasons.
198. www.evologicsoftech.com
Test Plan Details
6. Approach
- Describes the overall approach to testing.
Specify the major activities, techniques, and
tools which are used to test the designated
groups of features.
- The approach should be described in
sufficient detail to permit identification of major
testing tasks and estimation of the time required
to do each one.
- Specify any additional completion criteria and
constraints.
199. www.evologicsoftech.com
Test Plan Details
7. Item pass/fail criteria
- Specify the criteria to be used to determine
whether each item has passed or failed
testing.
8. Suspension criteria and resumption
requirements
- Criteria used to suspend all or a portion of the
testing activity.
- Specify the testing activities which must be
repeated when testing is resumed.
200. www.evologicsoftech.com
Test Plan Details
9. Test deliverables
- Identify the deliverable documents, such as:
- test plan
- test design specifications
- test case specifications
- test procedure specifications
- test logs
- test incident reports
- test summary reports
- test input data and test output data
- test tools (e.g., modules, drivers, and stubs)
201. www.evologicsoftech.com
Test Plan Details
10. Testing tasks
- Identify the set of tasks necessary to
prepare for and perform testing. Identify
all intertask dependencies and any
special skills required.
203. www.evologicsoftech.com
Test Plan Details
12. Responsibilities
- Identify the groups responsible for
managing, designing, preparing,
executing, checking, and resolving.
- These groups may include testers,
developers, operations staff, user
representatives, and the technical support team.
205. www.evologicsoftech.com
Test Plan Details
14. Schedule
- Estimate the time required to do each
testing task.
- Specify the schedule for each testing
task.
- For each resource (i.e., facilities, tools,
and staff), specify its period of use.
206. www.evologicsoftech.com
Test Plan Details
15. Risks and contingencies
- Identify the high-risk assumptions of the
test plan. Specify contingency plans for
each.
- E.g., delayed delivery of test items might
require increased night-shift scheduling.
208. www.evologicsoftech.com
Example – Payroll system
• The system is used to perform following
major functions
– Maintain employee information
– Maintain payroll history information
– Prepare payroll checks
– Prepare payroll tax reports
– Prepare payroll history reports
209. www.evologicsoftech.com
Plan
Identifier – TP1
Introduction-
1. Objective –
i. To detail the activities required to
prepare for and conduct the system test.
ii. To communicate to all responsible
parties the tasks which they are to perform
and the schedule to be followed in performing
the tasks.
iii. To define the sources of the
information used to prepare the plan.
iv. To define the test tools and
environment needed to conduct the system
test.
211. www.evologicsoftech.com
Test items
• All items which make up the corporate
payroll system.
• Documents which provide basis for
defining correct operation
– Requirements documents
– Design description
– Reference manual
• Items to be tested are
– Program modules, user
procedures, operator procedures
212. www.evologicsoftech.com
Features to be tested
• Database conversion
• Complete payroll processing for
salaried employees only
• For hourly employees only
• For all employees
• Periodic reporting
• Security
• Recovery
• Performance
213. www.evologicsoftech.com
Features not to be tested
• Certain reports will not be tested because
they are not to be used when the
system is initially installed.
• e.g.,
– Training schedule reports
– Salary reports
214. www.evologicsoftech.com
Approach
• The test personnel will use the system
documentation to prepare all test
design, case, and procedure
specifications.
• Personnel from the payroll accounting
departments will assist in developing
the test designs and test cases.
215. www.evologicsoftech.com
Testing done will be for
the following :
• Verification of the converted database using a "database
auditor" - for checking value ranges
within a record & required relationships
between the records.
• Counting input & output records
• Payroll processing - using sets of employee
records: salaried employees, hourly
employees, and a merged set of these two.
• Security testing - attempted access
without a proper password to
- Online data entry &
216. www.evologicsoftech.com
Testing done will be for
the following :
• Recovery testing
• Performance testing
– Performance will be evaluated against the
performance requirements by measuring the run
times of several jobs using production data
volumes.
• Regression
– Regression test will be done on a new version to
test the impacts of the modifications
217. www.evologicsoftech.com
Item pass/fail criteria
• The system must satisfy standards
requirements for system Pass/Fail stated
in development standards & procedures
• The system must also satisfy memory
requirements – must not be greater than
64K
• Entry criteria – integration testing must be
completed.
• Exit criteria – execution of all test cases
and completion of testing.
218. www.evologicsoftech.com
Suspension Criteria &
Resumption Requirements
• SUSPENSION: e.g., inability to convert the
employee information database will cause
suspension of all activities; show-stopper
defects.
• RESUMPTION: After suspension has
occurred, a regression test will be run.
Arrival of a build with fixed defects.
219. www.evologicsoftech.com
Test deliverables
The following documents will be generated by
the system test group & will be delivered to the
configuration management group
DOCUMENTATION:
• System test plan
• System test design specifications
• System test case specifications
• System test logs
• System test incident report log
• System test incident reports
• System test summary
223. www.evologicsoftech.com
Staffing & training needs
The following staff are needed.
Test group:
• Test Manager 1
• Senior Test Analyst 1
• Test Analyst 2
• Test Technician 1
Payroll supervisor: 1
TRAINING:
The corporate payroll department personnel
must be trained to do the data entry
transactions.
225. www.evologicsoftech.com
Risks & contingencies
• If the testing schedule is significantly
impacted by system failure, the
development manager has agreed to
assign a full-time person to the test group to
do debugging
228. www.evologicsoftech.com
Test Reporting
• Test reports are a means of
communicating test status and findings
both within the project team and to
management
• Test reports are a key mechanism available
to the test manager to communicate the
value of testing to the project team and IS
management.
229. www.evologicsoftech.com
Test Reporting
• Reports usually focus on the test work products
produced, defects identified during the test
process, the efficiency of the tests, and test status.
• The frequency of test reports should be at the
discretion of the team and should reflect the
extensiveness of the test process. Generally, large
testing projects will require much more interim
reporting than small test projects with a very
limited test staff.
230. www.evologicsoftech.com
Test Reporting
• Requirements Tracing
• Function Test Matrix
• This maps the system functions to the test case that
validates that function
• The function test matrix shows which tests must be
performed in order to validate the functions. This matrix
will be used to determine what tests are needed, and the
sequencing of tests. It will also be used to determine the
status of testing
231. www.evologicsoftech.com
• In a large testing project, it is easy to lose track
of what has been tested and what should be
tested. The question often arises, "Is the testing
comprehensive enough?"
• A simple way of determining what to test is to go
through the source documents (Business
Requirements, Functional Specifications,
System Design Document, etc.) paragraph by
paragraph and extract each requirement. A
simple matrix is built, with the following format:
232. www.evologicsoftech.com
Test Assets
• As we develop our software we also
create test assets/ test ware such as
– Test Plan
– Test Cases
– Test Data
– Test Results
– Defect Reports
233. www.evologicsoftech.com
Source Document | Section | Requirement | Test Case
Func Spec | 3.1 | An account number must be entered, of 8 digits
and a 1-digit checksum. Invalid account numbers must be
diagnosed with an appropriate error message. | 147
Func Spec | 3.1 | The account number must exist in the Customer
Database; if not, an appropriate error message must be
displayed. | 148
Func Spec | 3.2 | Given a valid account number, the customer
details must be retrieved from the Customer Database and
displayed on the CD-102 screen form. | 149
234. www.evologicsoftech.com
• Basically, the Traceability Matrix relates each functional requirement to a
specific test case. Thus if someone says, "Did we test that the account
numbers must be valid?", the matrix indicates which test case does that
test.
• The Traceability Matrix is created before any test cases are written, because
it is a complete list of what has to be tested.
• Sometimes there is one test case for each requirement; other times, several
requirements can be validated by one longer test case.
• Frequently, there are NO business requirements available; although this is a
major mistake (as no one has documented what the business needs), such
is life. In this case there won't be a source reference document, so there is
no "traceability"; the matrix is then simply called a "Test Matrix".
235. www.evologicsoftech.com
Requirements Tracing/ Function Test Matrix
• The test case numbers can be color
coded or coded with number or symbol to
indicate the following:
• Black: Test not yet developed
• Blue: Test developed and executed
• Red: Test developed but not executed
237. www.evologicsoftech.com
Functional Testing Status
Reporting
• Objective: The purpose of this report is to present what functions have been
fully tested, what functions have been tested but contain errors, and what
functions have not been tested. The report will include 100 percent of the
functions to be tested in accordance with the test plan
• Example: A sample of this test report showing that 50 percent of the
functions tested have errors, 40 percent are fully tested, and 10 percent of
the functions have not been tested is illustrated in the above graph
• How to Interpret the Report: The report is designed to show status. It is
intended for the test manager and/or customer of the software system. The
interpretation will depend heavily on the point in the test process at which
the report is prepared. As the implementation date approaches, a high
number of functions tested with uncorrected errors, plus functions not
tested, would raise concerns about meeting the implementation date.
239. www.evologicsoftech.com
Functions Working Timeline
• Objective: The purpose of this report is to show the status of testing
and the probability that the development and test groups will have
the system ready on the projected implementation date
• The example of the functions working timeline (Figure) shows the
normal projection for having functions working. This report assumes
a September implementation date and shows, from January through
September, the percent of functions that should be working correctly
at any point in time. The actual line shows that the project is doing
better than projected.
• How to Interpret the Report: If the actual is performing better than
the planned, the probability of meeting the implementation date is
high. On the other hand, if the actual percent of functions working is
less than planned, both the test manager and development team
should be concerned and may want to extend the implementation
date or add additional resources to testing and/or development.
241. www.evologicsoftech.com
Defects Uncovered versus
Corrected Gap Timeline
• Objective: The purpose of this report is to show the backlog of detected but
uncorrected defects. It merely requires recording defects when they have
been detected, and recording them again when they have been successfully
corrected.
• Example: The example in the graph shows a project beginning in January
with a projected September implementation. One line shows the cumulative
number of defects uncovered in test, and the second line shows the
cumulative number of defects corrected by the development team, which
have been retested to demonstrate their correctness. The gap then
represents the number of uncovered but uncorrected defects at any point in
time.
• How to interpret the Report: The ideal project would have a very small gap
between these two timelines. If the gap becomes large, it is indicative that
the backlog of uncorrected defects is growing and the probability of the
development team correcting them prior to implementation date is
decreasing. The development team needs to manage this gap to ensure
that it remains minimal.
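The gap described above is simply cumulative-found minus cumulative-fixed at each reporting point. The monthly figures below are invented for illustration; only the computation reflects the report's definition.

```python
# Cumulative defect counts per month (Jan-Jun); the numbers are
# illustrative, not from any real project.
found = [5, 18, 40, 70, 95, 110]   # cumulative defects uncovered in test
fixed = [2, 12, 30, 55, 80, 100]   # cumulative defects corrected and retested

# The backlog of detected-but-uncorrected defects at each point in time.
gap = [f - c for f, c in zip(found, fixed)]
```

Here `gap` works out to [3, 6, 10, 15, 15, 10]: the backlog grows through April, then shrinks as the team catches up, which is the healthy pattern the report looks for as the implementation date approaches.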
244. www.evologicsoftech.com
Defect Distribution Report
• Objective: The purpose of this report is to show how defects are distributed
among the modules/units being tested. It shows the total cumulative defects
uncovered for each module being tested at any point in time.
• Example: The defect distribution report example shows eight units under
test and the number of defects that have been uncovered in each of those
units to date. The report could be enhanced to show the extent of testing
that has occurred on the modules. For example, it might be color coded by
the number of tests, or the number of tests might be incorporated into the
bar as a number, such as the number 6 for a unit that has undergone six
tests at the point in time that the report was prepared.
• How to Interpret this Report: This report can help identify modules that
have an excessive defect rate. A variation of the report could show the
cumulative defects by test. For example, the defects uncovered in test 1,
the cumulative defects uncovered by the end of test 2, the cumulative
defects uncovered by test 3, and so forth. Modules that have abnormally
high defect rates frequently have an ineffective architecture, and are
candidates for a rewrite rather than additional testing.
248. www.evologicsoftech.com
Why not just "test everything"?
• Assume a system with 20 screens, an average of 4 menus with
3 options per menu, 10 fields per screen, 2 input types per field
(a date as Jan 3 or 3/1; a number as integer or decimal), and
around 100 possible values per field.
• Total for 'exhaustive' testing:
20 x 4 x 3 x 10 x 2 x 100 = 480,000 tests
• If 1 second per test: 8000 mins, 133 hrs, 17.7 working days
(not counting finger trouble, faults or retest)
• At 10 secs per test: 34 wks; at 1 min: 4 yrs; at 10 min: 40 yrs
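The slide's arithmetic is easy to reproduce. The calendar conversions come out only if you count working time (roughly 7.5-hour days and 40-hour weeks), which appears to be the slide's assumption.

```python
# Reproducing the "test everything" arithmetic from the slide.
screens, menus, options = 20, 4, 3
fields, input_types, values = 10, 2, 100

tests = screens * menus * options * fields * input_types * values  # 480,000

seconds = tests * 1              # at 1 second per test
minutes = seconds / 60           # 8000 minutes
hours = minutes / 60             # ~133 hours
work_days = hours / 7.5          # ~17.7 working days (7.5-hour days assumed)

# At 10 seconds per test, in 40-hour working weeks (~33-34 weeks):
weeks_at_10s = tests * 10 / (40 * 3600)
```

Even this toy model ignores sequences of inputs and internal state, so the real number of distinct behaviours is far larger still, which is why exhaustive testing is infeasible and risk-based selection is needed.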
249. www.evologicsoftech.com
How much to test?
• [Graph: cost of testing and number of missed bugs plotted
against the amount of testing. Too little testing leaves many
missed bugs; too much testing drives up the cost of testing.
The optimal amount of testing lies between under-testing and
over-testing.]
250. www.evologicsoftech.com
How much testing?
• It depends on RISK
– risk of missing important faults
– risk of incurring failure costs
– risk of releasing untested or under-tested
software
– risk of losing credibility and market share
– risk of missing a market window
– risk of over-testing, ineffective testing
251. www.evologicsoftech.com
Testing addresses risk
• So, risks help us both to determine what to
test and how much to test. It is risk that
helps the tester to prioritize the tests which
WILL be run above all the tests which
COULD be run.
252. www.evologicsoftech.com
What is Risk?
• Risk can be defined as a combination of
the likelihood of a problem occurring, and
the impact it would have on the user
260. www.evologicsoftech.com
When should you stop testing?
• When the desired number of test cases have
been executed
• All identified defects have been addressed
• When the cost of testing outweighs the potential
cost of not fixing a bug
• When you are confident that the system works
correctly
• It depends on the risks for your system
264. www.evologicsoftech.com
How can World Wide Web sites
be tested?
• Web applications are a special kind of
client/server application
• What additional tests are required?
265. www.evologicsoftech.com
e-Business Application Testing
Challenges
• Content is dynamic
–Personalization means every user gets a
different page
• Wide variety of browsers and platforms
–Every environment has potential problems
• The application keeps changing
–New builds every day!
267. www.evologicsoftech.com
Common Web Site Testing
Objectives
General page layout
• frames
• images
• tables
Functional testing
• of each transaction
• using different sets of valid data
• across different browsers
Regression testing
• hardware and software upgrades
• web site enhancements
270. www.evologicsoftech.com
Functional Testing
(verify the web site works across different
browsers)
• Create a new account
• Create an order
• View shopping cart
• Delete an order
(the same set of transactions is repeated in each
supported browser)
274. www.evologicsoftech.com
Testing Tools
• Capture/Playback - Capture user interaction with
application, playback and compare against a
baseline (Automated functional testing tools)
• Test Management: Create test documents (test
plans, cases), track test execution progress
during test cycle
• Performance/Stress Test: Measure performance
of application under expected load and stress
conditions.
• Defect Tracking: Report errors found during test
execution, track through fix and re-test.
275. www.evologicsoftech.com
Why Use Testing Tools?
No Testing / Manual Testing
• Time consuming
• Low reliability
• Requires human resources
• Inconsistent
Automated Testing
• Speed
• Repeatability
• Coverage
• Reliability
• Reusability
• Programming capabilities
• Cost
278. www.evologicsoftech.com
CMM- Capability Maturity
Model
• It is an industry-standard model for
defining and assessing the maturity of a
software company’s development
process .
• It was developed by the SEI (Software
Engineering Institute) at Carnegie
Mellon University, under the direction of the
U.S. DoD.
279. www.evologicsoftech.com
CMM levels
• Its 5 levels provide a simple means to
assess a company's software
development maturity and determine
the key process areas it could
improve to move up to the next level of
maturity.
• Level 1: Initial – Ad hoc and chaotic
process. A project’s success depends
on heroes and luck. Unpredictable and
poorly controlled.
280. www.evologicsoftech.com
CMM levels
• Level 2 : Repeatable – Project level thinking.
Can repeat previously mastered tasks.
• Level 3 : Defined – Organizational level
thinking. Process characterized, fairly well
understood and standardized.
• Level 4 : Managed – Process measured and
controlled. Process is under statistical control.
Product quality is specified quantitatively
beforehand.
• Level 5 : Optimizing – focus on process
improvement. New technologies and
processes are attempted.
281. www.evologicsoftech.com
Verifying requirements
• To ensure that users’ needs are properly
understood before translating them into
design.
• Written from customer or market perspective.
• Properties of good requirements
specifications are:
– Precise, unambiguous, and clear
– Consistent
– Relevant
– Testable - i.e. measurable
– Traceable
– Achievable
282. www.evologicsoftech.com
Verifying the functional design
• Functional design is the process of translating
user requirements into the set of external
interfaces.
• Written from an engineering perspective
• Checklist for functional design specification
covers certain items like
– Check if each requirement has been implemented.
– What’s missing
– Watch for vague terms like – some, sometimes,
often, mostly, most etc.
283. www.evologicsoftech.com
Verifying the internal design
• Internal design is the process of translating the
functional specification into detailed set of data
structures, data flows, and algorithms.
• Internal design checklist covers items like
– Does the design document contain a description
of the procedure that was used to do preliminary
design?
– Is there a model of the user interface to the
computing system?
– Is there a high-level functional model of the
proposed computing system?
284. www.evologicsoftech.com
Verifying the code
• Coding is the process of translating the
detailed design specification into a specific
set of code.
• Verifying the code involves the following
activities
– Comparing the code with the design specification
– Examine the code against a language-specific
checklist
• Code checklist – sample items
– Data reference errors – uninitialized variables
– Data declaration errors – variables with similar
names
– Computation errors – target variable type smaller
than the right hand expression
285. www.evologicsoftech.com
Coverage
• How do we measure how thoroughly
tested a product is?
– The measure of "testedness" for a software
product is the degree to which the collective
set of test cases achieves:
• Requirements coverage
• Function coverage
• Logic coverage
287. www.evologicsoftech.com
Statement coverage
• It is determined by assessing the
proportion of statements visited by a set
of proposed test cases.
• 100% statement coverage is where
every statement in the program is
visited by at least one test.
• Disadvantage
– It is insensitive to some control structures.
– It does not report whether loops reach their
termination condition.
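A minimal Python sketch (instrumented by hand, names invented for illustration) shows both the idea and the disadvantage: one test can visit every statement yet leave a whole path untried.

```python
executed = set()

def absolute(x):
    executed.add("assign")        # statement 1
    result = x
    if x < 0:                     # decision
        executed.add("negate")    # statement 2
        result = -x
    executed.add("return")        # statement 3
    return result

# A single negative input visits every statement: 100% statement coverage...
absolute(-5)
# ...yet the fall-through path for x >= 0 was never exercised, so a fault
# on that path would escape a suite chosen for statement coverage alone.
```

In practice a coverage tool does the instrumentation, but the blind spot is the same.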
288. www.evologicsoftech.com
Decision/Branch coverage
• It is determined by assessing the
proportion of decision branches
exercised by the set of proposed test
cases. 100% branch coverage is where
every decision branch in the program is
visited by at least one test.
• Disadvantage – it ignores branches
within Boolean expressions which occur
due to short-circuit operators.
289. www.evologicsoftech.com
Condition coverage
• It is determined by assessing that each
condition in a decision takes all
possible outcomes at least once.
• It measures the sub-expressions
independently of each other.
• Has better sensitivity to the control flow
than decision coverage.
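A small Python sketch (hypothetical function, invented for illustration) contrasts the two criteria on a compound decision:

```python
def grant_access(is_admin, has_token):
    if is_admin or has_token:
        return True
    return False

# Branch coverage: these two cases take the decision both ways (100% branch),
# yet 'has_token' is never True, so a fault in that sub-expression would hide.
branch_suite = [(True, False), (False, False)]

# Condition coverage additionally requires each sub-expression to be both
# True and False at least once, forcing a case such as (False, True).
condition_suite = branch_suite + [(False, True)]
```

This is why condition coverage is more sensitive to the control flow than decision coverage alone.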
294. www.evologicsoftech.com
Equivalence Partitioning
• Naturally, we should have reason to believe that
test cases are equivalent. Tests are often lumped
into the same equivalence class when:
– They involve the same input variable
– They result in similar operations in the program
– They affect the same output variables
– None force the program to do error handling or all of
them do.
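As a hypothetical sketch of the technique (the validator and its bounds are invented for illustration), one representative input stands in for every member of its class:

```python
def classify_age(age):
    if age < 0 or age > 120:
        return "invalid"   # error-handling class
    return "adult" if age >= 18 else "minor"

# One representative per equivalence class; if it passes, we assume
# the rest of the class behaves the same way.
representatives = {
    "invalid-low": -1,     # all negative ages behave alike
    "minor": 10,           # all ages 0..17
    "adult": 40,           # all ages 18..120
    "invalid-high": 200,   # all ages above 120
}
```

Note that the two "invalid" classes are kept separate from the valid ones: none of the valid representatives force error handling, and both invalid ones do.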
Let me tell you a funny story,
The above is a rather old funny story found on the www, but current reality is not very different.
As per a recent study on the economic impact of an inadequate infrastructure for software testing in the U.S., the annual cost to financial services from inadequate software testing is estimated at $3.3 billion. The total national annual cost of an inadequate infrastructure for software testing is estimated at $59.5 billion.
Electronic copies of NIST Planning Report 02-3, The Economic Impacts of Inadequate Infrastructure for Software Testing, can be obtained from www.nist.gov/director/prog-ofc/report02-3.pdf.
Jupiter Communications surveyed twenty-four hundred people, asking about their reaction to technical problems. Forty-six percent of the people reported that they left the website due to technical or performance problems.
Twenty-four percent of them came back, but only after establishing relationships with your competitors, so in the long term their value to you as a customer is greatly diminished. Okay, you're sharing them with your competitor now. And twenty-two percent never came back at all: 13% left after completing the session and never returned, and nine percent left right away and didn't even complete their transactions.
So you get one chance in the Internet economy. It's important that you make the most of it. And there's a lot of supporting data that coincides with it. Forrester, for example, did a survey and found that fifty-eight percent of the people reported leaving web sites due to technical or performance problems.
Another very interesting one is the Net Effect survey, which found that fifty-seven percent of the people abandoned their electronic shopping carts and left them there, and only a third of the people actually completed their purchase. So just picture this in your mind: you walk into Walmart, and out of every three people in the store that day, two of them left shopping carts full of goods in the aisle. That's disastrous for a company. And there were a couple of reasons they did that. One was performance issues; another was that the site wasn't easy to navigate. They didn't understand it, they didn't know what the return policies were, they had questions, it wasn't clear, they didn't know how to complete the purchase. So it tied back into the performance, reliability, and usability aspects.
SDLC from Waterfall to Iterative models, RAD, RUP, eXtreme Programming, Spiral, Agile, Evolutionary prototyping, Adaptive, Iterative, SCRUM
Testing is no longer the last step, it is an on-going part of the iterative development process.
Testing is not a one-time activity - applications need to be tested throughout their lifecycle. Every version upgrade, module addition or enhancement, as well as every implementation at a new site or increase in user load needs to be put through comprehensive testing.
A program can be written in an hour with thousands of possible logical branches to test. That makes testing the most time-consuming, labor-intensive and costly part of the development cycle.
In a product situation it may be as high as 60%.
In a maintenance situation it may be as high as 75%.
The percentage may be more (or less) in different environments, but the important issue is that the cost of testing is very significant.
If testing is so important and significant part of the project lifecycle, how is it managed in most of the projects?
Let&apos;s talk about testing
What is it?
Why do we do it?
The process of evaluating software to verify that it satisfies specified requirements and to detect errors
It is generally believed that software can never be made perfect.
Testing is necessary because the existence of faults in software is inevitable.
Consequently, we test software to find as many faults as we can, to ensure that a high-quality product with a minimum of faults is delivered. Identifying defects is the testing team's primary responsibility. However, there is another important responsibility: verifying that the application's functionality meets the user requirements.
Progressive distortion. Although this slide is a cartoon, it reflects the all-too-common situation where the main protagonists in a development project have different views of what is required, how the product is to be used, and how it is to be built.
The problem of progressive distortion occurs when, at each stage of the development process, the differing viewpoints cause a distortion of the product from the requirement of the user.
Also, because software systems are built as a series of transformations from requirements to designs to code, at each stage the faults introduced magnify the divergence from the requirement. Faults in and misinterpretation of requirements are particularly expensive in that these have the greatest impact on the end product.
Rather than simply reading the program or using error checklists, the participants “play computer”. The person designated as the tester comes to the meeting armed with a small set of paper test cases – representative sets of inputs (and expected outputs) for the program or module. During the meeting, each test case is mentally executed. That is, the test data are walked through the logic of the program. The state of the program (i.e. the values of the variables) is monitored on paper or a blackboard.
Of course, the test cases must be simple in nature and few in number, because people “execute” programs at a rate that is many orders of magnitude slower than a machine. Hence, the test cases themselves do not play a critical role; rather, they serve as a vehicle for getting started and for questioning the programmer about his or her logic and assumptions. In most walkthroughs, more errors are found during the process of questioning the programmer than are found directly by the test cases themselves.
As in the inspection, the attitude of the participants is crucial. Comments should be directed toward the program rather than the programmer. In other words, errors are not viewed as weaknesses in the person who committed them. Rather, they are viewed as being inherent in the difficulty of program development and as a result of the as-yet primitive nature of current programming methods.
The walkthrough should have a follow-up process similar to that described for the inspection process. Also, the side effects observed from inspections (identification of error-prone sections and education in errors, style, and techniques) also apply to the walkthrough process.
A skeletal or special-purpose implementation of a software module, used to develop or test a component that calls or is otherwise dependent on it. After [IEEE].
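A minimal Python sketch of the idea (the payment gateway and all names here are hypothetical, invented for illustration): the stub stands in for a dependency so the calling component can be tested in isolation.

```python
class PaymentGatewayStub:
    """Skeletal stand-in for a real payment service: canned responses only."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

def place_order(gateway, amount):
    # Component under test: it depends on *a* gateway, not on which one,
    # so the stub lets us exercise it without the real service.
    receipt = gateway.charge(amount)
    return receipt["status"] == "approved"
```

Swapping the stub for the real gateway later changes nothing in `place_order` itself.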
A smoke test or build verification test is a subset (usually automated) of a full test that broadly exercises parts of the application to determine whether further testing is worthwhile.
Smoke testing is also called Sanity testing.
During this phase the application is tested for stability, or, to be more precise, to check that the application is not "insane".
A smoke test is a small set of tests to determine the sanity of the build. The system integrator should do this test before the build is distributed to the group. I.e., you plug everything together and see if it 'smokes'.
In software testing, smoke test is run right after a new build has been installed. It is used to verify that build versions are correct; that the build itself is complete and that all major functions still work.
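A hypothetical sketch of such a suite in Python (the application object and its checks are invented for illustration): a handful of broad, shallow checks that gate whether deeper testing is worthwhile.

```python
class DemoApp:
    """Hypothetical stand-in for the application under test."""
    def start(self):
        return True
    def login(self, user, pw):
        return True
    def open(self, path):
        return True

def smoke_test(app):
    # Broad, shallow checks on major functions only; any failure
    # rejects the build before deeper testing is attempted.
    checks = [
        ("starts up",       app.start),
        ("login works",     lambda: app.login("demo", "demo")),
        ("home page loads", lambda: app.open("/")),
    ]
    for name, check in checks:
        if not check():
            return f"FAIL: {name}"
    return "PASS"
```

The point is breadth, not depth: each check touches a major function once, and a failing smoke test stops the build from reaching the wider test group.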
The following are lesser-used methods:
Cause effect graphing
Syntax testing
State transition testing
Graph matrix
http://www.riceconsulting.com/web_test_cases.htm
Intro – There is a lot of wasted time in R&D trying to figure out how to interpret a Failure. This lost time can be eliminated by writing clear, concise Failures. This is not just a BMC issue, it’s present in the Software industry. Today, I’d like to show you some good & bad examples of Failures. Understanding the issue may help us all write better, more descriptive Failures.
An effective tester looks to the effect of the bug report, and tries to write it in a way that gives each bug its best chance of being fixed. Also, a bug report is successful if it enables an informed business decision. Sometimes, the best decision is to not fix the bug. The excellent bug report raises the issue and provides sufficient data for a good decision.
SEI definition for defect
Have the group take 5 minutes and write down their definition.
Write some of these on the board, or read them out loud.
There’s really not 1 exact definition that applies to all aspects of quality.
Besides the Severity/Priority
Ask the group why?
Let the group answer this
Downstream effects of a poorly written subject line
The project manager will use it when reviewing the list of bugs that haven’t been fixed.
Executives will read it when reviewing the list of bugs that won’t be fixed. They might only spend additional time on bugs with “interesting summaries”.
Most likely the defect will be deferred.
What build is the person testing with?
What OS?
What are the steps to reproduce the error?
How would someone ever begin to fix this bug without knowing what the random characters were, how big a bunch is, and what kind of weird stuff was happening?
These are 3 VERY IMPORTANT items
Severity indicates how bad the bug is and reflects its impact on the product and on the user
Severity should be predefined for consistency
Ask the class for an example of each of these.
Ex. Sev. 2 – Misspelling on a screen that the user sees every time they open the product
Ex. Sev. 3 – Broken link in the Help
Ex. Sev. 4 – misspelling on an error message
Priority determines the order they get fixed (usually subjective).
A bug that corrupts a user’s data is more severe than one that’s a simple misspelling. But what if the data corruption can occur only in such a rare instance that no user is ever likely to hit it, and the misspelling causes every user to have problems installing the software? Which is more important to fix?
If you were the Test Engineer testing this product, would it be time to log this failure? - No
Personally, this raises a lot of questions in my mind about other things that could be broken.
Ask what the error is?
It is very important to distinguish between the two. As a Test Engineer you want to alter the conditions to see if you get different types of Failures.
The later a bug is reported, the less likely it is to be fixed, especially if it’s a very minor bug.
The earlier you find a bug, the more time that remains in the schedule to get it fixed. Suppose we find an embarrassing misspelling in a Help file few months before the software is released. That bug has a very high likelihood of being fixed. If we find the same bug a few hours before the release, odds are it won’t be fixed.
Listed are three common web site testing objectives.
For general page layout, you want to make sure that the visual display of each page conforms to design standards
For functional testing, you would want to check that your customers are able to make use of your site’s services or transactions successfully.
You would also want to check that any changes in your web site’s setup or configuration won’t jeopardize the services you offer to your customers.
Let me illustrate these objectives further.