1. Manual Testing v1.0
Software Testing
Testing is the process of executing a program with the
intention of finding errors.
A Quality Measurement Activity aimed at evaluating a
software item against the given system requirements.
It includes, but is not limited to, the process of executing a
program or application with the intent of finding software
bugs.
2. Manual Testing v1.0
Definitions – Software Testing
Testing is a process of gathering information by making
observations and comparing them to expectations.
A test is an experiment designed to reveal information, or
answer a specific question, about the software or system.
-Dale Emery & Elisabeth Hendrickson
Testing is an empirical, technical investigation conducted to
provide stakeholders with information about the quality of a
product or a service.
--Cem Kaner
3. Manual Testing v1.0
Why is Testing needed?
Errors occur because we are not perfect and, even if we
were, we are working under constraints such as delivery
deadlines.
Testing is the measurement of software quality. We measure
how closely we have achieved quality by testing the relevant
factors such as
a) Correctness
b) Reliability
c) Usability
d) Maintainability
e) Reusability
f) Testability, etc.
-- Continued
4. Manual Testing v1.0
Why is Testing needed?
The cost of failure associated with defective products being
shipped and used by customers is enormous.
To find out whether the integrated product works as per the
customer requirements.
To identify as many defects as possible before the customer
finds them.
5. Manual Testing v1.0
Evolution of Software Testing
In the 1950s, testing was considered the same as debugging.
In the 1960s, testing was considered a separate activity from
debugging.
In 1968, the term Software Engineering was first used at a NATO
workshop in West Germany.
In the 1970s, software testing came to be considered a technical
discipline in its own right.
For more information Visit
http://www.testingreferences.com/testinghistory.php
8. Manual Testing v1.0
Introduction to Process
What is Process?
A framework for the tasks that are required to build
high-quality software
Why is it important?
It provides stability, control and organization to an
activity that, if left uncontrolled, can become quite chaotic
Example
ATM card Transaction.
9. Manual Testing v1.0
Process Standards ( ISO )
What is ISO?
International Organization for Standardization
The ISO standards are structured
around the Process Approach concept
Process Approach - Understand and
organize company resources and activities
to optimize how the organization operates.
Eg :
System Approach to Management -
Determine sequence and interaction of
processes and manage them as a system.
Processes must meet customer
requirements.
10. Manual Testing v1.0
ISO 9001 and 14001
ISO – 9001
ISO 9001 defines the rules and guidelines for implementing a
quality management system into organizations of any size or
description. The standard includes process-oriented quality
management standards that have a continuous improvement
element.
Strong emphasis is given to customer satisfaction.
ISO – 14001
ISO 14001 defines Environmental Management best practices
for global industries. The standard is structured like the ISO
9001 standard. ISO 14001 gives Management the tools to
control environmental aspects, improve environmental
performance and comply with regulatory standards. The
standards apply uniformly to organizations of any size or
description.
11. Manual Testing v1.0
CMM History
Capability Maturity Model (CMM) is a collection of
instructions an organization can follow with the purpose to gain
better control over its software development process
History of CMM
1991: SW-CMM v1.0 released.
1993: SW-CMM v1.1 released.
1997: SW-CMM revisions halted in favor of CMMI.
2000: CMMI v1.02 released.
2002: CMMI v1.1 released.
12. Manual Testing v1.0
Capability Maturity Model
Introduction to CMM
CMM Levels Diagram
CMM Level Description
Introduction to CMMI
Different KPA
14. Manual Testing v1.0
CMM Level Description
Level 1 : Initial / Ad hoc
Company has no standard process for software
development. Nor does it have a project-tracking system
that enables developers to predict costs or finish dates
with any accuracy.
Level 2 : Repeatable
Company has installed basic software management
processes and controls. But there is no consistency or
coordination among different groups.
Level 3 : Defined
Company has pulled together a standard set of processes
and controls for the entire organization so that developers
can move between projects more easily and customers can
begin to get consistency from different groups. (Cont..)
15. Manual Testing v1.0
CMM Level Description
Level 4 : Managed
In addition to implementing standard processes,
company has installed systems to measure the quality
of those processes across all projects.
Level 5 : Optimizing
Company has accomplished all of the above and can now
begin to see patterns in performance over time, so it can
tweak its processes in order to improve productivity and
reduce defects in software development across the entire
organization.
16. Manual Testing v1.0
Key Process Area ( KPA)
CMM
18 Key Process Areas.
CMMI
25 Key Process Areas.
17. Manual Testing v1.0
KPA for CMMI
Defined ( 15 KPA )
Decision Analysis and Resolution
Integrated Project Management
Integrated Supplier Management
Integrated Teaming
Measurement and Analysis
Organizational Environment for Integration
Organizational Process Definition
Organizational Process Focus
Organizational Training
Product Integration
Requirements Development
Risk Management
Technical Solution
Validation
Verification
(Cont…)
18. Manual Testing v1.0
KPA for CMMI
Managed ( 2 KPA )
Organizational Process Performance
Quantitative Project Management
Optimizing ( 2 KPA )
Causal Analysis and Resolution
Organizational Innovation and Deployment.
19. Manual Testing v1.0
Deming PDCA Cycle
To effectively manage and improve your processes, use PDCA
cycle as a guide.
PLAN:
Design or revise business process components to improve
results.
DO:
Implement the plan and measure its performance.
CHECK:
Assess the measurements and report the results to decision
makers.
ACT:
Decide on changes needed to improve the process.
22. Manual Testing v1.0
Introduction to Quality
What is Quality?
Quality is the totality of features and characteristics of a product
or service that bear on its ability to satisfy stated or implied
needs – ISO 8402
Conformance to requirements – Producers view.
Quality is “degree of excellence”
Definition of Quality
Quality is defined as meeting the
customer’s requirements the first
time and every time.
Quality is also absence of defects
and meets customer expectations.
23. Manual Testing v1.0
QA & QC
Quality Assurance
All those planned and systematic
actions necessary to provide adequate
confidence that a product or service
will satisfy given requirements for
quality.
Quality Control
The operational techniques and
activities that are used to fulfill
requirements for quality.
24. Manual Testing v1.0
Difference – QA & QC
Quality Assurance          Quality Control
Prevention based           Detection based
Process oriented           Product oriented
Organization level         Producer responsibility
Phase-building activity    End-of-phase activity
25. Manual Testing v1.0
QA Activities
Quality Assurance Activities
Conduct of Formal Technical Review (FTR)
Enforcement of Standards (Customer imposed
standards or management imposed standards)
Control of Change (Assess the need for change,
document the change)
Measurement (Software Metrics to measure the
quality, quantifiable)
26. Manual Testing v1.0
Static Testing
What is Static Testing
Verification performed without executing the system’s code
Different types of Static Testing:
Code Walkthrough
Inspection
Reviews
27. Manual Testing v1.0
Code Walkthrough
What is a Code Walkthrough?
A 'walkthrough' is an informal meeting for evaluation
or informational purposes.
Little or no preparation is usually required.
Conducted by Development team.
Language knowledge is required.
28. Manual Testing v1.0
Inspection
What is an inspection?
An inspection is more formalized than a
'walkthrough'.
Typically with 3-8 people including a moderator,
reader (the author of whatever is being reviewed), and a
recorder to take notes.
The subject of the inspection is typically a document
such as a requirement spec or a test plan
Purpose is to find problems and see what is missing,
not to fix anything.
(Cont..)
29. Manual Testing v1.0
Attendees should prepare for this type of meeting by
reading through the document.
The result of the inspection meeting should be a
written report.
One of the most cost-effective methods of ensuring quality.
Inspection skills may have low visibility in the software
development organization.
Bug prevention is far more cost effective than bug
detection.
30. Manual Testing v1.0
Reviews
A process or meeting during which a work product, or set of
work products, is presented to project personnel, managers,
users or other interested parties for comment or approval
Review the product, not the producer.
Set an agenda and maintain it.
Limit debate.
Take written notes.
31. Manual Testing v1.0
Types of Reviews
Informal Reviews
One-on-One meeting ( Peer Review )
Request for input.
No agenda required.
Occurs as needed throughout each phase.
Semi Formal Reviews
Facilitated by the author of the material.
No solutions are discussed for issues.
Occurs one or more times during a phase.
( Cont..)
32. Manual Testing v1.0
Types of Reviews
Formal Reviews
Facilitated by knowledgeable person (Moderator)
Moderator is assisted by recorder.
Planned in advance, material is distributed.
Issues raised are captured and published.
Defects found are tracked through resolution.
A formal review may be held at any time.
34. Manual Testing v1.0
Introduction to Software Testing
Quality Control Activities
Execution of code
What is Software Testing?
Software testing is the process used to help identify the
correctness, completeness, security and quality of developed
computer software.
Definition
Software Testing is the process of executing a program
with the intent of finding bugs.
Testing is a process of exercising or evaluating a system
component, by manual or automated means to verify that it
satisfies a specified requirement.
35. Manual Testing v1.0
Goals of Testing
Determine the quality of the executable work product.
Provide input regarding the readiness of the application for
launch.
Help identify defects (via associated failures).
Provide input to improve the development process.
36. Manual Testing v1.0
Who Tests the Software
Dev Team
Understands the system, but will test gently; driven by
"delivery".
Testing Team
Must learn about the system, but will attempt to break it;
driven by "quality".
37. Manual Testing v1.0
Verification (Process Oriented)
Are we developing the “System Right”?
Verification involves checking to see whether the program
conforms to specification.
It focuses on process correctness.
Verification typically involves
Reviews and meetings to evaluate documents, plans, code,
requirements, and specifications.
This can be done with checklists, issues lists, walkthroughs,
and inspection meetings
38. Manual Testing v1.0
Validation (Product Oriented)
Are we developing the “ Right System”?
Validation is concerned with whether the right functions of
the program have been properly implemented, and whether they
produce the correct output when given some input.
Validation typically involves actual testing and takes place
after verifications are completed.
The term 'IV & V' refers to Independent Verification and
Validation.
39. Manual Testing v1.0
Context of V&V
V & V spans two contexts:
Static – Reviews, Inspection, Code Walkthrough
Dynamic – Execution of Code: Unit Testing, Integration
Testing, System Testing
41. Manual Testing v1.0
Testing Methods
Functional or Black Box Testing
Black box testing – checking the functionality of the
application.
Logical or White Box Testing
White box testing – checking the logic and structure of the
program.
Developers usually perform white box testing.
Gray Box Testing
A combination of white box and black box testing is called
gray box testing.
42. Manual Testing v1.0
White-Box Testing
Also called 'Structural Testing'; used for testing the code
keeping the system specs in mind.
Different Methods in White Box Testing
Path Coverage
Statement Coverage
Decision Coverage
Condition Coverage
44. Manual Testing v1.0
White box Testing.
Does White Box Testing lead to a 100%
correct program?
The answer is:
NO
It is not possible to exhaustively test
every program path because the
number of paths is simply too large.
45. Manual Testing v1.0
Black box Testing.
What is Black Box Testing
Testing based on external specifications, without
knowledge of how the system is constructed.
[Diagram: known inputs enter the black box (internals
unknown, "???") and produce known outputs]
47. Manual Testing v1.0
Introduction to SDLC
What is SDLC?
The software development life cycle (SDLC) is the
entire process of formal, logical steps taken to develop a
software product.
Levels of SDLC
Requirements Gathering
Systems Design
Code Generation
Testing
Maintenance
(Cont..)
52. Manual Testing v1.0
Waterfall Model
Introduction
First proposed software development model, put forward in
1970 by W. W. Royce.
Work flows steadily through the phases.
Linear Sequential Model.
Each phase is well defined, with a start and end point.
53. Manual Testing v1.0
Waterfall Model – Pros & Cons
Pros
Minimizes planning overhead since it can be done up front
Structure minimizes wasted effort, so it works well for
technically weak or inexperienced staff.
Cons
Inflexible: the analyst must collect all requirements up front.
The customer must wait until the final phase ends to see a
working deliverable.
Adding new requirements at a later phase is difficult.
54. Manual Testing v1.0
Prototype Model
Prototype Model
No detailed requirement
Build a prototype
Customer evaluation
Mechanism for identifying requirements
Prototype is thrown away
The "first system" through which developers build something
immediate.
55. Manual Testing v1.0
Prototype Model – Pros & Cons
Pros
Better understanding of requirements.
Good starting point for other process models (e.g.
waterfall).
Prototype may be used as a starting point rather than
thrown away.
Cons
Bad idea: prototypes typically have poor design and
quality.
Bad decisions during prototyping may propagate to the
real product.
56. Manual Testing v1.0
Incremental Model
Incremental Model
Combines elements of the waterfall & prototyping models.
First increment is the core functionality.
Successive increments add or fix functionality; the final
increment is the complete product.
Outcome of each iteration: tested, integrated, executable
system
58. Manual Testing v1.0
Incremental Model -Pros & Cons
Pros
Operational product in weeks
Less traumatic to organization
Small capital outlay, rapid ROI
Cons
Too many builds – overhead
Too few builds – build and fix
Need an open architecture
No overall design at start
59. Manual Testing v1.0
Spiral Model
Spiral Model
Defined by Barry Boehm
Each loop in the spiral represents a phase in the process.
Customer communication
Risks are explicitly assessed and resolved throughout the
process.
Uses prototyping
60. Manual Testing v1.0
Spiral Model – Pros & Cons
Each loop covers planning, risk analysis, development, and
evaluation.
Pros
Good for large and complex projects
Customer Evaluation
Risk Evaluation
Cons
Difficult to convince some customers that the
evolutionary approach is controllable
Needs considerable risk assessment
If a risk is not discovered, problems will surely occur
61. Manual Testing v1.0
Agile Model
Conceptual framework
Attempts to minimize risk by developing software in short
time boxes, called iterations.
Iterations typically last one to four weeks.
Face – to - Face communication
Cowboy Coding
62. Manual Testing v1.0
V – Model
The V-Model illustrates that testing activities (verification and
validation) can be integrated into each phase of the product life
cycle. The verification part of testing is integrated into the earlier
phases of the life cycle, which include reviewing end-user
requirements, design documents, etc.
There are variants of the V-Model; however, we will take a common
type of V-Model as an example. The V-Model generally has four test
levels.
Unit Testing
Integration Testing
System Testing
UAT
In practice the V-Model may have more granular test levels like
unit integration testing after unit testing.
64. Manual Testing v1.0
Testing Levels
Testing is expanded across the SDLC phases into different levels.
Unit Testing.
Integration Testing.
System Testing.
Acceptance Testing.
NOTE
All these levels are discussed in detail.
65. Manual Testing v1.0
Documents Prepared in V Model
Requirement Phase
SRS Document
Functional Specification Document
High Level Design Phase
Architecture Design Document
Low Level Design Phase
Detail Design Document
66. Manual Testing v1.0
Testing Activities in V Model
System Testing
Once the SRS or FSD documents are prepared, system testing
activities are started
Testing activities are
System Test Plan
System Test Case
Integration Testing
Once the Architecture Design document is prepared, integration
testing activities are started
Testing activities are
Integration Test Plan
Integration Test Cases
69. Manual Testing v1.0
Unit Testing
What is called a Unit?
Module
Screen / Program
Backend database
Who will do Unit testing?
Unit Testing is primarily carried out by the developers
themselves
What is Unit Testing ?
Testing individual unit of the software in isolation
Lowest level of testing
Deals with the functional correctness and completeness of
individual program units.
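As a sketch of testing a single unit in isolation, here is a hypothetical discount() function tested with Python's built-in unittest module. The function, its rules, and the test values are invented for illustration only:

```python
import unittest

def discount(price, percent):
    """Hypothetical unit under test: return price reduced by percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestDiscount(unittest.TestCase):
    def test_typical_discount(self):
        # 10% off 200.0 should give 180.0
        self.assertEqual(discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        # Boundary: 0% discount leaves the price unchanged
        self.assertEqual(discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        # Negative case: an out-of-range percent must raise an error
        with self.assertRaises(ValueError):
            discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False, argv=["discount_tests"])
```

Because the unit has no dependencies on other modules, these tests run at the lowest level and pinpoint failures directly in the unit itself.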
70. Manual Testing v1.0
Integration Testing
What is Integration?
Integration is the process of assembling unit-tested
modules.
What is Integration Testing?
Testing of a partially integrated application to identify
defects involving the interaction of collaborating
components.
Objectives of Integration Testing:
Determine whether components will work properly together.
Identify defects that are not easily identified during unit
testing.
Data dependency between modules.
Data transfer between modules.
71. Manual Testing v1.0
Testing Approach
Big Bang approach
Incremental approach
Top Down approach
Bottom Up approach
Examples of Integration.
72. Manual Testing v1.0
Big Bang Approach
Big Bang approach consists of testing each module individually
and linking all these modules together only when every module
in the system has been tested.
73. Manual Testing v1.0
Pros & Cons of Big Bang
Pros
Advantageous when we construct independent modules
concurrently.
Cons
Approach is quite challenging and risky as we integrate all
modules in a single step and test the resulting system.
Locating interface errors, if any, becomes difficult here.
74. Manual Testing v1.0
Incremental Approach
Software units are gradually built, spreading the integration
testing load more evenly through the construction phase.
Incremental approach can be implemented in two distinct ways:
Top-down
Bottom-up.
75. Manual Testing v1.0
Top-down Integration
Program is merged and tested from top to bottom.
Modules are integrated by moving downward through the
control hierarchy, beginning with the main control module.
A module will be integrated into the system only when the
module which calls it has been already integrated successfully.
76. Manual Testing v1.0
Stubs
What is a 'Stub'?
A dummy routine that simulates the behavior of a
subordinate module.
If a particular module is not completed or not started, we
can simulate it just by developing a stub.
To simulate the responses of M2, M3 and M4 whenever they
are invoked from M1, "stubs" are created.
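The M1/M2/M3 situation above can be sketched in code. Everything here is hypothetical (the module names follow the slide's example): M1 is the integrated top-level module, and stubs with canned responses stand in for the unfinished subordinates:

```python
# Sketch of top-down integration with stubs. Module names M1, M2, M3
# follow the slide's example; all behavior is invented for illustration.

def m2_stub(order_id):
    # Stub for the unfinished inventory module M2:
    # always reports the item as in stock.
    return {"order_id": order_id, "in_stock": True}

def m3_stub(order_id):
    # Stub for the unfinished payment module M3:
    # returns a canned success response.
    return {"order_id": order_id, "paid": True}

def m1_process_order(order_id, check_stock=m2_stub, take_payment=m3_stub):
    """Top-level module M1 under test; its subordinates are injected,
    so stubs can stand in for modules that are not yet built."""
    if not check_stock(order_id)["in_stock"]:
        return "rejected"
    return "completed" if take_payment(order_id)["paid"] else "pending"

# M1's control logic is exercised before M2 and M3 exist:
print(m1_process_order(42))  # prints "completed"
```

When the real M2 and M3 are ready, they replace the stubs without any change to M1's logic.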
77. Manual Testing v1.0
Pros and Cons of Top down
Pros
It is done in an environment that closely resembles that of
reality, so the tested product is more reliable.
Stubs are functionally simpler than drivers and can therefore
be written with less time and labor.
Cons
Unit testing of lower modules can be complicated by the
complexity of upper modules.
78. Manual Testing v1.0
Bottom up Approach
Program is merged and tested from bottom to top.
The terminal modules are tested in isolation first; then the
next set of higher-level modules is tested with the
previously tested lower-level modules.
Here we have to write 'Drivers'.
A driver is a program that invokes the module under test and
passes test data to it.
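A driver can be sketched as a small program that feeds test data to a low-level module whose real caller does not exist yet. The calculate_tax() module and its test values here are assumptions for illustration:

```python
# Sketch of a test driver for bottom-up integration. The terminal
# module calculate_tax() exists, but its caller does not yet, so the
# driver invokes it with test data and checks the results.

def calculate_tax(amount, rate):
    # Low-level (terminal) module under test.
    return round(amount * rate, 2)

def driver():
    """Driver: passes test data to the module and reports pass/fail."""
    cases = [
        (100.0, 0.10, 10.0),   # typical value
        (0.0, 0.18, 0.0),      # zero amount
        (19.99, 0.10, 2.0),    # result needs rounding
    ]
    failures = 0
    for amount, rate, expected in cases:
        actual = calculate_tax(amount, rate)
        ok = actual == expected
        failures += 0 if ok else 1
        print(f"{'PASS' if ok else 'FAIL'}: calculate_tax({amount}, {rate}) -> {actual}")
    return failures

driver()  # exercises the low-level module without its real caller
```

Once the higher-level module that really calls calculate_tax() is built, the driver is discarded.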
79. Manual Testing v1.0
Pros & Cons of Bottom up
Pros
Unit testing of each module can be done very thoroughly.
Cons
Test drivers have to be generated for modules at all levels,
except for the top controlling module.
80. Manual Testing v1.0
System Testing
What is System Testing
System testing is a black-box type of testing that is based on
the overall requirements specifications and covers all combined
parts of a system.
Objective of System Testing
In system testing, we need to ensure that the system does
what the customer wants it to do
System testing consists of performance & functional testing
81. Manual Testing v1.0
System Testing Types
Testing involved in System testing
Sanity Testing
Compatibility Testing
Exploratory Testing
Stress Testing
Volume Testing
Load Testing
Acceptance Testing
Ad-hoc Testing
Alpha Testing
Benchmark Testing
End-to-end Testing
For More types Visit
http://blog.enfocussolutions.com/Powering_Requirements_Success/bid/173061/
Types-of-System-Testing
82. Manual Testing v1.0
System Testing Types
Sanity testing
Typically an initial testing effort to determine if a new
software version is performing well enough to accept it for a
major testing effort.
Compatibility testing
Testing how well software performs in a particular
hardware/software/OS/network/environment.
Exploratory testing
Test design and Test execution at the same time.
Stress testing
Testing conducted to evaluate a system or component at or
beyond the limits of its specified requirements
83. Manual Testing v1.0
System Testing Types
Volume testing
Testing where the system is subjected to large volumes of
data.
Load testing
Testing conducted to evaluate the compliance of a system
or component with specified performance requirements
Acceptance testing
Formal testing conducted to determine whether or not a
system satisfies its acceptance criteria and to enable the
customer to determine whether or not to accept the system.
This testing process is usually performed by customer
representatives.
84. Manual Testing v1.0
System Testing Types
Ad-hoc Testing
Testing performed without planning and documentation - the
tester tries to 'break' the system by randomly trying the system's
functionality. This testing process is performed by testing teams.
Alpha Testing
Testing of a software product or system conducted at the
developer's site. This testing process is usually performed by the
end user.
Benchmark Testing
Testing technique that uses representative sets of programs and
data designed to evaluate the performance of computer
hardware and software in a given configuration. This testing
process is performed by testing teams.
85. Manual Testing v1.0
System Testing Types
End-to-end Testing:
Similar to system testing, involves testing of a complete application
environment in a situation that mimics real-world use, such as
interacting with a database, using network communications, or
interacting with other hardware, applications, or systems if
appropriate. This testing process is performed by QA teams.
Functional Testing:
Type of black box testing that bases its test cases on the
specifications of the software component under test. This testing
process is performed by testing teams.
Non-functional Testing:
Testing technique which focuses on testing of a software
application for its non-functional requirements. Can be conducted
by the performance engineers or by manual testing teams.
86. Manual Testing v1.0
System Testing Types
What is Acceptance Testing?
Acceptance testing allows customers to ensure that the
system meets their business requirements.
What is Regression Testing?
Testing of a previously verified program or application,
following modification for extension or correction, to
ensure no new defects have been introduced.
What is Re-testing?
Re-testing is testing the new version of the AUT once it is
ready, to verify that previously found bugs have actually
been fixed.
92. Manual Testing v1.0
Software Testing Life Cycle
Test Plan
Road map for testing activities
Test Case
A document with actions and expected reaction
Test Execution
Testing application for defects using test cases
Defect Reporting
Reporting defects
93. Manual Testing v1.0
Test Plan
What is Test Plan
Different type of Test Plan
Who Creates the Test Plan
When to Create the Test Plan
Attributes of Test Plan
94. Manual Testing v1.0
Test Plan
Why plan?
Define the road map for testing activities.
What is a Test Plan?
A test plan is the document which specifies the test
conditions, features, and functions that will be tested for a
specific level of testing.
95. Manual Testing v1.0
Different Types of Test Plan
Unit Test Plan
Integration Test Plan
System Test Plan
Acceptance Test Plan
96. Manual Testing v1.0
When to Create Test Plan ?
Test plans should be prepared as soon as the corresponding
document in the development cycle is produced.
The preparation of the test plan itself validates the document
in the development cycle.
Who Creates the Test Plan?
The Test Plan is created by the Test Lead or Test Manager, who:
Has knowledge about the testing approach.
Is experienced in effort estimation & scheduling.
Communicates between the different teams.
Test Plan Creation
97. Manual Testing v1.0
Document Scope
Project Overview
Document Reference
Intended Audience
Assumptions, Dependencies and Constraints
Environment
Feature not to be tested
Suspension & Resumption Criteria
Tools
Metrics
Defect Reporting & Tracking
Deliverables
Entry & Exit Criteria
Schedule
Feature to be tested
Roles & Responsibilities
Test Approach
Risks
Attributes of Test Plan
99. Manual Testing v1.0
Document Scope
The scope of this document is to explain the Testing
Strategies, Test Environment, and Resource Usage for testing
the <Product Name> application.
Project Overview
In this section, provide overview of the product’s functional
specifications that would be explained in detail later
Document Reference
The documents used to create the Test Plan and testing
activities.
Example
Functional Specification Document v 1.0
Design Document v 1.0
Intended Audience
The audiences for this document include Project Leader,
Team Members and Test Engineers.
Test Plan Attributes details
100. Manual Testing v1.0
Assumptions, Dependencies, Constraints
List any assumptions being made with regard to the
application e.g., the user is planning 25% growth
across the board on all transaction types.
List any dependencies that may impact testing, e.g., critical
system resources must be available such as database
availability.
List any constraints that impact testing, e.g., live production
data for parallel tests will only be available after 5 P.M.
each day.
Test Plan Attributes details
101. Manual Testing v1.0
Test Environment
Describe the test environments for unit or integration or
system or acceptance test. Describe any interfaces that
must be established. Reference the Technical Design or
other documents where this information can be found.
Roles & Responsibilities
Project & test team members' roles
Responsibilities of each member
Test Approach
Describe the overall approach to testing: who does it,
main activities, techniques, and tools used for each major
group of features. How will you decide that a group of
features is adequately tested?
Test Plan Attributes details
102. Manual Testing v1.0
Entry & Exit Criteria
List the set of conditions that determine when System
Testing can begin and end
Features to be tested
Cross-reference them to test design specifications
Features not to be tested
Which ones not a part of testing
Risk
Schedule
Personnel
Requirements
Technical
Management.
Test Plan Attributes details
103. Manual Testing v1.0
Schedule
Dates of the activities in the test phase
Test Case completion date
Execution date
Defect review and closure date
Deployment date into production
Suspension and Resumption criteria
List anything that would cause you to stop testing until it’s
fixed.
What would have to be done to get you to restart testing?
Test Plan Attributes details
104. Manual Testing v1.0
Tools
List the tools used for testing the application
Functional Testing Tools
Performance Testing Tools
Defect Tracking Tools
Configuration Tools
Metrics
List all metrics collected
Responsible person for metrics
Test Plan Attributes details
105. Manual Testing v1.0
Defect Tracking & Reporting
Defect Severity
Defect Life Cycle
Defect Tracking
Defect Reporting Process
Roles & Responsibility
Deliverables
List all the documents and scripts to be delivered in this
section
Test Plan Attributes details
107. Manual Testing v1.0
What is Test Case?
What is a good Test Case
Positive and Negative Test Case
Test Case Template
Test Case Design Techniques
Contents
108. Manual Testing v1.0
A test case is a document that describes an input, action, or
event and an expected response, to determine whether a feature
of an application is working correctly.
Test Case is a document which describes the test steps to
execute and its corresponding expected results.
What is a Test Case ?
109. Manual Testing v1.0
Accurate - tests what it’s designed to test
Economical - no unnecessary steps
Repeatable, reusable - keeps on going
Traceable - to a requirement
Appropriate - for test environment, testers
Self-standing - independent of the writer
Self cleaning - picks up after itself
What is a good Test Case ?
110. Manual Testing v1.0
Positive testing checks that the software does what it should.
Negative testing checks that the software doesn't do what it
shouldn't.
Negative testing should always come first in the test case
scenario.
Positive testing should follow after the Negative testing.
Positive & Negative Scenario.
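As an illustration of the positive/negative split, here are test cases for a hypothetical validate_pin() function that should accept only 4-digit numeric strings. The function and its rules are invented for this sketch:

```python
# Hypothetical unit for illustrating positive vs. negative cases:
# validate_pin() should accept only 4-digit numeric strings.

def validate_pin(pin):
    return isinstance(pin, str) and len(pin) == 4 and pin.isdigit()

# Positive cases: the software does what it should.
assert validate_pin("0000")
assert validate_pin("9876")

# Negative cases: the software rejects what it shouldn't accept.
assert not validate_pin("123")    # too short
assert not validate_pin("12345")  # too long
assert not validate_pin("12a4")   # non-numeric character
assert not validate_pin("")       # empty input

print("all positive and negative cases passed")
```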
111. Manual Testing v1.0
Attributes of Test Case
Project Name
Project Version
Test Case ID
Test Case Version
Test Case Name
Designer
Creation Date
Step
Design Status
Test Description
Expected Result
Actual Result
112. Manual Testing v1.0
Test Case Template
Project Name : Name of the project
Project Version : v1.0
Test Case ID : TC_1
Test Case Name : <Project Name>_<Module Name>_<Screen Name>
Test Case Version : v1.1
Status : Design / Review / Complete
Designer : Sivakumar Krishnan
Creation Date : 08/March/2013
Execution Status : Design
113. Manual Testing v1.0
Test Case Design Techniques
Black Box Technique
Equivalence Class Partitioning
Boundary Value Analysis
Error Guessing
Cause Effect Graphing
State Transition Testing
White Box Technique
Branch Testing
Coverage Testing
114. Manual Testing v1.0
ECP – Equivalence Class Partitioning
Equivalence partitioning is a method for deriving test cases. In
this method, classes of input conditions called equivalence classes
are identified such that each member of the class causes the same
kind of processing and output to occur.
A software testing technique that involves identifying a small
set of representative input values that invoke as many different
input conditions as possible.
115. Manual Testing v1.0
ECP – For Test Case
A group of tests forms an equivalence class if:
They all test the same thing.
If one test finds a defect, the others will.
If one test does not find a defect, the others will not.
Tests are grouped into one equivalence class when
They involve the same input variables
They result in similar operations in the program
They affect the same output variables
116. Manual Testing v1.0
Finding ECP
Identify all inputs
Identify all outputs
Identify equivalence classes for each input
Identify equivalence classes for each output
Ensure that test cases test each input and output equivalence
class at least once
ECP Diagram
117. Manual Testing v1.0
Eg: ECP
A program takes in marks and student type and returns the
grade for the student in that subject.
Inputs are :
Marks : (0 - 100)
Student Type : First time/Repeat
Outputs are :
Grade : F, D, C, B, A
Rules :
Student type First Time
0-40 = F
41-50 = D
51-60 = C
61-70 = B
71-100 = A
Student type Repeat
0-50 = F
51-60 = D
61-70 = C
71-80 = B
81-100 = A
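The rule tables above can be sketched in code to show how one representative value per equivalence class is enough. The function and band tables below are illustrative, not from the slides; they simply encode the grading rules so each valid class (and the invalid out-of-range class) can be exercised with a single mark:

```python
# Illustrative encoding of the grading rules (band tables are assumptions
# transcribed from the slide, not a real API).
FIRST_TIME_BANDS = [(0, 40, "F"), (41, 50, "D"), (51, 60, "C"),
                    (61, 70, "B"), (71, 100, "A")]
REPEAT_BANDS = [(0, 50, "F"), (51, 60, "D"), (61, 70, "C"),
                (71, 80, "B"), (81, 100, "A")]

def grade(marks, student_type):
    """Return the grade for valid marks (0-100); raise for the invalid class."""
    if not 0 <= marks <= 100:
        raise ValueError("marks out of range")   # invalid equivalence class
    bands = FIRST_TIME_BANDS if student_type == "First time" else REPEAT_BANDS
    for low, high, letter in bands:
        if low <= marks <= high:
            return letter

# Under ECP, one representative mark per valid class is enough:
for mark in (20, 45, 55, 65, 85):
    print(mark, grade(mark, "First time"), grade(mark, "Repeat"))
```

Note how the same mark (e.g. 45) falls into different classes for the two student types, which is why the input "Student Type" multiplies the number of classes to cover.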
118. Manual Testing v1.0
Boundary Value Analysis leads to selection of test cases that
exercise boundary values
BVA is a test case design technique that complements
equivalence partitioning. Rather than selecting arbitrary
elements of an equivalence class, BVA leads to the selection of
test cases at the 'edges' of the class.
Identify all inputs & outputs
Identify equivalence classes for each input
Identify equivalence classes for each output
For each input equivalence class, ensure that test cases include
one interior point
all extreme (boundary) points
all epsilon points (values just inside and just outside each
boundary)
Boundary Value Analysis ( BVA)
119. Manual Testing v1.0
Example
If the input condition is a range bounded by values 'a'
and 'b', test cases should be designed with values 'a'
and 'b', and with values just above and just below a & b.
If the input condition specifies a number of values, test
cases should be developed that exercise the minimum
and maximum numbers. Values just above and just
below the maximum and minimum should also be tested.
Eg : BVA
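The boundary-selection rule above can be sketched as a small helper. The function name and the choice of one midpoint as the interior point are assumptions for illustration only:

```python
# Minimal sketch of boundary-value selection for a range input [a, b].
def boundary_values(a, b):
    """Return the classic BVA points: just below, on, and just above
    each edge of the range, plus one interior point."""
    interior = (a + b) // 2          # one interior point (assumed midpoint)
    return sorted({a - 1, a, a + 1, interior, b - 1, b, b + 1})

# For the marks input (0-100) this yields the edge-focused test data:
print(boundary_values(0, 100))  # [-1, 0, 1, 50, 99, 100, 101]
```

The values -1 and 101 exercise the invalid classes just outside the range, complementing the representative interior values chosen by ECP.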
120. Manual Testing v1.0
Pros
Very good at exposing potential user interface/user input
problems.
Very clear guidelines on determining test cases.
Very small set of test cases generated.
Cons
Does not test all possible inputs.
Does not test dependencies between combinations of
inputs.
Pros & Cons of BVA
121. Manual Testing v1.0
A test case design technique in which the experience of the
tester is used to postulate what faults exist, and to design tests
specifically to expose them.
This is a tester's 'intuition' skill that can be applied within all
other testing techniques to produce more effective tests.
Error Guessing
122. Manual Testing v1.0
A graphical representation of inputs or stimuli (causes) with
their associated outputs (effects), which can be used to design
test cases.
A testing technique that aids in selecting, in a systematic way,
a high-yield set of test cases by logically relating causes to
effects.
Cause Effect Graphing
123. Manual Testing v1.0
What is State Transition
Changes in the attributes of an object or in the links an
object has with other objects.
When to use them
State models are ideal for describing the behavior of a
single object
What is State Transition Diagram
State-based behavior of the instances of a class.
State Transition
124. Manual Testing v1.0
Operation of an Elevator
The elevator has to serve all 5 floors in a building.
Consider each floor as one state.
Let the lift initially be at the 0th floor (initial state). When a
request comes from the 5th floor, the lift has to respond and
move to the 5th floor (next state); when a request then comes
from the 3rd floor (another state), it has to respond to that
request as well. Likewise, requests may come from the other
floors.
Each floor is a different state; the lift has to take care of the
requests from all the states and transition through the states
in the sequence the requests come.
Eg : State Transition
126. Manual Testing v1.0
Test Case Writing
Seven Common Mistakes in Test Case Writing
Making cases too long
Incomplete, incorrect, or incoherent setup
Leaving out a step
Naming fields that changed or no longer exist
Unclear whether tester or system does action
Unclear what is a pass or fail result
Failure to clean up
128. Manual Testing v1.0
Test Execution
What is Test Execution
The processing of a test case suite by the software under
test, producing an outcome.
Test Execution Today
The execution is often undisciplined, poorly quantified, and
difficult to duplicate. The impact goes right to the bottom
line – costing unnecessary time and expense.
Test Set
A test set is a collection of test cases.
A test set can be formed for every release under test.
Each cycle of testing in a release can have an individual
test set.
(Cont..)
129. Manual Testing v1.0
Test Execution
Testing is potentially endless. We cannot test until all the defects
are unearthed and removed -- that is simply impossible. At some
point, we have to stop testing and ship the software.
Realistically, testing is a trade-off between budget, time and
quality. It is driven by profit models.
Pessimistic Approach
Unfortunately most often used approach.
Whenever some, or any of the allocated resources -- time,
budget, or test cases -- are exhausted.
Optimistic Approach
Stopping rule is to stop testing when either reliability meets
the requirement, or the benefit from continuing testing
cannot justify the testing cost.
131. Manual Testing v1.0
Chapter XI
Why software has defects or bugs?
Software complexity
Programming errors
Changing requirements
Poor Business Understanding
Miscommunication between the groups
Software versions upgrades
Lack of skill set.
132. Manual Testing v1.0
Defect Reporting
What is defect?
A defect is a failure to conform to requirements
Any type of undesired result is a defect.
A failure to meet one of the acceptance criteria of your
customers.
Expected result vs. what actually happened at run time.
133. Manual Testing v1.0
What is Defect Severity
A classification of a software error or fault based on an
evaluation of the degree of impact that the error or fault has
on the development or operation of a system (often used to
determine whether or when a fault will be corrected).
The five Levels of severity
Critical
Major
Average
Minor
Cosmetic
Defect Severity
134. Manual Testing v1.0
Critical
The defect results in the failure of the complete software
system, of a subsystem, or of a software unit (program or
module) within the system.
Major
The defect results in the failure of the complete software
system, of a subsystem, or of a software unit (program or
module) within the system. There is no way to make the
failed component(s) work; however, there are acceptable
processing alternatives which will yield the desired result.
Average
The defect does not result in a failure, but causes the
system to produce incorrect, incomplete, or inconsistent
results, or the defect impairs the system's usability.
Defect Severity level Description
135. Manual Testing v1.0
Minor
The defect does not cause a failure, does not impair
usability, and the desired processing results are easily
obtained by working around the defect.
Cosmetic
The defect is the result of non-conformance to a standard,
is related to the aesthetics of the system, or is a request for
an enhancement. Defects at this level may be deferred or
even ignored.
Defect Severity level Description
136. Manual Testing v1.0
What is Defect Priority
Defect priority level can be used with severity categories
to determine the immediacy of defect repair or fix
Levels of Defect Priority
Urgent
High
Medium
Low
Defer
Defect Priority
137. Manual Testing v1.0
Urgent
Further development and/or testing cannot occur until
the defect has been repaired. The system cannot be used
until the repair has been effected.
High
The defect must be resolved as soon as possible because it is
impairing development and/or testing activities. System use
will be severely affected until the defect is fixed.
Medium
The defect should be resolved in the normal course of
development activities. It can wait until a new build or
version is created.
Defect Priority level Description
138. Manual Testing v1.0
Low
The defect is an irritant which should be repaired, but which
can be repaired after more serious defects have been
fixed.
Defer
The defect repair can be put off indefinitely. It can be
resolved in a future major system revision or not resolved
at all.
Defect Priority level Description
140. Manual Testing v1.0
Defect Status
New
A new defect reported to Development Team.
Open
A defect which is reproducible, accepted by the AD team, and
assigned to the concerned developers by the AD Manager.
Not A Defect
If a reported defect is out of scope or a future enhancement,
the defect will be marked ‘Not a Defect’ by the AD
Manager.
Not Clear
If the defect description is unclear, the Project Manager has
the option of assigning it to “Not Clear” state and
sending it back to whoever raised the defect for
clarification. Once clarified and accepted by the AD
Manager, the defect moves into ‘Open’ status.
Development (Work in Progress)
When a developer has started work on the assigned defect.
( Cont..)
141. Manual Testing v1.0
Defect Status
Fixed
A defect that has been worked on by the AD team and sent to
QA for re-testing.
Reject
Unsatisfactory test results during retest will result in QA
rejecting the defect and sending them back to AD
team.
Closed
Satisfactory test results during retest (i.e. actual results match
expected results) will result in the defect being marked
“Closed” by QA.
Reopened
A previously closed defect found again during regression
testing can be assigned to “Reopened” rather than being
created as a “New” defect.
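The defect life cycle described above can be sketched as an allowed-transition map. The state names come from the slides, but the exact set of allowed moves below is an assumption for illustration, each team's workflow tool may differ:

```python
# Assumed defect life cycle; state names are from the slides,
# the transition map itself is illustrative.
TRANSITIONS = {
    "New":         {"Open", "Not A Defect", "Not Clear"},
    "Not Clear":   {"Open"},                 # after clarification, accepted
    "Open":        {"Development"},          # assigned developer starts work
    "Development": {"Fixed"},
    "Fixed":       {"Closed", "Reject"},     # QA retest passes or fails
    "Reject":      {"Development"},
    "Closed":      {"Reopened"},             # found again in regression
    "Reopened":    {"Development"},
}

def move(status, new_status):
    """Validate a status change against the life cycle."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

s = "New"
for nxt in ("Open", "Development", "Fixed", "Closed"):
    s = move(s, nxt)
print(s)  # Closed
```

Encoding the workflow this way makes "illegal" moves (e.g. New straight to Closed) fail loudly, which is essentially what a defect-tracking tool enforces.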
142. Manual Testing v1.0
Defect Report Template
Attributes of Defect Report Template
Defect ID Assigned To
Subject Project
Severity Status
Priority Description
Detected By Detected in Version
Detected Date Environment
Module
145. Manual Testing v1.0
Defect Tracking
What is Defect Tracking?
Defect tracking is the process of checking whether a new defect
found by a tester has already been reported by another tester
in the defect database.
Example :
The defect is checked in the database using the filter options,
with defect report attributes like Module, Sub Module,
Severity, Environment, etc.
148. Manual Testing v1.0
Traceability Matrix
What is Traceability Matrix?
Mapping of customer requirements to each and every phase of
testing using a matrix is called a Traceability Matrix.
The different phases in testing are
Test Case
Test Execution
Automation Script Creation
Defect Reporting
Eg :
House Relocation
Identify items
Packing
Loading
Un Loading
Coverage
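A traceability matrix can be pictured as a simple mapping from requirements to the artifacts covering them in each phase. The requirement IDs and artifact names below are made up for illustration:

```python
# Toy traceability matrix; requirement/artifact names are illustrative.
trace = {
    "REQ-01 Login": {"Test Case": ["TC_1"],
                     "Test Execution": ["Run_1"],
                     "Defect Reporting": ["DEF-7"]},
    "REQ-02 Inbox": {"Test Case": [],
                     "Test Execution": [],
                     "Defect Reporting": []},
}

# A requirement with no test case against it is a coverage gap:
gaps = [req for req, phases in trace.items() if not phases["Test Case"]]
print(gaps)  # ['REQ-02 Inbox']
```

Reading the matrix column-wise answers "is every requirement covered?"; reading it row-wise answers "which requirement does this test or defect trace back to?".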
150. Manual Testing v1.0
Software Metrics are numerical data collected in software
development and testing activities for analysis.
Software Metrics support 4 main functions
Planning
Organizing
Controlling
Improving
Software Metrics
151. Manual Testing v1.0
Planning metrics
Cost estimating
Training planning
Resource planning
Scheduling
Budgeting
Organizing metrics
Size
Schedule
Metrics Types
Controlling
Status
Track
Improving
Process Improvement
Effort Improvement
153. Manual Testing v1.0
Sample Testing Metrics
Defect Density (Defects per Person day) =
(Total Number of Defects in the cycle) /
(Actual Effort in person days for the cycle)
Example
Total number of defects in cycle 1 = 75
Actual effort spent in testing in days = 5 days
Defect Density = 75 / 5 = 15 defects per person day
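The calculation is a single division; the helper name below is illustrative:

```python
# Defect density as defined above: defects found per person-day of effort.
def defect_density(defects, effort_days):
    return defects / effort_days

print(defect_density(75, 5))  # 15.0 defects per person-day
```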
154. Manual Testing v1.0
Sample Testing Metrics
Defect Removal Efficiency % =
(Total no. of Defects found by tester) * 100 /
(Total no. of Defects found by tester + Total no.
of Defects found by customer and others)
Example
Total no. of defects found by tester = 120
Total no. of defects found by others = 20
Defect Removal Efficiency = 120 / (120 + 20) * 100
= 85.71 %
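The same calculation as a small helper (the function name is illustrative):

```python
# Defect Removal Efficiency: percentage of all defects that the
# test team caught before the customer or others found them.
def dre(found_by_tester, found_by_others):
    return found_by_tester * 100 / (found_by_tester + found_by_others)

print(round(dre(120, 20), 2))  # 85.71
```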
155. Manual Testing v1.0
Effort Estimation
Software estimation is the process of predicting the duration of
an activity in a project.
Different Estimation Techniques
Function Point Estimation
COCOMO
Test Case Point Estimation
Metrics Based Estimation
156. Manual Testing v1.0
Process in Estimation
Identification of requirements
Understand the tasks from the documents
Separate the tasks into groups
Categorize the requirements into criticality as Simple, Average and
Complex
Login - Simple
Inbox - Complex
Compose – Average
Multiply the requirements with metrics value for category
Simple task - 5 min for test case creation
Average task – 10 min for test case creation
Complex task – 15 min for test case creation
Add the Adjustment Factor
A buffer time of 20% should be added to the estimated time
Report the effort estimation
157. Manual Testing v1.0
Example in Estimation
Test Estimation for Yahoo Mail application
Identification of requirements
Login Screen
Inbox Screen
Compose Screen
Address Screen
Categorize them based on criticality
Login ~ Simple
Inbox ~ Complex
Compose, Address ~ Average
( Cont.. )
158. Manual Testing v1.0
Example in Estimation
Login – 5 mins
Inbox – 15 mins
Compose – 10 mins
Address – 10 mins
Multiply with metrics value
Simple - 1 * 5 = 5
Average – 2 * 10 = 20
Complex – 1 * 15 = 15
Total = 40 mins
Add the adjustment factor
40 mins * 20 % = 8 mins
The total time required to write test case for Yahoo Mail
application is 48 Mins for functionality like Login, Inbox,
Compose and Address.
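The walk-through above can be sketched as a few lines of code. The per-category minutes come from the slides; the function and dictionary names are illustrative:

```python
# Metrics-based estimation sketch; minutes per test case per category
# are the slide's values, names are illustrative.
MINUTES = {"Simple": 5, "Average": 10, "Complex": 15}

def estimate(requirements, buffer=0.20):
    """Sum per-category effort, then add the adjustment factor."""
    base = sum(MINUTES[category] for category in requirements.values())
    return base * (1 + buffer)

reqs = {"Login": "Simple", "Inbox": "Complex",
        "Compose": "Average", "Address": "Average"}
print(estimate(reqs))  # 48.0 minutes
```

The base effort is 5 + 15 + 10 + 10 = 40 minutes, and the 20% buffer brings the total to 48 minutes, matching the slide.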
160. Manual Testing v1.0
Contents
Testing Techniques
Why to Automate?
Different Testing Tools
Which Test Case to Automate
Which Test Case Not to Automate
Benefits of Automation
162. Manual Testing v1.0
Why to Automate ?
Automated testing is recognized as a cost-efficient way to
increase application reliability while reducing the time and cost
of software quality programs.
163. Manual Testing v1.0
Different Type of Tools
Functional Testing Tools
Quick Test Pro.
Selenium
Test Management Tools
Quality Center
Performance Testing Tools
Load Runner
QA Load
164. Manual Testing v1.0
Which Test Cases to Automate?
Tests that need to be run for every build of the application
(sanity check, regression test)
Tests that use multiple data values for the same actions (data
driven tests)
Tests that require detailed information from application
internals (e.g., SQL, GUI attributes)
Stress/load testing
More repetitive execution?
Better candidate for
automation.
165. Manual Testing v1.0
Which Test Cases Not to Automate?
Usability testing
"How easy is the application to use?"
Tests without predictable results
"ASAP" testing
"We need to test NOW!"
Ad hoc/random testing
based on intuition and knowledge of application
One-time testing
Improvisation required? Not a good candidate for
automation.
166. Manual Testing v1.0
Benefits Of Automation Testing
Test Repeatability and Consistency
Automated tests should provide the same inputs and test
conditions no matter how many times they are run. Human
testers can make errors and lose effectiveness, especially
when they are fairly convinced the test won’t find anything
new.
Expanded and Practical Test Reuse
Automated tests provide expanded leverage due to the
negligible cost of executing the same test multiple times in
different environments and configurations, or of running
slightly modified tests using different input records or input
variables, which may cover conditions and paths that are
functionally quite different.
167. Manual Testing v1.0
Benefits Of Automation Testing
Practical Baseline Suites
Automated tests make it feasible to run a fairly
comprehensive suite of tests as an acceptance or baseline
suite to help ensure that small changes have not broken or
adversely impacted previously working features and
functionality. As tests are built, they are saved, maintained,
and accumulated. Automation makes it practical to run the
tests again for regression testing even small modifications.