MASTER OF SCIENCE IN INFORMATION TECHNOLOGY
MALAYSIA UNIVERSITY OF SCIENCE AND TECHNOLOGY
SESSION 2012/2013
COURSE : SOFTWARE TESTING TECHNIQUE &
BEST PRACTICES
COURSE CODE : MIT521
STUDENT NAME : YUDEP APOI
LECTURER NAME : DR. SELLAPPAN PALLANIAPAN
TOPICS : Survey of Software Testing and
Methods
MALAYSIA UNIVERSITY of
SCIENCE and TECHNOLOGY
CONFIDENTIAL
1.0 Introduction
In a software development methodology, or SDLC, testing is one of the phases; its purpose is to (1) verify that the software behaves "as specified"; (2) detect errors; and (3) validate that what has been specified is what the user actually wanted.
Verification answers the question: are we building the system right? At this stage, verification is the checking or testing of items, including software, for conformance and consistency by evaluating the results against pre-specified requirements.
Error detection: testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't or don't happen when they should.
Validation looks at system correctness – i.e. it is the process of checking that what has been specified is what the user actually wanted. The question we need to answer here is: are we building the right system?
According to the ANSI/IEEE 1059 standard, testing is the process of analyzing a software item to detect the differences between existing and required conditions (that is, defects/errors/bugs) and to evaluate the features of the software item.
2.0 Software Testing Methodology
As explained earlier, software testing is an integral part of the software development life cycle (SDLC). Many software testing methods are in wide use, and many different types of testing methods or techniques are applied as part of a software testing methodology. A few of them are listed below:
 White box testing
 Black box testing
 Gray box testing
 Unit testing
 Integration testing
 Regression testing
 Usability testing
 Performance testing
 Scalability testing
 Software stress testing
 Recovery testing
 Security testing
 Conformance testing
 Smoke testing
 Compatibility testing
 System testing
 Alpha testing
 Beta testing
The software testing methods described above can be applied in two ways: manually or through automation. In manual software testing, human testers manually exercise the code, test it and report the bugs they find. In automated software testing, the same process is performed by a computer by means of test automation tools such as WinRunner, LoadRunner, Test Director, etc.
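To make the manual/automated distinction concrete, the sketch below automates, with Python's standard unittest module, a check that a manual tester would otherwise perform by hand. The function under test (add_item) and its behaviour are hypothetical; this illustrates automated checking in general, not the commercial tools named above.

```python
import unittest

# Hypothetical function under test; in a real project it would be
# imported from the application code rather than defined in the test file.
def add_item(cart, item, quantity):
    """Add `quantity` units of `item` to a cart dictionary."""
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    cart[item] = cart.get(item, 0) + quantity
    return cart

class AddItemTest(unittest.TestCase):
    def test_adds_new_item(self):
        self.assertEqual(add_item({}, "pen", 2), {"pen": 2})

    def test_accumulates_existing_item(self):
        self.assertEqual(add_item({"pen": 1}, "pen", 2), {"pen": 3})

    def test_rejects_non_positive_quantity(self):
        with self.assertRaises(ValueError):
            add_item({}, "pen", 0)

if __name__ == "__main__":
    unittest.main()
```

Once such a script exists, it can be re-run after every change at no extra manual effort, which is the main appeal of automation.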
3.0 Case Study
Case Study 1 - Achieving the Full Potential of Test Automation (www.logigear.com)
This case study has concluded that, as with other areas of software development, the true
potential of software test automation is realized only within a framework that provides true
scalable structure. Since its introduction in 1994, the keyword-based method of test automation has become the dominant approach in Europe and is now taking the USA by storm, precisely because it provides the best way to achieve this goal.
Action Based Testing offers the latest innovations in keyword-driven testing from the original
architect of the keyword concept. Test design, test automation and test execution are all
performed within a spreadsheet environment, guided by a method focused on an elegant
structure of reusable high-level actions.
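To make the keyword idea concrete, the sketch below shows one possible way a keyword-driven runner might map spreadsheet-style rows of actions to executable functions. The action names, the CSV layout and the helper functions are illustrative assumptions, not the actual Action Based Testing or TestArchitect format.

```python
import csv

# Hypothetical action implementations; in a real framework these would
# drive the application under test through its UI or API.
def open_app(name):
    print(f"opening {name}")

def enter_text(field, value):
    print(f"typing '{value}' into {field}")

def check_value(field, expected):
    print(f"checking that {field} equals {expected}")

# Keyword table: the first cell of each row names an action,
# the remaining cells are its arguments.
ACTIONS = {
    "open app": open_app,
    "enter text": enter_text,
    "check value": check_value,
}

def run_keyword_file(path):
    """Execute a CSV file of keyword rows, one action per row."""
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row or row[0].startswith("#"):
                continue  # skip blank rows and comment rows
            keyword, *args = [cell.strip() for cell in row]
            ACTIONS[keyword](*args)

# Example keyword file (hypothetical):
#   open app,RCC-Intranet
#   enter text,username,yudep
#   check value,status,Logged in
```

The point of this structure is that test designers work only with the high-level rows, while automation engineers maintain the action implementations.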
TestArchitect, a test automation framework from LogiGear with features ranging from action
organization to globally distributed team management, offers the full power of Action Based
Testing to the entire testing organization including business analysts, test engineers,
automation engineers, test leads and managers.
Case Study 2 – Prudential Intranet Regional Callcenter System
At the time of writing this survey, I am working at Prudential Services Asia Sdn. Bhd. as a developer on an internal system called RCC-Intranet. The system includes HR Recruitment, Callback Management, Staff Information Management, Leave Management, Dashboard Report Management and Overtime Management modules.
The system is developed using a prototyping methodology, specifically evolutionary prototyping: as we develop the system, it is continually refined and rebuilt. It has been argued that prototyping, in some form or another, should be used all the time. However, prototyping is most beneficial in systems that will have many interactions with the users.
It has been found that prototyping is very effective in the analysis and design of on-line systems, especially for transaction processing, where the use of screen dialogs is much more in evidence. The greater the interaction between the computer and the user, the greater the benefit that can be obtained from building a quick system and letting the user play with it.
As we finish the system module by module, we let the users test it, and their output and feedback are used to continually rebuild and refine the system. To date (at the time this article is written), 80% of the modules have been developed and tested.
4.0 Test Plan
In the testing stage, the test plan is crucial. A Software Test Plan is a document describing the
testing scope and activities. It is the basis for formally testing any software/product in a
project. Below is the test plan template [2] and guideline:
TEST PLAN TEMPLATE
The format and content of a software test plan vary depending on the processes, standards,
and test management tools being implemented. Nevertheless, the following format, which is
based on the IEEE standard for software test documentation, provides a summary of what a test
plan can/should contain.
Test Plan Identifier:
 Provide a unique identifier for the document. (Adhere to the Configuration
Management System if you have one.)
Introduction:
 Provide an overview of the test plan.
 Specify the goals/objectives.
 Specify any constraints.
References:
 List the related documents, with links to them if available, including the following:
 Project Plan
 Configuration Management Plan
Test Items:
 List the test items (software/products) and their versions.
Features to be tested:
 List the features of the software/product to be tested.
 Provide references to the Requirements and/or Design specifications of the features to
be tested
Features Not to Be Tested:
 List the features of the software/product that will not be tested.
 Specify the reasons these features won't be tested.
Approach:
 Mention the overall approach to testing.
 Specify the testing levels [if it's a Master Test Plan], the testing types, and the testing
methods [Manual/Automated; White Box/Black Box/Gray Box]
Item Pass/Fail Criteria:
 Specify the criteria that will be used to determine whether each test item
(software/product) has passed or failed testing.
Suspension Criteria and Resumption Requirements:
 Specify the criteria to be used to suspend the testing activity.
 Specify the testing activities that must be redone when testing is resumed.
Test Deliverables:
 List test deliverables, and links to them if available, including the following:
 Test Plan (this document itself)
 Test Cases
 Test Scripts
 Defect/Enhancement Logs
 Test Reports
Test Environment:
 Specify the properties of test environment: hardware, software, network etc.
 List any testing or related tools.
Estimate:
 Provide a summary of test estimates (cost or effort) and/or provide a link to the
detailed estimation.
Schedule:
 Provide a summary of the schedule, specifying key test milestones, and/or provide a
link to the detailed schedule.
Staffing and Training Needs:
 Specify staffing needs by role and required skills.
 Identify training that is necessary to provide those skills, if not already acquired.
Responsibilities:
 List the responsibilities of each team/role/individual.
Risks:
 List the risks that have been identified.
 Specify the mitigation plan and the contingency plan for each risk.
Assumptions and Dependencies:
 List the assumptions that have been made during the preparation of this plan.
 List the dependencies.
Approvals:
 Specify the names and roles of all persons who must approve the plan.
 Provide space for signatures and dates. (If the document is to be printed.)
5.0 A Survey
Here are a few papers on software testing.
Jovanović, Irena, Software Testing Methods and Techniques
In this paper, the main testing methods and techniques are briefly described. A general classification is outlined: two testing methods – black box testing and white box testing – and their frequently used techniques:
 Black Box techniques: Equivalent Partitioning, Boundary Value Analysis, Cause-
Effect Graphing Techniques, and Comparison Testing;
 White Box techniques: Basis Path Testing, Loop Testing, and Control Structure
Testing.
Also, the classification of the IEEE Computer Society is illustrated.
Jiantao Pan (1999), Software Testing 18-849b Dependable Embedded Systems
Software testing is any activity aimed at evaluating an attribute or capability of a program or
system and determining that it meets its required results. [Hetzel88] Although crucial to
software quality and widely deployed by programmers and testers, software testing still
remains an art, due to limited understanding of the principles of software. The difficulty in
software testing stems from the complexity of software: we can not completely test a program
with moderate complexity. Testing is more than just debugging. The purpose of testing can
be quality assurance, verification and validation, or reliability estimation. Testing can be used
as a generic metric as well. Correctness testing and reliability testing are two major areas of
testing. Software testing is a trade-off between budget, time and quality.
Ibrahim K. El-Far and James A. Whittaker (2001), Model-based Software Testing, Florida
Institute of Technology
Software testing requires the use of a model to guide such efforts as test selection and test
verification. Often, such models are implicit, existing only in the head of a human tester,
applying test inputs in an ad hoc fashion. The mental model testers build encapsulates
application behavior, allowing testers to understand the application's capabilities and more effectively test its range of possible behaviors. When these mental models are written down, they become sharable, reusable testing artifacts. In this case, testers are performing what has come to be known as model-based testing. Model-based testing has recently gained
attention with the popularization of models (including UML) in software design and
development. There are a number of models of software in use today, a few of which make
good models for testing. This paper introduces model-based testing and discusses its tasks in
general terms with finite state models (arguably the most popular software models) as
examples. In addition, advantages, difficulties, and shortcoming of various model-based
approaches are concisely presented. Finally, we close with a discussion of where model-
based testing fits in the present and future of software engineering.
Mohd. Ehmer Khan (2010), Different Forms of Software Testing Techniques for Finding
Errors, Department of Information Technology, Al Musanna College of Technology,
Sultanate of Oman
Software testing is an activity, which is aimed for evaluating an attribute or capability of a
program and ensures that it meets the required result. There are many approaches to software
testing, but effective testing of a complex product is essentially a process of investigation, not merely a matter of creating and following routine procedure. It is often impossible to find all the errors in a program. This fundamental problem in testing thus throws open the question of what strategy we should adopt for testing. Thus, the selection of the right
strategy at the right time will make the software testing efficient and effective. In this paper I
have described software-testing techniques, which are classified by purpose.
Vu Nguyen, Test Case Point Analysis: An Approach to Estimating Software Testing Size,
Faculty of Information Technology University of Science, VNU-HCMC Ho Chi Minh city,
Vietnam
Quality assurance management is an essential component of the software development
lifecycle. To ensure quality, applicability, and usefulness of a product, development teams
must spend considerable time and resources testing, which makes the estimation of the
software testing effort, a critical activity. This paper proposes an approach, namely Test Case
Point Analysis (TCPA), to estimating the size of software testing work. The approach
measures the size of a software test case based on its checkpoints, preconditions and test data,
as well as the types of testing. This paper also describes a case study applying the TCPA to a
large testing project at KMS Technology. The results indicate that this approach is preferable to estimating based on testers' experience.
6.0 Test Best Practice
6.1 Methods
The purpose of 'Test Techniques and Methods' is to improve test process capability during
test design and execution by applying basic test techniques and methods and incident
management. Well-founded testing means that test design techniques and methods are
applied, supported (if possible and beneficial) by tools. Test design techniques are used to
derive and select test cases from requirements and design specifications. A test case consists
of the description of the input values, execution preconditions, the change process, and the
expected result. The test cases are documented in a test design specification. At a later stage,
as more information becomes available on the actual implementation, the test designs are
translated into test procedures. In a test procedure, also referred to as a manual test script, the
specific test actions and checks are arranged in an executable sequence. The tests will
subsequently be executed using these test procedures. The test design and execution activities
follow the test approach as defined in the test plan. During the test execution stage, incidents
(defects) are found and test incident reports are written. Incidents are logged using an
incident management system and thorough communication about the incidents with
stakeholders is established. For incident management a basic incident classification scheme is
established, and a basic incident repository is put into place.
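As a rough sketch of how such test cases and incidents might be recorded, the data classes below mirror the elements mentioned above (input values, execution preconditions, expected result, and a basic incident classification with a repository). The field names and severity levels are illustrative assumptions, not a prescribed scheme.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

@dataclass
class TestCase:
    identifier: str
    preconditions: List[str]
    input_values: Dict[str, str]
    expected_result: str

@dataclass
class Incident:
    identifier: str
    test_case_id: str
    severity: Severity
    description: str
    status: str = "open"

@dataclass
class IncidentRepository:
    incidents: List[Incident] = field(default_factory=list)

    def log(self, incident: Incident) -> None:
        self.incidents.append(incident)

    def open_incidents(self) -> List[Incident]:
        return [i for i in self.incidents if i.status == "open"]
```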
6.1.1 Static and Dynamic approach
There are many approaches to software testing. Reviews, walkthroughs, or inspections are
referred to as static testing, whereas actually executing programmed code with a given set of
test cases is referred to as dynamic testing. Static testing can be omitted, and unfortunately in practice often is. Dynamic testing takes place when the program itself is used. Dynamic testing may begin before the program is 100% complete, in order to test particular sections of code that are applied to discrete functions or modules. Typical techniques for this are either using stubs/drivers or execution from a debugger environment.
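A minimal illustration of the stub idea, under stated assumptions: the test below replaces a dependency that is not yet implemented (a payment gateway) with a stub, so that the checkout logic can be exercised dynamically before the whole program is complete. The names checkout and charge are hypothetical.

```python
import unittest
from unittest import mock

# Module under test (hypothetical): checkout() depends on an external
# payment gateway that may not be implemented or reachable yet.
def checkout(order_total, gateway):
    if order_total <= 0:
        return "nothing to pay"
    return "paid" if gateway.charge(order_total) else "declined"

class CheckoutTest(unittest.TestCase):
    def test_checkout_uses_gateway_result(self):
        # The stub stands in for the real gateway so the incomplete
        # system can still be tested dynamically.
        gateway_stub = mock.Mock()
        gateway_stub.charge.return_value = True
        self.assertEqual(checkout(50, gateway_stub), "paid")
        gateway_stub.charge.assert_called_once_with(50)

if __name__ == "__main__":
    unittest.main()
```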
6.1.2 The box approach
Software testing methods are traditionally divided into white- and black-box testing. These
two approaches are used to describe the point of view that a test engineer takes when
designing test cases.
White-box testing
White box testing is the detailed investigation of internal logic and structure of the code.
White box testing is also called glass box testing or open box testing. In order to perform white
box testing on an application, the tester needs to possess knowledge of the internal working
of the code. The tester needs to have a look inside the source code and find out which
unit/chunk of the code is behaving inappropriately.
Advantages:
 As the tester has knowledge of the source code, it becomes very easy to find out which type of data can help in testing the application effectively.
 It helps in optimizing the code.
 Extra lines of code, which can bring in hidden defects, can be removed.
 Due to the tester's knowledge of the code, maximum coverage is attained during test scenario writing.
Disadvantages:
 Because a skilled tester is needed to perform white box testing, the costs are increased.
 Sometimes it is impossible to look into every nook and corner to find hidden errors that may create problems, as many paths will go untested.
 It is difficult to maintain white box testing, as the use of specialized tools like code analyzers and debugging tools is required.
White-box testing (also known as clear box testing, glass box testing, and transparent box
testing and structural testing) tests internal structures or workings of a program, as opposed to
the functionality exposed to the end-user. In white-box testing an internal perspective of the
system, as well as programming skills, are used to design test cases. The tester chooses inputs
to exercise paths through the code and determine the appropriate outputs. This is analogous to
testing nodes in a circuit, e.g. in-circuit testing (ICT).
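The sketch below shows the white-box point of view in miniature: because the tester can see the branches inside the function, one test is written per branch so that every path is exercised. The function itself is a made-up example.

```python
import unittest

# Example function whose internal branches the white-box tests target.
def classify_discount(amount, is_member):
    if amount <= 0:
        raise ValueError("amount must be positive")
    if is_member and amount >= 100:
        return 0.15   # branch 1: member with a large order
    if is_member:
        return 0.05   # branch 2: member with a small order
    return 0.0        # branch 3: non-member

class ClassifyDiscountTest(unittest.TestCase):
    # One test per branch gives full branch coverage of the function.
    def test_invalid_amount(self):
        with self.assertRaises(ValueError):
            classify_discount(0, True)

    def test_member_large_order(self):
        self.assertEqual(classify_discount(150, True), 0.15)

    def test_member_small_order(self):
        self.assertEqual(classify_discount(20, True), 0.05)

    def test_non_member(self):
        self.assertEqual(classify_discount(150, False), 0.0)

if __name__ == "__main__":
    unittest.main()
```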
Black-box testing
The technique of testing without having any knowledge of the interior workings of the
application is Black Box testing. The tester is oblivious to the system architecture and does
not have access to the source code. Typically, when performing a black box test, a tester will
interact with the system's user interface by providing inputs and examining outputs without
knowing how and where the inputs are worked upon.
Advantages:
 Well suited and efficient for large code segments.
 Code Access not required.
 Clearly separates user's perspective from the developer's perspective through
visibly defined roles.
 Large numbers of moderately skilled testers can test the application with no
knowledge of implementation, programming language or operating systems.
Disadvantages:
 Limited Coverage since only a selected number of test scenarios are actually
performed.
 Inefficient testing, due to the fact that the tester only has limited knowledge
about an application.
 Blind Coverage, since the tester cannot target specific code segments or error
prone areas.
 The test cases are difficult to design.
Black box testing treats the software as a "black box", examining functionality without any
knowledge of internal implementation. The tester is only aware of what the software is
supposed to do, not how it does it. Black-box testing methods include: equivalence
partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table
testing, fuzz testing, model-based testing, use case testing, exploratory testing and
specification-based testing.
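As an example of one of these black-box techniques, the sketch below applies boundary value analysis to a hypothetical specification stating that ages from 18 to 65 inclusive are accepted; the tester works only from that statement, not from the code (a trivial stand-in implementation is included so the example runs).

```python
import unittest

# Stand-in implementation; a black-box tester would not see this code,
# only the specification "ages 18 to 65 inclusive are valid".
def is_valid_age(age):
    return 18 <= age <= 65

class AgeBoundaryTest(unittest.TestCase):
    def test_boundary_values(self):
        # Boundary value analysis: values at and just outside each boundary.
        cases = [
            (17, False),  # just below the lower boundary
            (18, True),   # lower boundary
            (19, True),   # just above the lower boundary
            (64, True),   # just below the upper boundary
            (65, True),   # upper boundary
            (66, False),  # just above the upper boundary
        ]
        for age, expected in cases:
            with self.subTest(age=age):
                self.assertEqual(is_valid_age(age), expected)

if __name__ == "__main__":
    unittest.main()
```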
Gray box testing
Grey-box testing (American spelling: gray-box testing) involves having knowledge of
internal data structures and algorithms for purposes of designing tests, while executing those
tests at the user, or black-box level. The tester is not required to have full access to the
software's source code. Manipulating input data and formatting output do not qualify as grey-
box, because the input and output are clearly outside of the "black box" that we are calling
the system under test. This distinction is particularly important when conducting integration
testing between two modules of code written by two different developers, where only the
interfaces are exposed for test. However, modifying a data repository does qualify as grey-
box, as the user would not normally be able to change the data outside of the system under
test. Grey-box testing may also include reverse engineering to determine, for instance,
boundary values or error messages.
Grey box testing is a technique to test the application with limited knowledge of its internal workings. In software testing, the saying "the more you know, the better" carries a lot of weight when testing an application.
Mastering the domain of a system always gives the tester an edge over someone with limited
domain knowledge. Unlike black box testing, where the tester only tests the application's user
interface, in grey box testing, the tester has access to design documents and the database.
Having this knowledge, the tester is able to better prepare test data and test scenarios when
making the test plan.
Advantages:
 Offers combined benefits of black box and white box testing wherever
possible.
 Grey box testers don't rely on the source code; instead they rely on interface
definition and functional specifications.
 Based on the limited information available, a grey box tester can design
excellent test scenarios especially around communication protocols and data
type handling.
 The test is done from the point of view of the user and not the designer.
Disadvantages:
 Since the access to source code is not available, the ability to go over the code
and test coverage is limited.
 The tests can be redundant if the software designer has already run a test case.
 Testing every possible input stream is unrealistic because it would take an
unreasonable amount of time; therefore, many program paths will go untested.
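A small sketch of the grey-box idea, with hypothetical names and schema: the test seeds the data repository directly (using knowledge of the internals) but verifies behaviour only through the public search function, i.e. at the black-box level.

```python
import sqlite3
import unittest

# Public interface under test (hypothetical): search staff by department.
def find_staff(conn, department):
    rows = conn.execute(
        "SELECT name FROM staff WHERE department = ?", (department,)
    )
    return [name for (name,) in rows]

class GreyBoxStaffSearchTest(unittest.TestCase):
    def setUp(self):
        # Grey-box step: seed the data repository directly, outside the
        # application's user interface.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE staff (name TEXT, department TEXT)")
        self.conn.executemany(
            "INSERT INTO staff VALUES (?, ?)",
            [("Alice", "HR"), ("Bob", "IT"), ("Chong", "HR")],
        )

    def test_search_by_department(self):
        # Black-box step: only the public function's output is checked.
        self.assertEqual(sorted(find_staff(self.conn, "HR")), ["Alice", "Chong"])

if __name__ == "__main__":
    unittest.main()
```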
6.1.3 Comparison
Black Box vs. Grey Box vs. White Box
1. Knowledge of internals: Black box – the internal workings of the application need not be known; Grey box – the tester has partial knowledge of the internal workings; White box – the tester has full knowledge of the internal workings of the application.
2. Other names: Black box – also known as closed box, data-driven or functional testing; Grey box – also called translucent testing, as the tester has limited knowledge of the insides of the application; White box – also known as clear box, structural or code-based testing.
3. Performed by: Black box – end users as well as testers and developers; Grey box – end users as well as testers and developers; White box – normally testers and developers.
4. Basis of testing: Black box – external expectations, with the internal behaviour of the application unknown; Grey box – high-level database diagrams and data flow diagrams; White box – the internal workings are fully known, so the tester can design test data accordingly.
5. Time and thoroughness: Black box – the least time-consuming and least exhaustive; Grey box – partly time-consuming and exhaustive; White box – the most exhaustive and time-consuming type of testing.
6. Algorithm testing: Black box – not suited; Grey box – not suited; White box – well suited.
7. Data domains and internal boundaries: Black box – can only be exercised by trial and error; Grey box – can be tested if known; White box – can be tested more thoroughly.
6.2 Tips
Take every aspect of testing seriously. Analyze test results thoroughly; do not ignore them. The final test result may be 'pass' or 'fail', but troubleshooting the root cause of a 'fail' will lead to the solution of the problem. Testers will be respected if they not only log the bugs but also provide solutions.
Maximize the test coverage every time any application is tested. Though 100 percent test coverage might not be possible, you can always try to get close to it.
Break the application into smaller functional modules to ensure maximum test coverage. For example, if you have divided your website application into modules and "accepting user information" is one of them, you can break this "user information" screen into smaller parts for writing test cases: UI testing, security testing, functional testing of the user information form, and so on. Apply all form field type and size tests, and negative and validation tests, on the input fields, and write all test cases for maximum coverage.
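One compact way to write those field-level cases is a parameterised test, sketched below for a hypothetical name field on the user information form; the validation rule and the 50-character limit are assumptions made for illustration.

```python
import unittest

# Hypothetical rule: the name field is mandatory and limited to 50 characters.
def validate_name(value):
    return bool(value) and len(value) <= 50

class UserInfoFieldTest(unittest.TestCase):
    def test_name_field_presence_and_size(self):
        cases = [
            ("", False),        # negative test: empty value
            ("A", True),        # minimum sensible input
            ("A" * 50, True),   # maximum allowed size
            ("A" * 51, False),  # negative test: one character over the limit
        ]
        for value, expected in cases:
            with self.subTest(length=len(value)):
                self.assertEqual(validate_name(value), expected)

if __name__ == "__main__":
    unittest.main()
```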
Write test cases for the intended functionality first, i.e. for valid conditions according to the requirements. Then write test cases for invalid conditions. This will cover expected as well as unexpected behavior of the application.
Write test cases during requirement analysis and the design phase itself. This way you can ensure that all the requirements are testable.
Make your test cases available to developers prior to coding. Don't keep your test cases to yourself, waiting for the final application release for testing in the hope of logging more bugs. Let developers analyze your test cases thoroughly to develop a quality application. This will also save rework time.
If possible identify and group your test cases for regression testing. This will ensure quick
and effective manual regression testing.
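If the team uses pytest, one common way to group cases for a quick regression run is a custom marker, as in the hedged sketch below; the marker name and the test itself are illustrative only.

```python
# test_leave_module.py -- illustrative regression grouping with a pytest marker.
import pytest

@pytest.mark.regression
def test_leave_balance_never_negative():
    # Placeholder assertion standing in for a real check on the leave module.
    balance = max(0, 10 - 3)
    assert balance >= 0

def test_new_feature_behaviour():
    assert True

# Register the marker in pytest.ini so pytest does not warn about it:
#   [pytest]
#   markers =
#       regression: quick checks re-run after every change
#
# Run only the regression group with:
#   pytest -m regression
```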
Applications requiring critical response time should be thoroughly tested for performance.
Performance testing is the critical part of many applications. In manual testing this is mostly
ignored by testers. Find out ways to test your application for performance. If it is not possible
to create test data manually, then write some basic scripts to create test data for performance
testing or ask the developers to write it for you.
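The kind of basic script mentioned above might look like the sketch below, which generates a CSV file of synthetic staff records for a load test; the field names and row count are assumptions, not taken from any real system.

```python
import csv
import random
import string

def random_name(length=8):
    return "".join(random.choices(string.ascii_lowercase, k=length)).title()

def generate_staff_csv(path, rows=10000):
    """Write `rows` synthetic staff records for performance test runs."""
    departments = ["HR", "IT", "Finance", "Callcenter"]
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["staff_id", "name", "department", "leave_balance"])
        for i in range(1, rows + 1):
            writer.writerow([
                i,
                random_name(),
                random.choice(departments),
                random.randint(0, 30),
            ])

if __name__ == "__main__":
    generate_staff_csv("staff_test_data.csv")
```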
Programmers should not test their own code. Basic unit testing of the developed application
should be enough for developers to release the application for the testers. But testers should
not force developers to release the product for testing. Let them take their own time.
Everyone from lead to manager will know when the module/update is released for testing and
they can estimate the testing time accordingly. This is a typical situation in an agile project
environment.
Go beyond requirement testing. Test the application for what it is not supposed to do.
7.0 Question and Discussion
Some of the major software testing controversies include:
What constitutes responsible software testing?
Members of the "context-driven" school of testing believe that there are no "best practices" of
testing, but rather that testing is a set of skills that allow the tester to select or invent testing
practices to suit each unique situation.
Agile vs. traditional
Should testers learn to work under conditions of uncertainty and constant change or should
they aim at process "maturity"? The agile testing movement has received growing popularity
since 2006 mainly in commercial circles, whereas government and military software
providers use this methodology but also the traditional test-last models (e.g. in the Waterfall
model).
Exploratory test vs. scripted
Should tests be designed at the same time as they are executed or should they be designed
beforehand?
Manual testing vs. automated
Some writers believe that test automation is so expensive relative to its value that it should be
used sparingly. In particular, test-driven development states that developers should write unit tests of the xUnit type before coding the functionality. The tests can then be considered a way to capture and implement the requirements.
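A minimal illustration of that test-first idea, with a made-up requirement: the unit tests below are written before the functionality exists and fail until calculate_overtime_pay is implemented, after which the simplest passing implementation is added.

```python
import unittest

# Step 1 (red): tests written before the feature exists; they fail until
# calculate_overtime_pay is implemented.
class OvertimePayTest(unittest.TestCase):
    def test_overtime_paid_at_one_and_a_half_times_rate(self):
        self.assertEqual(calculate_overtime_pay(hours=2, hourly_rate=10), 30)

    def test_no_overtime_means_no_pay(self):
        self.assertEqual(calculate_overtime_pay(hours=0, hourly_rate=10), 0)

# Step 2 (green): the simplest implementation that makes the tests pass.
def calculate_overtime_pay(hours, hourly_rate):
    return hours * hourly_rate * 1.5

if __name__ == "__main__":
    unittest.main()
```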
Software design vs. software implementation
Should testing be carried out only at the end or throughout the whole process?
Who watches the watchmen?
The idea is that any form of observation is also an interaction—the act of testing can also
affect that which is being tested.
8.0 Conclusion
In conclusion, software testing serves several purposes. The first is to reduce costly errors.
The cost of errors in software can vary from nothing at all to large amounts of money and
even the loss of life. There are hundreds of stories about failures of computer systems that
have been attributed to errors in software. There are many reasons why systems fail but the
issue that stands out the most is the lack of adequate testing.
Most of us have had an experience with software that did not work as expected. Software that
does not work can have a large impact on an organization. It can lead to many problems
including:
 Loss of money – this can include losing customers right through to financial penalties for non-compliance with legal requirements
 Loss of time – this can be caused by transactions taking a long time to process, but can also include staff not being able to work due to a fault or failure
 Damage to business reputation – if an organization is unable to provide service to its customers due to software problems, then the customers will lose confidence or faith in the organization (and probably take their business elsewhere)
 Injury or death – it might sound dramatic, but some safety-critical systems could result in injuries or deaths if they don't work properly (e.g. air traffic control software)
Testing is an important part of each software development process, no matter which
programming paradigm is used. In functional programming, the low-level nature of testing led to a lack of acceptance of software testing by parts of the community. Publications from the last few years show that testing of functional programs has (eventually) received some more attention.
References
1. Exploratory Testing, Cem Kaner,Florida Institute of Technology, Quality Assurance
Institute Worldwide Annual Software Testing Conference, Orlando, FL, November
2006
2. Software Testing by Jiantao Pan, Carnegie Mellon University
3. Leitner, A., Ciupa, I., Oriol, M., Meyer, B., Fiva, A., "Contract Driven Development
= Test Driven Development - Writing Test Cases", Proceedings of ESEC/FSE'07:
European Software Engineering Conference and the ACM SIGSOFT Symposium on
the Foundations of Software Engineering 2007, (Dubrovnik, Croatia), September
2007
4. Kaner, Cem; Falk, Jack; Nguyen, Hung Quoc (1999). Testing Computer Software, 2nd Ed. New York: John Wiley and Sons, Inc. 480 pages. ISBN 0-471-35846-0.
5. Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best
Practices in Software Management. Wiley-IEEE Computer Society Press. pp. 41–43.
ISBN 0-470-04212-5. http://www.wiley.com/WileyCDA/WileyTitle/productCd-
0470042125.html.
6. Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best
Practices in Software Management. Wiley-IEEE Computer Society Press. p. 426.
ISBN 0-470-04212-5. http://www.wiley.com/WileyCDA/WileyTitle/productCd-
0470042125.html.
7. Section 1.1.2, Certified Tester Foundation Level Syllabus, International Software Testing Qualifications Board
8. Principle 2, Section 1.3, Certified Tester Foundation Level Syllabus, International
Software Testing Qualifications Board
9. "Proceedings from the 5th International Conference on Software Testing and
Validation (ICST). Software Competence Center Hagenberg. "Test Design: Lessons
Learned and Practical Implications.".
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4578383.
10. Software errors cost U.S. economy $59.5 billion annually, NIST report
11. McConnell, Steve (2004). Code Complete (2nd ed.). Microsoft Press. p. 29. ISBN 0-
7356-1967-0.
12. see D. Gelperin and W.C. Hetzel
13. Myers, Glenford J. (1979). The Art of Software Testing. John Wiley and Sons. ISBN 0-471-04328-1.
14. Company, People's Computer (1987). "Dr. Dobb's journal of software tools for the
professional programmer". Dr. Dobb's journal of software tools for the professional
programmer (M&T Pub) 12 (1–6): 116.
http://books.google.com/?id=7RoIAAAAIAAJ.
15. Gelperin, D.; B. Hetzel (1988). "The Growth of Software Testing". CACM 31 (6).
ISSN 0001-0782.
16. until 1956 it was the debugging oriented period, when testing was often associated to
debugging: there was no clear difference between testing and debugging. Gelperin,
D.; B. Hetzel (1988). "The Growth of Software Testing". CACM 31 (6). ISSN 0001-
0782.
17. From 1957–1978 there was the demonstration oriented period where debugging and
testing was distinguished now - in this period it was shown, that software satisfies the
requirements. Gelperin, D.; B. Hetzel (1988). "The Growth of Software Testing".
CACM 31 (6). ISSN 0001-0782.
18. The time between 1979–1982 is announced as the destruction oriented period, where
the goal was to find errors. Gelperin, D.; B. Hetzel (1988). "The Growth of Software
Testing". CACM 31 (6). ISSN 0001-0782.
19. 1983–1987 is classified as the evaluation oriented period: intention here is that
during the software lifecycle a product evaluation is provided and measuring quality.
Gelperin, D.; B. Hetzel (1988). "The Growth of Software Testing". CACM 31 (6).
ISSN 0001-0782.
20. From 1988 on it was seen as prevention oriented period where tests were to
demonstrate that software satisfies its specification, to detect faults and to prevent
faults. Gelperin, D.; B. Hetzel (1988). "The Growth of Software Testing". CACM 31
(6). ISSN 0001-0782.
21. Introduction, Code Coverage Analysis, Steve Cornett
22. Ron, Patton. Software Testing.
23. Laycock, G. T. (1993) (PostScript). The Theory and Practice of Specification Based
Software Testing. Dept of Computer Science, Sheffield University, UK.
http://www.mcs.le.ac.uk/people/gtl1/thesis.ps.gz. Retrieved 2008-02-13.
24. Savenkov, Roman (2008). How to Become a Software Tester. Roman Savenkov
Consulting. p. 159. ISBN 978-0-615-23372-7.
25. Patton, Ron. Software Testing.
26. "SOA Testing Tools for Black, White and Gray Box SOA Testing Techniques".
Crosschecknet.com.
http://www.crosschecknet.com/soa_testing_black_white_gray_box.php. Retrieved
2012-12-10.
27. "Visual testing of software - Helsinki University of Technology" (PDF).
http://www.cs.hut.fi/~jlonnber/VisualTesting.pdf. Retrieved 2012-01-13.
28. "Article on visual testing in Test Magazine". Testmagazine.co.uk.
http://www.testmagazine.co.uk/2011/04/visual-testing. Retrieved 2012-01-13.
29. "SWEBOK Guide - Chapter 5". Computer.org. http://www.computer.org/portal/web/swebok/html/ch5#Ref2.1. Retrieved 2012-01-13.
30. Binder, Robert V. (1999). Testing Object-Oriented Systems: Objects, Patterns, and
Tools. Addison-Wesley Professional. p. 45. ISBN 0-201-80938-9.
31. Beizer, Boris (1990). Software Testing Techniques (Second ed.). New York: Van
Nostrand Reinhold. pp. 21,430. ISBN 0-442-20672-0.
32. van Veenendaal, Erik. "Standard glossary of terms used in Software Testing".
http://www.astqb.org/get-certified/istqb-syllabi-the-istqb-software-tester-certification-
body-of-knowledge/. Retrieved 4 January 2013.
33. EtestingHub-Online Free Software Testing Tutorial. "e)Testing Phase in Software
Testing:". Etestinghub.com. http://www.etestinghub.com/testing_lifecycles.php#2.
Retrieved 2012-01-13.
34. Myers, Glenford J. (1979). The Art of Software Testing. John Wiley and Sons.
pp. 145–146. ISBN 0-471-04328-1.
35. Dustin, Elfriede (2002). Effective Software Testing. Addison Wesley. p. 3. ISBN 0-
201-79429-2.
36. Pan, Jiantao (Spring 1999). "Software Testing (18-849b Dependable Embedded
Systems)". Topics in Dependable Embedded Systems. Electrical and Computer
Engineering Department, Carnegie Mellon University.
http://www.ece.cmu.edu/~koopman/des_s99/sw_testing/.
Contenu connexe

Tendances

IT 8076 Software Testing Unit1
IT 8076 Software Testing Unit1IT 8076 Software Testing Unit1
IT 8076 Software Testing Unit1Roselin Mary S
 
software testing strategies
software testing strategiessoftware testing strategies
software testing strategiesHemanth Gajula
 
Software Testing
Software TestingSoftware Testing
Software TestingKiran Kumar
 
Chapter 3 SOFTWARE TESTING PROCESS
Chapter 3 SOFTWARE TESTING PROCESSChapter 3 SOFTWARE TESTING PROCESS
Chapter 3 SOFTWARE TESTING PROCESSst. michael
 
Research issues in object oriented software testing
Research issues in object oriented software testingResearch issues in object oriented software testing
Research issues in object oriented software testingAnshul Vinayak
 
What are Software Testing Methodologies | Software Testing Techniques | Edureka
What are Software Testing Methodologies | Software Testing Techniques | EdurekaWhat are Software Testing Methodologies | Software Testing Techniques | Edureka
What are Software Testing Methodologies | Software Testing Techniques | EdurekaEdureka!
 
Unit 2 hci in software process
Unit 2   hci in software processUnit 2   hci in software process
Unit 2 hci in software processRoselin Mary S
 
A survey of software testing
A survey of software testingA survey of software testing
A survey of software testingTao He
 
Approaches to Software Testing
Approaches to Software TestingApproaches to Software Testing
Approaches to Software TestingScott Barber
 
Software Testing and Quality Assurance Assignment 3
Software Testing and Quality Assurance Assignment 3Software Testing and Quality Assurance Assignment 3
Software Testing and Quality Assurance Assignment 3Gurpreet singh
 
Software testing
Software testingSoftware testing
Software testingSengu Msc
 

Tendances (16)

IT 8076 Software Testing Unit1
IT 8076 Software Testing Unit1IT 8076 Software Testing Unit1
IT 8076 Software Testing Unit1
 
@#$@#$@#$"""@#$@#$"""
@#$@#$@#$"""@#$@#$"""@#$@#$@#$"""@#$@#$"""
@#$@#$@#$"""@#$@#$"""
 
software testing strategies
software testing strategiessoftware testing strategies
software testing strategies
 
Software Testing
Software TestingSoftware Testing
Software Testing
 
Chapter 3 SOFTWARE TESTING PROCESS
Chapter 3 SOFTWARE TESTING PROCESSChapter 3 SOFTWARE TESTING PROCESS
Chapter 3 SOFTWARE TESTING PROCESS
 
Research issues in object oriented software testing
Research issues in object oriented software testingResearch issues in object oriented software testing
Research issues in object oriented software testing
 
What are Software Testing Methodologies | Software Testing Techniques | Edureka
What are Software Testing Methodologies | Software Testing Techniques | EdurekaWhat are Software Testing Methodologies | Software Testing Techniques | Edureka
What are Software Testing Methodologies | Software Testing Techniques | Edureka
 
Unit 2 hci in software process
Unit 2   hci in software processUnit 2   hci in software process
Unit 2 hci in software process
 
Software testing
Software testingSoftware testing
Software testing
 
A survey of software testing
A survey of software testingA survey of software testing
A survey of software testing
 
Software testing ppt
Software testing pptSoftware testing ppt
Software testing ppt
 
Approaches to Software Testing
Approaches to Software TestingApproaches to Software Testing
Approaches to Software Testing
 
Software Testing and Quality Assurance Assignment 3
Software Testing and Quality Assurance Assignment 3Software Testing and Quality Assurance Assignment 3
Software Testing and Quality Assurance Assignment 3
 
Testing fundamentals
Testing fundamentalsTesting fundamentals
Testing fundamentals
 
Software testing
Software testingSoftware testing
Software testing
 
CTFL Module 02
CTFL Module 02CTFL Module 02
CTFL Module 02
 

Similaire à MIT521 software testing (2012) v2

Software testing techniques - www.testersforum.com
Software testing techniques - www.testersforum.comSoftware testing techniques - www.testersforum.com
Software testing techniques - www.testersforum.comwww.testersforum.com
 
Software testing & Quality Assurance
Software testing & Quality Assurance Software testing & Quality Assurance
Software testing & Quality Assurance Webtech Learning
 
Software testing
Software testingSoftware testing
Software testingSengu Msc
 
Welingkar_final project_ppt_IMPORTANCE & NEED FOR TESTING
Welingkar_final project_ppt_IMPORTANCE & NEED FOR TESTINGWelingkar_final project_ppt_IMPORTANCE & NEED FOR TESTING
Welingkar_final project_ppt_IMPORTANCE & NEED FOR TESTINGSachin Pathania
 
Application Testing For Software Testing
Application Testing For Software TestingApplication Testing For Software Testing
Application Testing For Software TestingRobyn Champagne
 
Testing Slides 1 (Testing Intro+Static Testing).pdf
Testing Slides 1 (Testing Intro+Static Testing).pdfTesting Slides 1 (Testing Intro+Static Testing).pdf
Testing Slides 1 (Testing Intro+Static Testing).pdfMuhammadShoaibHussai2
 
Software Testing Life Cycle
Software Testing Life CycleSoftware Testing Life Cycle
Software Testing Life CycleUdayakumar Sree
 
Software testing
Software testingSoftware testing
Software testingRavi Dasari
 
Importance of Testing in SDLC
Importance of Testing in SDLCImportance of Testing in SDLC
Importance of Testing in SDLCIJEACS
 
10. Software testing overview
10. Software testing overview10. Software testing overview
10. Software testing overviewghayour abbas
 
SWE-401 - 10. Software Testing Overview
SWE-401 - 10. Software Testing OverviewSWE-401 - 10. Software Testing Overview
SWE-401 - 10. Software Testing Overviewghayour abbas
 
softwaretestingppt-120810095500-phpapp02 (1).pdf
softwaretestingppt-120810095500-phpapp02 (1).pdfsoftwaretestingppt-120810095500-phpapp02 (1).pdf
softwaretestingppt-120810095500-phpapp02 (1).pdfBabaShaikh3
 
softwaretestingppt-FINAL-PPT-1
softwaretestingppt-FINAL-PPT-1softwaretestingppt-FINAL-PPT-1
softwaretestingppt-FINAL-PPT-1FAIZALSAIYED
 
Testing in Software Engineering.docx
Testing in Software Engineering.docxTesting in Software Engineering.docx
Testing in Software Engineering.docx8759000398
 
38475471 qa-and-software-testing-interview-questions-and-answers
38475471 qa-and-software-testing-interview-questions-and-answers38475471 qa-and-software-testing-interview-questions-and-answers
38475471 qa-and-software-testing-interview-questions-and-answersMaria FutureThoughts
 

Similaire à MIT521 software testing (2012) v2 (20)

Too many files
Too many filesToo many files
Too many files
 
Software testing techniques - www.testersforum.com
Software testing techniques - www.testersforum.comSoftware testing techniques - www.testersforum.com
Software testing techniques - www.testersforum.com
 
Software testing & Quality Assurance
Software testing & Quality Assurance Software testing & Quality Assurance
Software testing & Quality Assurance
 
Software testing
Software testingSoftware testing
Software testing
 
Welingkar_final project_ppt_IMPORTANCE & NEED FOR TESTING
Welingkar_final project_ppt_IMPORTANCE & NEED FOR TESTINGWelingkar_final project_ppt_IMPORTANCE & NEED FOR TESTING
Welingkar_final project_ppt_IMPORTANCE & NEED FOR TESTING
 
Application Testing For Software Testing
Application Testing For Software TestingApplication Testing For Software Testing
Application Testing For Software Testing
 
Best software testing course
Best software testing courseBest software testing course
Best software testing course
 
Testing Slides 1 (Testing Intro+Static Testing).pdf
Testing Slides 1 (Testing Intro+Static Testing).pdfTesting Slides 1 (Testing Intro+Static Testing).pdf
Testing Slides 1 (Testing Intro+Static Testing).pdf
 
Software Testing Life Cycle
Software Testing Life CycleSoftware Testing Life Cycle
Software Testing Life Cycle
 
T0 numtq0nje=
T0 numtq0nje=T0 numtq0nje=
T0 numtq0nje=
 
Software testing
Software testingSoftware testing
Software testing
 
Importance of Testing in SDLC
Importance of Testing in SDLCImportance of Testing in SDLC
Importance of Testing in SDLC
 
10. Software testing overview
10. Software testing overview10. Software testing overview
10. Software testing overview
 
SWE-401 - 10. Software Testing Overview
SWE-401 - 10. Software Testing OverviewSWE-401 - 10. Software Testing Overview
SWE-401 - 10. Software Testing Overview
 
Unit 5 st ppt
Unit 5 st pptUnit 5 st ppt
Unit 5 st ppt
 
Testing
TestingTesting
Testing
 
softwaretestingppt-120810095500-phpapp02 (1).pdf
softwaretestingppt-120810095500-phpapp02 (1).pdfsoftwaretestingppt-120810095500-phpapp02 (1).pdf
softwaretestingppt-120810095500-phpapp02 (1).pdf
 
softwaretestingppt-FINAL-PPT-1
softwaretestingppt-FINAL-PPT-1softwaretestingppt-FINAL-PPT-1
softwaretestingppt-FINAL-PPT-1
 
Testing in Software Engineering.docx
Testing in Software Engineering.docxTesting in Software Engineering.docx
Testing in Software Engineering.docx
 
38475471 qa-and-software-testing-interview-questions-and-answers
38475471 qa-and-software-testing-interview-questions-and-answers38475471 qa-and-software-testing-interview-questions-and-answers
38475471 qa-and-software-testing-interview-questions-and-answers
 

Plus de Yudep Apoi

Amalan Pertanian Yang Baik Untuk Penanaman Lada
Amalan Pertanian Yang Baik Untuk Penanaman LadaAmalan Pertanian Yang Baik Untuk Penanaman Lada
Amalan Pertanian Yang Baik Untuk Penanaman LadaYudep Apoi
 
RHB SE User Manual Draft
RHB SE User Manual DraftRHB SE User Manual Draft
RHB SE User Manual DraftYudep Apoi
 
Steps how to create active x using visual studio 2008
Steps how to create active x using visual studio 2008Steps how to create active x using visual studio 2008
Steps how to create active x using visual studio 2008Yudep Apoi
 
Web aggregation and mashup with kapow mashup server
Web aggregation and mashup with kapow mashup serverWeb aggregation and mashup with kapow mashup server
Web aggregation and mashup with kapow mashup serverYudep Apoi
 
MIT520 software architecture assignments (2012) - 1
MIT520   software architecture assignments (2012) - 1MIT520   software architecture assignments (2012) - 1
MIT520 software architecture assignments (2012) - 1Yudep Apoi
 
Intranet (leave management module) admin
Intranet (leave management module) adminIntranet (leave management module) admin
Intranet (leave management module) adminYudep Apoi
 
Intranet (hiring module)
Intranet (hiring module)Intranet (hiring module)
Intranet (hiring module)Yudep Apoi
 
Intranet (callback module)
Intranet (callback module)Intranet (callback module)
Intranet (callback module)Yudep Apoi
 
Intranet (attendance module)
Intranet (attendance module)Intranet (attendance module)
Intranet (attendance module)Yudep Apoi
 

Plus de Yudep Apoi (9)

Amalan Pertanian Yang Baik Untuk Penanaman Lada
Amalan Pertanian Yang Baik Untuk Penanaman LadaAmalan Pertanian Yang Baik Untuk Penanaman Lada
Amalan Pertanian Yang Baik Untuk Penanaman Lada
 
RHB SE User Manual Draft
RHB SE User Manual DraftRHB SE User Manual Draft
RHB SE User Manual Draft
 
Steps how to create active x using visual studio 2008
Steps how to create active x using visual studio 2008Steps how to create active x using visual studio 2008
Steps how to create active x using visual studio 2008
 
Web aggregation and mashup with kapow mashup server
Web aggregation and mashup with kapow mashup serverWeb aggregation and mashup with kapow mashup server
Web aggregation and mashup with kapow mashup server
 
MIT520 software architecture assignments (2012) - 1
MIT520   software architecture assignments (2012) - 1MIT520   software architecture assignments (2012) - 1
MIT520 software architecture assignments (2012) - 1
 
Intranet (leave management module) admin
Intranet (leave management module) adminIntranet (leave management module) admin
Intranet (leave management module) admin
 
Intranet (hiring module)
Intranet (hiring module)Intranet (hiring module)
Intranet (hiring module)
 
Intranet (callback module)
Intranet (callback module)Intranet (callback module)
Intranet (callback module)
 
Intranet (attendance module)
Intranet (attendance module)Intranet (attendance module)
Intranet (attendance module)
 

Dernier

UiPath Studio Web workshop series - Day 6
UiPath Studio Web workshop series - Day 6UiPath Studio Web workshop series - Day 6
UiPath Studio Web workshop series - Day 6DianaGray10
 
Crea il tuo assistente AI con lo Stregatto (open source python framework)
Crea il tuo assistente AI con lo Stregatto (open source python framework)Crea il tuo assistente AI con lo Stregatto (open source python framework)
Crea il tuo assistente AI con lo Stregatto (open source python framework)Commit University
 
Igniting Next Level Productivity with AI-Infused Data Integration Workflows
Igniting Next Level Productivity with AI-Infused Data Integration WorkflowsIgniting Next Level Productivity with AI-Infused Data Integration Workflows
Igniting Next Level Productivity with AI-Infused Data Integration WorkflowsSafe Software
 
Cybersecurity Workshop #1.pptx
Cybersecurity Workshop #1.pptxCybersecurity Workshop #1.pptx
Cybersecurity Workshop #1.pptxGDSC PJATK
 
Salesforce Miami User Group Event - 1st Quarter 2024
Salesforce Miami User Group Event - 1st Quarter 2024Salesforce Miami User Group Event - 1st Quarter 2024
Salesforce Miami User Group Event - 1st Quarter 2024SkyPlanner
 
Bird eye's view on Camunda open source ecosystem
Bird eye's view on Camunda open source ecosystemBird eye's view on Camunda open source ecosystem
Bird eye's view on Camunda open source ecosystemAsko Soukka
 
All in AI: LLM Landscape & RAG in 2024 with Mark Ryan (Google) & Jerry Liu (L...
All in AI: LLM Landscape & RAG in 2024 with Mark Ryan (Google) & Jerry Liu (L...All in AI: LLM Landscape & RAG in 2024 with Mark Ryan (Google) & Jerry Liu (L...
All in AI: LLM Landscape & RAG in 2024 with Mark Ryan (Google) & Jerry Liu (L...Daniel Zivkovic
 
UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8DianaGray10
 
Nanopower In Semiconductor Industry.pdf
Nanopower  In Semiconductor Industry.pdfNanopower  In Semiconductor Industry.pdf
Nanopower In Semiconductor Industry.pdfPedro Manuel
 
UiPath Platform: The Backend Engine Powering Your Automation - Session 1
UiPath Platform: The Backend Engine Powering Your Automation - Session 1UiPath Platform: The Backend Engine Powering Your Automation - Session 1
UiPath Platform: The Backend Engine Powering Your Automation - Session 1DianaGray10
 
UiPath Community: AI for UiPath Automation Developers
UiPath Community: AI for UiPath Automation DevelopersUiPath Community: AI for UiPath Automation Developers
UiPath Community: AI for UiPath Automation DevelopersUiPathCommunity
 
Artificial Intelligence & SEO Trends for 2024
Artificial Intelligence & SEO Trends for 2024Artificial Intelligence & SEO Trends for 2024
Artificial Intelligence & SEO Trends for 2024D Cloud Solutions
 
COMPUTER 10 Lesson 8 - Building a Website
COMPUTER 10 Lesson 8 - Building a WebsiteCOMPUTER 10 Lesson 8 - Building a Website
COMPUTER 10 Lesson 8 - Building a Websitedgelyza
 
Building Your Own AI Instance (TBLC AI )
Building Your Own AI Instance (TBLC AI )Building Your Own AI Instance (TBLC AI )
Building Your Own AI Instance (TBLC AI )Brian Pichman
 
Machine Learning Model Validation (Aijun Zhang 2024).pdf
Machine Learning Model Validation (Aijun Zhang 2024).pdfMachine Learning Model Validation (Aijun Zhang 2024).pdf
Machine Learning Model Validation (Aijun Zhang 2024).pdfAijun Zhang
 
Introduction to Matsuo Laboratory (ENG).pptx
Introduction to Matsuo Laboratory (ENG).pptxIntroduction to Matsuo Laboratory (ENG).pptx
Introduction to Matsuo Laboratory (ENG).pptxMatsuo Lab
 
20230202 - Introduction to tis-py
20230202 - Introduction to tis-py20230202 - Introduction to tis-py
20230202 - Introduction to tis-pyJamie (Taka) Wang
 
Comparing Sidecar-less Service Mesh from Cilium and Istio
Comparing Sidecar-less Service Mesh from Cilium and IstioComparing Sidecar-less Service Mesh from Cilium and Istio
Comparing Sidecar-less Service Mesh from Cilium and IstioChristian Posta
 
UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"
UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"
UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"DianaGray10
 

Dernier (20)

UiPath Studio Web workshop series - Day 6
UiPath Studio Web workshop series - Day 6UiPath Studio Web workshop series - Day 6
UiPath Studio Web workshop series - Day 6
 
Crea il tuo assistente AI con lo Stregatto (open source python framework)
Crea il tuo assistente AI con lo Stregatto (open source python framework)Crea il tuo assistente AI con lo Stregatto (open source python framework)
Crea il tuo assistente AI con lo Stregatto (open source python framework)
 
Igniting Next Level Productivity with AI-Infused Data Integration Workflows
Igniting Next Level Productivity with AI-Infused Data Integration WorkflowsIgniting Next Level Productivity with AI-Infused Data Integration Workflows
Igniting Next Level Productivity with AI-Infused Data Integration Workflows
 
Cybersecurity Workshop #1.pptx
Cybersecurity Workshop #1.pptxCybersecurity Workshop #1.pptx
Cybersecurity Workshop #1.pptx
 
Salesforce Miami User Group Event - 1st Quarter 2024
Salesforce Miami User Group Event - 1st Quarter 2024Salesforce Miami User Group Event - 1st Quarter 2024
Salesforce Miami User Group Event - 1st Quarter 2024
 
Bird eye's view on Camunda open source ecosystem
Bird eye's view on Camunda open source ecosystemBird eye's view on Camunda open source ecosystem
Bird eye's view on Camunda open source ecosystem
 
All in AI: LLM Landscape & RAG in 2024 with Mark Ryan (Google) & Jerry Liu (L...
All in AI: LLM Landscape & RAG in 2024 with Mark Ryan (Google) & Jerry Liu (L...All in AI: LLM Landscape & RAG in 2024 with Mark Ryan (Google) & Jerry Liu (L...
All in AI: LLM Landscape & RAG in 2024 with Mark Ryan (Google) & Jerry Liu (L...
 
UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8
 
Nanopower In Semiconductor Industry.pdf
Nanopower  In Semiconductor Industry.pdfNanopower  In Semiconductor Industry.pdf
Nanopower In Semiconductor Industry.pdf
 
20230104 - machine vision
20230104 - machine vision20230104 - machine vision
20230104 - machine vision
 
UiPath Platform: The Backend Engine Powering Your Automation - Session 1
UiPath Platform: The Backend Engine Powering Your Automation - Session 1UiPath Platform: The Backend Engine Powering Your Automation - Session 1
UiPath Platform: The Backend Engine Powering Your Automation - Session 1
 
UiPath Community: AI for UiPath Automation Developers
UiPath Community: AI for UiPath Automation DevelopersUiPath Community: AI for UiPath Automation Developers
UiPath Community: AI for UiPath Automation Developers
 
Artificial Intelligence & SEO Trends for 2024
Artificial Intelligence & SEO Trends for 2024Artificial Intelligence & SEO Trends for 2024
Artificial Intelligence & SEO Trends for 2024
 
COMPUTER 10 Lesson 8 - Building a Website
COMPUTER 10 Lesson 8 - Building a WebsiteCOMPUTER 10 Lesson 8 - Building a Website
COMPUTER 10 Lesson 8 - Building a Website
 
Building Your Own AI Instance (TBLC AI )
Building Your Own AI Instance (TBLC AI )Building Your Own AI Instance (TBLC AI )
Building Your Own AI Instance (TBLC AI )
 
Machine Learning Model Validation (Aijun Zhang 2024).pdf
Machine Learning Model Validation (Aijun Zhang 2024).pdfMachine Learning Model Validation (Aijun Zhang 2024).pdf
Machine Learning Model Validation (Aijun Zhang 2024).pdf
 
Introduction to Matsuo Laboratory (ENG).pptx
Introduction to Matsuo Laboratory (ENG).pptxIntroduction to Matsuo Laboratory (ENG).pptx
Introduction to Matsuo Laboratory (ENG).pptx
 
20230202 - Introduction to tis-py
20230202 - Introduction to tis-py20230202 - Introduction to tis-py
20230202 - Introduction to tis-py
 
Comparing Sidecar-less Service Mesh from Cilium and Istio
Comparing Sidecar-less Service Mesh from Cilium and IstioComparing Sidecar-less Service Mesh from Cilium and Istio
Comparing Sidecar-less Service Mesh from Cilium and Istio
 
UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"
UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"
UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"
 

Case Study 2 – Prudential Intranet Regional Callcenter System

At the time of writing this survey, I am working at Prudential Services Asia Sdn. Bhd. as a developer on an internal system called RCC-Intranet. The system comprises HR Recruitment, Callback Management, Staff Information Management, Leave Management, Dashboard Report Management and Overtime Management modules. It is developed using a prototyping methodology, specifically evolutionary prototyping: as we develop the system, it is continually refined and rebuilt.

It has been argued that prototyping, in some form or another, should be used all the time. However, prototyping is most beneficial in systems that will have many interactions with the users. Prototyping has been found to be very effective in the analysis and design of on-line systems, especially for transaction processing, where the use of screen dialogs is much more in evidence. The greater the interaction between the computer and the user, the greater the benefit that can be obtained from building a quick system and letting the user play with it. As we finish the system module by module, we let the users test it, and their feedback is used to continually rebuild and refine the system. As of the date this article was written, 80% of the modules had been developed and tested.

4.0 Test Plan

In the testing stage, the test plan is crucial. A software test plan is a document describing the testing scope and activities; it is the basis for formally testing any software or product in a project. Below is the test plan template [2] and guideline.

TEST PLAN TEMPLATE

The format and content of a software test plan vary depending on the processes, standards, and test management tools being implemented. Nevertheless, the following format, which is based on the IEEE standard for software test documentation, summarizes what a test plan can and should contain.

Test Plan Identifier:
- Provide a unique identifier for the document. (Adhere to the Configuration Management System if you have one.)

Introduction:
- Provide an overview of the test plan.
- Specify the goals/objectives.
- Specify any constraints.

References:
- List the related documents, with links to them if available, including the following:
  - Project Plan
  - Configuration Management Plan

Test Items:
- List the test items (software/products) and their versions.

Features to Be Tested:
- List the features of the software/product to be tested.
- Provide references to the requirements and/or design specifications of the features to be tested.

Features Not to Be Tested:
- List the features of the software/product that will not be tested.
- Specify the reasons these features won't be tested.

Approach:
- Describe the overall approach to testing.
- Specify the testing levels (if it is a master test plan), the testing types, and the testing methods (manual/automated; white box/black box/gray box).

Item Pass/Fail Criteria:
- Specify the criteria that will be used to determine whether each test item (software/product) has passed or failed testing.

Suspension Criteria and Resumption Requirements:
- Specify the criteria to be used to suspend the testing activity.
- Specify the testing activities that must be redone when testing is resumed.

Test Deliverables:
- List the test deliverables, with links to them if available, including the following:
  - Test Plan (this document itself)
  - Test Cases
  - Test Scripts
  - Defect/Enhancement Logs
  - Test Reports
Test Environment:
- Specify the properties of the test environment: hardware, software, network, etc.
- List any testing or related tools.

Estimate:
- Provide a summary of test estimates (cost or effort) and/or a link to the detailed estimation.

Schedule:
- Provide a summary of the schedule, specifying key test milestones, and/or a link to the detailed schedule.

Staffing and Training Needs:
- Specify staffing needs by role and required skills.
- Identify any training that is necessary to provide those skills, if not already acquired.

Responsibilities:
- List the responsibilities of each team/role/individual.

Risks:
- List the risks that have been identified.
- Specify the mitigation plan and the contingency plan for each risk.

Assumptions and Dependencies:
- List the assumptions that have been made during the preparation of this plan.
- List the dependencies.

Approvals:
- Specify the names and roles of all persons who must approve the plan.
- Provide space for signatures and dates (if the document is to be printed).
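As an illustration only (not part of the template itself), the outline above can also be captured as structured data so that an incomplete draft plan is flagged before review. The section names follow the template; the draft values and the checking function below are hypothetical.

# Hypothetical sketch: the IEEE-style test plan outline as data, so that
# missing sections can be flagged automatically before a plan review.

REQUIRED_SECTIONS = [
    "Test Plan Identifier", "Introduction", "References", "Test Items",
    "Features to Be Tested", "Features Not to Be Tested", "Approach",
    "Item Pass/Fail Criteria", "Suspension Criteria and Resumption Requirements",
    "Test Deliverables", "Test Environment", "Estimate", "Schedule",
    "Staffing and Training Needs", "Responsibilities", "Risks",
    "Assumptions and Dependencies", "Approvals",
]

def missing_sections(plan: dict) -> list:
    """Return the template sections that are absent or left empty in a draft plan."""
    return [name for name in REQUIRED_SECTIONS if not plan.get(name)]

# Example draft with only a few sections filled in (illustrative values).
draft_plan = {
    "Test Plan Identifier": "RCC-INTRANET-TP-001",
    "Introduction": "System test plan for the Leave Management module.",
    "Approach": "Manual black-box testing of screen dialogs; regression suite per release.",
}

if __name__ == "__main__":
    for section in missing_sections(draft_plan):
        print("Section still to be completed:", section)

Running the sketch simply lists the sections that still need to be written, which is all such a completeness check has to do.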
5.0 A Survey

Here are a few papers that discuss software testing.

Jovanović, Irena, "Software Testing Methods and Techniques". In this paper the main testing methods and techniques are briefly described. A general classification is outlined: two testing methods – black box testing and white box testing – and their frequently used techniques:

- Black box techniques: Equivalence Partitioning, Boundary Value Analysis, Cause-Effect Graphing Techniques, and Comparison Testing;
- White box techniques: Basis Path Testing, Loop Testing, and Control Structure Testing.

The classification of the IEEE Computer Society is also illustrated.

Jiantao Pan (1999), "Software Testing", 18-849b Dependable Embedded Systems. Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results [Hetzel88]. Although crucial to software quality and widely deployed by programmers and testers, software testing still remains an art, due to limited understanding of the principles of software. The difficulty in software testing stems from the complexity of software: we cannot completely test a program of even moderate complexity. Testing is more than just debugging. The purpose of testing can be quality assurance, verification and validation, or reliability estimation. Testing can also be used as a generic metric. Correctness testing and reliability testing are two major areas of testing. Software testing is a trade-off between budget, time and quality.

Ibrahim K. El-Far and James A. Whittaker (2001), "Model-based Software Testing", Florida Institute of Technology. Software testing requires the use of a model to guide such efforts as test selection and test verification. Often, such models are implicit, existing only in the head of a human tester applying test inputs in an ad hoc fashion. The mental model testers build encapsulates application behavior, allowing testers to understand the application's capabilities and more effectively test its range of possible behaviors. When these mental models are written down, they become sharable, reusable testing artifacts; in this case, testers are performing what has come to be known as model-based testing. Model-based testing has recently gained attention with the popularization of models (including UML) in software design and development. There are a number of models of software in use today, a few of which make good models for testing. The paper introduces model-based testing and discusses its tasks in general terms, with finite state models (arguably the most popular software models) as examples. In addition, advantages, difficulties, and shortcomings of various model-based approaches are concisely presented. Finally, the authors close with a discussion of where model-based testing fits in the present and future of software engineering.
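To make the idea of finite-state model-based testing concrete, here is an illustrative sketch that is not taken from the paper: a tiny state model of a login dialog is written down explicitly, and candidate test sequences are generated by walking its transitions. The states, events and transitions are hypothetical.

# Illustrative sketch of model-based testing with a finite state model.
# The model below (states, events, transitions) is made up for illustration;
# it only shows how a written-down model becomes a reusable testing artifact.

from itertools import product

# Transition table: (current_state, event) -> next_state
MODEL = {
    ("logged_out", "enter_valid_credentials"): "logged_in",
    ("logged_out", "enter_invalid_credentials"): "logged_out",
    ("logged_in", "log_out"): "logged_out",
}

def generate_test_sequences(start: str, length: int):
    """Enumerate event sequences of a given length that the model allows."""
    events = sorted({event for (_, event) in MODEL})
    for candidate in product(events, repeat=length):
        state, valid = start, True
        for event in candidate:
            next_state = MODEL.get((state, event))
            if next_state is None:
                valid = False
                break
            state = next_state
        if valid:
            yield candidate, state

if __name__ == "__main__":
    # Each sequence can be replayed against the real application, with the
    # model's final state serving as the expected result (the test oracle).
    for sequence, expected_state in generate_test_sequences("logged_out", 2):
        print(sequence, "->", expected_state)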
Mohd. Ehmer Khan (2010), "Different Forms of Software Testing Techniques for Finding Errors", Department of Information Technology, Al Musanna College of Technology, Sultanate of Oman. Software testing is an activity aimed at evaluating an attribute or capability of a program and ensuring that it meets the required result. There are many approaches to software testing, but effective testing of a complex product is essentially a process of investigation, not merely a matter of creating and following routine procedures. It is often impossible to find all the errors in a program, and this fundamental problem raises the question of what strategy we should adopt for testing. Selecting the right strategy at the right time makes software testing efficient and effective. In this paper the author describes software testing techniques classified by purpose.

Vu Nguyen, "Test Case Point Analysis: An Approach to Estimating Software Testing Size", Faculty of Information Technology, University of Science, VNU-HCMC, Ho Chi Minh City, Vietnam. Quality assurance management is an essential component of the software development lifecycle. To ensure the quality, applicability, and usefulness of a product, development teams must spend considerable time and resources on testing, which makes the estimation of software testing effort a critical activity. This paper proposes an approach, Test Case Point Analysis (TCPA), to estimating the size of software testing work. The approach measures the size of a software test case based on its checkpoints, preconditions and test data, as well as the type of testing. The paper also describes a case study applying TCPA to a large testing project at KMS Technology; the result indicates that this approach is preferable to estimation based purely on testers' experience.

6.0 Test Best Practice

6.1 Methods

The purpose of test techniques and methods is to improve test process capability during test design and execution by applying basic test techniques, methods and incident management. Well-founded testing means that test design techniques and methods are applied, supported (where possible and beneficial) by tools. Test design techniques are used to derive and select test cases from requirements and design specifications. A test case consists of a description of the input values, the execution preconditions, the change process, and the expected result. The test cases are documented in a test design specification. At a later stage, as more information becomes available on the actual implementation, the test designs are translated into test procedures. In a test procedure, also referred to as a manual test script, the specific test actions and checks are arranged in an executable sequence; the tests are subsequently executed using these test procedures. The test design and execution activities follow the test approach defined in the test plan. During the test execution stage, incidents (defects) are found and test incident reports are written. Incidents are logged using an incident management system, and thorough communication about the incidents with stakeholders is established. For incident management, a basic incident classification scheme is established and a basic incident repository is put into place.
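As an illustration only, the structure just described (test cases carrying preconditions, input values and an expected result, arranged into an executable test procedure whose failures become logged incidents) might be captured as in the sketch below. The TestCase class, the example data and the run_procedure helper are hypothetical, not part of any particular tool.

# Hypothetical sketch of the structure described above: test cases carry
# preconditions, input values and an expected result; a test procedure runs
# them in sequence and turns failed checks into logged incidents.

from dataclasses import dataclass

@dataclass
class TestCase:
    identifier: str
    preconditions: list
    input_values: dict
    expected_result: str
    actual_result: str = ""
    status: str = "not run"

def run_procedure(test_cases, execute):
    """Execute test cases in sequence; 'execute' stands in for the test step."""
    incidents = []
    for case in test_cases:
        case.actual_result = execute(case.input_values)
        case.status = "pass" if case.actual_result == case.expected_result else "fail"
        if case.status == "fail":
            # A failed check becomes a logged incident (defect report).
            incidents.append((case.identifier, case.expected_result, case.actual_result))
    return incidents

if __name__ == "__main__":
    cases = [
        TestCase("TC-001", ["user exists"], {"days": 2}, expected_result="approved"),
        TestCase("TC-002", ["user exists"], {"days": -1}, expected_result="rejected"),
    ]
    # Stand-in for the system under test (illustrative only).
    fake_leave_request = lambda inputs: "approved" if inputs["days"] > 0 else "rejected"
    print(run_procedure(cases, fake_leave_request))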
6.1.1 Static and Dynamic Approach

There are many approaches to software testing. Reviews, walkthroughs and inspections are referred to as static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing can be omitted, and unfortunately in practice often is; dynamic testing takes place when the program itself is executed. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code that are applied to discrete functions or modules. Typical techniques for this are either using stubs/drivers or execution from a debugger environment.

6.1.2 The Box Approach

Software testing methods are traditionally divided into white-box and black-box testing. These two approaches describe the point of view that a test engineer takes when designing test cases.

White-box testing

White box testing is the detailed investigation of the internal logic and structure of the code; it is also called glass box or open box testing. In order to perform white box testing on an application, the tester needs knowledge of the internal workings of the code. The tester looks inside the source code to find out which unit or chunk of code is behaving inappropriately.

Advantages:
- As the tester has knowledge of the source code, it becomes very easy to find out which type of data can help in testing the application effectively.
- It helps in optimizing the code.
- Extra lines of code, which can introduce hidden defects, can be removed.
- Due to the tester's knowledge of the code, maximum coverage is attained during test scenario writing.

Disadvantages:
- Because a skilled tester is needed to perform white box testing, costs increase.
- It is sometimes impossible to look into every nook and corner to find hidden errors, so many paths will still go untested.
- White box testing is difficult to maintain because specialized tools such as code analyzers and debuggers are required.

White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) tests the internal structures or workings of a program, as opposed to the functionality exposed to the end user. In white-box testing an internal perspective of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determines the appropriate outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT).
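For illustration only, the sketch below shows a white-box test in this spirit: the tester can see both branches of a made-up function, so inputs are chosen deliberately to exercise each path. The discount rule and the test names are assumptions invented for the example.

# Illustrative white-box test: the tester can see both branches of the
# function below, so inputs are chosen to exercise every path.
# The discount rule itself is a made-up example.

import unittest

def discounted_price(price: float, is_member: bool) -> float:
    if is_member:                 # path 1: membership discount
        return round(price * 0.9, 2)
    return round(price, 2)        # path 2: no discount

class DiscountPathTests(unittest.TestCase):
    def test_member_branch(self):
        self.assertEqual(discounted_price(100.0, True), 90.0)

    def test_non_member_branch(self):
        self.assertEqual(discounted_price(100.0, False), 100.0)

if __name__ == "__main__":
    unittest.main()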
Black-box testing

Black box testing is the technique of testing without any knowledge of the interior workings of the application. The tester is oblivious to the system architecture and does not have access to the source code. Typically, when performing a black box test, a tester interacts with the system's user interface, providing inputs and examining outputs without knowing how and where the inputs are processed.

Advantages:
- Well suited and efficient for large code segments.
- Access to the code is not required.
- Clearly separates the user's perspective from the developer's perspective through visibly defined roles.
- Large numbers of moderately skilled testers can test the application with no knowledge of the implementation, programming language or operating system.

Disadvantages:
- Limited coverage, since only a selected number of test scenarios are actually performed.
- Inefficient testing, because the tester has only limited knowledge of the application.
- Blind coverage, since the tester cannot target specific code segments or error-prone areas.
- The test cases are difficult to design.

Black box testing treats the software as a "black box", examining functionality without any knowledge of the internal implementation. The tester is only aware of what the software is supposed to do, not how it does it. Black-box testing methods include equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing and specification-based testing.
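As a hedged illustration of two of the black-box techniques just listed, equivalence partitioning and boundary value analysis, the sketch below tests a hypothetical age-validation rule purely through its inputs and outputs. The validate_age function and its 18-to-65 rule are assumptions made up for the example.

# Illustrative black-box tests using equivalence partitioning and boundary
# value analysis. The specification assumed here (valid ages are 18-65
# inclusive) is made up; only inputs and outputs are examined.

import unittest

def validate_age(age: int) -> bool:
    return 18 <= age <= 65

class AgeValidationBlackBoxTests(unittest.TestCase):
    def test_equivalence_partitions(self):
        # One representative value from each partition: below, inside, above.
        for age, expected in [(10, False), (30, True), (70, False)]:
            with self.subTest(age=age):
                self.assertEqual(validate_age(age), expected)

    def test_boundary_values(self):
        # Values on and immediately around each boundary.
        for age, expected in [(17, False), (18, True), (65, True), (66, False)]:
            with self.subTest(age=age):
                self.assertEqual(validate_age(age), expected)

if __name__ == "__main__":
    unittest.main()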
Gray box testing

Grey-box testing (American spelling: gray-box testing) involves having knowledge of internal data structures and algorithms for the purpose of designing tests, while executing those tests at the user, or black-box, level. The tester is not required to have full access to the software's source code. Manipulating input data and formatting output do not qualify as grey-box testing, because the input and output are clearly outside of the "black box" that we are calling the system under test.
This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. However, modifying a data repository does qualify as grey-box testing, as the user would not normally be able to change the data outside of the system under test. Grey-box testing may also include reverse engineering to determine, for instance, boundary values or error messages.

Grey box testing is a technique for testing the application with limited knowledge of its internal workings. In software testing, the saying "the more you know, the better" carries a lot of weight; mastering the domain of a system always gives the tester an edge over someone with limited domain knowledge. Unlike black box testing, where the tester tests only the application's user interface, in grey box testing the tester has access to design documents and the database. With this knowledge, the tester is able to prepare better test data and test scenarios when making the test plan.

Advantages:
- Offers the combined benefits of black box and white box testing wherever possible.
- Grey box testers do not rely on the source code; instead they rely on interface definitions and functional specifications.
- Based on the limited information available, a grey box tester can design excellent test scenarios, especially around communication protocols and data type handling.
- The test is done from the point of view of the user, not the designer.

Disadvantages:
- Since access to the source code is not available, the ability to go over the code is limited and test coverage is reduced.
- The tests can be redundant if the software designer has already run a test case.
- Testing every possible input stream is unrealistic because it would take an unreasonable amount of time; therefore, many program paths will go untested.
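The sketch below is a hypothetical grey-box test along these lines: the application is exercised only through its public function, but the tester uses knowledge of the underlying database schema (here an in-memory SQLite table, an assumption made for illustration) to seed and verify data directly in the repository. The register_user function and the table layout are invented for the example.

# Illustrative grey-box test: the system is driven through its public
# interface, but knowledge of the database schema is used to verify state
# directly. The schema and register_user function are hypothetical.

import sqlite3
import unittest

def register_user(conn, name: str) -> bool:
    """Public interface of the (made-up) system under test."""
    if not name.strip():
        return False
    conn.execute("INSERT INTO users(name) VALUES (?)", (name,))
    conn.commit()
    return True

class UserRegistrationGreyBoxTest(unittest.TestCase):
    def setUp(self):
        # Grey-box knowledge: the tester knows the table layout.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT)")

    def test_valid_name_is_stored(self):
        self.assertTrue(register_user(self.conn, "Yudep"))
        rows = self.conn.execute("SELECT name FROM users").fetchall()
        self.assertEqual(rows, [("Yudep",)])   # verified directly in the repository

    def test_blank_name_is_rejected(self):
        self.assertFalse(register_user(self.conn, "   "))
        count = self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
        self.assertEqual(count, 0)

if __name__ == "__main__":
    unittest.main()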
6.1.3 Comparison: Black Box vs. Grey Box vs. White Box

1. Knowledge of internals: in black box testing, the internal workings of the application need not be known; in grey box testing, the internal workings are partially known; in white box testing, the tester has full knowledge of the internal workings of the application.
2. Other names: black box testing is also known as closed box testing, data-driven testing or functional testing; grey box testing is also called translucent testing, as the tester has limited knowledge of the insides of the application; white box testing is also known as clear box testing, structural testing or code-based testing.
3. Who performs it: black box and grey box testing are performed by end users and also by testers and developers; white box testing is normally done by testers and developers.
4. Basis of testing: black box testing is based on external expectations, with the internal behaviour of the application unknown; grey box testing is done on the basis of high-level database diagrams and data flow diagrams; in white box testing the internal workings are fully known and the tester can design test data accordingly.
5. Time and effort: black box testing is the least time-consuming and exhaustive; grey box testing is partly time-consuming and exhaustive; white box testing is the most exhaustive and time-consuming type of testing.
6. Algorithm testing: neither black box nor grey box testing is suited to algorithm testing; white box testing is suited for algorithm testing.
7. Data domains and internal boundaries: in black box testing these can only be tested by trial and error; in grey box testing they can be tested if known; in white box testing they can be tested thoroughly.

6.2 Tips

Take every aspect of testing seriously. Analyze test results thoroughly and do not ignore them. The final result may be "pass" or "fail", but troubleshooting the root cause of a "fail" will lead to the solution of the problem.
Testers will be respected if they not only log the bugs but also provide solutions.

Maximize the test coverage every time an application is tested. Although 100 percent test coverage might not be possible, you can always try to approach it. Break the application into smaller functional modules to ensure maximum coverage. For example, if you have divided your website application into modules and "accepting user information" is one of them, you can break this "user information" screen into smaller parts for writing test cases: UI testing, security testing, functional testing of the user information form, and so on. Apply all form field type and size tests, and negative and validation tests, on the input fields, and write test cases for maximum coverage.

Write test cases for the intended functionality first, i.e. for valid conditions according to the requirements, and then write test cases for invalid conditions. This covers expected as well as unexpected behaviour of the application.

Write test cases during requirement analysis and the design phase itself. This ensures that all the requirements are testable.

Make your test cases available to developers prior to coding. Do not keep your test cases to yourself, waiting for the final application release in the hope of logging more bugs. Letting developers analyze your test cases thoroughly helps them develop a quality application and also saves rework time.

If possible, identify and group your test cases for regression testing. This ensures quick and effective manual regression testing.

Applications requiring critical response times should be thoroughly tested for performance. Performance testing is a critical part of many applications, yet in manual testing it is often ignored by testers. Find ways to test your application for performance. If it is not possible to create test data manually, write some basic scripts to create test data for performance testing (see the sketch below), or ask the developers to write them for you.
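A minimal sketch of such a test-data script follows, under the assumption that the system accepts user records as CSV; the field names, value choices and record count are invented purely to illustrate the tip above.

# Minimal sketch of a test-data generation script for performance testing.
# The CSV layout, field names and record count are assumptions chosen only
# to illustrate the tip above.

import csv
import random
import string

def random_name(length: int = 8) -> str:
    return "".join(random.choices(string.ascii_lowercase, k=length)).title()

def generate_users(path: str, count: int = 10_000) -> None:
    """Write 'count' synthetic user records to a CSV file."""
    with open(path, "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["staff_id", "name", "department", "leave_days"])
        for staff_id in range(1, count + 1):
            writer.writerow([
                staff_id,
                random_name(),
                random.choice(["HR", "Callback", "Overtime", "Dashboard"]),
                random.randint(0, 30),
            ])

if __name__ == "__main__":
    generate_users("performance_test_users.csv")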
Programmers should not test their own code. Basic unit testing of the developed application should be enough for developers before releasing the application to the testers, but testers should not force developers to release the product for testing; let them take their own time. Everyone from the lead to the manager will know when the module or update is released for testing and can estimate the testing time accordingly. This is a typical situation in an agile project environment.

Go beyond requirement testing: test the application for what it is not supposed to do.

7.0 Question and Discussion

Some of the major software testing controversies include the following.

What constitutes responsible software testing? Members of the "context-driven" school of testing believe that there are no "best practices" of testing, but rather that testing is a set of skills that allow the tester to select or invent testing practices to suit each unique situation.

Agile vs. traditional: should testers learn to work under conditions of uncertainty and constant change, or should they aim at process "maturity"? The agile testing movement has grown in popularity since 2006, mainly in commercial circles, whereas government and military software providers use this methodology alongside traditional test-last models (e.g. the waterfall model).

Exploratory vs. scripted: should tests be designed at the same time as they are executed, or should they be designed beforehand?

Manual vs. automated: some writers believe that test automation is so expensive relative to its value that it should be used sparingly. In particular, test-driven development states that developers should write unit tests of the xUnit type before coding the functionality; the tests can then be considered a way to capture and implement the requirements (see the sketch after this list).

Software design vs. software implementation: should testing be carried out only at the end, or throughout the whole process?

Who watches the watchmen? Any form of observation is also an interaction: the act of testing can itself affect that which is being tested.
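To illustrate the test-first idea mentioned above, here is a hedged sketch in the xUnit style: the tests are written against a not-yet-implemented requirement, and the smallest implementation that satisfies them is added afterwards. The overtime-pay rule is a made-up example, not taken from the case study.

# Hedged sketch of test-driven development in the xUnit style: the tests
# below capture a made-up requirement ("overtime is paid at 1.5x the hourly
# rate") before the function exists; the function is then implemented just
# far enough to make them pass.

import unittest

def overtime_pay(hours: float, hourly_rate: float) -> float:
    # Simplest implementation that satisfies the tests written before it.
    return round(hours * hourly_rate * 1.5, 2)

class OvertimePayTest(unittest.TestCase):
    def test_overtime_is_paid_at_one_and_a_half_times_the_rate(self):
        # Written first; it captures the requirement.
        self.assertEqual(overtime_pay(4, 10.0), 60.0)

    def test_zero_hours_yields_zero_pay(self):
        self.assertEqual(overtime_pay(0, 10.0), 0.0)

if __name__ == "__main__":
    unittest.main()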
8.0 Conclusion

In conclusion, software testing serves clear purposes. The first is to reduce costly errors. The cost of errors in software can vary from nothing at all to large amounts of money, and even to loss of life. There are hundreds of stories about failures of computer systems that have been attributed to errors in software. There are many reasons why systems fail, but the issue that stands out the most is the lack of adequate testing.

Most of us have had an experience with software that did not work as expected. Software that does not work can have a large impact on an organization. It can lead to many problems, including:

- Loss of money: this can range from losing customers through to financial penalties for non-compliance with legal requirements.
- Loss of time: this can be caused by transactions taking a long time to process, but can also include staff being unable to work due to a fault or failure.
- Damage to business reputation: if an organization is unable to provide service to its customers because of software problems, the customers will lose confidence or faith in the organization (and probably take their business elsewhere).
- Injury or death: it might sound dramatic, but some safety-critical systems could cause injuries or deaths if they do not work properly (e.g. air traffic control software).

Testing is an important part of every software development process, no matter which programming paradigm is used. In functional programming, the low-level nature of early testing approaches limited their acceptance by parts of the community, but publications from the last few years show that testing of functional programs has (eventually) received more attention.
References

1. Kaner, Cem (2006). "Exploratory Testing". Quality Assurance Institute Worldwide Annual Software Testing Conference, Orlando, FL, November 2006. Florida Institute of Technology.
2. Pan, Jiantao. "Software Testing". Carnegie Mellon University.
3. Leitner, A.; Ciupa, I.; Oriol, M.; Meyer, B.; Fiva, A. (2007). "Contract Driven Development = Test Driven Development – Writing Test Cases". Proceedings of ESEC/FSE'07: European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering, Dubrovnik, Croatia, September 2007.
4. Kaner, Cem; Falk, Jack; Nguyen, Hung Quoc (1999). Testing Computer Software, 2nd ed. New York: John Wiley and Sons. 480 pp. ISBN 0-471-35846-0.
5. Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management. Wiley-IEEE Computer Society Press. pp. 41–43. ISBN 0-470-04212-5. http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470042125.html.
6. Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management. Wiley-IEEE Computer Society Press. p. 426. ISBN 0-470-04212-5.
7. Section 1.1.2, Certified Tester Foundation Level Syllabus, International Software Testing Qualifications Board.
8. Principle 2, Section 1.3, Certified Tester Foundation Level Syllabus, International Software Testing Qualifications Board.
9. "Test Design: Lessons Learned and Practical Implications". Proceedings of the 5th International Conference on Software Testing and Validation (ICST). Software Competence Center Hagenberg. http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4578383.
10. "Software Errors Cost U.S. Economy $59.5 Billion Annually". NIST report.
11. McConnell, Steve (2004). Code Complete (2nd ed.). Microsoft Press. p. 29. ISBN 0-7356-1967-0.
12. See D. Gelperin and W.C. Hetzel.
13. Myers, Glenford J. (1979). The Art of Software Testing. John Wiley and Sons. ISBN 0-471-04328-1.
14. People's Computer Company (1987). "Dr. Dobb's Journal of Software Tools for the Professional Programmer" (M&T Pub) 12 (1–6): 116. http://books.google.com/?id=7RoIAAAAIAAJ.
15. Gelperin, D.; Hetzel, B. (1988). "The Growth of Software Testing". CACM 31 (6). ISSN 0001-0782.
16. Until 1956 was the debugging-oriented period, when testing was often associated with debugging and there was no clear difference between the two. Gelperin, D.; Hetzel, B. (1988). "The Growth of Software Testing". CACM 31 (6). ISSN 0001-0782.
17. From 1957 to 1978 there was the demonstration-oriented period, in which debugging and testing were distinguished and the aim was to show that software satisfies the requirements. Gelperin, D.; Hetzel, B. (1988), ibid.
18. The period 1979–1982 is described as the destruction-oriented period, in which the goal was to find errors. Gelperin, D.; Hetzel, B. (1988), ibid.
19. 1983–1987 is classified as the evaluation-oriented period: the intention was that during the software lifecycle a product evaluation is provided and quality is measured. Gelperin, D.; Hetzel, B. (1988), ibid.
20. From 1988 onward it was seen as the prevention-oriented period, in which tests were to demonstrate that software satisfies its specification, to detect faults, and to prevent faults. Gelperin, D.; Hetzel, B. (1988), ibid.
21. "Introduction", Code Coverage Analysis, Steve Cornett.
22. Patton, Ron. Software Testing.
23. Laycock, G. T. (1993). The Theory and Practice of Specification Based Software Testing (PostScript). Dept. of Computer Science, Sheffield University, UK. http://www.mcs.le.ac.uk/people/gtl1/thesis.ps.gz. Retrieved 2008-02-13.
24. Savenkov, Roman (2008). How to Become a Software Tester. Roman Savenkov Consulting. p. 159. ISBN 978-0-615-23372-7.
25. Patton, Ron. Software Testing.
26. "SOA Testing Tools for Black, White and Gray Box SOA Testing Techniques". Crosschecknet.com. http://www.crosschecknet.com/soa_testing_black_white_gray_box.php. Retrieved 2012-12-10.
27. "Visual Testing of Software" (PDF). Helsinki University of Technology. http://www.cs.hut.fi/~jlonnber/VisualTesting.pdf. Retrieved 2012-01-13.
28. "Article on Visual Testing in Test Magazine". Testmagazine.co.uk. http://www.testmagazine.co.uk/2011/04/visual-testing. Retrieved 2012-01-13.
29. "SWEBOK Guide – Chapter 5". Computer.org. http://www.computer.org/portal/web/swebok/html/ch5#Ref2.1. Retrieved 2012-01-13.
30. Binder, Robert V. (1999). Testing Object-Oriented Systems: Objects, Patterns, and Tools. Addison-Wesley Professional. p. 45. ISBN 0-201-80938-9.
31. Beizer, Boris (1990). Software Testing Techniques (2nd ed.). New York: Van Nostrand Reinhold. pp. 21, 430. ISBN 0-442-20672-0.
32. van Veenendaal, Erik. "Standard Glossary of Terms Used in Software Testing". http://www.astqb.org/get-certified/istqb-syllabi-the-istqb-software-tester-certification-body-of-knowledge/. Retrieved 2013-01-04.
33. EtestingHub Online Free Software Testing Tutorial. "Testing Phase in Software Testing". Etestinghub.com. http://www.etestinghub.com/testing_lifecycles.php#2. Retrieved 2012-01-13.
34. Myers, Glenford J. (1979). The Art of Software Testing. John Wiley and Sons. pp. 145–146. ISBN 0-471-04328-1.
35. Dustin, Elfriede (2002). Effective Software Testing. Addison Wesley. p. 3. ISBN 0-201-79429-2.
36. Pan, Jiantao (Spring 1999). "Software Testing (18-849b Dependable Embedded Systems)". Topics in Dependable Embedded Systems. Electrical and Computer Engineering Department, Carnegie Mellon University. http://www.ece.cmu.edu/~koopman/des_s99/sw_testing/.