Test Case Point Analysis
                                         Vu Nguyen
                            QASymphony, LLC. (www.qasymphony.com)

Abstract
Software testing is a crucial activity in the software development lifecycle for ensuring the
quality, applicability, and usefulness of software products. It consumes a large share of the
development team's effort to achieve the software quality goal. Thus, one critical success factor in
software testing is the ability to estimate and measure testing work accurately as part of software
quality assurance management. This white paper proposes an approach, named Test Case Point
Analysis, for estimating the size and effort of software testing work. The approach measures the
size of a software test case based on its checkpoints, precondition, test data, and type of test.
The testing effort is then computed from the Test Case Point count of the testing activities.

Keywords: test case point, software test estimation, software testing, quality assurance, test
management

I. INTRODUCTION
Software testing is a crucial activity in the software development lifecycle for ensuring the
quality, applicability, and usefulness of software products. No software can be released without a
reasonable amount of testing. To achieve the quality goal, software project teams have been
reported to devote a substantial portion of their total effort to testing. According to industry
reports, software testing consumes about 10-25% of the total project effort, and on some projects
this number may reach 50% [1].
Thus, one factor in ensuring the success of software testing is the ability to provide reasonably
accurate size and effort estimates for testing activities. Size and effort estimates are used as key
inputs for making investment decisions, planning, scheduling, bidding on projects, and deriving
other metrics such as productivity and defect density for project monitoring and control. If the
estimates are not realistic, these activities are effectively based on misleading information, and
the consequences can be disastrous.
Unfortunately, providing reliable size and effort estimates for software testing is challenging. One
reason is that software testing is a human-intensive activity affected by a large number of factors
such as personnel capabilities, software requirements, processes, environments, communications,
and technologies. These factors cause high variance in productivity across projects with different
characteristics. Another reason is that software quality attributes such as functionality, reliability,
usability, efficiency, and scalability are hard to quantify objectively. Moreover, there are many
different types of software testing and testing contexts, even within one project. Because of this
diversity, an approach that works well for one type of test in one environment may not work
when applied to another type of test in another context.
Although many approaches have been introduced and applied to estimating overall software size
and effort, there is a lack of methods for estimating software testing size and effort. Often, testing
effort is estimated as a part of the overall software effort, and software size is used as the size
measure of the testing activity. This approach, however, cannot account for specific requirements
of software testing, such as the intensive testing required by clients. Another limitation is that it
cannot be applied to test-only projects in which the software exists but the testing team does not
have access to source code or requirements to perform Function Point Analysis or count Source
Lines of Code (SLOC). Finally, software size in Function Points or SLOC may not reflect well the
amount of testing performed by the testing team. Therefore, measuring the productivity of
software testing using these metrics is misleading.
In an attempt to address this issue, this white paper presents an approach to estimating the size
and effort of software testing. The sizing method is called Test Case Point. As the name indicates,
the method measures the size of the test case, the core artifact that testers create and use when
executing software tests. The size of a test case is evaluated using four elements of test case
complexity: checkpoints, precondition, test data, and the type of the test case. This article also
describes simple approaches to determining testing effort using Test Case Points.
Although Test Case Point Analysis can in theory be applied to measuring the size of different
kinds of software testing given test cases as input, the method focuses on estimating the size of
system, integration, and acceptance testing activities, which are often performed manually by
independent verification and validation (IV&V) teams. Unit testing, automation testing, and
performance testing that involve test scripts are beyond the scope of this method.


II. A SUMMARY OF SOFTWARE TESTING SIZE AND EFFORT ESTIMATION
    METHODS
A number of methods and metrics have been proposed and applied to estimating the size and
effort of software projects. Many of these methods are also used to estimate the effort of software
testing activities. SLOC is a traditional and popular metric that measures the size of a software
product by counting lines in its source program. A number of SLOC definitions exist, but the
most common are physical SLOC, which counts the number of physical lines of code, and logical
SLOC, which counts the number of logical statements in the source program. SLOC is used as a
size input for popular software estimation models such as COCOMO, SEER-SEM, and SLIM. In
these models, the testing effort is computed from the overall effort estimate using a predefined
percentage. This percentage can be obtained from industry averages, historical data, or the
experience of the estimators.
Function Point Analysis (FPA) is another method for estimating the size of software and has been
used in industry since the late 1970s [3]. The analysis measures software size based on five
components: inputs, outputs, inquiries, files, and interfaces. The resulting size, expressed in
Function Points, is then adjusted with technical factors to take into account technical
characteristics of the system. To estimate effort, the size in Function Points can be converted to
effort using a productivity index, converted to SLOC using a method called backfiring, or fed into
a simple linear regression model. FPA has been extended and adapted in a number of sizing
methods including Mark II [11], COSMIC FFP [12], and NESMA FPA [6].
Although FPA is widely used in industry, it has been criticized for its complexity, which requires
considerable time and skilled estimators to perform the analysis. A similar but simplified method
named Use Case Point (UCP) was proposed [7]. This method counts the number of Use Case
Points of a software project by measuring the complexity of its use cases. Use case complexity is
based on the actors and transactions of the use case and is then adjusted with technical complexity
and environment factors to obtain the final Use Case Point count. The ideas behind the FPA and
UCP methods have inspired the introduction of Test Case Point Analysis [4] and other methods
from the practice and research community [2, 5, 8, 9, 10, 13]. In [8], Aranha and Borba proposed
a model for estimating test execution effort by measuring the size and execution complexity of
test cases. They introduced a size unit called the execution point, which is based on the
characteristics of every step in a test case. The effort is then estimated by converting the
execution point count to effort using a conversion factor.
III. TEST CASE POINT ANALYSIS

1. Method Overview
The Test Case Point measures software testing size, reflecting the complexity of the testing
activities performed to achieve the quality goal of the software. As a complexity measure, it needs
to reflect the effort required to perform the testing activities, including planning, designing, and
executing tests, and reporting and tracking defects.
Test Case Point Analysis uses test cases as input and generates a Test Case Point count for the
test cases being measured. The complexity of a test case is based on four elements, checkpoints,
precondition, test data, and the type of the test case, which effectively assumes that the
complexity is concentrated in these four elements. The elements fall into two groups: the first,
consisting of checkpoints, precondition, and test data, reflects the size of the test case; the second
normalizes the complexity of different types of test by adjusting the Test Case Point count with
weights assigned to test types.

2. Method Detail
2.1. Measure Test Case Complexity
Each test case is assigned a number of Test Case Points based on the number of its checkpoints
and the complexity of the precondition and test data used in the test case.
A checkpoint is a condition in which the tester verifies whether the result produced by the target
function matches the expected criterion. One test case consists of one or more checkpoints.
Rule 1: One checkpoint is counted as one Test Case Point.
Precondition. A test case's precondition specifies the condition required to execute the test case.
Like test data, the precondition mainly affects the cost of executing the test case. Some
preconditions may be related to data prepared for the test case.
                     Table 1. Precondition Complexity Level Description

   Complexity level      Description

   None                  The precondition is not applicable or important to execute the test
                         case. Or, the precondition is just reused from the previous test case
                         to continue the current test case.

   Low                   The condition for executing the test case is available with some
                         simple modifications required. Or, some simple setting-up steps are
                         needed.

   Medium                Some explicit preparations are needed to execute the test case. The
                         condition for executing is available with some additional
                         modifications required. Or, some additional setting-up steps are
                         needed.

   High                  Heavy hardware and/or software configurations are needed to
                         execute the test case.
The complexity of a test case's precondition is classified into four levels, None, Low, Medium, and
High (see Table 1).
Test data is the data used to execute the test case. It can be generated at test case execution time,
prepared by previous tests, or generated by testing scripts. Test data may be specific to a test case
or shared by a group of test cases or the whole system; in the latter cases, the data can be reused
across multiple test cases.
The complexity of test data is classified into four levels, None, Low, Medium, and High, as shown
in Table 2.
                        Table 2. Test Data Complexity Level Description

   Complexity level        Description

   None                    No test data preparation is needed.

   Low                     Test data is needed, but it is simple so that it can be created during
                           the test case execution time. Or, the test case uses a slightly
                           modified version of the existing test data, i.e., little effort required to
                           modify the test data.

   Medium                  Test data is deliberately prepared in advance with extra effort to
                           ensure its completeness, comprehensiveness, and consistency.

   High                    Test data is prepared in advance with considerable effort to ensure
                           its completeness, comprehensiveness, and consistency, or by using
                           support tools to generate it and a database to store and manage it.
                           Scripts may be required to generate the test data.

In many cases, the test data has to be supplied by a third party. In such cases, the effort to
generate test data for the test case is expected to be small, so the test data complexity should be
rated Low.
Rule 2: Each complexity level of the precondition and test data is assigned a number of Test
        Case Points. This measure is called the unadjusted Test Case Point.
                      Table 3. Test Case Point Allocation for Precondition

            Complexity level           Number of Test Case Point         Standard
                                                                         deviation (±σ)

            None                       0                                 0

            Low                        1                                 0

            Medium                     3                                 0.5

            High                       5                                 1

Tables 3 and 4 show the number of Test Case Points allocated to each complexity level of the
precondition and test data components of the test case. These constants were obtained through a
survey of 18 experienced quality assurance engineers in our organization. The standard deviation
values reflect the variance among the survey results. It is worth noting that estimators should
adjust these constants to better reflect the characteristics of their own projects and environments.
                       Table 4. Test Case Point Allocation for Test Data

           Complexity level          Number of Test Case Point         Standard
                                                                       deviation (±σ)

           None                      0                                 0

           Low                       1                                 0

           Medium                    3                                 0.6

           High                      6                                 1.3
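
As a concrete illustration of Rules 1 and 2, the following Python sketch computes the unadjusted
Test Case Point count of a single test case from its number of checkpoints and the point values in
Tables 3 and 4. It is a minimal example, not part of the method definition; the function and
argument names are assumptions made for illustration only.

    # Minimal sketch of Rules 1 and 2: count the unadjusted Test Case Points (UTCP)
    # of one test case. The point values mirror Tables 3 and 4.
    PRECONDITION_POINTS = {"None": 0, "Low": 1, "Medium": 3, "High": 5}   # Table 3
    TEST_DATA_POINTS    = {"None": 0, "Low": 1, "Medium": 3, "High": 6}   # Table 4

    def unadjusted_tcp(num_checkpoints: int, precondition: str, test_data: str) -> int:
        """Rule 1: one point per checkpoint; Rule 2: add the points assigned to the
        precondition and test data complexity levels."""
        return (num_checkpoints
                + PRECONDITION_POINTS[precondition]
                + TEST_DATA_POINTS[test_data])

    # Example: 4 checkpoints, Medium precondition, Low test data -> 4 + 3 + 1 = 8 UTCP
    print(unadjusted_tcp(4, "Medium", "Low"))  # 8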


2.2. Adjust Test Case Point by Type of Test
To account for the complexity differences among test types, the Test Case Point counted for each
test case is adjusted using a weight corresponding to its test type. The weight of the most common
test type, user interface and functional testing, is established as the baseline and fixed at 1.0. The
weights of other test types are relative to this baseline.
Table 5 shows the types of test, the corresponding weights, and standard deviations of the
weights. The weights were obtained via the same survey of 18 experienced quality assurance
engineers in our organization. Again, the estimators can review and adjust these weights when
applying the method in their organizations.
                                  Table 5. Weights of Test Types

  Type of Test           Weight      Standard Comments
                         (W)         deviation
                                     (±σ)

  User interface and     1.0         0            User interface and functional testing is
  functional testing                              considered baseline.

  API                    1.22        0.32         API testing verifies the accuracy of the
                                                  interfaces in providing services.

  Database               1.36        0.26         Testing the accuracy of database scripts, data
                                                  integrity and/or data migration.

  Security               1.39        0.28         Testing how well the system withstands
                                                  hacking attacks and unauthorized or
                                                  unauthenticated access.

  Installation           1.09        0.06         Testing of full, partial, or upgrade
                                                  install/uninstall processes of the software.

  Networking             1.27         0.29         Testing the communications among entities
                                                   via networks.

  Algorithm and          1.38         0.27         Verifying algorithms and computations
  computation                                      designed and implemented in the system.

  Usability testing      1.12         0.13         Testing the friendliness, ease of use, and other
                                                   usability attributes of the system.

  Performance            1.33         0.27         Verifying whether the system meets
  (manual)                                         performance requirements, assuming that the
                                                   test is done manually.

  Recovery testing       1.07         0.14         Recovery testing verifies the accuracy of the
                                                   recovery process to recover from system
                                                   crashes and other errors.

  Compatibility          1.01         0.03         Testing whether the software is compatible
  testing                                          with other elements of a system with which it
                                                   should operate, e.g. browsers, Operating
                                                   Systems, or hardware.


The total Adjusted Test Case Point (ATCP) count is computed as
                                       ATCP = ∑(UTCPi * Wi)                                 (Eq 3.1)
       - Where UTCPi is the number of unadjusted Test Case Points counted from the checkpoints,
         precondition, and test data of the i-th test case.
       - Wi is the weight of the i-th test case, reflecting its test type.
It is important to note that the list of test types is not exhaustive. Estimators can supplement it
with additional test types and corresponding weights relative to the weight of user interface and
functional testing.
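To make Eq. 3.1 concrete, the Python sketch below adjusts each test case's unadjusted count by
its test-type weight from Table 5 and sums the results. The test-case records and the subset of
weights shown are illustrative assumptions, not data from the method.

    # Minimal sketch of Eq. 3.1: ATCP = sum(UTCP_i * W_i). Weights mirror Table 5.
    TEST_TYPE_WEIGHTS = {
        "ui_functional": 1.00,   # baseline
        "api": 1.22,
        "database": 1.36,
        "security": 1.39,
    }

    # Hypothetical (unadjusted TCP, test type) pairs for the test cases being counted
    test_cases = [(8, "ui_functional"), (5, "api"), (12, "database")]

    atcp = sum(utcp * TEST_TYPE_WEIGHTS[test_type] for utcp, test_type in test_cases)
    print(round(atcp, 2))  # 8*1.0 + 5*1.22 + 12*1.36 = 30.42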

IV. EFFORT ESTIMATION
Testing activities can be classified into four categories: test planning, test design, test execution,
and defect reporting. Of these, test execution and defect reporting may be performed multiple
times for a single test case during a project. The size measured in Test Case Points, however,
accounts for all of these activities under the assumption that each is performed once. Knowing the
distribution of effort across these activities makes it possible to estimate the effort when the test
execution and defect reporting activities are performed more than once. The distribution of
testing effort can be derived from historical data.
The distribution of effort of testing phases shown in Table 6 was obtained from the same survey
performed to collect the constants described above. Again, these values mainly reflect the
experience in our organization, and thus, the estimators are encouraged to adjust these numbers
using their own experience or historical data.
                                 Table 6. Testing Effort Distribution
    Testing Phase          Description                                        Percent of   Standard
                                                                              Effort       deviation
                                                                                           (±σ)
    Test Planning          Identifying test scope, test approaches, and       10%          3%
                           associated risks. Test planning also identifies
                           resource      needs    (personnel,     software,
                           hardware) and preliminary schedule for the
                           testing activities.
    Test Analysis and      Detailing test scope and objectives,               25%          5%
    Design                 evaluating and clarifying test requirements,
                           designing the test cases, and preparing the
                           necessary environments for the testing
                           activities. This phase needs to ensure that
                           the required combinations of tests are covered.
    Test Execution         The repeated phase of executing the test           47%          4%
                           activities and objectives outlined in the test
                           plan and reporting test results, defects, and
                           risks. This phase also includes analyzing the
                           test results to check actual outputs against
                           the expected results.
    Defect Tracking &      The repeated phase of tracking the progress        18%          3%
    Reporting              and status of tests and defects and evaluating
                           the current status against exit criteria and
                           goals.
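
One possible way to apply the Table 6 distribution when test execution and defect tracking and
reporting are repeated over several cycles is sketched below. The scaling rule used here is an
assumption of this sketch, not a rule defined by the method.

    # Hypothetical sketch: scale a one-pass effort estimate when the repeated phases
    # (execution and defect tracking/reporting) run for several test cycles.
    # Percentages come from Table 6; how they are applied here is an assumption.
    DISTRIBUTION = {"planning": 0.10, "analysis_design": 0.25,
                    "execution": 0.47, "tracking_reporting": 0.18}

    def scaled_effort(one_pass_effort_pm: float, cycles: int) -> float:
        """One-time phases are counted once; repeated phases are multiplied by the
        number of test cycles."""
        one_time = DISTRIBUTION["planning"] + DISTRIBUTION["analysis_design"]
        repeated = DISTRIBUTION["execution"] + DISTRIBUTION["tracking_reporting"]
        return one_pass_effort_pm * (one_time + cycles * repeated)

    # Example: a 10 person-month single-pass estimate run for 2 execution cycles
    print(scaled_effort(10.0, 2))  # 10 * (0.35 + 2*0.65) = 16.5 person-months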

Depending on the availability of information and resources, the testing effort can be estimated
using one of the following simple methods:
     - Productivity Index
     - Simple Linear Regression Analysis
Estimate Effort Using Productivity Index
The Productivity Index used here is the average effort, in person-months, required to complete
one Test Case Point (that is, the inverse of the number of Test Case Points completed per
person-month). The testing effort is computed by multiplying this index by the total Adjusted
Test Case Point count (see Eq. 4.1).
The Productivity Index can be determined from historical data. It is recommended to use the
Productivity Index of projects similar to the project being estimated. The testing effort is
computed as
                                   Effort = TCP * Productivity Index                        (Eq. 4.1)
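As a minimal illustration of Eq. 4.1, the snippet below applies an assumed index of 0.05
person-months per Test Case Point to the ATCP total from the earlier sketch; both numbers are
hypothetical.

    # Minimal sketch of Eq. 4.1; the index value and ATCP total are hypothetical.
    productivity_index = 0.05          # assumed person-months per Test Case Point
    atcp = 30.42                       # total Adjusted Test Case Points (Eq. 3.1)
    effort_pm = atcp * productivity_index
    print(round(effort_pm, 2))         # ~1.52 person-months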
Estimate Effort Using Simple Linear Regression Analysis
Regression is a more comprehensive and, expectedly, more accurate method for estimating the
testing effort, provided that enough historical data points are available. With this method,
estimators can compute estimation ranges and confidence levels for the estimate. The method
requires at least three similar completed projects to produce meaningful estimates.
Given N completed projects in which the i-th project has a Test Case Point count (TCPi) and
effort (Ei) in person-months, the simple linear regression model is
                                       Ei = α + β * TCPi + ɛi                              (Eq. 4.2)
where α and β are coefficients and ɛi is the error term of the model. Using least squares
regression, one can compute the estimates A and B of the coefficients α and β, respectively. The
A and B values are then used in the model to estimate the effort of new testing projects as follows:
                                            E = A + B * TCP                                 (Eq. 4.3)
where TCP is the size of the test cases in Test Case Points, and E is the estimated effort in
person-months of the project being estimated.
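The sketch below fits Eq. 4.2 by least squares over a handful of made-up historical projects and
applies Eq. 4.3 to a new project; the data points are purely illustrative.

    # Minimal sketch of Eqs. 4.2-4.3: fit effort = A + B * TCP by least squares over
    # historical projects, then predict the effort of a new project.
    import numpy as np

    tcp_hist    = np.array([120.0, 260.0, 410.0, 530.0])   # Test Case Points of past projects
    effort_hist = np.array([  6.0,  12.5,  19.0,  25.5])   # effort in person-months

    B, A = np.polyfit(tcp_hist, effort_hist, deg=1)          # slope B, intercept A

    new_tcp = 300.0
    estimated_effort = A + B * new_tcp                        # Eq. 4.3
    print(round(estimated_effort, 1))                         # ~14.3 for this made-up data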

V. CONCLUSION
Software testing plays an important role in the success of software development and maintenance
projects. Estimating testing effort accurately is a key step towards that goal. In an attempt to fill
the gap in estimating software testing, this paper has proposed a method called Test Case Point
Analysis to measure and compute the size and effort of software testing activities. The input of
the analysis is a set of test cases, and the output is the number of Test Case Points for the test
cases being counted.
An advantageous feature of this approach is that it measures the complexity of the test case, the
main work product that the tester produces and uses for test execution. Thus, it better reflects the
effort that testers spend on their activities. Another advantage is that the analysis can be
performed easily by counting the number of checkpoints, measuring the complexity of the
precondition and test data, and determining the type of each test case. However, the approach has
several limitations, which suggest directions for future improvement. One limitation is that the
Test Case Point measure has not been empirically validated; data from past testing projects need
to be used to validate the effectiveness and usefulness of the size measure in estimating the effort
of software testing activities. Another limitation is the concern of whether the complexity levels of
a test case's precondition and test data properly reflect the actual complexity of these attributes.
Future improvements of the method need to address these limitations.

VI. REFERENCES
[1] Y. Yang, Q. Li, M. Li, Q. Wang, An empirical analysis on distribution patterns of software
    maintenance effort, International Conference on Software Maintenance, 2008, pp. 456-459
[2] S. Kumar, “Understanding TESTING ESTIMATION using Use Case Metrics”, Web:
    http://www.stickyminds.com/sitewide.asp?ObjectId=15698&Function=edetail&ObjectType=
    ART
[3] A.J. Albrecht, “Measuring Application Development Productivity,” Proc. IBM Applications
    Development Symp., SHARE-Guide, pp. 83-92, 1979
[4] TestingQA.com, “Test Case Point Analysis”, Web: http://testingqa.com/test-case-point-
    analysis/
[5] TestingQA.com, “Test Effort Estimation”, Web: http://testingqa.com/test-effort-estimation/
[6] NESMA (2008), "FPA according to NESMA and IFPUG; the present situation",
    http://www.nesma.nl/download/artikelen/FPA%20according%20to%20NESMA%20and%20
    IFPUG%20-%20the%20present%20situation%20(vs%202008-07-01).pdf
[7] G. Karner, “Metrics for Objectory”. Diploma thesis, University of Linköping, Sweden. No.
    LiTHIDA-Ex-9344:21. December 1993.
[8] E. Aranha and P. Borba. “An estimation model for test execution effort”. In Proceedings of
    First International Symposium on Empirical Software Engineering and Measurement, 2007.
[9] E. R. C. de Almeida, B. T. de Abreu, and R. Moraes. “An Alternative Approach to Test Effort
    Estimation Based on Use Cases”. In Proceedings of International Conference on Software
    Testing, Verification, and Validation, 2009.
[10] S. Nageswaran. “Test effort estimation using use case points”. In Proceedings of 14th
    International Internet & Software Quality Week 2001, 2001
[11] C.R. Symons, "Software Sizing and Estimating: Mk II FPA". Chichester, England: John
    Wiley, 1991.
[12] A. Abran, D. St-Pierre, M. Maya, J.M. Desharnais, "Full function points for embedded and
    real-time software", Proceedings of the UKSMA Fall Conference, London, UK, 14, 1998
[13] D.G. Silva, B.T. Abreu, M. Jino, A Simple Approach for Estimation of Execution Effort of
    Functional Test Cases, In Proceedings of 2009 International Conference on Software Testing
    Verification and Validation, 2009

Contenu connexe

Tendances

Test Estimation in Practice
Test Estimation in PracticeTest Estimation in Practice
Test Estimation in PracticeTechWell
 
Effective Test Estimation
Effective Test EstimationEffective Test Estimation
Effective Test EstimationTechWell
 
T19 performance testing effort - estimation or guesstimation revised
T19   performance testing effort - estimation or guesstimation revisedT19   performance testing effort - estimation or guesstimation revised
T19 performance testing effort - estimation or guesstimation revisedTEST Huddle
 
'Model Based Test Design' by Mattias Armholt
'Model Based Test Design' by Mattias Armholt'Model Based Test Design' by Mattias Armholt
'Model Based Test Design' by Mattias ArmholtTEST Huddle
 
The art of system and solution testing
The art of system and solution testingThe art of system and solution testing
The art of system and solution testinggaoliang641
 
Mattias Ratert - Incremental Scenario Testing
Mattias Ratert - Incremental Scenario TestingMattias Ratert - Incremental Scenario Testing
Mattias Ratert - Incremental Scenario TestingTEST Huddle
 
C.V, Narayanan - Open Source Tools for Test Management - EuroSTAR 2010
C.V, Narayanan - Open Source Tools for Test Management - EuroSTAR 2010C.V, Narayanan - Open Source Tools for Test Management - EuroSTAR 2010
C.V, Narayanan - Open Source Tools for Test Management - EuroSTAR 2010TEST Huddle
 
Testing metrics
Testing metricsTesting metrics
Testing metricsprats12345
 
Rob Baarda - Are Real Test Metrics Predictive for the Future?
Rob Baarda - Are Real Test Metrics Predictive for the Future?Rob Baarda - Are Real Test Metrics Predictive for the Future?
Rob Baarda - Are Real Test Metrics Predictive for the Future?TEST Huddle
 
A Productive Method for Improving Test Effectiveness
A Productive Method for Improving Test EffectivenessA Productive Method for Improving Test Effectiveness
A Productive Method for Improving Test EffectivenessShradha Singh
 
Software testing metrics
Software testing metricsSoftware testing metrics
Software testing metricsDavid O' Connor
 
Software Engineering (Software Quality Assurance & Testing: Supplementary Mat...
Software Engineering (Software Quality Assurance & Testing: Supplementary Mat...Software Engineering (Software Quality Assurance & Testing: Supplementary Mat...
Software Engineering (Software Quality Assurance & Testing: Supplementary Mat...ShudipPal
 
Robert Magnusson - TMMI Level 2 - A Practical Approach
Robert Magnusson - TMMI Level 2 -  A Practical ApproachRobert Magnusson - TMMI Level 2 -  A Practical Approach
Robert Magnusson - TMMI Level 2 - A Practical ApproachTEST Huddle
 
How to Test the Internet of Everything
How to Test the Internet of EverythingHow to Test the Internet of Everything
How to Test the Internet of EverythingSQALab
 
Testing Software Solutions
Testing Software SolutionsTesting Software Solutions
Testing Software Solutionsgavhays
 
Metrics for manual testing
Metrics for manual testingMetrics for manual testing
Metrics for manual testingAnup Panigrahi
 
Risk-based Testing
Risk-based TestingRisk-based Testing
Risk-based TestingJohan Hoberg
 
Software QA Metrics Dashboard Benchmarking
Software QA Metrics Dashboard BenchmarkingSoftware QA Metrics Dashboard Benchmarking
Software QA Metrics Dashboard BenchmarkingJohn Carter
 

Tendances (20)

Test Estimation in Practice
Test Estimation in PracticeTest Estimation in Practice
Test Estimation in Practice
 
Effective Test Estimation
Effective Test EstimationEffective Test Estimation
Effective Test Estimation
 
T19 performance testing effort - estimation or guesstimation revised
T19   performance testing effort - estimation or guesstimation revisedT19   performance testing effort - estimation or guesstimation revised
T19 performance testing effort - estimation or guesstimation revised
 
'Model Based Test Design' by Mattias Armholt
'Model Based Test Design' by Mattias Armholt'Model Based Test Design' by Mattias Armholt
'Model Based Test Design' by Mattias Armholt
 
The art of system and solution testing
The art of system and solution testingThe art of system and solution testing
The art of system and solution testing
 
Mattias Ratert - Incremental Scenario Testing
Mattias Ratert - Incremental Scenario TestingMattias Ratert - Incremental Scenario Testing
Mattias Ratert - Incremental Scenario Testing
 
C.V, Narayanan - Open Source Tools for Test Management - EuroSTAR 2010
C.V, Narayanan - Open Source Tools for Test Management - EuroSTAR 2010C.V, Narayanan - Open Source Tools for Test Management - EuroSTAR 2010
C.V, Narayanan - Open Source Tools for Test Management - EuroSTAR 2010
 
Testing metrics
Testing metricsTesting metrics
Testing metrics
 
Rob Baarda - Are Real Test Metrics Predictive for the Future?
Rob Baarda - Are Real Test Metrics Predictive for the Future?Rob Baarda - Are Real Test Metrics Predictive for the Future?
Rob Baarda - Are Real Test Metrics Predictive for the Future?
 
A Productive Method for Improving Test Effectiveness
A Productive Method for Improving Test EffectivenessA Productive Method for Improving Test Effectiveness
A Productive Method for Improving Test Effectiveness
 
Test execution may_04_2006
Test execution may_04_2006Test execution may_04_2006
Test execution may_04_2006
 
Software testing metrics
Software testing metricsSoftware testing metrics
Software testing metrics
 
Software Engineering (Software Quality Assurance & Testing: Supplementary Mat...
Software Engineering (Software Quality Assurance & Testing: Supplementary Mat...Software Engineering (Software Quality Assurance & Testing: Supplementary Mat...
Software Engineering (Software Quality Assurance & Testing: Supplementary Mat...
 
Robert Magnusson - TMMI Level 2 - A Practical Approach
Robert Magnusson - TMMI Level 2 -  A Practical ApproachRobert Magnusson - TMMI Level 2 -  A Practical Approach
Robert Magnusson - TMMI Level 2 - A Practical Approach
 
How to Test the Internet of Everything
How to Test the Internet of EverythingHow to Test the Internet of Everything
How to Test the Internet of Everything
 
Sop test planning
Sop test planningSop test planning
Sop test planning
 
Testing Software Solutions
Testing Software SolutionsTesting Software Solutions
Testing Software Solutions
 
Metrics for manual testing
Metrics for manual testingMetrics for manual testing
Metrics for manual testing
 
Risk-based Testing
Risk-based TestingRisk-based Testing
Risk-based Testing
 
Software QA Metrics Dashboard Benchmarking
Software QA Metrics Dashboard BenchmarkingSoftware QA Metrics Dashboard Benchmarking
Software QA Metrics Dashboard Benchmarking
 

Similaire à Test Case Point Analysis Estimates Software Testing Size & Effort

A Complexity Based Regression Test Selection Strategy
A Complexity Based Regression Test Selection StrategyA Complexity Based Regression Test Selection Strategy
A Complexity Based Regression Test Selection StrategyCSEIJJournal
 
Principles and Goals of Software Testing
Principles and Goals of Software Testing Principles and Goals of Software Testing
Principles and Goals of Software Testing INFOGAIN PUBLICATION
 
FROM THE ART OF SOFTWARE TESTING TO TEST-AS-A-SERVICE IN CLOUD COMPUTING
FROM THE ART OF SOFTWARE TESTING TO TEST-AS-A-SERVICE IN CLOUD COMPUTINGFROM THE ART OF SOFTWARE TESTING TO TEST-AS-A-SERVICE IN CLOUD COMPUTING
FROM THE ART OF SOFTWARE TESTING TO TEST-AS-A-SERVICE IN CLOUD COMPUTINGijseajournal
 
From the Art of Software Testing to Test-as-a-Service in Cloud Computing
From the Art of Software Testing to Test-as-a-Service in Cloud ComputingFrom the Art of Software Testing to Test-as-a-Service in Cloud Computing
From the Art of Software Testing to Test-as-a-Service in Cloud Computingijseajournal
 
Chapter 9 Testing Strategies.ppt
Chapter 9 Testing Strategies.pptChapter 9 Testing Strategies.ppt
Chapter 9 Testing Strategies.pptVijayaPratapReddyM
 
Some Commonly Asked Question For Software Testing
Some Commonly Asked Question For Software TestingSome Commonly Asked Question For Software Testing
Some Commonly Asked Question For Software TestingKumari Warsha Goel
 
Software testing & Quality Assurance
Software testing & Quality Assurance Software testing & Quality Assurance
Software testing & Quality Assurance Webtech Learning
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)IJERD Editor
 
Software testing techniques - www.testersforum.com
Software testing techniques - www.testersforum.comSoftware testing techniques - www.testersforum.com
Software testing techniques - www.testersforum.comwww.testersforum.com
 
SRGM Analyzers Tool of SDLC for Software Improving Quality
SRGM Analyzers Tool of SDLC for Software Improving QualitySRGM Analyzers Tool of SDLC for Software Improving Quality
SRGM Analyzers Tool of SDLC for Software Improving QualityIJERA Editor
 
MIT521 software testing (2012) v2
MIT521   software testing  (2012) v2MIT521   software testing  (2012) v2
MIT521 software testing (2012) v2Yudep Apoi
 
ISTQB Advanced Study Guide - 3
ISTQB Advanced Study Guide - 3ISTQB Advanced Study Guide - 3
ISTQB Advanced Study Guide - 3Yogindernath Gupta
 
A NOVEL APPROACH FOR TEST CASEPRIORITIZATION
A NOVEL APPROACH FOR TEST CASEPRIORITIZATIONA NOVEL APPROACH FOR TEST CASEPRIORITIZATION
A NOVEL APPROACH FOR TEST CASEPRIORITIZATIONIJCSEA Journal
 
QUALITY METRICS OF TEST SUITES IN TESTDRIVEN DESIGNED APPLICATIONS
QUALITY METRICS OF TEST SUITES IN TESTDRIVEN DESIGNED APPLICATIONSQUALITY METRICS OF TEST SUITES IN TESTDRIVEN DESIGNED APPLICATIONS
QUALITY METRICS OF TEST SUITES IN TESTDRIVEN DESIGNED APPLICATIONSijseajournal
 

Similaire à Test Case Point Analysis Estimates Software Testing Size & Effort (20)

A Complexity Based Regression Test Selection Strategy
A Complexity Based Regression Test Selection StrategyA Complexity Based Regression Test Selection Strategy
A Complexity Based Regression Test Selection Strategy
 
Principles and Goals of Software Testing
Principles and Goals of Software Testing Principles and Goals of Software Testing
Principles and Goals of Software Testing
 
FROM THE ART OF SOFTWARE TESTING TO TEST-AS-A-SERVICE IN CLOUD COMPUTING
FROM THE ART OF SOFTWARE TESTING TO TEST-AS-A-SERVICE IN CLOUD COMPUTINGFROM THE ART OF SOFTWARE TESTING TO TEST-AS-A-SERVICE IN CLOUD COMPUTING
FROM THE ART OF SOFTWARE TESTING TO TEST-AS-A-SERVICE IN CLOUD COMPUTING
 
From the Art of Software Testing to Test-as-a-Service in Cloud Computing
From the Art of Software Testing to Test-as-a-Service in Cloud ComputingFrom the Art of Software Testing to Test-as-a-Service in Cloud Computing
From the Art of Software Testing to Test-as-a-Service in Cloud Computing
 
Test performance indicators
Test performance indicatorsTest performance indicators
Test performance indicators
 
C41041120
C41041120C41041120
C41041120
 
Chapter 9 Testing Strategies.ppt
Chapter 9 Testing Strategies.pptChapter 9 Testing Strategies.ppt
Chapter 9 Testing Strategies.ppt
 
Some Commonly Asked Question For Software Testing
Some Commonly Asked Question For Software TestingSome Commonly Asked Question For Software Testing
Some Commonly Asked Question For Software Testing
 
O0181397100
O0181397100O0181397100
O0181397100
 
Software testing & Quality Assurance
Software testing & Quality Assurance Software testing & Quality Assurance
Software testing & Quality Assurance
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)
 
Too many files
Too many filesToo many files
Too many files
 
Software testing techniques - www.testersforum.com
Software testing techniques - www.testersforum.comSoftware testing techniques - www.testersforum.com
Software testing techniques - www.testersforum.com
 
Ijcatr04051006
Ijcatr04051006Ijcatr04051006
Ijcatr04051006
 
SRGM Analyzers Tool of SDLC for Software Improving Quality
SRGM Analyzers Tool of SDLC for Software Improving QualitySRGM Analyzers Tool of SDLC for Software Improving Quality
SRGM Analyzers Tool of SDLC for Software Improving Quality
 
Testing Experience Magazine Vol.14 June 2011
Testing Experience Magazine Vol.14 June 2011Testing Experience Magazine Vol.14 June 2011
Testing Experience Magazine Vol.14 June 2011
 
MIT521 software testing (2012) v2
MIT521   software testing  (2012) v2MIT521   software testing  (2012) v2
MIT521 software testing (2012) v2
 
ISTQB Advanced Study Guide - 3
ISTQB Advanced Study Guide - 3ISTQB Advanced Study Guide - 3
ISTQB Advanced Study Guide - 3
 
A NOVEL APPROACH FOR TEST CASEPRIORITIZATION
A NOVEL APPROACH FOR TEST CASEPRIORITIZATIONA NOVEL APPROACH FOR TEST CASEPRIORITIZATION
A NOVEL APPROACH FOR TEST CASEPRIORITIZATION
 
QUALITY METRICS OF TEST SUITES IN TESTDRIVEN DESIGNED APPLICATIONS
QUALITY METRICS OF TEST SUITES IN TESTDRIVEN DESIGNED APPLICATIONSQUALITY METRICS OF TEST SUITES IN TESTDRIVEN DESIGNED APPLICATIONS
QUALITY METRICS OF TEST SUITES IN TESTDRIVEN DESIGNED APPLICATIONS
 

Plus de KMS Technology

A journey to a Full Stack Tester
A journey to a Full Stack Tester A journey to a Full Stack Tester
A journey to a Full Stack Tester KMS Technology
 
React & Redux, how to scale?
React & Redux, how to scale?React & Redux, how to scale?
React & Redux, how to scale?KMS Technology
 
Common design principles and design patterns in automation testing
Common design principles and design patterns in automation testingCommon design principles and design patterns in automation testing
Common design principles and design patterns in automation testingKMS Technology
 
[Webinar] Test First, Fail Fast - Simplifying the Tester's Transition to DevOps
[Webinar] Test First, Fail Fast - Simplifying the Tester's Transition to DevOps[Webinar] Test First, Fail Fast - Simplifying the Tester's Transition to DevOps
[Webinar] Test First, Fail Fast - Simplifying the Tester's Transition to DevOpsKMS Technology
 
What's new in the Front-end development nowadays?
What's new in the Front-end development nowadays?What's new in the Front-end development nowadays?
What's new in the Front-end development nowadays?KMS Technology
 
JavaScript - No Longer A Toy Language
JavaScript - No Longer A Toy LanguageJavaScript - No Longer A Toy Language
JavaScript - No Longer A Toy LanguageKMS Technology
 
JavaScript No longer A “toy” Language
JavaScript No longer A “toy” LanguageJavaScript No longer A “toy” Language
JavaScript No longer A “toy” LanguageKMS Technology
 
Preparations For A Successful Interview
Preparations For A Successful InterviewPreparations For A Successful Interview
Preparations For A Successful InterviewKMS Technology
 
Introduction To Single Page Application
Introduction To Single Page ApplicationIntroduction To Single Page Application
Introduction To Single Page ApplicationKMS Technology
 
AWS: Scaling With Elastic Beanstalk
AWS: Scaling With Elastic BeanstalkAWS: Scaling With Elastic Beanstalk
AWS: Scaling With Elastic BeanstalkKMS Technology
 
Behavior-Driven Development and Automation Testing Using Cucumber Framework W...
Behavior-Driven Development and Automation Testing Using Cucumber Framework W...Behavior-Driven Development and Automation Testing Using Cucumber Framework W...
Behavior-Driven Development and Automation Testing Using Cucumber Framework W...KMS Technology
 
Technology Application Development Trends For IT Students
Technology Application Development Trends For IT StudentsTechnology Application Development Trends For IT Students
Technology Application Development Trends For IT StudentsKMS Technology
 
Contributors for Delivering a Successful Testing Project Seminar
Contributors for Delivering a Successful Testing Project SeminarContributors for Delivering a Successful Testing Project Seminar
Contributors for Delivering a Successful Testing Project SeminarKMS Technology
 
Increase Chances to Be Hired as Software Developers - 2014
Increase Chances to Be Hired as Software Developers - 2014Increase Chances to Be Hired as Software Developers - 2014
Increase Chances to Be Hired as Software Developers - 2014KMS Technology
 
Behavior Driven Development and Automation Testing Using Cucumber
Behavior Driven Development and Automation Testing Using CucumberBehavior Driven Development and Automation Testing Using Cucumber
Behavior Driven Development and Automation Testing Using CucumberKMS Technology
 
Software Technology Trends in 2013-2014
Software Technology Trends in 2013-2014Software Technology Trends in 2013-2014
Software Technology Trends in 2013-2014KMS Technology
 

Plus de KMS Technology (20)

A journey to a Full Stack Tester
A journey to a Full Stack Tester A journey to a Full Stack Tester
A journey to a Full Stack Tester
 
React & Redux, how to scale?
React & Redux, how to scale?React & Redux, how to scale?
React & Redux, how to scale?
 
Sexy React Stack
Sexy React StackSexy React Stack
Sexy React Stack
 
Common design principles and design patterns in automation testing
Common design principles and design patterns in automation testingCommon design principles and design patterns in automation testing
Common design principles and design patterns in automation testing
 
[Webinar] Test First, Fail Fast - Simplifying the Tester's Transition to DevOps
[Webinar] Test First, Fail Fast - Simplifying the Tester's Transition to DevOps[Webinar] Test First, Fail Fast - Simplifying the Tester's Transition to DevOps
[Webinar] Test First, Fail Fast - Simplifying the Tester's Transition to DevOps
 
KMSNext Roadmap
KMSNext RoadmapKMSNext Roadmap
KMSNext Roadmap
 
KMS Introduction
KMS IntroductionKMS Introduction
KMS Introduction
 
What's new in the Front-end development nowadays?
What's new in the Front-end development nowadays?What's new in the Front-end development nowadays?
What's new in the Front-end development nowadays?
 
JavaScript - No Longer A Toy Language
JavaScript - No Longer A Toy LanguageJavaScript - No Longer A Toy Language
JavaScript - No Longer A Toy Language
 
JavaScript No longer A “toy” Language
JavaScript No longer A “toy” LanguageJavaScript No longer A “toy” Language
JavaScript No longer A “toy” Language
 
Preparations For A Successful Interview
Preparations For A Successful InterviewPreparations For A Successful Interview
Preparations For A Successful Interview
 
Introduction To Single Page Application
Introduction To Single Page ApplicationIntroduction To Single Page Application
Introduction To Single Page Application
 
AWS: Scaling With Elastic Beanstalk
AWS: Scaling With Elastic BeanstalkAWS: Scaling With Elastic Beanstalk
AWS: Scaling With Elastic Beanstalk
 
Behavior-Driven Development and Automation Testing Using Cucumber Framework W...
Behavior-Driven Development and Automation Testing Using Cucumber Framework W...Behavior-Driven Development and Automation Testing Using Cucumber Framework W...
Behavior-Driven Development and Automation Testing Using Cucumber Framework W...
 
KMS Introduction
KMS IntroductionKMS Introduction
KMS Introduction
 
Technology Application Development Trends For IT Students
Technology Application Development Trends For IT StudentsTechnology Application Development Trends For IT Students
Technology Application Development Trends For IT Students
 
Contributors for Delivering a Successful Testing Project Seminar
Contributors for Delivering a Successful Testing Project SeminarContributors for Delivering a Successful Testing Project Seminar
Contributors for Delivering a Successful Testing Project Seminar
 
Increase Chances to Be Hired as Software Developers - 2014
Increase Chances to Be Hired as Software Developers - 2014Increase Chances to Be Hired as Software Developers - 2014
Increase Chances to Be Hired as Software Developers - 2014
 
Behavior Driven Development and Automation Testing Using Cucumber
Behavior Driven Development and Automation Testing Using CucumberBehavior Driven Development and Automation Testing Using Cucumber
Behavior Driven Development and Automation Testing Using Cucumber
 
Software Technology Trends in 2013-2014
Software Technology Trends in 2013-2014Software Technology Trends in 2013-2014
Software Technology Trends in 2013-2014
 

Dernier

Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionDilum Bandara
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Manik S Magar
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 3652toLead Limited
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebUiPathCommunity
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clashcharlottematthew16
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024Lonnie McRorey
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteDianaGray10
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfRankYa
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024Stephanie Beckett
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostZilliz
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piececharlottematthew16
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxLoriGlavin3
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 

Dernier (20)

Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An Introduction
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio Web
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clash
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdf
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 

Another limitation is that it cannot be applied to test-only projects in which the software already exists but the testing team does not have access to the source code or requirements needed to perform Function Point Analysis or count Source Lines of Code (SLOC). Finally, the software size in Function Points or SLOC may not reflect well the amount of testing performed by the testing team, so measuring the productivity of software testing with these metrics is misleading.
In an attempt to address this issue, this white paper presents an approach to estimating the size and effort of software testing. The sizing method is called Test Case Point. As the name indicates, the method measures the size of test cases, the core artifacts that testers create and use when performing test execution. The size of a test case is evaluated using four elements of test case complexity: checkpoints, precondition, test data, and the type of the test case. This paper also describes simple approaches to determining testing effort using Test Case Points.

Although Test Case Point Analysis can in principle be applied to measuring the size of different kinds of software testing given test cases as input, the method focuses on estimating the size of system, integration, and acceptance testing activities, which are often performed manually by independent verification and validation (IV&V) teams. Unit testing, automated testing, and performance testing that rely on test scripts are beyond the scope of this method.

II. A SUMMARY OF SOFTWARE TESTING SIZE AND EFFORT ESTIMATION METHODS
A number of methods and metrics have been proposed and applied to estimating the size and effort of software projects. Many of these methods are also used to estimate the effort of software testing activities.

SLOC is a traditional and popular metric that measures the size of a software product by counting its source program. A number of SLOC definitions exist, but the most common are physical SLOC, which counts the number of physical lines of code, and logical SLOC, which counts the number of logical statements in the source program. SLOC is used as a size input for popular software estimation models such as COCOMO, SEER-SEM, and SLIM. In these models, the testing effort is computed from the overall effort estimate using a predefined percentage, which can be obtained from industry averages, historical data, or the experience of the estimators.

Function Point Analysis (FPA) is another method for estimating software size that has been used in industry since the late 1970s [3]. The analysis measures software size based on five components: inputs, outputs, inquiries, files, and interfaces. The size output, whose unit is the Function Point, is then adjusted with technical factors to account for technical characteristics of the system. To estimate effort, the size in Function Points can be combined with a productivity index, converted to SLOC using a method called backfiring, or used in a simple linear regression model. FPA has been extended in a number of sizing methods, including Mark II [11], COSMIC FFP [12], and NESMA FPA [6]. Although FPA is widely used in industry, it has been criticized for a complexity that requires considerable time and skilled estimators to perform the analysis.

A similar but simplified method named Use Case Point was proposed [7]. This method counts the number of Use Case Points of a software project by measuring the complexity of its use cases. The use case complexity is based on the actors and transactions of each use case and is then adjusted with technical complexity and environment factors to obtain the final Use Case Point count.
The ideas behind the FPA and UCP methods have inspired the introduction of Test Case Point Analysis [4] and other methods from the practitioner and research communities [2, 5, 8, 9, 10, 13]. In [8], Aranha and Borba propose a model for estimating test execution effort by measuring the size and execution complexity of test cases. They introduce a size unit called the execution point, which is based on the characteristics of every step in the test case. The effort is then estimated by converting the execution point count to effort using a conversion factor.
III. TEST CASE POINT ANALYSIS

1. Method Overview
The Test Case Point measures software testing size, reflecting the complexity of the testing activities performed to ensure the quality goal of the software. As a complexity measure, it needs to reflect the effort required to perform the testing activities, including planning, designing, and executing tests, and reporting and tracking defects.

Test Case Point Analysis takes test cases as input and produces a Test Case Point count for the test cases being measured. The complexity of a test case is evaluated through four elements: checkpoints, precondition, test data, and type of test case; the analysis effectively assumes that the complexity is centered on these four elements. The elements fall into two groups: the first, comprising checkpoints, precondition, and test data, reflects the size of the test case; the second normalizes the complexity of different types of test by adjusting the Test Case Point count with weights assigned to test types.

2. Method Detail
2.1. Measure Test Case Complexity
Each test case is assigned a number of Test Case Points based on the number of its checkpoints, the complexity of its precondition, and the test data it uses.

Checkpoint. A checkpoint is the condition in which the tester verifies whether the result produced by the target function matches the expected criterion. One test case consists of one or many checkpoints.

Rule 1: One checkpoint is counted as one Test Case Point.

Precondition. A test case's precondition specifies the condition required to execute the test case. Like test data, the precondition mainly affects the cost of executing the test case, and some preconditions may be related to data prepared for the test case. The complexity of a test case's precondition is classified into four levels, None, Low, Medium, and High, described in Table 1.

Table 1. Precondition Complexity Levels
Complexity level   Description
None               The precondition is not applicable or important to executing the test case, or the precondition is simply reused from the previous test case to continue the current one.
Low                The condition for executing the test case is available, with some simple modifications required, or some simple setting-up steps are needed.
Medium             Some explicit preparation is needed to execute the test case. The condition for executing is available with some additional modifications required, or some additional setting-up steps are needed.
High               Heavy hardware and/or software configuration is needed to execute the test case.
Test data is used to execute the test case. It can be generated at test case execution time, prepared by previous tests, or generated by testing scripts. Test data may be specific to a test case or general to a group of test cases or the whole system; in the latter cases, the data can be reused across multiple test cases. The complexity of test data is classified into four levels, None, Low, Medium, and High, described in Table 2.

Table 2. Test Data Complexity Levels
Complexity level   Description
None               No test data preparation is needed.
Low                Test data is needed, but it is simple enough to be created during test case execution, or the test case uses a slightly modified version of existing test data, i.e., little effort is required to modify it.
Medium             Test data is deliberately prepared in advance, with extra effort to ensure its completeness, comprehensiveness, and consistency.
High               Test data is prepared in advance with considerable effort to ensure its completeness, comprehensiveness, and consistency, or by using support tools to generate it and a database to store and manage it. Scripts may be required to generate the test data.

In many cases, the test data has to be supplied by a third party. In such cases, the effort the testing team spends generating test data is expected to be small, and the test data complexity should therefore be rated Low.

Rule 2: Each complexity level of precondition and test data is assigned a number of Test Case Points. This measure is called the unadjusted Test Case Point.

Tables 3 and 4 show the number of Test Case Points allocated to each complexity level for the precondition and test data components of a test case. These constants were obtained through a survey of 18 experienced quality assurance engineers in our organization, and the standard deviation values reflect the variance among the survey results. It is worth noting that estimators should adjust these constants to better reflect the characteristics of their own projects and environments.

Table 3. Test Case Point Allocation for Precondition
Complexity level   Test Case Points   Standard deviation (±σ)
None               0                  0
Low                1                  0
Medium             3                  0.5
High               5                  1

Table 4. Test Case Point Allocation for Test Data
Complexity level   Test Case Points   Standard deviation (±σ)
None               0                  0
Low                1                  0
Medium             3                  0.6
High               6                  1.3
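To make the counting rules concrete, the short Python sketch below applies Rule 1 together with the point allocations of Tables 3 and 4 to a single test case. The names used here (TestCase, unadjusted_tcp, and the constant tables) are illustrative only and are not part of the method's definition; the point values should be replaced with an organization's own calibrated constants where available.

```python
# A minimal sketch of Rules 1 and 2: the unadjusted Test Case Point (UTCP)
# count for one test case. Point values mirror Tables 3 and 4; all names
# are illustrative, not prescribed by the method.
from dataclasses import dataclass

PRECONDITION_POINTS = {"None": 0, "Low": 1, "Medium": 3, "High": 5}  # Table 3
TEST_DATA_POINTS = {"None": 0, "Low": 1, "Medium": 3, "High": 6}     # Table 4

@dataclass
class TestCase:
    checkpoints: int      # each checkpoint counts as one point (Rule 1)
    precondition: str     # "None" | "Low" | "Medium" | "High"
    test_data: str        # "None" | "Low" | "Medium" | "High"

def unadjusted_tcp(tc: TestCase) -> int:
    """Checkpoints plus the allocations for precondition and test data (Rule 2)."""
    return (tc.checkpoints
            + PRECONDITION_POINTS[tc.precondition]
            + TEST_DATA_POINTS[tc.test_data])

# Example: 4 checkpoints, Medium precondition, Low test data -> 4 + 3 + 1 = 8 UTCP.
print(unadjusted_tcp(TestCase(checkpoints=4, precondition="Medium", test_data="Low")))
```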
2.2. Adjust Test Case Point by Type of Test
To account for the complexity differences among test types, the Test Case Points counted for each test case are adjusted using a weight corresponding to its test type. The weight of the most common test type, user interface and functional testing, is established as the baseline and fixed at 1.0; the weights of the other test types are relative to this baseline. Table 5 shows the types of test, the corresponding weights, and the standard deviations of the weights. The weights were obtained via the same survey of 18 experienced quality assurance engineers in our organization. Again, estimators can review and adjust these weights when applying the method in their organizations.

Table 5. Weights of Test Types
Type of test                            Weight (W)   Standard deviation (±σ)   Comments
User interface and functional testing   1.00         0                         User interface and functional testing is considered the baseline.
API                                     1.22         0.32                      API testing verifies the accuracy of the interfaces in providing services.
Database                                1.36         0.26                      Testing the accuracy of database scripts, data integrity, and/or data migration.
Security                                1.39         0.28                      Testing how well the system withstands hacking attacks and unauthorized or unauthenticated access.
Installation                            1.09         0.06                      Testing of full, partial, or upgrade install/uninstall processes of the software.
Networking                              1.27         0.29                      Testing the communications among entities via networks.
Algorithm and computation               1.38         0.27                      Verifying algorithms and computations designed and implemented in the system.
Usability testing                       1.12         0.13                      Testing the friendliness, ease of use, and other usability attributes of the system.
Performance (manual)                    1.33         0.27                      Verifying whether the system meets performance requirements, assuming that the test is done manually.
Recovery testing                        1.07         0.14                      Verifying the accuracy of the recovery process after system crashes and other errors.
Compatibility testing                   1.01         0.03                      Testing whether the software is compatible with other elements of a system with which it should operate, e.g., browsers, operating systems, or hardware.
The total Adjusted Test Case Point (ATCP) count is computed as

ATCP = Σ (UTCPi × Wi)    (Eq. 3.1)

where UTCPi is the number of unadjusted Test Case Points counted from the checkpoints, precondition, and test data of the i-th test case, and Wi is the weight of the i-th test case, taking into account its test type.

It is important to note that the list of test types in Table 5 is not exhaustive. Estimators can supplement it with additional test types and corresponding weights expressed relative to the weight of user interface and functional testing.
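As an illustration of Eq. 3.1, the following sketch adjusts each test case's unadjusted count by the weight of its test type and sums the results. Only a few of the Table 5 weights are reproduced, and all identifiers are illustrative rather than prescribed by the method.

```python
# A sketch of Eq. 3.1: ATCP = sum(UTCP_i * W_i). Only a subset of the
# Table 5 weights is shown; identifiers are illustrative.
TEST_TYPE_WEIGHTS = {
    "ui_functional": 1.00,  # baseline (user interface and functional testing)
    "api": 1.22,
    "database": 1.36,
    "security": 1.39,
    "usability": 1.12,
}

def adjusted_tcp(test_cases):
    """test_cases: iterable of (utcp, test_type) pairs; returns the total ATCP."""
    return sum(utcp * TEST_TYPE_WEIGHTS[test_type] for utcp, test_type in test_cases)

# Example: 8 UTCP of functional testing and 5 UTCP of API testing
# give 8 * 1.0 + 5 * 1.22 = 14.1 ATCP.
print(adjusted_tcp([(8, "ui_functional"), (5, "api")]))
```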
IV. EFFORT ESTIMATION
Testing activities can be classified into four categories: test planning, test design, test execution, and defect reporting. Of these, test execution and defect reporting may be performed multiple times for a single test case during a project. The size measured in Test Case Points, however, accounts for all of these activities under the assumption that each is performed once. The distribution of effort across these activities makes it possible to estimate the effort when test execution and defect reporting are performed more than once.

The distribution of testing effort can be derived from historical data. The distribution of effort across testing phases shown in Table 6 was obtained from the same survey used to collect the constants described above. Again, these values mainly reflect the experience in our organization, and estimators are encouraged to adjust them using their own experience or historical data.

Table 6. Testing Effort Distribution
Testing phase                 Percent of effort   Standard deviation (±σ)   Description
Test Planning                 10%                 3%                        Identifying test scope, test approaches, and associated risks. Test planning also identifies resource needs (personnel, software, hardware) and a preliminary schedule for the testing activities.
Test Analysis and Design      25%                 5%                        Detailing test scope and objectives, evaluating and clarifying test requirements, designing the test cases, and preparing the necessary environments for the testing activities. This phase needs to ensure that combinations of tests are covered.
Test Execution                47%                 4%                        The repeating phase of executing the test activities and objectives outlined in the test planning phase and reporting test results, defects, and risks. This phase also includes analyzing test results to check actual output against expected results.
Defect Tracking & Reporting   18%                 3%                        The repeating phase of tracking the progress and status of tests and defects and evaluating the current status against exit criteria and goals.

Depending on the availability of information and resources, the testing effort can be estimated using one of the following simple methods:
- Productivity Index
- Simple Linear Regression Analysis

Estimate Effort Using Productivity Index
The Productivity Index is measured as the number of Test Case Points completed per person-month. It can be determined from historical data, and it is recommended to use the Productivity Index of projects similar to the one being estimated. The testing effort is computed by dividing the total Adjusted Test Case Point count by the Productivity Index:

Effort = TCP / Productivity Index    (Eq. 4.1)
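The sketch below illustrates Eq. 4.1 together with the Table 6 distribution: it derives total effort from a Productivity Index expressed in Test Case Points per person-month and then splits that effort across the four testing phases. The example values and function names are illustrative only.

```python
# A sketch of Eq. 4.1 and the Table 6 effort distribution. The Productivity
# Index is assumed to be expressed in Test Case Points per person-month, so
# effort = ATCP / productivity_index; the example values are made up.
PHASE_DISTRIBUTION = {  # Table 6
    "Test Planning": 0.10,
    "Test Analysis and Design": 0.25,
    "Test Execution": 0.47,
    "Defect Tracking & Reporting": 0.18,
}

def estimate_effort(atcp: float, productivity_index: float) -> float:
    """Eq. 4.1: person-months = Test Case Points / (Test Case Points per person-month)."""
    return atcp / productivity_index

def effort_by_phase(total_effort: float) -> dict:
    """Split the total effort across phases using the Table 6 percentages."""
    return {phase: round(total_effort * share, 2)
            for phase, share in PHASE_DISTRIBUTION.items()}

total = estimate_effort(atcp=300, productivity_index=60)  # 5.0 person-months
print(total, effort_by_phase(total))
```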
Estimate Effort Using Simple Linear Regression Analysis
Regression is a more comprehensive and, given enough historical data points, generally a more accurate way to estimate the testing effort. With this method, estimators can also compute estimation ranges and the confidence level of the estimate. The method requires at least three similar completed projects to provide meaningful estimates.

Given N completed projects in which the i-th project has a Test Case Point count TCPi and an effort Ei in person-months, the simple linear regression model is

Ei = α + β × TCPi + εi    (Eq. 4.2)

where α and β are the model coefficients and εi is the error term. Using least-squares regression, one can compute estimates A and B of the coefficients α and β, respectively. The values A and B are then used to estimate the effort of new testing projects:

E = A + B × TCP    (Eq. 4.3)

where TCP is the size of the test cases in Test Case Points, and E is the estimated effort in person-months of the project being estimated.
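A minimal sketch of this regression approach is shown below, assuming a small set of hypothetical historical projects; it fits A and B by ordinary least squares (Eq. 4.2) and applies Eq. 4.3 to a new project. In practice, a statistics package would also provide the estimation ranges and confidence levels mentioned above.

```python
# A sketch of Eqs. 4.2 and 4.3: fit E = A + B * TCP by ordinary least squares
# on historical (TCP, effort) pairs, then estimate a new project. The
# historical data points below are hypothetical.
def fit_least_squares(tcp, effort):
    """Return the intercept A and slope B estimated from historical projects."""
    n = len(tcp)
    mean_x = sum(tcp) / n
    mean_y = sum(effort) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(tcp, effort))
         / sum((x - mean_x) ** 2 for x in tcp))
    a = mean_y - b * mean_x
    return a, b

history_tcp = [120, 250, 400, 520]       # Test Case Points of past projects
history_effort = [2.1, 4.0, 6.6, 8.3]    # effort in person-months

A, B = fit_least_squares(history_tcp, history_effort)
estimated_effort = A + B * 300           # Eq. 4.3 for a new project with TCP = 300
print(round(A, 3), round(B, 4), round(estimated_effort, 2))
```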
V. CONCLUSION
Software testing plays an important role in the success of software development and maintenance projects, and estimating testing effort accurately is a key step toward that goal. In an attempt to fill the gap in estimating software testing, this paper has proposed a method called Test Case Point Analysis to measure the size and estimate the effort of software testing activities. The input of the analysis is a set of test cases, and the output is the number of Test Case Points for the test cases being counted.

An advantageous feature of this approach is that it measures the complexity of test cases, the main work products that testers produce and use during test execution; it therefore better reflects the effort that testers spend on their activities. Another advantage is that the analysis can be performed easily by counting the number of checkpoints, rating the complexity of precondition and test data, and determining the type of each test case.

There are, however, several limitations of this approach, which suggest directions for future improvements. One limitation is that the Test Case Point measure has not been empirically validated; data from past testing projects is needed to validate the effectiveness and usefulness of the size measure in estimating the effort of software testing activities. Another limitation is the concern of whether the complexity levels of the test case's precondition and test data properly reflect the actual complexity of these attributes. Future improvements to the method need to address these limitations.

VI. REFERENCES
[1] Y. Yang, Q. Li, M. Li, Q. Wang, "An empirical analysis on distribution patterns of software maintenance effort", International Conference on Software Maintenance, 2008, pp. 456-459.
[2] S. Kumar, "Understanding TESTING ESTIMATION using Use Case Metrics", Web: http://www.stickyminds.com/sitewide.asp?ObjectId=15698&Function=edetail&ObjectType=ART
[3] A.J. Albrecht, "Measuring Application Development Productivity", Proc. IBM Applications Development Symp., SHARE-Guide, pp. 83-92, 1979.
[4] TestingQA.com, "Test Case Point Analysis", Web: http://testingqa.com/test-case-point-analysis/
[5] TestingQA.com, "Test Effort Estimation", Web: http://testingqa.com/test-effort-estimation/
[6] NESMA (2008), "FPA according to NESMA and IFPUG; the present situation", http://www.nesma.nl/download/artikelen/FPA%20according%20to%20NESMA%20and%20IFPUG%20-%20the%20present%20situation%20(vs%202008-07-01).pdf
[7] G. Karner, "Metrics for Objectory", Diploma thesis, University of Linköping, Sweden, No. LiTH-IDA-Ex-9344:21, December 1993.
[8] E. Aranha and P. Borba, "An estimation model for test execution effort", Proceedings of the First International Symposium on Empirical Software Engineering and Measurement, 2007.
[9] E. R. C. de Almeida, B. T. de Abreu, and R. Moraes, "An Alternative Approach to Test Effort Estimation Based on Use Cases", Proceedings of the International Conference on Software Testing, Verification, and Validation, 2009.
[10] S. Nageswaran, "Test effort estimation using use case points", Proceedings of the 14th International Internet & Software Quality Week, 2001.
[11] C.R. Symons, "Software Sizing and Estimating: Mk II FPA", Chichester, England: John Wiley, 1991.
[12] A. Abran, D. St-Pierre, M. Maya, J.M. Desharnais, "Full function points for embedded and real-time software", Proceedings of the UKSMA Fall Conference, London, UK, 1998.
[13] D.G. Silva, B.T. Abreu, M. Jino, "A Simple Approach for Estimation of Execution Effort of Functional Test Cases", Proceedings of the 2009 International Conference on Software Testing Verification and Validation, 2009.