Running Tested Features:
A Better Way to Track Software Development Progress
By Camille Bell, an Agile Coach
cbell@CamilleBellConsulting.com | Twitter: @agilecamille
Why Track the Running Tested Features Metric?
The Running Tested Features (RTF) metric provides developers, managers and customers alike
with a clear, unambiguous gauge of real software development progress. Usable on any kind of
development project, RTF’s focus on outcome instead of process makes RTF especially fit for
Agile projects. Because RTF can be used with both Agile and Waterfall projects, RTF makes an
excellent progress metric for teams transitioning to Agile.
What Is the Running Tested Features Metric?
The formal definition of the Running Tested Features metric is very simple:
“1. The desired software is broken down into named features (requirements, stories), which are part of what it means to deliver the desired system.
2. For each named feature, there are one or more automated acceptance tests which, when they work, will show that the feature in question is implemented.
3. The RTF metric shows, at every moment in the project, how many features are passing all their acceptance tests.”
Ron Jeffries, A Metric Leading to Agility
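The definition maps directly onto a tiny computation. The following is a minimal sketch (in Python; all names are illustrative, not from the paper) of counting the features that pass all of their automated acceptance tests:

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    """A named, customer-defined feature and its acceptance-test results."""
    name: str
    test_results: list[bool] = field(default_factory=list)  # one entry per automated acceptance test

    def is_running_tested(self) -> bool:
        # A feature counts only if it has at least one acceptance test
        # and passes every one of them; its RTF value is binary, 0 or 1.
        return bool(self.test_results) and all(self.test_results)

def rtf(features: list[Feature]) -> int:
    """RTF at this moment: how many features pass all their acceptance tests."""
    return sum(f.is_running_tested() for f in features)
```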
What Do the Terms Mean?
Feature: A feature is an end-user customer-defined requirement. Background or support activities
needed to implement a feature (e.g. installing tools, configuring servers, deployment activities, etc.)
are not features themselves and don’t count for RTF. The specifications of features may be
captured in User Stories, Use Cases, or other requirements capture mechanisms.
Running: A feature is running if it has been implemented in working, integrated, deliverable code. Features are either running or they aren’t. Development activities (e.g. design, coding, CM, analysis, reviews) are not running. These activities may contribute to the development of working code, but only the end product, working executing code, is considered to be running. All features counted in RTF should be integrated and running.
A feature may communicate or interact with other features or parts of the system that are not yet implemented. For automated testing, the implemented feature may use mock objects, stubs or simulations in place of an unimplemented feature’s interfaces and interactions, but only if such functionality is not required of the feature itself.
Tested: A running feature is tested if it has end-user customer-defined or approved automated acceptance tests, if those tests correctly test the functionality of that feature, and if that feature consistently passes all of its automated tests.
Calculating & Tracking RTF
Assume the development team implements some number of pre-selected features in any given
development cycle or iteration. For the purposes of implementation and counting RTF, complex
features may be broken down into sub-features by mutual agreement of the development team and
customer. This agreement need not be formal, but it does need to be clear and agreed to by both
parties.
Features are either running or they aren’t. Running features either pass all of their tests or they don’t. For any given feature, the RTF value is binary: zero or one. There are no partials or percentages. Individual RTF values are summed and plotted against time. To be counted in any time period, a feature must continue to run and pass its tests.
RTF should be tracked very frequently; daily tracking is ideal.
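As a rough illustration of daily tracking, the sketch below (hypothetical names; the acceptance-suite hook is an assumption, not something the paper prescribes) appends one binary-per-feature sum to a time series each day:

```python
import datetime

def run_acceptance_suite() -> dict[str, bool]:
    """Hypothetical hook: run every feature's automated acceptance tests and
    report, per feature, whether all of that feature's tests passed.
    A real version would drive a CI job or test runner; this stub returns
    sample data so the sketch is runnable."""
    return {"login": True, "search": True, "billing": False}

rtf_history: dict[datetime.date, int] = {}

def record_daily_rtf() -> None:
    results = run_acceptance_suite()
    # Each feature contributes exactly 0 or 1; no partials or percentages.
    rtf_history[datetime.date.today()] = sum(results.values())
```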
Running Tested Features on an Agile, Iterative or Spiral Project
Agile and Spiral development projects should have short iterations measured in weeks. Iterations include all software development activities (requirements analysis, design, coding, testing, etc.). Typically iterations are time-boxed, with each iteration covering the same amount of time. Since some features may be more complex than others, the level of effort to complete a feature will differ from feature to feature. Consequently the number of features that can be completed during an iteration will differ from iteration to iteration, while the time remains constant. The software progressively grows to include more functionality with each iteration.
The end product of every iteration should be a tested, functional software product. Because the iteration product is a subset of the desired functionality, the software at the end of a given iteration may not be releasable. Even if not releasable, that software should always be fully tested within the constraints of the delivered features. All this creates very short feedback loops, which significantly improve productivity.
The RTF curve of an Agile, Iterative or Spiral software development project should steadily
increase over time. RTF measured against time on an Agile, Iterative or Spiral project should
produce a graph similar to the following:
[Figure: RTF on an Agile, Iterative or Spiral project rises steadily over time]
If an Agile, Iterative or Spiral software development project does not have short iterations, the
project loses feedback and other productivity benefits and begins to resemble a Waterfall project.
The longer the iterations, the more Waterfall-like the project becomes.
Running Tested Features on a Waterfall Project
Waterfall projects perform software development linearly; that is, they perform all requirements analysis before any design is done, then all design before any coding is done, and so on. Feature-based acceptance testing of customer requirements is performed as the very last Waterfall activity. Typically Waterfall development projects take one or more years. The feedback loop for Waterfall projects is very long, and consequently overall Waterfall productivity is much lower than that of Agile, Iterative or Spiral development.
The RTF curve of a Waterfall software development project will be flat at zero until the very end of the project. RTF measured against time on a Waterfall project will produce a graph similar to the following:
[Figure: RTF on a Waterfall project stays flat at zero, rising sharply only at the very end]
If a Waterfall software development project has a short total duration, then the Waterfall project
gains some of the active feedback productivity benefits of a more Agile or Iterative project. The
shorter the Waterfall duration, the better the feedback and the more productive the project will be.
A series of short Waterfall projects will begin to resemble Iterative projects. Curiously, Winston
Royce, author of the most cited Waterfall paper, never intended single-pass Waterfall to be used
on lengthy projects. Royce used Waterfall as an oversimplification of his process early in his paper. Later in that paper, Royce described Iterative and Agile techniques (within the limitations of 1960s and 70s government contracting) such as continual customer feedback and multiple
iterations.
Considerations in Determining RTF: An Example
Cumulative RTF is a measure over time: RTF can go down when something breaks and back up when that something is fixed.
Assume that on Monday a development team has 6 running tested features. If on Tuesday no new features have been tested and integrated, then Tuesday’s RTF is still 6. On Wednesday the team adds a feature, for a potential RTF of 7.
Assume that when the 7th feature is added, not only does the 7th feature fail to pass its tests, but the 2nd feature now also fails some or all of its tests. Wednesday’s RTF is then 5, not counting the 7th feature. If the 7th feature passes all its tests, the total RTF for Wednesday is 6; if the 7th feature also fails its tests, Wednesday’s RTF remains 5.
When everything is fixed on Thursday, new tests have been added where needed for both the 2nd and the 7th feature, and all 7 features pass all their tests, Thursday’s RTF reaches 7.
If instead adding the 7th feature broke the build or did something else equally disastrous, making all the features fail their tests, malfunction despite passing their tests, or a combination of both, then Wednesday’s RTF would drop to 0.
Again, when everything is fixed on Thursday, new tests have been added where needed, and all 7 features pass all their tests, Thursday’s RTF reaches 7.
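To make the arithmetic concrete, here is the same Monday-through-Thursday scenario replayed as data (a sketch; the feature order and per-day outcomes simply mirror the example above):

```python
# Per-day acceptance-test outcomes for features 1..7
# (True = that feature passes all of its tests; absent = not yet added).
week = {
    "Monday":    [True] * 6,                                    # 6 features, all passing: RTF 6
    "Tuesday":   [True] * 6,                                    # nothing new integrated: RTF 6
    "Wednesday": [True, False, True, True, True, True, False],  # 2nd and 7th fail: RTF 5
    "Thursday":  [True] * 7,                                    # everything fixed: RTF 7
}

for day, passing in week.items():
    print(f"{day}: RTF = {sum(passing)}")   # prints 6, 6, 5, 7
```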
Notice that small changes in development stability (both good and bad) are noticeable if, and only if, RTF is tracked in small time segments. Daily tracking and graphing is best and most useful to the development team. Weekly or monthly rollups of daily RTF are good for higher management. Rollups should include the more granular RTF material as supplements.
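One way to roll daily RTF up for management while keeping the granular series is sketched below (illustrative code: it reports the last recorded RTF of each ISO week, one of several reasonable rollup choices):

```python
import datetime

def weekly_rollup(daily: dict[datetime.date, int]) -> dict[tuple[int, int], int]:
    """Condense a daily RTF series to one value per (year, ISO week)."""
    weekly: dict[tuple[int, int], int] = {}
    for day in sorted(daily):
        year, week, _ = day.isocalendar()
        weekly[(year, week)] = daily[day]  # later days overwrite earlier ones
    return weekly
```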
Valid Uses of RTF and Interpretations
RTF gauges the general health of a project. The general shape of the cumulative RTF curve is
informative and valuable.
A steady rise of the RTF curve over time indicates maximum productivity. Sustained flat lines in the
curve indicate periods of low productivity. RTF curves with valleys indicate product instability.
The general shapes of two RTF curves can be compared, as in the graphs provided, but only at the most imprecise level, never at a numeric level.
Don’t Shoot the Messenger
Occasional small short-term dips in RTF, such as those in the detailed example, must be recorded but are not serious concerns if resolved quickly. Identifying errors early, so that those errors can be fixed before they ripple throughout the system, is the purpose of automated regression tests. Daily tracking of RTF encourages fixing errors immediately to raise RTF numbers, and thereby promotes improved productivity.
Team leaders, managers and customers should be concerned, however, when RTF dips or is flat
for a sustained period of time.
Invalid Uses of RTF
Never use RTF metrics to compare individual developers or teams. A software engineer or pair of
engineers responsible for 10 RTFs is not necessarily less efficient than one responsible for 20
RTFs. The RTF-10 engineer might be more efficient if those features were more complex, requiring a higher level of effort. Also, the surest way to sabotage honest and accurate collection of metrics is to use those metrics to evaluate individuals.
Un-weighted RTF is an indicator of project health, not an indication of the percent of project completion. Why? Because features aren’t equal. The level of effort to implement one feature or another can vary enormously, even within the same project, the same team and the same developer.
If RTF is weighted by relative level of difficulty, then weighted RTF becomes almost identical to a Scrum burn-up chart. However, weighting features as in Scrum requires a greater level of agile maturity than measuring RTF, limiting its use. In contrast, RTF can be used on any project.
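For comparison, a weighted variant might look like the sketch below (the feature names and weights are invented for illustration; weights could be story points or any relative difficulty scale):

```python
# name: (difficulty weight, all acceptance tests pass)
features = {
    "login":   (2, True),
    "search":  (5, True),
    "billing": (8, False),
}

unweighted_rtf = sum(1 for _, passed in features.values() if passed)  # 2 features
weighted_rtf   = sum(w for w, passed in features.values() if passed)  # 7 of 15 points
```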
In general, be very skeptical of percent-complete statistics. Misleading percent-complete numbers are historically responsible for cost and schedule overruns. Earned Value Management (EVM) based on traditional percent-complete metrics (especially those derived from Waterfall or Iterative Waterfall artifacts) is always inaccurate, because no true value is delivered until actual running, tested, feature-based software is delivered.
Instead of percent complete, consider tracking value delivered over time using RTF. To ensure
maximal value, prioritize feature delivery. Customer or warfighter prioritization of feature delivery
combined with RTF will front-load the delivery of those features with the highest value.
Conclusion
The Running Tested Features metric is easy to collect and easy to interpret. Simply eyeballing the shape of the RTF curve provides valuable insight into the progress and health of any ongoing software development project.
Abbreviated Bibliography
Boehm, Barry and Hansen, Wilfred, “The Spiral Model as a Tool for Evolutionary Acquisition”, CrossTalk: The Journal of Defense Software Engineering, May 2001
Cockburn, Alistair, “Agile Software Development”, Addison Wesley, 2002
Jeffries, Ron, “A Metric Leading to Agility”, 2004, http://www.xprogramming.com/xpmag/jatRtsMetric.htm
Larman, Craig, “Agile and Iterative Development: A Manager’s Guide”, Addison Wesley, 2004
Royce, Winston, “Managing the Development of Large Software Systems”, Proceedings of IEEE WESCON, 1970