This document discusses test driven development (TDD) and how it can be applied to legacy code. It describes the TDD process and patterns for writing tests. When applying TDD to legacy code, it recommends identifying change points and inflection points in the code. Tests should be written to cover the inflection point by breaking dependencies. Then changes can be made using the full TDD process, and the covered code can be refactored.
2. Introduction
TDD is a changed mindset for development
One of the cornerstones of the Agile movement
Rejection of the defined engineering approach
− No detailed designs set in stone up front
− Embraces change through refactoring
Result is more cohesive and less coupled code
Regression test bed and easier integration
3. TDD Process
Simple view
1. Make a test
2. Make it run
3. Make it right
Complex view
1. Add a little test
2. Run all tests and fail
3. Make a little change
4. Run all the tests and succeed
5. Refactor to remove duplication
4. Introductory Exercise
Given three integers representing the lengths of the sides of a triangle, return:
1 if equilateral
2 if isosceles
3 if scalene
Throw an exception if the triangle is not well formed.
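A TDD session on this exercise might converge on something like the sketch below. The class and method names (`Triangle`, `classify`) are hypothetical, not from the slides.

```java
// Hypothetical solution to the triangle exercise; names are illustrative.
public class Triangle {
    // Returns 1 for equilateral, 2 for isosceles, 3 for scalene.
    public static int classify(int a, int b, int c) {
        // Not well formed: non-positive side or triangle inequality violated.
        if (a <= 0 || b <= 0 || c <= 0
                || a + b <= c || a + c <= b || b + c <= a) {
            throw new IllegalArgumentException("not a well-formed triangle");
        }
        if (a == b && b == c) return 1;            // all sides equal
        if (a == b || b == c || a == c) return 2;  // exactly two sides equal
        return 3;                                  // all sides different
    }
}
```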
5. Writing tests
Choose a simple test first; use small steps...
When you have a list of tests, order them to get the most benefit up front.
Forces you to define interfaces early on
One thing at a time; keep a to-do list
What to test:
− Requirements, assumptions, boundary cases
− Goal is to get 100% code coverage with tests
6. Making tests run
TDD book offers three main approaches:
− Fake It
− Triangulation
− Obvious Implementation
Goal is to get a passing test ASAP
− The longer a test is failing, the less confident you are
7. Fake It
Purpose is to get the test running ASAP
Ignore coding conventions
Example: addition operation
Why?
− Lets us take little steps...
− Provides more refactoring confidence
− Keeps work focused
8. Triangulation
How can we abstract conservatively with tests?
Example: addition operation
Safest way to abstract an operation
Allows for very small steps:
− Why are small steps important?
− Do you always want to take small steps?
9. Obvious Implementation
When small steps are too slow
Simply implement the solution you see
Faster, but...
− Important to watch that the bar isn't red for long or often
− Harder: solving both “code that works” and “clean code” at the same time
− Shouldn't keep getting test failures doing this
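An Obvious Implementation sketch: when the whole solution is already clear, skip the fake/triangulate steps and type it in directly, then let the tests confirm it. The method shown is a hypothetical illustration.

```java
// "Obvious Implementation": the solution is clear enough to write in one step.
public class IntegerMath {
    public static int max(int a, int b) {
        return a >= b ? a : b;
    }
}
```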
11. Patterns
Red Bar Patterns
− One step test
− Starter test
− Explanation test
− Learning test
− Regression test
TDD Patterns
− Isolated test
− Test list
− Test first
− Assert first
− Evident data
12. Patterns
Green Bar Patterns
− Fake it
− Triangulate
− Obvious Implementation
− One to many
Testing Patterns
− Child test
− Mock object
− Log string
− Crash test dummy
− Broken test
− Clean check-in
13. TDD of Legacy code
1. Identify change points
2. Find an inflection point
3. Cover the inflection point
a) Break the dependencies
b) Write tests
4. Make changes
5. Refactor the covered code
14. Change Points
Where will changes need to be made in code?
Don't assume the first spot you find is the best
− Approach resulting in fewest changes may be good?
− What about approach using simplest design?
− Better to work in area with existing unit tests
Debuggers are useful in this stage
15. Inflection Point
A narrow interface to a set of classes or functions
Any changes in classes behind an inflection point are
either detectable at the inflection point or
inconsequential to the application.
Inflection points determined by compile time and
run time dependencies and code behavior.
What are some examples from your project?
16. Covering an Inflection Point
Covering with unit tests (islands of test)
Legacy code: no tests means lots of dependencies
− If class, try to create an instance to see links
− If method, look at signature (and for globals)
− Look at interactions with data and resources
17. Breaking Dependencies
Database:
− Find the smallest slice of data needed to get functionality to run; everything else null/empty.
Class:
− Mock object: just answers back constants
− Self shunt: object communicates with test case
Method:
− Build up structures manually; don't rely on complex helper methods
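A self-shunt sketch: the test class itself implements the collaborator's interface, so the object under test reports back to the test case. All names here (`MailSender`, `OrderProcessor`, `OrderProcessorTest`) are hypothetical illustrations.

```java
import java.util.ArrayList;
import java.util.List;

// The dependency we want to break away from (e.g. real mail delivery).
interface MailSender {
    void send(String to, String body);
}

// The legacy-style class under test, with the dependency injected.
class OrderProcessor {
    private final MailSender mail;

    OrderProcessor(MailSender mail) { this.mail = mail; }

    void confirm(String customer) {
        mail.send(customer, "Your order is confirmed");
    }
}

// Self shunt: the test case doubles as the mock and records calls.
class OrderProcessorTest implements MailSender {
    final List<String> sentTo = new ArrayList<>();

    public void send(String to, String body) {
        sentTo.add(to); // record the call instead of sending real mail
    }
}
```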
18. Writing Covering Tests
Goal is to determine the current behavior
Reverse of TDD; covering existing functions
Some tips
− Test at the boundary values
− Ensure “golden” or normal values are tested
− Test for handled error conditions
Correctness is just the behavior of the system now
If lucky, can use old requirements docs :)
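A covering ("characterization") test pins down what the code does now, at boundary and golden values, rather than what a spec says it should do. `LegacyParser` and `firstField` below are hypothetical stand-ins for existing untested code.

```java
// Pretend this is an existing legacy method we are covering with tests.
public class LegacyParser {
    // Returns the text before the first comma, or the whole line if none.
    public static String firstField(String line) {
        int comma = line.indexOf(',');
        return comma < 0 ? line : line.substring(0, comma);
    }
}
```

The covering tests would then assert the observed behavior: a golden value (`"a,b,c"` yields `"a"`), a boundary value (the empty string), and the no-delimiter case.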
19. Make changes
Now that the area has test coverage, make changes
Use the full TDD cycle for the changes
Run the tests often
Refactor as much as needed
− Extract method (create test first)
− Extract class refactoring
− Many others...
20. References
Beck, Kent. Test-Driven Development: By Example. Addison-Wesley, 2003.
Feathers, Michael. Working Effectively with Legacy Code. Object Mentor, Inc., 2002. http://www.objectmentor.com/resources/articles/WorkingEffectivelyWithLegacyCode.pdf
Parrish, Alan et al. Ordering Test Cases to Maximize Early Testing. University of Alabama, 2003. http://www.agilealliance.com/articles/articles/ExtremeUnitTesting.pdf
Ynchausti, Randy A. Integrating Unit Testing Into a Software Development Team's Process. Monster Consulting, 2003. http://www.agilealliance.com/articles/articles/IntegratingUnitTesting.pdf
Editor's notes
This was an in-house seminar I gave for our development team (6 people) at work to help us get over the initial challenges in writing unit tests for our code.
The presentation and discussion only took an hour or so. Then we split up into pairs and worked on adding unit tests to our untested Java, C, and Python code for an afternoon.
We then held an hour long meeting to discuss the outcome of the pair work. Details about test frameworks, what worked, what didn't, etc. were discussed.
Reverses the order of typical development (where tests are often skipped or ignored at the end).
Quote:
“If you can't write a test for what you are about to code, then you shouldn't even be thinking about coding.”
Dramatically narrows the gap between intent and result in software development. Unintended errors are found much sooner, which saves time and cost.
1) Write a test:
- Think about the task in your mind.
- Invent interfaces you wish you had.
- Include all the elements needed.
2) Make it run:
- Quickly get the bar green with the new test
- If you see a solution that would take a while to write, just note it on the to-do list
- Will cover approaches for this in detail...
- Ignore coding standards, use constants, etc...
3) Make it right (refactor):
- Now remove code duplication
- Meet coding standards
- Keep the bar green!
Ask that they cover the method/class with unit tests.
Stress the importance of recording the order in which they work. What steps did they follow?
Give them about 5-10 minutes. Answer questions as they work.
Then discuss for 5 minutes:
What tests did they choose and why?
Show my example with 6 tests, other author did 65
How many is enough?
Greedy ordering algorithm:
“Write some tests and write the methods that allow you to run more tests sooner.”
Describe task lists:
OPT task list for main work tasks
Pad of paper for things to not forget
Evolution task list for TDD coding
Keeps my ideas for other tests, possible refactorings, etc. Anything that distracts from current test I'm in.
Coverage in industry can be 80-90%
“Code that works” stage
Give example of addition operation on board:
Test: assert(7 == Integer.add(five, two))
Code: return (5+2); hard-coded
Metaphor of a rock climber getting too far above the last attachment point
Psychological benefit to seeing bar green is more confidence
Concentrating on a single test means you'll do a better job of it than juggling two or three at a time.
Start with one test:
assert(7 == Integer.add(five, two))
Code: return 7;
Add more asserts to a test case
assert(4 == Integer.add(two, two))
Now must correct implementation to pass new asserts
private int add(int augend, int addend)
{ return augend + addend; }
Emphasize the power of choice here to go fast or slow during development.
How many times have we seen the same problem occur again in an edit, compile, debug cycle? Unit tests prevent this type of regression. If making the same mistake, then
“Clean code” stage...
See Beck's TDD book for details.
See Beck's TDD book for details.
First ask: what do they consider legacy code?
Can be defined as any code without automated unit tests.
Methods:
- debugger to step through
- code tracing
- runtime logging
Method example: parsing a data line of EDR data needs an array of params and a column mapping. Rather than hit the database, I just populate the array with the ones I need manually.
Other ways of breaking external dependencies?
Refactoring... example Extract Method...
Notice, I say “handled” error conditions. These coverage tests only test the current functionality. If an error condition isn't handled, put it on your to-do for later. No refactoring of code yet!
Why is it important to separate out the coverage test writing from refactoring and not mix them?
If can't find a decent inflection point, may need to use system boundaries. This requires writing smoke tests against the whole system as described by McConnell.