That’s My Middle Path
@rabble - cuboxsa.com
source: http://flic.kr/p/6v82qN
Editor's notes
I’ve wanted to do this talk for a while. I’ve given a bunch of talks on the mechanics of testing, but I feel like there is something missing. We need to talk more about what to test and what not to test. There is a kind of macho, who-can-test-the-most attitude. We need a balance.
One way of thinking about testing is not to do it at all: release often and quickly, and write code behind feature flags. I like to call this the Flickr way, and it works. It’s very similar to how Facebook works, really. Lots of small iterations. No tests in the code; you’ve got users for that! If the space between releases is really short, this model works.
http://flic.kr/p/tvUHq
The other direction is testing everything. You want it to be like this: rushing along, everything a blur as you race through development. This is called the TATFT method of software development.
http://flic.kr/p/7kKDuo
This is the ‘test all the fucking time’ method of software development. It’s appealing because it’s hard core. Alpha male, Test First, Test Always style.
But done to an extreme, without regard to utility, you get this: burdened down by your tests.
http://flic.kr/p/GNhm3
Sometimes we need to find balance between these ways. Testing is a technique we use to make better software.
So the question, then, is: what’s the middle way? What rules should we use to decide what should and shouldn’t be tested? I’m going to go through a bunch of best and worst practices, my rules of thumb for the way of testing.
n ways to find the middle way in testing. What to test, what not to test, and when to know the difference. http://flic.kr/p/8kAmwf
I do Rails development, so the examples come from Rails. But the point is to extract ideas which are useful across many technology stacks.
The worst of the bad testing techniques is to go from no tests to spending weeks writing nothing but tests, to catch up!
http://flic.kr/p/7ZSbRQ
It’s important not to test too small, to be too focused on the details. You end up testing the implementation. Tests fail, but not because the software as a whole is broken. The simple assert 2 == 1 + 1 isn’t interesting. http://flic.kr/p/6Zeo1V
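To make the smell concrete, here is a minimal sketch in minitest. The `PriceCalculator` class and its tax rate are hypothetical, invented for illustration; the contrast is between a test that restates an implementation detail and one that asserts behavior callers actually depend on.

```ruby
require "minitest/autorun"

# Hypothetical example: a tiny class whose *behavior* is worth testing.
class PriceCalculator
  TAX_RATE = 0.1

  def total(subtotal)
    (subtotal * (1 + TAX_RATE)).round(2)
  end
end

class PriceCalculatorTest < Minitest::Test
  # Too narrow: this only restates the constant, so it breaks on any
  # implementation change without telling you whether anything is wrong.
  def test_tax_rate_constant
    assert_equal 0.1, PriceCalculator::TAX_RATE
  end

  # Better: assert the observable behavior that callers depend on.
  def test_total_includes_tax
    assert_equal 110.0, PriceCalculator.new.total(100)
  end
end
```

The first test is the "assert 2, 1+1" of unit testing: it can only fail when the implementation changes, never when the software is actually broken.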
Just as testing the details of an implementation is bad, testing too broadly can also fail, for two reasons. First is time. Testing is a tool for debugging, and debugging is what we are really doing most of the time when we say we’re programming. http://flic.kr/p/7azreS
One classic problem when people start writing tests is that they test their use of somebody else’s library. Presumably that library has tests itself, or at least a stable API. Treat it as a black box. http://flic.kr/p/3c89jn
One example in Rails is writing tests that confirm Active Record is working correctly: that a has_many association can be created and deleted.
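Running the Rails version of this anti-pattern would need a database, but the same smell shows up in plain Ruby, so here is a self-contained sketch. Testing `attr_accessor` is the analogue of testing `has_many`: in both cases the framework (Ruby itself, or ActiveRecord) already guarantees and tests that behavior. The `Post` class is hypothetical.

```ruby
require "minitest/autorun"

class Post
  attr_accessor :title   # a language feature, like a has_many macro in Rails

  def slug
    title.downcase.gsub(/\s+/, "-")   # your own logic: worth testing
  end
end

class PostTest < Minitest::Test
  # Anti-pattern: this only proves attr_accessor works, which Ruby
  # (like ActiveRecord for has_many) already guarantees and tests.
  def test_title_can_be_assigned
    post = Post.new
    post.title = "Hello"
    assert_equal "Hello", post.title
  end

  # Worth keeping: tests logic you wrote, not the framework.
  def test_slug_parameterizes_title
    post = Post.new
    post.title = "Hello World"
    assert_equal "hello-world", post.slug
  end
end
```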
I know there was just a talk about Gherkin, the extension of Cucumber that makes it work in multiple human languages. But I think you should just say no to Cucumber. Clients don’t want to write the stories, and more to the point, they can’t do it well. It’s a fascinating exercise, but Cucumber stories do not translate into good code. It’s bulky, nobody reads the stories, and plain code is easier to understand when it fails.
http://flic.kr/p/5aZYkP
http://flic.kr/p/7qQZni
Even though natural-language tests like Cucumber are, in my humble opinion, a disaster, integration tests are a great idea. They’re code, and they exercise our web apps in a way similar to real use, but simulated. We’re mocking the real browser and replacing it with something more useful for testing.
http://flic.kr/p/kZtKo
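In a real Rails app you would drive this with something like Rack::Test or Capybara; to keep the sketch self-contained, here is a tiny hand-rolled Rack-style app (the `call(env)` interface returning status, headers, body) tested through its outside interface with simulated requests. The routes are invented for illustration.

```ruby
require "minitest/autorun"

# A minimal Rack-style app: call(env) -> [status, headers, body].
# Stand-in for a real web app; in Rails you'd drive this through
# Rack::Test or Capybara instead of calling it directly.
GREETING_APP = lambda do |env|
  if env["PATH_INFO"] == "/hello"
    [200, { "Content-Type" => "text/plain" }, ["Hello, world"]]
  else
    [404, { "Content-Type" => "text/plain" }, ["Not found"]]
  end
end

class GreetingAppTest < Minitest::Test
  # Integration-style: exercise the app through its public interface
  # (a simulated request), never its internals.
  def test_hello_route
    status, _headers, body = GREETING_APP.call("PATH_INFO" => "/hello")
    assert_equal 200, status
    assert_equal "Hello, world", body.join
  end

  def test_unknown_route_is_404
    status, = GREETING_APP.call("PATH_INFO" => "/nope")
    assert_equal 404, status
  end
end
```

The point is the shape: the test speaks the same language as a real request, so it breaks when the app breaks, not when an internal method is renamed.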
This is what you should do: test the bugs. Write tests when things break. First manually try to reproduce a bug. Then, once you can ‘see’ it, write the test. Then, with the test, you can fix it. This is the way to do it. Don’t do that catch-up sprint thing.
http://flic.kr/p/JAE3v
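The reproduce-then-test-then-fix loop can be sketched like this. The pagination helper and its off-by-one bug are hypothetical; the regression test pins the exact failure that was reproduced manually, so the bug can never come back silently.

```ruby
require "minitest/autorun"

# Hypothetical bug report: "the last item of each page is missing."
# Manual reproduction showed paginate dropped an element; this is the
# fixed version, with a regression test pinning the original failure.
def paginate(items, page, per_page)
  start = (page - 1) * per_page
  items[start, per_page] || []   # the buggy version sliced per_page - 1 items
end

class PaginateRegressionTest < Minitest::Test
  def test_last_item_of_page_is_not_dropped
    items = (1..5).to_a
    assert_equal [1, 2, 3], paginate(items, 1, 3)   # the buggy version returned [1, 2]
  end
end
```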
When you go in and refactor, then testing is vital. Really, this is the only time it makes sense to spend any time just writing tests.
http://flic.kr/p/88X9DY
So when you’re refactoring, as opposed to updating, you need tests. Because by definition, refactoring is changing the implementation without changing the functionality. So you need a test to confirm the functionality stays the same.
http://flic.kr/p/aCDk7
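A small sketch of that definition in practice, with an invented `word_count` helper: because the test describes behavior only, it passed before the refactor (a hand-rolled loop building a Hash) and still passes after (the same behavior via `tally`).

```ruby
require "minitest/autorun"

# Refactored implementation. The original built the Hash in an each
# loop; the behavior is identical, so the test below never changed.
def word_count(text)
  text.split.tally
end

class WordCountTest < Minitest::Test
  # Pins functionality, not implementation: it can't tell a loop
  # from tally, which is exactly what a refactoring test should do.
  def test_counts_repeated_words
    assert_equal({ "to" => 2, "be" => 2, "or" => 1, "not" => 1 },
                 word_count("to be or not to be"))
  end
end
```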
The mocky stubby things: taking functionality out and replacing it with fakes. There be dragons here. On the one hand, if you test against the full system, it’s slow. Things like networks and payment processing gateways aren’t things you want to hit when running your tests.
http://flic.kr/p/6Q6en
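One safe-ish way through those dragons is to inject the slow dependency and hand the tests a fake with the same interface. Everything here (`Checkout`, `FakeGateway`, the `charge` method) is hypothetical, sketched to show the shape rather than any real gateway API.

```ruby
require "minitest/autorun"

# Hypothetical checkout that talks to a payment gateway. Injecting the
# gateway lets tests swap in a fake instead of hitting a real network.
class Checkout
  def initialize(gateway)
    @gateway = gateway
  end

  def purchase(amount)
    @gateway.charge(amount) ? :paid : :declined
  end
end

# A hand-rolled fake: same interface as the real gateway, no network.
class FakeGateway
  def initialize(succeed:)
    @succeed = succeed
  end

  def charge(_amount)
    @succeed
  end
end

class CheckoutTest < Minitest::Test
  def test_successful_charge
    assert_equal :paid, Checkout.new(FakeGateway.new(succeed: true)).purchase(100)
  end

  def test_declined_charge
    assert_equal :declined, Checkout.new(FakeGateway.new(succeed: false)).purchase(100)
  end
end
```

The dragon, of course, is that the fake can drift from the real gateway’s behavior, which is why this belongs at the slow, external edges and not everywhere.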
Show XKCD comic of compiling....
Testing is the new compiling. Perhaps we needed the cognitive space, a break in our work, and so in interpreted languages we started writing tests to fill the gap which used to be created by compiling.
http://flic.kr/p/4heE23
http://flic.kr/p/55d56F
One lesson from the BDD experience is that your assertions must make sense. Write tests so that when they fail, the test name makes sense and the failure is directed.
http://flic.kr/p/4Jrqs1
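A small sketch of what "the failure is directed" can look like in minitest, with an invented discount helper: the test name reads as a sentence about the behavior, and the optional assertion message says what went wrong in domain terms rather than just showing two numbers.

```ruby
require "minitest/autorun"

# Hypothetical pricing rule, invented for illustration.
def discounted_price(price, member)
  member ? (price * 0.9).round(2) : price
end

class DiscountTest < Minitest::Test
  # Named so a failure reads like a sentence pointing at the behavior:
  # "DiscountTest#test_members_get_ten_percent_off".
  def test_members_get_ten_percent_off
    assert_equal 90.0, discounted_price(100, true),
                 "expected a 10% member discount on a price of 100"
  end

  def test_non_members_pay_full_price
    assert_equal 100, discounted_price(100, false)
  end
end
```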
Tests are different from application code. It’s ok to repeat yourself. Some.
http://flic.kr/p/gLUeh
The tests should be DAMP, as in: repeat yourself some. Because each test covers a variation of functionality, it requires some repetition. Not DRY, but DAMP, as it were.
http://flic.kr/p/4qBKRw
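A sketch of that acceptable repetition, using an invented `Order` struct: each test builds exactly the object it needs, even though that repeats a line or two, so each one reads (and fails) on its own instead of depending on clever shared setup.

```ruby
require "minitest/autorun"

# Hypothetical example: an order whose total is the sum of its items.
Order = Struct.new(:items) do
  def total
    items.sum
  end
end

class OrderTest < Minitest::Test
  # DAMP: each test repeats its own setup. Extracting a single shared
  # order to be DRY would hide which data each assertion depends on.
  def test_empty_order_totals_zero
    order = Order.new([])
    assert_equal 0, order.total
  end

  def test_total_sums_items
    order = Order.new([5, 10])
    assert_equal 15, order.total
  end
end
```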
Your tests should be atomic. Each test gets the same environment. This means your fixtures need to work well. If one test has an effect on another, you’re in for a world of hurt. You know what happens when you start smashing volatile atoms together, right?
http://flic.kr/p/7Gssw1
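The standard way to get that same-environment guarantee in minitest is a `setup` method, which runs before every test. The `Counter` class here is invented; the point is that each test gets a fresh instance, so no test can be poisoned by the order in which tests happen to run.

```ruby
require "minitest/autorun"

class Counter
  attr_reader :count

  def initialize
    @count = 0
  end

  def increment
    @count += 1
  end
end

class CounterTest < Minitest::Test
  # setup runs before *every* test, so each one starts from a clean
  # state and no test leaks its effects into another.
  def setup
    @counter = Counter.new
  end

  def test_starts_at_zero
    assert_equal 0, @counter.count
  end

  def test_increment_adds_one
    @counter.increment
    assert_equal 1, @counter.count
  end
end
```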
If you fail to keep your tests separate from each other, things fall apart.
http://en.wikipedia.org/wiki/Nuclear_explosion
I strongly agree with what Trotter said in the previous talk. Outside In. Finding the sweet spot, outside of your application, outside of the implementation, which you can test.
Similar to the atomic thing: you want your tests to be self-contained, a single activity or thing. The tricky part is to keep each test self-contained, a bit of functionality, a unit as it were, but not so focused as to be a sub-unit.
http://flic.kr/p/5ZFbHE
You should easily be able to see what works and what doesn’t: clear red/green results. If you’ve got one of those graphical IDE things, use it. The idea is that you have to easily, automatically, know what works.
So it’s: code, look to see if it worked; code, see if it worked; code, see if it worked. That is the old-school model of software development.
http://flic.kr/p/7vVAJW
That’s my middle path; there is no one path. But I think we need to start calling out what we want, what we do, and what we shouldn’t do in testing. While there may be more than one way to do it, all the ways aren’t the same: some work, some don’t.