Finding the Middle Way of Testing

  1. Testing Finding the Middle Path
  2. The Flickr Way
  3. The QA Way
  4. Test Everything Way
  5. TATFT
  6. Test All The Fucking Time
  7. Overloaded With Tests
  8. Balance
  9. Finding The Way
  10. On Rails
  11. Test All At Once
  12. Don’t Test Too Narrowly
  13. Don’t Test Too Broadly
  14. Testing What Is Already Tested
  15. The Database
  16. Cucumber
  17. What You Should Do
  18. Integration
  19. Test The Bugs
  20. Refactoring
  21. Test Then Change
  22. Mocky Stubbly Things
  23. Make Tests Fast
  24. Make Tests Fast TESTING TESTING!
  25. Fixtures light fixture?
  26. Data For Tests
  27. true is not false
  28. Tests Aren’t DRY
  29. They’re Damp
  30. Atomic
  31. Atomic Failure
  32. Outside In
  33. Self Contained
  34. Red Green
  35. That’s My Middle Path @rabble

Editor's Notes

  1. I’ve wanted to do this talk for a while. I’ve given a bunch of talks on the mechanics of testing. But I feel like there is something missing. We need to talk more about what to test and what not to test. There is a kind of macho, who-can-test-the-most attitude. We need a balance.
  2. One way of thinking about testing is to not do it, but to release often and quickly, and write code with feature flags. I like to call this the Flickr way, and it works. It’s very similar to how Facebook works, really. Lots of small iterations. No tests in code; you’ve got users for that! If the space between releases is really short, this model works.
  4. The other direction is testing everything. You want it to be like this. Rushing along, everything is a blur as you race through development. This is called the TATFT method of software development.
  5. This is the ‘test all the fucking time’ method of software development. It’s appealing because it’s hard core. Alpha male, Test First, Test Always style.
  7. But if done to an extreme, without regard to utility, you get this. Burdened down by your tests.
  8. Sometimes we need to find balance between these ways. Testing is a technique we use to make better software.
  9. So the question is then what’s the middle way. What rules should we use to decide what should and shouldn’t be tested. I’m going to go through a bunch of best and worst practices. My rules of thumb for the way of testing.
  10. n ways to find the middle way in testing. What to test, what not to test, and when to know the difference.
  11. I do Rails development, so the examples come from Rails. But the point is to try and extract ideas that are useful across many technology stacks.
  12. The worst of the bad testing techniques is to go from no tests to spending weeks just writing tests. To catch up!
  13. It’s important not to test too little, to be too focused on the details. You end up testing the implementation. Tests fail, but not because the software as a whole is broken. The simple assert 2 == 1 + 1. It’s not interesting.
  14. Just as testing the details of an implementation is bad, testing too broadly can also fail, for two reasons. First is time. Testing is a tool for debugging. Debugging is what we are really doing most of the time when we say we’re programming.
  15. One classic problem when people start writing tests is that they test their use of somebody else’s library. Presumably that library has tests itself, or if it doesn’t, it at least has a stable API. Treat it as a black box.
  16. One example in Rails is to do tests that confirm Active Record is working correctly. That a has_many association can be created and deleted.
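A plain-Ruby sketch of that anti-pattern (the Hash below stands in for an ActiveRecord `has_many` association; it is an illustration, not code from the talk):

```ruby
# Anti-pattern sketch: a "test" that only exercises code you didn't
# write. The Hash stands in for an ActiveRecord has_many association;
# the stdlib, like Rails, already has its own test suite upstream.
post = {}
post[:comments] = ["first!"]            # "create the association"
raise unless post[:comments] == ["first!"]
post.delete(:comments)                  # "delete the association"
raise unless post[:comments].nil?
# These assertions pass, but they verify Ruby, not your application.
# Test the behavior *you* build on top of the association instead.
```

The fix isn't better assertions here; it's deleting this test and covering your own logic instead.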
  17. I know there was just a talk about Gherkin, the extension of Cucumber that makes it work in multiple human languages. But I think you should just say no to Cucumber. Clients don’t want to write the stories. More to the point, they can’t do them well. It’s a fascinating exercise, but Cucumber stories do not translate into good code. It’s bulky, nobody reads the stories, and code, when it fails, is easier to understand.
  19. Even though natural-language tests like Cucumber are, in my humble opinion, a disaster, integration tests are a great idea. They’re code. They deal with our web apps in a way similar to real use, but simulated. We’re mocking the real browser and replacing it with something more useful for testing.
  20. This is what you should do. Test the bugs. Write tests when things break. First, manually try to reproduce a bug. Then, once you can ‘see’ it, write the test. Then, with the test, you can fix it. This is the way to do it. Don’t do that sprint thing.
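The reproduce-then-test-then-fix loop might look like this; the `average` helper and its empty-input bug are a made-up example, not from the talk:

```ruby
# Hypothetical regression example: average([]) blew up in production
# with ZeroDivisionError. Step 1: reproduce it by hand. Step 2: pin it
# down in a failing test. Step 3: fix it -- the guard below is the fix.
def average(numbers)
  return 0.0 if numbers.empty?          # the fix for the reported bug
  numbers.sum.to_f / numbers.size
end

# The regression test, written before the fix and kept forever after:
raise "regression: empty input" unless average([]) == 0.0
raise unless average([1, 2, 3]) == 2.0
```

The test earns its keep twice: it proves the fix, and it stops the bug from quietly coming back.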
  21. When you go in and refactor, testing is vital. Really, this is the only time it makes sense to spend any time just writing tests.
  22. So when you’re refactoring, as opposed to updating, you need tests. Because, by definition, refactoring is changing the implementation without changing the functionality. So you need a test to confirm the functionality stays the same.
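A minimal sketch of that idea, using a hypothetical `titleize` helper: the test pins the behavior first, then the implementation is free to change underneath it.

```ruby
# Original implementation.
def titleize(s)
  s.split(" ").map { |w| w[0].upcase + w[1..] }.join(" ")
end

# Pin the behavior BEFORE refactoring.
expected = titleize("finding the middle way")

# Refactored implementation -- different code, same contract.
def titleize(s)
  s.split(" ").map(&:capitalize).join(" ")
end

# The test confirms the functionality did not change.
raise "refactor changed behavior" unless titleize("finding the middle way") == expected
```

If that last assertion ever fires, you didn't refactor, you changed behavior, and the test caught it.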
  23. The mocky stubbly things. Taking functionality out and replacing it with fakes. There be dragons here. On the one hand, if you test against the full system, it’s slow. Things like networks and payment processing gateways aren’t things you want to hit when running your tests.
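One way to sketch that trade-off: replace the slow external dependency with an in-process fake that honors the same interface. The `Checkout` and `FakeGateway` classes below are hypothetical illustrations, not APIs from the talk.

```ruby
# Code under test: depends on *some* gateway, injected in.
class Checkout
  def initialize(gateway)
    @gateway = gateway
  end

  def purchase(amount)
    @gateway.charge(amount) ? :paid : :declined
  end
end

# Test double: same interface as a real payment gateway, no network.
class FakeGateway
  def initialize(succeed:)
    @succeed = succeed
  end

  def charge(_amount)
    @succeed
  end
end

raise unless Checkout.new(FakeGateway.new(succeed: true)).purchase(100) == :paid
raise unless Checkout.new(FakeGateway.new(succeed: false)).purchase(100) == :declined
```

The dragons: the fake only proves your code against the interface you *think* the gateway has, so keep at least one slow test against the real thing.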
  24. Show XKCD comic of compiling.... Testing is the new compiling. Perhaps we needed the cognitive space. A break in our work, and so in interpreted languages we started writing tests, to fill the gap which used to be created by compiling.
  28. One lesson from the BDD experience is that your assertions must make sense. Write the tests so that when they fail, the test name makes sense and the failure is directed.
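For instance, a failure message can carry the intent so a red run reads like a sentence. The tiny `assert` helper and the policy stand-in below are hypothetical:

```ruby
# A minimal assert helper whose failures are directed sentences.
def assert(condition, message)
  raise "FAILED: #{message}" unless condition
end

# Stand-in for something like PostPolicy.new(guest, post).edit?
guest_can_edit = false

# When this breaks, the output names the broken behavior, not just a line number:
assert(guest_can_edit == false, "guest users must not be able to edit posts")
```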
  29. Tests are different from application code. It’s ok to repeat yourself. Some.
  30. The tests should be damp. As in, repeat yourself some. Because each test covers a variation of the functionality, it requires some repetition. Not DRY, but damp, as it were.
  31. Your tests should be atomic. Each test gets the same environment. This means your fixtures need to work well. If one test has an effect on another, then you’re in for a world of hurt. You know what happens when you start smashing volatile atoms together, right?
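A bare-bones sketch of atomicity: each test builds its own world instead of leaning on state left behind by the previous one. The `USERS` array here is a stand-in for the database your fixtures would reset.

```ruby
# Shared mutable state -- the stand-in for your test database.
USERS = []

# Reset the "fixtures" before each test and clean up after it,
# so no test can leak state into the next.
def with_clean_state
  USERS.clear
  yield
ensure
  USERS.clear
end

with_clean_state do
  USERS << "alice"
  raise unless USERS.size == 1
end

with_clean_state do
  USERS << "bob"
  # Only one user, and only because the state was reset between tests.
  raise unless USERS == ["bob"]
end
```

Run the two tests in either order and they pass the same way; that's the property atomicity buys you.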
  32. If you fail to keep your tests separate from each other, things fall apart
  33. I strongly agree with what Trotter said in the previous talk. Outside In. Finding the sweet spot, outside of your application, outside of the implementation, which you can test.
  34. Similar to the atomic thing. You want your tests to be self contained. A single activity or thing. The tricky part here is to keep each test self contained, a bit of functionality, a unit as it were, but not so focused as to be a sub-unit.
  35. You should easily be able to see what works and what doesn’t: clear red/green results. If you’ve got one of those graphical IDE things, then use that. The idea is that you have to easily, automatically, know what works. So it’s: code, see if it worked, code, see if it worked, code, see if it worked. That is the old-school model of software development.
  36. That’s my middle path; there is no one path. But I think we need to start calling out what we want, what we do, and what we shouldn’t do in testing. While there may be more than one way to do it, all the ways aren’t the same. Some work, some don’t.