20. Charles' Six Rules of Unit Testing
- Write the test first.
- Never write a test that succeeds the first time.
- Start with the null case, or something that doesn't work.
- Try something trivial to make the test work.
- Loose coupling and testability go hand in hand.
- Use mock objects.
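A minimal sketch of the rules in action, written in Python since the slides don't fix a language. The `Summer` class and its tests are hypothetical: the test for the null case is written first, then a trivial implementation is added to make it pass.

```python
import unittest

# Hypothetical target class, built test-first.
class Summer:
    def sum(self, values):
        # Trivial implementation: just enough to make the tests pass.
        if values is None:
            return 0
        return sum(values)

class SummerTests(unittest.TestCase):
    def test_sum_none_returns_zero(self):
        # Start with the null case, or something that doesn't work.
        self.assertEqual(Summer().sum(None), 0)

    def test_sum_simple_values_calculated(self):
        self.assertEqual(Summer().sum([1, 2, 3]), 6)

if __name__ == "__main__":
    unittest.main()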
30. Benefits of Unit Testing
- It provides a strict, written contract that the piece of code must satisfy.
- It finds problems early in the development cycle.
- It allows the programmer to refactor code at a later date and verify that the module still works correctly (i.e., regression testing).
- It may reduce uncertainty in the units themselves and can be used in a bottom-up testing approach.
- It provides a sort of living documentation of the system: developers looking to learn what functionality a unit provides, and how to use it, can consult the unit tests.
- By testing the parts of a program first and then testing the sum of its parts, integration testing becomes much easier.
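The refactoring and regression-testing benefits above can be sketched as follows. The `word_count` function and its tests are hypothetical; the point is that the tests pin down the contract, so the implementation can later be rewritten with confidence.

```python
import unittest

# The test below acts as a written contract, so this function can be
# refactored (say, from a manual loop to split()) without fear of
# silently breaking it: any regression fails the test.
def word_count(text):
    return len(text.split())

class WordCountTests(unittest.TestCase):
    def test_word_count_simple_sentence_counts_words(self):
        self.assertEqual(word_count("unit tests are living documentation"), 5)

    def test_word_count_empty_string_returns_zero(self):
        self.assertEqual(word_count(""), 0)
```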
33. Naming standards for unit tests
The basic name of a test comprises three main parts: [MethodName_StateUnderTest_ExpectedBehavior].
- Test name should express a specific requirement.
- Test name should include the name of the tested method or class. Example: public void Sum_NegativeNumAs1stParam_ExcepThrown() for a method public int Sum(params int[] values).
- Test name should include the expected input or state and the expected result for that input or state. Example: public void Sum_SimpleValues_Calculated().
- Test name should be presented as a statement or fact of life that expresses workflows and outputs. Example: Sum_NumberIsIgnored().
- Test name should only begin with "Test" if it is required by the testing framework, or if it eases development and maintenance of the unit tests in some way. Avoid vague names like testCalculator() or SumNegativeNumber2().
- Variable names should express the expected input and state, e.g. BAD_DATA, EMPTY_ARRAY, NON_INITIALIZED_PERSON.
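The [MethodName_StateUnderTest_ExpectedBehavior] pattern can be illustrated with a small, hypothetical calculator (the slide's examples are C#; this is a Python analogue using the same naming parts in snake_case):

```python
import unittest

# Hypothetical class under test.
class Calculator:
    def sum(self, *values):
        if any(v < 0 for v in values):
            raise ValueError("negative numbers are not allowed")
        return sum(values)

class CalculatorTests(unittest.TestCase):
    # Good: method name + state under test + expected behavior.
    def test_sum_negative_num_as_1st_param_exception_thrown(self):
        with self.assertRaises(ValueError):
            Calculator().sum(-1, 2)

    def test_sum_simple_values_calculated(self):
        self.assertEqual(Calculator().sum(1, 2), 3)

    # Bad (avoid): a name like test_calculator() says nothing about
    # the state under test or the expected behavior.
```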
34. Naming standards for unit tests (cont.)
NUnit terminology:
- Target / Subject: the piece of functionality that is being tested.
- Fixture: synonymous with "TestFixture"; a class that contains a set of related tests.
- Suite: Test Suites are an older style of organizing tests. They are specialized fixtures that programmatically define which Fixtures or Tests to run.
- Test: methods within the Fixture that are decorated with the [Test] attribute and contain code that validates the functionality of our target.
- SetUp: Test Fixtures can designate a special piece of code to run before every Test within that Fixture; that method is decorated with the [SetUp] attribute.
- TearDown: a method with the [TearDown] attribute is called at the end of every test within a fixture.
- Fixture SetUp: similar to constructors; runs once before the tests in the fixture.
- Fixture TearDown: similar to finalizers; runs once after the tests in the fixture.
- Explicit: tests with the [Explicit] attribute won't run unless you manually run them.
- Ignore: tests with the [Ignore] attribute are skipped over when the tests are run.
- Category: the [Category] attribute, when applied to a method, associates the Test with a user-defined category.
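A rough Python `unittest` analogue of the fixture vocabulary above, with hypothetical test content: `setUpClass` plays the role of Fixture SetUp, `setUp` of [SetUp], and `tearDown` of [TearDown].

```python
import unittest

class StackFixture(unittest.TestCase):
    # Fixture SetUp analogue: runs once for the whole fixture.
    @classmethod
    def setUpClass(cls):
        cls.shared_config = {"max_size": 10}

    # SetUp analogue: runs before every test in the fixture.
    def setUp(self):
        self.stack = []

    # TearDown analogue: runs after every test in the fixture.
    def tearDown(self):
        self.stack.clear()

    # Test analogue: an ordinary test method validating the target.
    def test_push_item_appears_on_top(self):
        self.stack.append("a")
        self.assertEqual(self.stack[-1], "a")

    def test_new_stack_is_empty(self):
        self.assertEqual(len(self.stack), 0)
```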
35. Naming standards for unit tests (cont.)
- CONSIDER: Separating your Tests from your Production Code. If you have a requirement to test in production or verify at the client's side, you can accomplish this simply by bundling the test library with your release.
- CONSIDER: Deriving common Fixtures from a base Fixture. In scenarios where you are testing sets of common classes, or where tests share a great deal of duplication, consider creating a base TestFixture that your Fixtures can inherit.
- CONSIDER: Using Categories instead of Suites or Specialized Tests. Suites represent significant developer overhead and maintenance. Categories offer a unique advantage in the UI and at the command line: they let you specify which categories should be included in or excluded from execution. For example, you could execute only "Stateful" tests against an environment to validate a database deployment.
- CONSIDER: Splitting Test Libraries into Multiple Assemblies.
- AVOID: Empty SetUp methods (you can always go back). You should only write the methods that you need today; adding methods for future purposes only adds visual noise for maintenance.
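The "derive common Fixtures from a base Fixture" suggestion can be sketched like this; the in-memory "database" and the repository fixtures are hypothetical stand-ins for shared, duplicated setup code.

```python
import unittest

# Hypothetical shared base fixture: common wiring lives here so
# concrete fixtures don't duplicate their setup code.
class DatabaseFixtureBase(unittest.TestCase):
    def setUp(self):
        # Stand-in for an in-memory database used by derived fixtures.
        self.db = {}

    def insert(self, key, value):
        self.db[key] = value

class UserRepositoryTests(DatabaseFixtureBase):
    def test_inserted_user_can_be_read_back(self):
        self.insert("user:1", "alice")
        self.assertEqual(self.db["user:1"], "alice")

class OrderRepositoryTests(DatabaseFixtureBase):
    def test_inserted_order_can_be_read_back(self):
        self.insert("order:7", "pending")
        self.assertEqual(self.db["order:7"], "pending")
```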
36. Naming standards for unit tests (cont.)
- DO: Name Tests after Functionality. The test name should match a specific unit of functionality for the target type being tested. Some key questions you may want to ask yourself: "What is the responsibility of this class?" "What does this class need to do?" Think in terms of action words. For example, a test with the name CanDetermineAuthenticatedState provides more direction about how authentication states are examined than one named Login.
- DO: Document your Tests. Most tests require special knowledge about the functionality you're testing, so a little documentation to explain what the test is doing is helpful. A few comments here and there are often just the right amount to help the next person understand what you need to test and how your test demonstrates that functionality.
- CONSIDER: Using a "Cannot" Prefix for Expected Exceptions. Since exceptions are typically thrown when your application is performing something it wasn't designed to do, prefix "Cannot" to tests that are decorated with the [ExpectedException] attribute. Examples: CannotAcceptNullArguments, CannotRetrieveInvalidRecord.
- CONSIDER: Using Prefixes for Different Scenarios. If your application has features that differ slightly between application roles, it's likely that your test names will overlap. Some have adopted a For<Scenario> syntax (CanGetPreferencesForAnonymousUser); others have adopted an underscore prefix, _<Scenario> (AnonymousUser_CanGetPreferences).
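The "Cannot" prefix can be shown with a hypothetical record store; `assertRaises` plays the role of NUnit's [ExpectedException] attribute here.

```python
import unittest

# Hypothetical class under test.
class RecordStore:
    def __init__(self):
        self._records = {"1": "first record"}

    def retrieve(self, record_id):
        if record_id is None:
            raise ValueError("record id is required")
        if record_id not in self._records:
            raise KeyError(record_id)
        return self._records[record_id]

class RecordStoreTests(unittest.TestCase):
    # The "cannot" prefix flags tests that expect an exception,
    # mirroring names like CannotAcceptNullArguments.
    def test_cannot_accept_null_arguments(self):
        with self.assertRaises(ValueError):
            RecordStore().retrieve(None)

    def test_cannot_retrieve_invalid_record(self):
        with self.assertRaises(KeyError):
            RecordStore().retrieve("999")
```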
37. Naming standards for unit tests (cont.)
- AVOID: Naming Tests after the Implementation. If you find that your tests are named after the methods within your classes, that's a code smell that you're testing your implementation instead of your functionality. If you changed your method name, would the test name still make sense?
- AVOID: Using underscores as word separators (use_underscores_as_word_separators_for_readability). PascalCase should suffice; imagine all the time you save not holding down the Shift key.
- AVOID: Unclear Test Names. Sometimes we create tests for bugs that are caught late in the development cycle, or tests to demonstrate requirements based on lengthy requirements documentation. As these are usually pretty important tests (especially for bugs that tend to creep back in), it's important to avoid giving them vague names that represent some external requirement, like FixForBug133 or TestCase21.
- AVOID: Ignore Attributes with no explanation. Tests that are marked with the [Ignore] attribute should include a reason for why the test has been disabled.
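The difference between naming after functionality and naming after the implementation can be sketched with a hypothetical session class: the functionality-based name survives a method rename, the implementation-based name does not.

```python
import unittest

# Hypothetical class under test.
class Session:
    def __init__(self, token=None):
        self._token = token

    def is_authenticated(self):
        return self._token is not None

class SessionTests(unittest.TestCase):
    # Good: named after the functionality (cf. the slide's
    # CanDetermineAuthenticatedState example). If is_authenticated()
    # were renamed, this test name would still make sense.
    def test_can_determine_authenticated_state(self):
        self.assertTrue(Session(token="abc").is_authenticated())
        self.assertFalse(Session().is_authenticated())

    # Bad (avoid): a name like test_is_authenticated mirrors the
    # implementation and goes stale the moment the method is renamed.
```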
38. Naming standards for unit tests (cont.)
- DO: Limit the number of Categories. Using Categories is a powerful way to dynamically separate your tests at runtime; however, their effectiveness is diminished when developers are unsure which Category to use.
- CONSIDER: Defining Custom Category Attributes. As Categories are sensitive to case and spelling, you might want to consider creating your own Category attributes by deriving from CategoryAttribute.
Finished with Unit Testing.
Editor's notes
Code refactoring is the process of changing a computer program's source code without modifying its external functional behavior, in order to improve some of the nonfunctional attributes of the software. Advantages include improved code readability and reduced complexity to improve the maintainability of the source code, as well as a more expressive internal architecture or object model to improve extensibility. Unit Testing is conducted by the developer during the code development process to ensure that proper functionality and code coverage have been achieved, both during coding and in preparation for acceptance into iteration testing.
If you do all the stuff that you know you're supposed to do anyway (loose coupling, high cohesion, etc.), writing tests is really easy. :-) Seriously: loosely coupled and highly cohesive software is much easier to test than tightly coupled systems, or ones where it's hard to tell what something is really for (low cohesion). There are a lot of patterns for making that easier, like Inversion of Control, Strategies, etc. Writing the test before you write the code guarantees that your code is "testable." With very few exceptions, code that is hard to test is a HUGE flag that the design is bad (i.e., tightly coupled, low cohesion). If you have dependencies between tests, or have to do tons of setup, then it's a pain to write tests and to run them.
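The loose-coupling point can be made concrete with a small, hypothetical example: because the notifier receives its gateway through the constructor (Inversion of Control), a mock object can stand in for it, so no real mail infrastructure is needed to test the logic.

```python
import unittest
from unittest.mock import Mock

# Loosely coupled design: the gateway dependency is injected,
# so a test can substitute a mock object for the real thing.
class Notifier:
    def __init__(self, gateway):
        self._gateway = gateway

    def notify(self, user, message):
        if not message:
            return False
        self._gateway.send(user, message)
        return True

class NotifierTests(unittest.TestCase):
    def test_notify_with_message_sends_through_gateway(self):
        gateway = Mock()
        self.assertTrue(Notifier(gateway).notify("alice", "hi"))
        gateway.send.assert_called_once_with("alice", "hi")

    def test_notify_with_empty_message_sends_nothing(self):
        gateway = Mock()
        self.assertFalse(Notifier(gateway).notify("alice", ""))
        gateway.send.assert_not_called()
```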
From past experience, projects go to great lengths to separate tests from code but don't place a lot of emphasis on how to structure test assemblies. Often, a single test library is created, which is suitable for most projects. However, for large-scale projects that can have hundreds of tests, this approach can become difficult to manage. I'm not suggesting that you should religiously enforce test structure, but there may be logical motivators to divide your test assemblies into smaller units, such as grouping tests with third-party dependencies, or as an alternative to using Categories. Again, separate when needed, and use your gut to tell you when you should.
1. Eventually, you'll want to circle back on these tests and either fix them or alter them so that they can be used. But without an explanation, the next person will have to do a lot of investigative work to figure out the reason. In my experience, most tests with the Ignore attribute are never fixed.
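In Python's `unittest`, the analogue of a documented [Ignore] is a skip decorator whose reason string records why the test is disabled; the test body and reason below are hypothetical.

```python
import unittest

class LegacyImportTests(unittest.TestCase):
    # Good: the skip reason is recorded, so the next developer
    # doesn't have to do investigative work to figure it out.
    @unittest.skip("importer rewrite in progress (hypothetical reason); "
                   "re-enable once the new parser lands")
    def test_import_legacy_file_parses_all_rows(self):
        self.fail("not runnable until the importer rewrite is done")
```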