2. Outline for this Presentation
Motivations
Basic concepts
Environments
Testing Principles
Frameworks
Examples
3. Coding’s great
But it just isn’t enough
If we want to deliver we need...
Agile methodologies
Automatic testing
Automated deployment
Continuous integration
Clean code
4. Motivations
Manual testing is tedious, expensive, error-prone and limited in scope.
Automated tests offer better coverage of code → higher quality for our clients →
help keep existing clients happy.
Tests give us confidence in changing our code → more rapid feature turnout for
our clients → help bring in new clients.
Proper test coverage helps catch bugs early in the development cycle → making
development cheaper.
Tests make sure a bug, once fixed, stays fixed - and is “documented”.
Tests can verify both progression (new features work) and regression (existing
features aren’t broken)
Proper testing assists in architecture and design. A piece of code that’s difficult to
test is usually a “code smell” for poor design.
5. Basic Concepts
Originators
Kent Beck, co-creator of JUnit (with Erich Gamma), coined the term TDD, pioneered work in
programming design patterns, one of the original signatories of the “Agile Manifesto”
Erich Gamma, co-creator of JUnit, part of the “Gang of Four” (GOF), published an influential work on
design patterns.
Robert Martin (A.K.A. “Uncle Bob”), an influential writer and coder, originator of “Clean Code”, also
contributed to the “Agile Manifesto”.
Martin Fowler, author of the influential book “Refactoring”, defined some of the basic transformations
later automated by most IDEs we use today, also a signatory of the “Agile Manifesto”
6. Basic Concepts
Unit tests: test the behavior of a single “unit” of code: class, file, module. This is
the most basic level of testing.
Integration tests: test the behavior of several components together. They are
longer-running tests and are usually more complex.
System tests: test the behavior of the system as a whole - that the server can
start, that it can connect to the resources that it needs, that it’s in harmony with
other systems.
Other forms of tests: manual tests, performance tests, stress tests, health-checks
8. Unit tests
Should run very fast (milliseconds)
Are focused on a single “unit” of logic (class, source-file etc.)
Aim at covering all (or almost all) existing code-paths in the unit of code under
test: this includes exceptional cases, odd inputs, rare combinations.
Are sometimes written prior to the code being tested (see TDD)
Should be able to run in isolation, from within the IDE, or command-line
Should be able to run in parallel
Should not require any heavy resources in order to run (so they can run quickly,
and in isolation) - hence no IOC frameworks (Spring etc.), no web-servers, no
databases, no access to external resources
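The isolation rule above can be sketched without any framework at all: depend on an interface and inject a hand-written fake in place of the real database. The Database/Writer names echo the examples later in this deck, simplified here (plain strings instead of a Result class); all names are illustrative, not a real API.

```java
// A minimal, framework-free sketch of an isolated unit test: no Spring, no
// web-server, no real database - just a hand-written fake that records calls.
// Database/Writer/FakeDatabase are illustrative names, not a real API.
import java.util.ArrayList;
import java.util.List;

interface Database {
    void insert(String value);
}

class Writer {
    private final Database db;
    Writer(Database db) { this.db = db; }
    void writeToDb(String result) { db.insert(result); }
}

class FakeDatabase implements Database {
    final List<String> inserted = new ArrayList<>();
    public void insert(String value) { inserted.add(value); }
}

public class WriterUnitTest {
    public static void main(String[] args) {
        // setup: inject the fake - the test runs in milliseconds, in isolation
        FakeDatabase db = new FakeDatabase();
        Writer writer = new Writer(db);
        // run
        writer.writeToDb("yo!");
        // verify: the fake recorded exactly one insert with the right value
        if (!db.inserted.equals(List.of("yo!")))
            throw new AssertionError("expected exactly one insert of \"yo!\"");
        System.out.println("ok");
    }
}
```

Because the fake is constructor-injected, nothing heavier than an ArrayList is needed, and the test can run in parallel with any other test.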
9. Integration tests
Usually require a more complex setup (database, files, large input, complex state)
Hence, are usually slower (up to several seconds)
Usually aim at testing the integration of various components and their ability to
function together.
Due to exponentially growing complexity, do not aim at covering all code paths,
but only critical / minimal code flows.
Tend to break more easily than unit tests and are more difficult to maintain.
Examples of such tests: DAO layer, DI wiring, “end-to-end” tests
10. System tests
Are the slowest and most complex of tests (tens of seconds and up)
Are resource intensive
Tend to break the most
Hence should be minimal - only what’s critical and can’t be tested otherwise
Examples: server start-up, basic responsiveness, ecosystem health-check
Unlike unit and integration tests, these tests usually have asynchronous code
flows: start the server(s); wait; run the test code; wait; verify; shut down the
server(s); wait
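The asynchronous start/wait/verify/shutdown flow above can be sketched with a polling helper. Server here is a stand-in for whatever process a real system test would boot, and awaitCondition is a hypothetical helper (real suites often use a library such as Awaitility for this).

```java
// Sketch of an asynchronous system-test flow: start; wait; test; shutdown; wait.
// Polling with a deadline (instead of a fixed sleep) keeps the test as fast as
// the system allows while tolerating slow environments.
import java.util.function.BooleanSupplier;

public class SystemTestSketch {
    // Poll until the condition holds or the timeout expires.
    static boolean awaitCondition(BooleanSupplier condition, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) return true;
            Thread.sleep(50); // poll interval
        }
        return condition.getAsBoolean();
    }

    public static void main(String[] args) throws InterruptedException {
        Server server = new Server();
        server.start();                                 // asynchronous: returns at once
        if (!awaitCondition(server::isUp, 30_000))      // wait
            throw new AssertionError("server never came up");
        // ... run the actual test code against the running server here ...
        server.shutdown();                              // asynchronous again
        if (!awaitCondition(() -> !server.isUp(), 10_000))
            throw new AssertionError("server never shut down");
        System.out.println("ok");
    }
}

// Minimal stand-in so the sketch compiles; a real system test would manage an
// external process instead.
class Server {
    private volatile boolean up = false;
    void start()    { new Thread(() -> up = true).start(); }
    void shutdown() { up = false; }
    boolean isUp()  { return up; }
}
```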
11. Environments
Our test code runs in very different environments:
Development / research - stand-alone machines. Focus is on speed, isolation
from external resources and easy setup. Main problem is dev frustration (“why
your tests so slow!? why I need to install 50 Perl packages on my machine to test
something written in Java???! W$#^%”)
Build machines - focus is on resource allocation and test isolation. Main problem
is overall group frustration (“why you break master build again?! why you crash
Jenkins machine??!”)
Labs, QA - focus is on automated deployment, switching between multiple
versions (“why can’t I deploy version X.Y.Z?! why no proper database
migration?!”)
12. Development cycle with tests
Developer checks out code.
Verifies current version / branch’s build is “green” in build system (e.g. Jenkins)
Creates a new feature branch.
Writes tests, writes code, runs all unit tests. Repeat.
Reviews her changes locally and commits code to feature branch.
Commit triggers a build, running most unit and integration tests.
Commit gets reviewed by peers after test build is green.
Code gets merged to master/trunk - triggering a new test build upon merge.
Version is tagged and published to an artifact repository (e.g. Artifactory).
Version is deployed to staging - still more tests: performance, stress, manual.
Version is deployed to production - health checks are run, possibly triggering a
“roll-back”.
13. Test structure
Almost all tests are composed of three parts:
1. Setup: instantiating the objects needed for the test - whether real objects,
stubs or mocks; starting and configuring any resources (such as database
entries, web services); etc.
2. Running the actual code under test.
3. Verifying the result is as expected. Verification includes checking the actual
results returned from the code run; verifying any “side effects” of the code (entries
written to the database, messages in logs etc.); when using mocks, verification might
include checking that certain methods were or were not called by the code
under test.
14. Test structure
Unit test example (Java, JUnit + Mockito):
@Test
void writerPassesResultToDb() {
    // setup
    Database db = mock(Database.class);
    Writer writer = new Writer(db);
    // run
    writer.writeToDb(new Result("yo!"));
    // verify
    verify(db).insert("yo!");
}
15. Test structure
Integration test example:
@Test
void writerPassesResultToDb() {
    // setup
    Database db = new Database();               // slow!
    Utils.setupDb(db);                          // slow!
    Writer writer = new Writer(db);
    // run
    writer.writeToDb(new Result("yo!"));        // slow!
    // verify
    Utils.verifyEntry(db, "some-table", "yo!"); // slow!
    // cleanup
    Utils.cleanupDb(db);                        // slow!
}
17. Testing principles
Tests should be independent of each other - no assumption as to the order with
which tests are executed should be made (and indeed, many times this order is
unpredictable). Execution order can of course be controlled within each test, by
using before-each, before-all, after-each and after-all methods for setup and
cleanup
Tests should clean up after themselves, leaving the system (global memory, file
system) in exactly the state it was in prior to the test being run. Hence we need to
be wary of: global variables being modified; data persisted to a database or to disk
(always use temporary tables and files)
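The cleanup principle above can be sketched with a temporary file: create a unique resource in setup and delete it in a finally block, so the test leaves no trace even when it fails. Plain Java here for self-containment; JUnit’s @BeforeEach/@AfterEach (or @TempDir) express the same pattern declaratively.

```java
// Sketch: a test that cleans up after itself. A unique temp file (never a
// fixed shared path) means parallel or repeated runs cannot collide, and the
// finally block guarantees cleanup even when verification fails.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CleanupTest {
    public static void main(String[] args) throws IOException {
        // setup
        Path tmp = Files.createTempFile("cleanup-test", ".dat");
        try {
            // run
            Files.writeString(tmp, "yo!");
            // verify
            if (!Files.readString(tmp).equals("yo!"))
                throw new AssertionError("round-trip failed");
        } finally {
            // cleanup: leave the file system exactly as it was
            Files.deleteIfExists(tmp);
        }
        System.out.println("ok");
    }
}
```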
18. Testing principles
We test our code, not others’ code. If our code uses a database server, we don’t
test that the database knows how to update rows - we assume it works. We test
that our code does what it’s expected to do.
Tests should be very descriptive: what’s being tested, what are the assumptions of
the test.
Tests should not overlap each other - duplicate tests mean a greater
maintenance overhead with no additional gain.
Well written tests serve as the best form of documentation for the detailed
behavior of the code under test.
19. Testing principles
Tests should normally focus on visible behavior. We’re usually interested in the
“what”, and only less often in the “how”.
Hence, tests are usually targeted at “interfaces”, not “implementations”.
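The “interfaces, not implementations” point above can be illustrated with a deliberately tiny example: the test pins down the visible contract (the output is sorted), so swapping the algorithm inside sorted() would not break it. All names here are hypothetical.

```java
// Sketch: test the "what" (the contract: output is sorted), not the "how"
// (the algorithm). The assertion survives any reimplementation of sorted().
import java.util.Arrays;

public class BehaviorTest {
    // Implementation detail - could be quicksort, mergesort, anything.
    static int[] sorted(int[] in) {
        int[] out = in.clone();
        Arrays.sort(out);
        return out;
    }

    public static void main(String[] args) {
        // verify the visible behavior only
        int[] result = sorted(new int[] {3, 1, 2});
        if (!Arrays.equals(result, new int[] {1, 2, 3}))
            throw new AssertionError("contract violated: output not sorted");
        System.out.println("ok");
    }
}
```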
A broken test should easily point to where the problem is.
Understanding code is difficult enough - we need to try to keep testing code
simple: setup, run, verify.
Tests are not “cost-free”, they have “drag” - they add another layer of code that
needs to be maintained - hence they should be carefully devised and properly
maintained.
Tests are not a “silver bullet” - they can’t replace proper coding, careful code
reviews, thorough monitoring, general carefulness and responsibility.
20. Testing principles
Some things are inherently difficult to test with today’s technologies:
Asynchronous code
Extremely exceptional conditions (OOM, deadlocks)
Event-driven, non-deterministic code flows
Integrations of complex systems
UI visual layout
21. Testing principles
Other things are not worth the trouble of writing automated tests for:
Prototypes, proofs-of-concept and other forms of playing-around-in-the-dark.
Ad-hoc executions.
Code that needs to be “fiddled” into place.
Problem #1: sometimes our prototypes become our products…
Problem #2: what might at first seem like a one-timer, might turn out to be a
repeated task.
Solution #1: think hard beforehand.
Solution #2: be prepared to write tests “ex post” (after the fact).
22. Frameworks
Java - JUnit
Scala - ScalaTest
Python - unittest (PyUnit)
C - Check
Perl - TAP (Test::More)
MATLAB - built-in unit testing framework (as of R2013a)
24. Final Thoughts
In 99.99% of the situations, not writing tests is a very poor decision.
It’s usually made by engineers who either don’t know better or somehow think
battle-proven methodologies don’t apply to them.
“Tight deadlines” are probably the worst excuse ever given to not writing tests:
testing makes development faster, not slower.
Introducing tests to “legacy code” is much harder than writing code with tests in
the first place - code without tests is a significant “technical debt”.
Mastering automated testing takes time - it might seem rather difficult at first.
The right time to start is now!