One day workshop aimed at giving attendees an overview of testing in an agile environment and an understanding of test automation within a development team.
3. Focus areas
● Requirements
● Defining acceptance tests
● Implementing automation
● Integrating into the development team
4. Why test
● Does the system do what we expect?
● Have changes affected previous behaviour?
● How does the system work?
● Is this system ready to be used?
7. Gathering Requirements
● Good testing starts with good requirements
● Capture intent and not solution
● Think about non-functional requirements
● Make sure that requirements are clear
8. Example: Intent vs. implementation

Requirement: Secure expensive equipment
Acceptance criteria: All equipment must have a Kensington lock connected
→ Test passes

Requirement: Secure expensive equipment
Acceptance criteria: All equipment must be physically secured to desks
→ Test fails
9. Example: Simple Requirement
Add a new LinkedIn contact
As a user
In order to keep in touch with a previous colleague
I want to be able to add a new contact
Add a new LinkedIn contact
Users who wish to keep in touch with previous colleagues should be able to add another user as a contact so that they can benefit from updates and posts from that user.
Some assumptions…
User account creation, authentication, updates and search functionality already exist.
Connecting with former colleagues is a unique journey with its own flow compared with connecting with friends or strangers.
10. Practical: Create a requirement
● Choose a product owner
● Pick a well understood web based system as a basis
● Identify a feature (new or existing) to work against
● Describe the intent of the feature with a simple narrative
11. Example: Acceptance Criteria
Add a new LinkedIn contact
Users who wish to keep in touch with previous colleagues should be able to add another user as a contact so that they can benefit from updates and posts from that user.
Acceptance Criteria
● User should be able to ask to connect with another user when viewing a profile
● Connection should be made only after the other user has agreed
● User should see the connection in their contact list if successful
14. Identifying scenarios
● Start by defining the use case
● Identify additional scenarios by considering different entry points
● Keep test scenarios small
● Don’t try to draw 1-1 relationships between acceptance tests and criteria
● Focus on posing questions rather than answering them
15. Example: Scenario list
Scenario: Connection request is approved
Scenario: Connection request is rejected
Scenario: User attempts to request again after rejection
Add a new LinkedIn contact
Users who wish to keep in touch with previous colleagues should be able to add another user as a contact so that they can benefit from updates and posts from that user.
Acceptance Criteria
● User should be able to ask to connect with another user when viewing a profile
● Connection should be made only after the other user has agreed
● User should see the connection in their contact list if successful
16. Practical: Defining your scenarios
● Identify the “happy path” scenarios at a high level
● Extend the scenarios with negative test cases and “what if’s”
18. Introducing Behaviour Driven Development
Introduced by Dan North in 2006
Devised as a way of getting the benefits of TDD earlier in the process
Natural language layer on top of testing
Links directly into test automation
Helps when working with non technical stakeholders
Can produce “Living Documentation”
Maps standard test structure to “keywords”
19. Using Behaviour Driven Development
Preconditions, actions and assertions are all expressed as sentences with a keyword
Each sentence in a test is a “step”
Multiple steps come together to form a “scenario”
Multiple scenarios are grouped into a “feature”
A feature is not a story; multiple stories can go into a feature
Feature: Introducing BDD
Scenario: Explain BDD
Given a room full of people
When I explain BDD
Then attendees should be able to write their own BDD scenario
20. Example: Test Steps
Scenario: Connection request is approved
Given the following new users:
|Bob|
|Jeff|
And I am authenticated as user ‘Bob’
When I request to connect with ‘Jeff’
And ‘Jeff’ accepts my connection request
Then I should see ‘Jeff’ in my contact list

Scenario: User attempts to request after rejection
Given the following new users:
|Bob|
|Jeff|
And I am authenticated as user ‘Bob’
And ‘Jeff’ has previously rejected a connection request from me
When I request to connect with ‘Jeff’
Then I should be informed that I am not permitted to send another request to ‘Jeff’
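Scenarios like these are bound to automation code through step definitions. The sketch below shows that idea in Python; the local `step` decorator and the in-memory `ConnectionService` are illustrative stand-ins only. In a real project a BDD tool such as behave or pytest-bdd would supply the step registry and match sentences to functions.

```python
# Hypothetical step definitions for the "Connection request is approved"
# scenario. Everything here is self-contained for illustration.

STEPS = {}

def step(sentence):
    """Register a step implementation against its sentence (stand-in for
    the decorators a real BDD tool provides)."""
    def register(func):
        STEPS[sentence] = func
        return func
    return register

class ConnectionService:
    """Minimal in-memory stand-in for the system under test."""
    def __init__(self):
        self.contacts = {}   # user -> set of accepted contacts
        self.pending = {}    # user -> set of outgoing requests

    def add_user(self, name):
        self.contacts.setdefault(name, set())
        self.pending.setdefault(name, set())

    def request(self, sender, receiver):
        self.pending[sender].add(receiver)

    def accept(self, receiver, sender):
        self.pending[sender].discard(receiver)
        self.contacts[sender].add(receiver)
        self.contacts[receiver].add(sender)

service = ConnectionService()
current_user = None

@step("the following new users")
def given_new_users(names):
    for name in names:
        service.add_user(name)

@step("I am authenticated as user")
def given_authenticated(name):
    global current_user
    current_user = name

@step("I request to connect with")
def when_request(name):
    service.request(current_user, name)

@step("accepts my connection request")
def when_accepted(name):
    service.accept(name, current_user)

@step("I should see in my contact list")
def then_in_contacts(name):
    assert name in service.contacts[current_user]

# Walking through the approved-connection scenario step by step:
STEPS["the following new users"](["Bob", "Jeff"])
STEPS["I am authenticated as user"]("Bob")
STEPS["I request to connect with"]("Jeff")
STEPS["accepts my connection request"]("Jeff")
STEPS["I should see in my contact list"]("Jeff")
```

Note how each Gherkin sentence maps to exactly one function; the natural-language layer stays readable while the implementation detail lives in the step bodies.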
21. Practical: Capturing test steps
Choose a “happy path” scenario to expand
Describe the steps using BDD structure
Describe the preconditions as briefly as possible
Describe the actions a user will perform
Describe the assertions used to validate the functionality
22. Acceptance method
Sometimes useful to define how a story will be verified
Some functional outcomes may not be easily verifiable in the standard testing approach
Gives clarity when team responsibilities are unclear
Enables a pragmatic approach to quality assurance
23. Test automation
Positives:
● Faster feedback
● Consistent results
● Supports CI / CD
● Integrates into Agile development
● Can be easily reused and extended

Negatives:
● Complexity
● Skills gap
● Lead time
● Cost
25. Levels of testing
● Low level unit tests
● Service / integration tests
● User interface tests
○ Note, not necessarily “Graphical User Interface”
● Speed and value implications at each layer
www.martinfowler.com
26. Building a test framework
● Collection of tools to support a number of needs
○ Way of defining test cases
○ Tools for interacting with the system under test
○ Tools for making assertions
○ Mechanism for reporting of results
● Structured for re-use and maintenance
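As a sketch of how those needs fit together, here is a deliberately minimal framework skeleton. All names (`Driver`, `Reporter`, `run_tests`) are invented for illustration; in a real framework the interaction layer would wrap something like WebDriver or an HTTP client, and reporting would target a standard format such as JUnit XML.

```python
# Illustrative-only skeleton covering the four framework needs:
# test definition, interaction, assertion, reporting.
import json

class Driver:
    """Interaction layer: wraps whatever talks to the system under
    test. Stubbed out here so the example runs standalone."""
    def visit(self, page):
        pass
    def read(self, element):
        return "stub-value"

class Reporter:
    """Reporting layer: collects results for later publication."""
    def __init__(self):
        self.results = []
    def record(self, name, passed):
        self.results.append({"test": name, "passed": passed})
    def publish(self):
        return json.dumps(self.results)

def run_tests(tests, driver, reporter):
    """Runner: executes each test case and records the outcome."""
    for name, test in tests:
        try:
            test(driver)            # interaction + assertions
            reporter.record(name, True)
        except AssertionError:
            reporter.record(name, False)

# A test case is just a callable that uses the driver and plain asserts.
def profile_shows_stub(driver):
    driver.visit("/profile")
    assert driver.read("name") == "stub-value"

reporter = Reporter()
run_tests([("profile_shows_stub", profile_shows_stub)], Driver(), reporter)
```

Keeping the driver, assertions and reporting separate is what makes the framework structured for re-use: swapping the interaction layer does not touch the test cases or the reports.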
28. Growing over time
Start with just the basics
A test framework should evolve in line with the codebase
Capabilities are added to the framework only when required
Don’t be scared to refactor
29. Example: Growth over time
Diagram: test scenarios (Authenticate as user, Authenticate as admin, Add a new contact) built from reusable framework capabilities (Log In, User Dashboard, Admin Dashboard, User Search, Profile View, Request Connection).
30. Managing state - why
All tests should be consistently repeatable
Tests should be able to run in parallel
It should not be possible for tests to be affected by other test runs or manual testing
Tests should be focused on just the area relevant to the scenario
31. Managing state - how
Place the system in an expected state
Maintain test data per scenario
Destroy test data
Expose all capabilities in a programmatic way
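The four points above can be sketched as code. Everything here is hypothetical: `FakeApi` stands in for whatever programmatic interface the real system exposes for data setup, and the unique suffix per scenario is one common way to keep parallel runs from colliding.

```python
# Sketch of per-scenario state management: create data in setup,
# tag it with a unique run id, destroy it in teardown.
import uuid

class FakeApi:
    """Stand-in for the system's programmatic data-setup interface."""
    def __init__(self):
        self.users = {}
    def create_user(self, name):
        self.users[name] = {"name": name}
        return name
    def delete_user(self, name):
        self.users.pop(name, None)

class ScenarioState:
    """Owns all data for one scenario, so tests stay repeatable and
    cannot be affected by other runs on the same environment."""
    def __init__(self, api):
        self.api = api
        self.run_id = uuid.uuid4().hex[:8]   # unique per scenario
        self.created = []

    def new_user(self, base_name):
        name = f"{base_name}-{self.run_id}"  # never clashes with other runs
        self.api.create_user(name)
        self.created.append(name)
        return name

    def teardown(self):
        for name in self.created:
            self.api.delete_user(name)
        self.created.clear()

api = FakeApi()
state = ScenarioState(api)
bob = state.new_user("bob")
assert bob in api.users        # data exists for this scenario...
state.teardown()
assert bob not in api.users    # ...and is destroyed afterwards
```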
33. Single team
● Testing should always be an aspect of the development team
● Automated tests should be written at the same time the code is
implemented
● Quality control must become part of the team definition of done
● All team members should be encouraged to engage with tests
34. Verifiable requirements
● Testing should be involved at every stage of the development process
● Easier communication with demonstrable requirements
● Capturing test scenarios helps stakeholders streamline what they want
● Scope clarity enables better estimation
35. Execution
Integrate automation into the continuous integration pipeline
Run against production like test environments
Use tests as deployment gates
Publish reports and make available
36. Example: CI / CD pipeline
Diagram: New Requirement → Developer commits code change → Build Code & Run Unit Tests → Deploy to QA Environment → Run Acceptance Tests → Deploy to UAT Environment (manual acceptance / look and feel testing) → Deploy to Perf Test Environment → Run Performance Tests → Deploy to Production Environment.
If tests fail at any stage, the change is rejected; if tests pass, the change is accepted.
37. Defect management
● Review test reports and raise defects
● Ensure that defects make their way back into the workstream
● Manage tests based on expected outcome
● Use automation to speed up defect resolution
● Use defects to improve automation coverage
38. Further reading
Print Resources
Agile Testing
Lisa Crispin, Janet Gregory
Test-focused agile development, testing in a development team
The Cucumber Book
Matt Wynne, Aslak Hellesoy
The Cucumber BDD tool and the ‘Gherkin’ DSL
Web Resources
https://dannorth.net/
Dan North
Origins of BDD, team process
https://gojko.net/
Gojko Adzic
Requirements gathering
http://sauceio.com/
Sauce Labs
Testing process, automation tools
Assuming everyone has a basic understanding of Agile.
Check the room:
Split into teams of ~4 people with each team having identified one product owner
We’ll focus on these topics throughout the workshop.
Does the system do what we expect?
We need to verify that the system, once implemented, does what was originally intended.
Have changes affected previous behaviour?
If we have added some new functionality we can verify that this works but also need to be sure that previously delivered functionality is still working.
How does the system work?
It’s amazing how many systems exist where functional areas are completely misunderstood. This stems from a few issues:
Requirements were not properly validated when it was implemented. It doesn’t necessarily do what it was intended to do.
The functionality has been tweaked / evolved to meet a different use case than originally intended.
It is a legacy system used by only a small number of users.
Comprehensive test suites offer a source of truth and demonstrate how the system works and the use cases it is intended for.
Is the system ready for use?
Controversial topic as it relates to “confidence” which is not black and white. In general though business stakeholders tend to view test coverage as a guarantee that functionality is at least roughly correct.
This can be a false sense of security if the tests themselves are not validated or the assertions are poor.
Agile Testing
Concept that encapsulates all of the terms. Agile testing is a collection of ideas with the goal of providing constant validation and fast feedback alongside the iterative improvement delivered by an agile team.
===
Continuous Integration
A CI pipeline allows for automation of the whole SDLC. When a code change is pushed, it is built and deployed automatically. The change can then be tested as part of the pipeline and any problems flagged immediately (depending on speed of tests) to the team. You might also encounter the term Continuous Delivery which is where the change is deployed into production if the tests pass.
Test Automation
The concept of automation is key to delivering on the goals of agile testing. In order to keep up with regular changes to the system and allow for ideas like CI and CD tests need to run regularly as changes are made, rather than waiting for a testing phase.
This covers many levels of testing (discussed later). Unit, integration and acceptance testing all needs to be part of the pipeline.
===
Acceptance / Functional Testing - Normally used interchangeably. General idea is tests which execute the functionality of the system as a full stack - ie, all components / integration points / interaction methods. These are usually example led and should be the smallest sets of automated tests as they tend to be slower and more brittle.
UI Testing
Often misunderstood to be the same as acceptance testing. While often you would test the UI as part of acceptance tests as it represents the “full stack”, an acceptance test could for example be executed at the API level. The UI can also be tested independently with mocks and stubs.
===
Regression Testing
The idea of constantly checking for damage to existing functionality when making changes. This concept exists in any type of testing, but is effectively something you get for free with test automation.
===
Test Driven Development
Simple concept that’s difficult to master: the idea of having a test which fails before starting work on any code change. Usually applies to unit testing (development) but as a general idea also extends into acceptance testing.
Behaviour Driven Development
Put simply, this is just a natural language layer over any level of test automation. BDD aims to wrap technical tests in language which can be understood by business stakeholders so that test pass / failure reports can be shared more easily. Like TDD it includes the idea of capturing the outcome before starting work. Often misunderstood to be an integral part of Acceptance Testing.
Good testing starts with good requirements
We can’t properly test anything whose intended behaviour is not clearly understood up front. By the same token, how can a developer implement anything correctly if this is not clear?
Requirements should clearly show what is expected and how it will be validated.
Capture intent and not solution
A requirement should always focus on the intent and not the expected implementation by describing the expected outcome. If the implementation / approach changes, this should normally not impact the underlying requirement. Many teams create “technical stories” which in my experience are very rarely required.
Problems with proposed implementation MAY impact the requirement (e.g. if what was intended is not viable / too expensive).
Think about non-functional requirements
Many agile teams focus entirely on the functional outcome. Proper testing / acceptance should also reflect any non-functional requirements associated with the functionality.
Sometimes non-functionals can be considered cross-cutting concerns that apply to everything and don’t need to be explicitly stated (e.g. all web pages must load within 3 seconds) but others may be specific (e.g. functionality must be restricted to users with a certain role).
Any test approach needs to verify that new functionality conforms to the expected behaviour and meets any security and performance expectations.
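As a hedged illustration, a test can assert a non-functional budget (like the 3-second page load example) alongside the functional outcome. `fetch_page` below is a made-up stand-in for a real HTTP call; only the shape of the test matters.

```python
# Sketch: checking a non-functional requirement next to the functional one.
import time

PAGE_LOAD_BUDGET_SECONDS = 3.0   # from the cross-cutting requirement

def fetch_page(path):
    """Hypothetical stand-in for a real HTTP request."""
    time.sleep(0.01)             # simulate a fast response
    return {"status": 200, "path": path}

def test_profile_page_meets_load_budget():
    start = time.perf_counter()
    response = fetch_page("/profile")
    elapsed = time.perf_counter() - start
    assert response["status"] == 200            # functional check
    assert elapsed < PAGE_LOAD_BUDGET_SECONDS   # non-functional check
    return elapsed

elapsed = test_profile_page_meets_load_budget()
```

A single crude timing like this is only a smoke check; real performance expectations belong in dedicated performance tests, but wiring the budget into acceptance tests catches gross regressions early.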
Make sure that requirements are clear
A good set of requirements should be immediately understood by anyone reading them. This means avoiding complex language, business speak and abbreviations as well as avoiding too much context. It’s often useful to describe the scope of the story (e.g. what is not included) wherever there is uncertainty.
Encountered on a client site and taken as a good visual aid to demonstrate why requirements should be captured against intent not implementation.
Choose a product owner
Someone needs to be the “source of truth” and will answer any questions about the expected functionality. This doesn’t have to be accurate; the answers can be made up as long as they are consistent.
Pick a well understood web based system to test
We’ll use a real world system and work through the process of identifying and implementing tests. Pick something where the domain is well understood to simplify this process.
e.g. Amazon, Reddit, Google, Facebook
Identify a feature
You can either choose an existing feature of your system or make up a new one.
Make sure it’s not too simple, but also not too complicated at this stage.
Describe the intent of the feature
Agree and capture a small amount of text which describes:
What we are trying to achieve at a high level
Who the functionality is intended for
Any context (previous / next steps) that applies
You can either write some freeform text or use the Agile standard “As a”, “In order to”, “I want to” structure
Capture high level statements describing expected outcome
Capture a simple list of things that the system should support once this requirement has been met.
Avoid any implementation assumptions.
What can the user do which they couldn’t do before?
Add acceptance criteria around restrictions / limitations
What restrictions (e.g. permissions) should apply to the functionality? Are there situations where a given acceptance criterion would not be applicable?
Describe the use case first
A test scenario should always start with the most common / expected use case as the base test.
Identify additional scenarios by considering different entry points
Think about different types of user or different ways they can interact to identify additional scenarios
e.g. logging in has a different behaviour for an unregistered user vs a registered user
Keep test scenarios small
Each scenario should verify one piece of behaviour within a specific context.
This ensures that in the event of a failure, it is easy to understand the problem. It also avoids the risk of missing problems at other points in the system (where a test fails early and therefore doesn’t validate later functionality).
Don’t try to draw 1-1 relationships between acceptance tests and criteria
An acceptance criterion can have one or more tests which validate it. Likewise, a single test may validate multiple acceptance criteria.
Refer back and make sure that everything is covered without trying to form a relationship.
Test from the point of view of the user
All test scenarios should focus on user interaction. Any scenario should describe the situation in terms of how the user interacts with the system.
Focus on posing questions rather than answering them
When identifying scenarios, the important thing is to have a list of “what if” situations. Capturing the expected behaviour forms the later evolution of the scenario.
Identify the “happy path” scenarios at a high level
Start off by capturing the “happy path” scenarios which describe how the user is expected to interact with the functionality.
Each scenario should at this point be a simple sentence describing the situation.
If a slightly different interaction with the functional area will result in different behaviour, this should be captured as a separate scenario.
Focus on identifying examples where behaviour will be different, not every possible variation of input.
Refer to acceptance criteria to ensure that you have scenarios which will be able to validate all the required behaviour.
Extend the scenarios with negative test cases
A good way to capture scenarios once you move beyond the obvious ones is by asking “what if” and trying to think of areas where users could encounter problems.
Think about things like security / permissions as a good start point
Consider what invalid inputs a user could provide
Preconditions are written in the first person for clarity.
Choose a “happy path” scenario to expand
Select the scenario which represents the majority of users of the functionality
Describe the steps using BDD structure
In reality acceptance tests can be described in any way you like, but BDD is widely used and convenient so we’ll use it in this case.
Describe the preconditions as briefly as possible
You can often summarise preconditions in one or two lines, written in the past tense.
Describe the actions a user will perform
Keep brief and avoid hard-coding data. Test data management is a large topic in its own right.
Describe the assertions used to validate the functionality
Everything discussed so far has a strong bias towards automation, but we’ll have a look at some positives and negatives to keep in mind. This does sometimes drive the “acceptance method” associated with a story.
Positives
Faster feedback
Tests can run as soon as something changes and catch regressions immediately.
Consistent results
Removes any human error from the test process so results are consistent.
Worth noting that this is useful even when automated tests have a problem, as it makes it easier to identify the issue.
Supports CI / CD
Continuous integration / delivery is a key part of agile development and just doesn’t work without test automation.
Integrates into Agile development
As with CI / CD, it’s almost impossible for a team to be Agile without automation. There’s no time in a sprint to do full regression test runs every time a developer finishes work on a story, and a definition of done should always include some certainty that nothing else has been affected.
The traditional model of testing at the end means that no one really benefits from agile.
Can be easily reused and extended
A good automation framework can be used to do many things, like generating data for UAT environments or as the basis for performance testing. Will cover this more later when talking about managing test state.
Standardised reporting
Reports are automatically generated and conform to a standard format. Failures are clear and obvious, as well as it being clear when the failure first occurred due to test run history.
Negatives
Complexity
Setting up an automation framework and integrating it into a CI pipeline is far more complex than writing and executing manual test cases.
Skills gap
Following from the increased complexity, there will almost certainly be a skills gap where existing test personnel do not have the technical skills to get started.
Lead time
Due to the skills gap there’s a lead time associated with getting automation in place. Testers need to be trained up and developers need to spend time initially setting up the basic framework.
It’s important that shortcuts aren’t taken here, as a good set of automated tests still needs the “tester mindset” and not just the tech skills.
Cost
Automation is expensive. People with the right skills cost more to begin with, but it goes further than that.
You’ll normally need a specific automation environment, which can be very expensive depending on your infrastructure. It should be fairly stable (so not dev) and not be impacted by users interacting with it. This isn’t always the case though; good test data management can remove this issue entirely.
Low level unit tests
Developer level tests which verify that units of code respond correctly to inputs.
Run at code build time / after code commit.
Fastest set of tests, which prevent obvious problems from making it into test environments. External dependencies mocked / stubbed.
Service / integration tests
Verify multiple components (internal or external) communicating to ensure things like API contracts are correct.
Slower than unit tests due to more set-up and potential dependencies.
User interface tests
Highest level of test, against whatever the interface to the system is. Could be a GUI, or an API, or a CLI.
Slowest set of tests to execute as the entire system needs to be running. GUI tests in particular add an extra time cost due to the speed of tools like WebDriver.
Speed and value implications at each layer
With reference to the diagram:
UI tests are the slowest to run and also the most brittle and therefore require the most maintenance.
While these are arguably the most valuable as they test real world use cases, the maintenance has a cost as well as the speed.
Need to find the right balance and avoid the “ice cream cone” where all tests are slow and brittle.
Collection of tools to support a number of needs
This applies to any kind of test framework.
When considering ways of interacting with the system under test you need to consider not only things like UI interaction, but also mechanisms for setting system state. This requires a certain amount of input from developers to make a system testable by design.
Structured for re-use and maintenance
Extract out common bits of code to separate classes
Make any set up / interaction code as separate as possible so it can be updated in line with changes.
Any environment specific config should be externalised.
Should be held to similar standards as development code
Coding standards, code reviews, branching strategies.
Ideally uses the same development language as the system under test
With an acceptance test framework you can theoretically use any language you like.
It’s always best to match what the devs are using to improve cross skilling and encourage engagement from the whole team.
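Externalising environment-specific config might look something like this minimal sketch. The variable names (`TEST_ENV`, `TEST_BASE_URL`) and URLs are purely illustrative assumptions; the point is that the same tests target QA, UAT or any other environment without code changes.

```python
# Sketch: environment-specific config externalised to environment
# variables rather than hard-coded in the tests.
import os

def load_config(env=None):
    # Which environment to target comes from outside the code.
    env = env or os.environ.get("TEST_ENV", "qa")
    defaults = {
        "qa":  {"base_url": "https://qa.example.test",  "timeout": 10},
        "uat": {"base_url": "https://uat.example.test", "timeout": 30},
    }
    config = dict(defaults.get(env, defaults["qa"]))
    # Individual values can still be overridden per run.
    config["base_url"] = os.environ.get("TEST_BASE_URL", config["base_url"])
    return config

config = load_config()
```

In practice this often lives in properties/YAML files per environment with the same override mechanism; either way, nothing environment-specific is baked into the test code itself.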
Start with just the basics
Test definition, interaction, assertion, reporting.
A test framework should evolve in line with the codebase
Start creating the framework as soon as development starts.
If coming in later, get a set of tests around the core functionality only then add new tests in line with development.
Use defects as a way of identifying the areas where automation is lacking.
Capabilities are added to the framework only when required
Don’t try and make a perfect framework right away. Invest only as needed.
Don’t be scared to refactor
As with any piece of code that grows over time, it will become unwieldy.
As developers stop to handle technical debt and tidy up code, the same should be done with the test codebase.
This is one of the most important problems to solve early in the life of a test framework.
All tests should be consistently repeatable
Inconsistent test failures are the most frustrating thing in automation. If a test fails, it should fail every time until the underlying cause is fixed.
Tests should be able to run in parallel
At a certain size, the only way to keep test run times down is to run in parallel.
If tests are chained together or use common test data this becomes impossible.
It should not be possible for tests to be affected by other test runs or manual testing
Data for a given test should not be impacted by another test running at the same time
There should also be no impact caused by manual testing on the same environment. Although multiple test environments are recommended, teams often don’t have them.
Tests should be focused on just the area relevant to the scenario
Ideally you want your test scenarios to be very clear in what they are testing. That means avoiding lots of test code which sets up data by laboriously going through previous steps.
Back to making a system testable by design.
Place the system in an expected state
Inject users and any other entities which exist in the system
Trigger workflow transitions to move those entities into the desired state
Maintain test data per scenario
Partially a requirement of the framework itself, but important to call out.
Using the ability to place the system in an expected state, each test should be able to maintain its own set of test data and its own view of the system state.
Destroy test data
Depends on the system but usually a good idea to clean up.
Expose all capabilities in a programmatic way
Every aspect of data setup should be available through code with no manual intervention.
Developers should be ensuring that anything they develop can be interacted with by the tests in some way.
APIs are always best as they allow a level of abstraction - direct data access is a much higher maintenance overhead.
Testing should always be an aspect of the development team
No more “dev done” and throw over the fence. Testing needs to become an integral part of the delivery approach.
Tests should be written at the same time the code is implemented
It is usually possible to write 80% of an automated test against the same requirement that the developer is working from
Automated tests should be finalised as soon as a piece of functionality is available to test
Quality control must become part of the team definition of done
Re-iterating point 1, the team must understand that checking for regressions and verifying that the implemented code meets the requirements is part of their work.
Nothing can be considered to be complete until this is done, as without this verification what they deliver has no value.
All team members should be encouraged to engage with tests
Anyone can implement tests. Someone with a testing mindset should be involved in defining the scenarios, but implementation can be shared.
If someone breaks an automated test at any level, they should fix it. They are a team resource.
Testing should be involved at every stage of the development process
Define acceptance tests as part of gathering requirements
Evolve tests in line with changes (unlike with traditional test plans) by making tests part of the requirement
Validate at the end against the same tests
Easier communication with demonstrable requirements
A test (especially BDD) gives a clear example of how functionality is expected to work. This improves developer understanding and helps with implementation.
Capturing test scenarios helps stakeholders streamline what they want
Capturing the tests by asking “what if” often results in use cases not previously considered which leads to better requirements.
Scope clarity enables better estimation
Using the tests as part of a team definition of done makes the scope very clear. Any behaviour not described in a test can be considered additional, separate work (at the discretion of the team and product owner).
At this point you have a set of tests which can be run from a local machine. Many teams stop there, but that only delivers a fraction of the value.
Integrate automation into the continuous integration pipeline
Whatever CI tool the dev team uses should include jobs for running the tests. These should automatically trigger at various points.
Run against production like test environments
Normally where DevOps comes into play, code which passes low level tests should be deployed into a test environment.
Any test environment should be “production like” in structure if not in scale. I.e. any external dependencies should be available, all services running without mocks or stubs.
Use tests as deployment gates
One of the major reasons to have automation in your agile testing approach is to have fast regression testing. This means making the tests block code changes from proceeding through the CI pipeline in the case of failures.
If you reach a point of high confidence in your automation, it’s then a short step to Continuous Deployment (CD) by allowing the pipeline to deploy all the way to production.
Publish reports and make available
Make the reports visible! Especially if using BDD, you need to make these available to non-technical stakeholders so they can see the status of the system.
Reports should not just show failures, but also what’s currently working and how.
Tests are the best source of truth on the behaviour of the system (as mentioned earlier under the heading “living documentation”).
Review test reports and raise defects
Automated test runs still need to be monitored and defects raised when failures occur.
Ensure that defects make their way back into the workstream
When a test failure reveals a defect, these need to be properly brought back into the dev team workstream.
Depends on how the team wants to work, but in many cases failures should be addressed immediately by the team member responsible for introducing the regression.
If it’s particularly complex or low priority, the defect should make its way into the backlog for inclusion later (e.g. in later sprints if working in a scrum team).
Manage tests based on expected outcome
If a test is expected to fail due to an existing defect, this should be handled in the tests to avoid failing test runs being ignored.
Normally the easiest way to do this is ignoring a test related to a defect, but requires careful process management.
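One way to keep the link between a known-failing test and its defect is to mark the test with the defect reference. pytest users would typically reach for `pytest.mark.xfail(reason="DEF-123")`; the self-contained sketch below shows the same idea without any framework, with the defect id `DEF-123` being purely illustrative.

```python
# Sketch: marking a test as an expected failure tied to an open defect,
# so the run stays green but the defect reference is preserved.
import functools

def expected_failure(defect_ref):
    def wrap(test):
        @functools.wraps(test)
        def run():
            try:
                test()
            except AssertionError:
                return ("xfail", defect_ref)   # known defect, not a new failure
            return ("xpass", defect_ref)       # defect may be fixed: revisit!
        return run
    return wrap

@expected_failure("DEF-123")
def test_reconnect_after_rejection():
    # Behaviour currently broken, covered by hypothetical defect DEF-123.
    assert False, "user can re-request immediately after rejection"

outcome, defect = test_reconnect_after_rejection()
```

The “xpass” branch matters as much as “xfail”: when a marked test unexpectedly passes, the defect may have been fixed and the marker should be removed, which is the careful process management mentioned above.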
Use automation to speed up defect resolution
Developers can use automated tests to quickly replicate an issue as well as verify themselves that it has been resolved.
Reduces “churn time” where defects go back and forth between team members for fix and validation.
Use defects to improve automation coverage
Many teams try to ignore or marginalise defects. It’s far better to get value from them.
Defects raised after manual testing are useful to highlight where automation is weak. With proper review the team can use this information to continuously improve coverage.