Agile teams incrementally deliver functionality based on user stories. In the sprint to deliver features, software qualities such as security, scalability, performance, and reliability are frequently overlooked. These characteristics often cut across many user stories, and trying to deal with certain system qualities late in the game can be difficult, causing major refactoring and upheaval of the system’s architecture. This churn isn’t inevitable, especially if you adopt a practice of identifying the characteristics key to your system’s success, writing quality scenarios and tests, and delivering on these capabilities at the opportune time. We will show how to write quality scenarios that emphasize architecture capabilities such as usability, security, performance, scalability, internationalization, availability, accessibility, and the like. This tutorial is hands-on: we present some examples and follow with an exercise that illustrates how you can look at a system, identify quality scenarios, and then write and test them.
Testing System Qualities Agile2012 by Rebecca Wirfs-Brock and Joseph Yoder
1. Testing System Qualities
Rebecca Wirfs-Brock
Joseph Yoder
Copyright 2012 Rebecca Wirfs-Brock, Joseph Yoder,
Wirfs-Brock Associates and The Refactory, Inc.
2. Introducing Rebecca
President, Wirfs-Brock Associates
Agile enthusiast (involved with experience
reports since the 1st agile conference; board
president of Agile Open Northwest)
First engineering job in Quality Assurance
Pattern enthusiast, author, and Hillside Board
Treasurer
Old design geek (author of two object design
books, inventor of Responsibility-Driven
Design, advocate of CRC cards, hot spot
cards, & other low-tech design tools, IEEE
Software design columnist)
Consults and trains top companies on agile
architecture, responsibility-driven design,
enterprise app design, agile use cases,
design storytelling, pragmatic testing
Runs marathons!!!
3. Introducing Joseph
Founder and Architect, The Refactory, Inc.
Pattern enthusiast, author and Hillside Board
President
Author of the Big Ball of Mud Pattern
Adaptive systems expert (programs adaptive
software, consults on adaptive
architectures, author of adaptive
architecture patterns, metadata maven,
website: adaptiveobjectmodel.com)
Agile enthusiast and practitioner
Business owner (leads a world class
development company)
Consults and trains top companies on design,
refactoring, pragmatic testing
Amateur photographer, motorcycle enthusiast,
enjoys dancing samba!!!
4. Some Agile Myths
• System Qualities can be
added at the last moment.
• We can easily adapt to changing
requirements (new requirements).
• You can change the system fast!!!
• Don’t worry about the architecture.
5. Most testers spend the majority of their time writing functional tests
…BUT THERE’S A LOT MORE TO TEST
THAN WHETHER YOUR SOFTWARE
WORKS AS ADVERTISED
6. Pragmatic Testing Issues
• What kinds of testing
should you focus on?
• Who writes them?
• Who runs them?
• When are they run?
• How are they run?
7. Testing System Qualities
Qualities we could consider…
• Usability
• Security
• Performance
• Scalability
• Internationalization
• Availability
• Flexibility
• Accessibility
• Location
• Regulation
9. Functional Tests vs. System Quality Tests
Functional:
– How do I …?
– Tests user stories work as advertised
• “As a reviewer I want to add a note to a chart”
• Compute the charge for an invoice
– Tests boundary conditions
• Can I add more than one note at the same place?
• Are excess charges computed correctly?
10. System Quality Tests
• How does the system handle…?
– system load …? number of add note
transactions/minute under normal load
– system support for…? simultaneously
updating charts
– usability…? ease of locating and selecting notes
• Tests that emphasize architecture capabilities
and tangible system characteristics
12. Specify Measurable Results
• Meter: An appropriate way to measure
  – Natural scale: response time in milliseconds
  – Constructed: a 1–10 ranking
  – Proxy: projecting throughput using sample data
• Scale: The values you expect
13. Example Performance Scenario
Source of Stimulus: Users
Stimulus: Initiate order transactions
Artifact: System
Environment: Normal conditions
Response: Order processed
Response Measure: Average latency of 2 seconds
“Users initiate 1,000 order transactions per
minute under normal operations; transactions
are processed with an average latency of 2
seconds.”
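A scenario stated this way translates directly into an automated check. A minimal sketch, with a stubbed `process_order` standing in for the real transaction (both the function and the load level are hypothetical, scaled down from the scenario's 1,000 transactions per minute):

```python
import statistics
import time

def process_order(order_id):
    """Stand-in for the real order-processing call (hypothetical)."""
    time.sleep(0.001)  # simulate a little work
    return order_id

def measure_average_latency(num_transactions=100):
    """Run transactions back to back and return average latency in seconds."""
    latencies = []
    for i in range(num_transactions):
        start = time.perf_counter()
        process_order(i)
        latencies.append(time.perf_counter() - start)
    return statistics.mean(latencies)

avg = measure_average_latency()
# The response measure from the scenario: average latency of 2 seconds.
assert avg < 2.0, f"average latency {avg:.3f}s exceeds the 2-second target"
print(f"average latency: {avg * 1000:.1f} ms")
```

A production version would drive the stated stimulus rate (1,000 transactions/minute) with a load-generation tool rather than a serial loop, but the response measure is checked the same way.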
14. Possible Performance Scenario Values
Source: External systems, users, components, databases
Stimulus: Periodic events, sporadic or random events (or a combination)
Artifact: The system’s services, data, or other resources
Environment: The state the system can be in: normal, overloaded, partial operation, emergency mode…
Response: Process the event or event sequence and possibly change the level of service
Response Measure: Time it takes to process the arriving events (latency or the deadline by which an event must be processed), the variation in this time, the number of events that can be processed within a particular time interval, or a characterization of events that cannot be processed (missed rate, data loss)
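The six-part template used in these slides can be captured as a plain record so that scenarios live next to the tests that verify them. A sketch (the class and field names are ours, derived from the table headings):

```python
from dataclasses import dataclass

@dataclass
class QualityScenario:
    """The six parts of a quality scenario, matching the template above."""
    source: str            # who or what generates the stimulus
    stimulus: str          # the event arriving at the system
    artifact: str          # the part of the system that is stimulated
    environment: str       # conditions under which the stimulus arrives
    response: str          # the activity the system undertakes
    response_measure: str  # how the response is measured, making the test objective

performance = QualityScenario(
    source="Users",
    stimulus="Initiate 1,000 order transactions per minute",
    artifact="System",
    environment="Normal operations",
    response="Transactions are processed",
    response_measure="Average latency of 2 seconds",
)
print(performance.response_measure)
```

Keeping scenarios in this form makes it easy to review whether every one of them has a concrete, measurable response measure.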
15. Example Security Scenario
Source of Stimulus: Correctly identified individual
Stimulus: Tries to transfer money between accounts
Artifact: Data within the system
Environment: Normal operation
Response: System maintains audit trail
Response Measure: Correct data is restored within a day of reported event
“A known, authorized user transfers money between
accounts. The user is later identified as an embezzler
by the institution they belong to, and the system then
restores funds to the original account.”
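The security scenario above hinges on an audit trail that is complete enough to reverse fraudulent activity. A minimal sketch of that idea, with an in-memory ledger and hypothetical `transfer`/`restore_funds` helpers (names are ours, not from the slides):

```python
balances = {"A": 100, "B": 0}
audit_trail = []  # every transfer is recorded so it can later be reversed

def transfer(src, dst, amount, user):
    """Move funds between accounts and record the transfer in the audit trail."""
    balances[src] -= amount
    balances[dst] += amount
    audit_trail.append((user, src, dst, amount))

def restore_funds(user):
    """Reverse every transfer made by a user later identified as fraudulent."""
    for who, src, dst, amount in [e for e in audit_trail if e[0] == user]:
        balances[dst] -= amount
        balances[src] += amount

transfer("A", "B", 40, "mallory")
restore_funds("mallory")
# The response measure: correct data is restored.
assert balances == {"A": 100, "B": 0}
```

A quality test for this scenario would additionally check the measure's deadline: that restoration completes within a day of the reported event.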
16. Possible Security Scenario Values
Source: A human or another system. May be identified (correctly or not) or be unknown.
Stimulus: An attack or an attempt to break security by trying to display information, change information, access system services, or reduce system availability.
Artifact: The system’s services or data.
Environment: The system might be online or offline, connected to or disconnected from the network, behind a firewall or open to the network.
Response: Authenticates user; hides identity of the user; blocks or allows access to data and/or services; records access/modification attempts by identity; stores data in encrypted format; recognizes unexplained high demand and informs a user or other system, or restricts availability.
Response Measure: Time/effort/resources required to circumvent security measures with probability of success; probability of detecting attack; probability of identifying individual responsible for attack or access/modification; time/effort/resources to restore data/services; extent to which data/services are damaged and/or legitimate access denied.
17. Example Modifiability Scenario
Source of Stimulus: Developer
Stimulus: Add support for new service code
Artifact: UI, source code, service code table
Environment: Compile time, data definition
Response: Modification made with no schema changes
Response Measure: 2 days to code and test, 1 day to deploy
“A developer adds support for a new service code to the
system by adding the service code to the definitions table and
modifying the UI to make it available to users. The
modification is made with no data schema changes.”
18. Possible Modifiability Scenario Values
Source: End user, developer, system administrator
Stimulus: Wishes to add/delete/modify/vary functionality. Wishes to change some system quality such as availability, responsiveness, or capacity.
Artifact: What is to be changed: system user interface, platform, environment, or another system or API with which it interoperates
Environment: When the change can be made: runtime, compile time, build time, when deployed, …
Response: Locates places to be modified; makes modification without affecting other functionality; tests and deploys modification
Response Measure: Cost in terms of effort, money, and time; the number of elements affected; the extent to which the change affects other functions or qualities
19. Example Availability Scenario
Source of Stimulus: Unknown sensor
Stimulus: Unexpected report
Artifact: System
Environment: Normal conditions
Response: Record “raw” report in database and log event
Response Measure: No lost event data
“An unknown sensor sends a report. The system stores
the raw data in the unknown sensor database (to
potentially be processed or purged later) and logs the
event.”
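The availability scenario above can be sketched as a small handler plus a test of its response measure (no lost event data). The sensor registry and `handle_report` function are hypothetical stand-ins, not from the slides:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("sensors")

known_sensors = {"S-1", "S-2"}
unknown_sensor_db = []  # stand-in for the unknown-sensor raw-data store

def handle_report(sensor_id, raw_data):
    """Store raw data from unknown sensors and log the event; nothing is dropped."""
    if sensor_id not in known_sensors:
        unknown_sensor_db.append((sensor_id, raw_data))
        log.info("unknown sensor %s: raw report stored for later processing", sensor_id)
        return "stored-raw"
    return "processed"

# The response measure: an unexpected report loses no event data.
assert handle_report("S-99", b"\x01\x02") == "stored-raw"
assert unknown_sensor_db == [("S-99", b"\x01\x02")]
```

A real quality test would also inject reports while the system is under load or partially down, since the environment column is part of the scenario.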
20. Example Usability Scenario
Source of Stimulus: End user
Stimulus: Cancel request
Artifact: System
Environment: At runtime
Response: System backs out pending transaction and releases resources
Response Measure: Cancel takes less than one second
“A user can cancel a pending request within two seconds.”
23. Write A Quality Scenario
• Using the template handout,
write a quality scenario. Be specific.
• Options:
– Response to data received from an unknown sensor.
– Adding new analyzer plug-ins.
– Relocating a sensor and being able to correlate data
from its previous location (if desired).
– Detecting and troubleshooting equipment failures
– Detecting unusual weather conditions and incidents
• What quality does your scenario address?
24. You can’t test warm and fuzzy…
“It should be easy to place an online order”
TURN VAGUE STATEMENTS INTO
CONCRETE MEASURABLE ACTIONS
25. Turning Warm Fuzzies into a
Testable Usability Scenario
Source of Stimulus: Novice user
Stimulus: Place order
Artifact: System
Environment: Web interface with online help
Response: Order completed
Response Measure: Time to complete order entry task
“80% of novice users should be able to place an order in
under 3 minutes without assistance.”
or
“80% of novice users should be able to place an order in
under 3 minutes using only online help.”
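Once the scenario is stated this concretely, checking it against measured session data is a one-liner. A minimal sketch, assuming hypothetical task-completion times (in minutes) collected from a usability session:

```python
# Sample task times in minutes from a hypothetical usability session.
task_times = [1.5, 2.0, 2.2, 2.8, 2.9, 3.1, 2.4, 1.9, 2.6, 3.5]

def fraction_within(times, limit):
    """Fraction of users who completed the task within `limit` minutes."""
    return sum(t <= limit for t in times) / len(times)

passed = fraction_within(task_times, 3.0)
# The response measure: 80% of novice users finish in under 3 minutes.
assert passed >= 0.80, f"only {passed:.0%} of novice users finished within 3 minutes"
print(f"{passed:.0%} of novice users placed an order in under 3 minutes")
```

The point is that the vague "easy to place an order" has become a pass/fail computation over observable data.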
27. Some options…
• Toss out a reasonable number, then
discuss to come to a consensus
• Average informed individuals’ estimates
• Use an existing system as baseline
• Values for similar scenarios
• Benchmark working code
• …
28. There is more than “pass” or “fail”
• Landing Zone: Lets you
define a range of
acceptable values
– Minimal: OK, we
can live with that
– Target: Realistic goal,
what we are aiming for
– Outstanding: This
would be great, if
everything goes well
29. Landing Zones
• Roll up product or
project success to
several key scenarios
• Easier to make sense of
the bigger picture:
– What happens when one
quality scenario edges
below minimum? How
do others trend?
– When will targets be
achieved? At what cost?
30. Minimum / Target / Outstanding
Performance:
• Throughput (txns per day): 50,000 / 70,000 / 90,000
• Average txn time: 2 seconds / 1 second / < 1 second
Data Quality:
• Intersystem data consistency (per cent critical data attributes consistent): 95% / 97% / 97%
• Data accuracy: 97% / 99% / > 99%
Managing Landing Zones
• Too many scenarios and you lose track of what’s really important
• Define a core set; organize and group
• Roll up into aggregate scenarios
• Re-calibrate values as you implement more functionality… be agile
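A landing zone is more than pass/fail, so a check against it should report which band a measurement falls in. A minimal sketch (the function and its signature are ours, not from the slides), using the throughput and transaction-time zones from the table above:

```python
def landing_zone_status(value, minimum, target, outstanding, higher_is_better=True):
    """Classify a measured value against a landing zone's three levels."""
    if not higher_is_better:
        # Flip signs so "at least" comparisons work for lower-is-better metrics.
        value, minimum, target, outstanding = -value, -minimum, -target, -outstanding
    if value >= outstanding:
        return "outstanding"
    if value >= target:
        return "target"
    if value >= minimum:
        return "minimal"
    return "below minimum"

# Throughput zone: minimum 50,000 / target 70,000 / outstanding 90,000 txns per day.
print(landing_zone_status(72_000, 50_000, 70_000, 90_000))  # → target
# Average txn time is lower-is-better: 2 s / 1 s / < 1 s.
print(landing_zone_status(1.5, 2.0, 1.0, 0.9, higher_is_better=False))  # → minimal
```

Rolling several such classifications up into one report is what makes the bigger-picture trends on the previous slide visible.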
31. Quality Testing Cycle for TDD
Identify and write quality scenarios → Write code and tests → Verify quality scenarios → Clean up code (refactor/revise/rework) → Check that all tests succeed → (repeat)
Ready to release? When all tests succeed, ship it!!!
32. Test Coverage Can Overlap…
• Unit tests
• Smoke tests
• Integration tests
• Acceptance tests (functional and qualities)
• Quality scenarios
33. Pragmatic Test Driven
Development Is…
• Practical. Testing system qualities can fit into
and enhance your current testing.
• Thoughtful. What qualities need to be tested?
Who should write tests? When should you test
for qualities?
• Realistic. You only have so much time and energy.
Test essential qualities.
34. Summary
• Quality Scenarios are easy to write and read.
• Writing quality test scenarios drives out
important cross-cutting concerns.
• If you don’t pay attention to qualities, they
can be hard to achieve at the last moment.
• Measuring system qualities can require
specialized testing/measurement tools.