[DSBW Spring 2009] Unit 09: Web Testing
1. Unit 9: Web Application Testing
Testing is the activity conducted to evaluate the quality of a product
and to improve it by finding errors.
2. Testing Terminology
An error is “the difference between a computed, observed, or
measured value or condition and the true, specified, or theoretically
correct value or condition” (IEEE standard 610.12-1990).
This “true, specified, or theoretically correct value or condition”
comes from
A well-defined requirements model, if available and complete
An incomplete set of fuzzy and contradictory goals, concerns, and
expectations of the stakeholders
A test is a set of test cases for a specific object under test
An object under test may be the whole Web application, components
of a Web application, a system that runs a Web application, etc.
A single test case describes a set of inputs, execution
conditions, and expected results, which are used to test a specific
aspect of the object under test
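A test case maps naturally onto code. The following minimal sketch, in Python's unittest framework, encodes one test case with its inputs, execution condition, and expected result; the shop.pricing module, the apply_discount function, and the 10% discount rule are hypothetical assumptions for illustration only:

    import unittest

    from shop.pricing import apply_discount  # hypothetical object under test

    class DiscountTestCase(unittest.TestCase):
        """One test case: inputs, execution conditions, expected result."""

        def test_regular_customer_gets_ten_percent_over_100(self):
            # Inputs and execution condition: a regular customer, order over 100.
            price = apply_discount(amount=200.0, customer_type="regular")
            # Expected result: a 10% discount (assumed business rule).
            self.assertAlmostEqual(price, 180.0)

    if __name__ == "__main__":
        unittest.main()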
3. Testing [and] Quality
Testing should address compliance not only to functional
requirements but also to quality requirements, i.e., the kinds of
quality characteristics expected by stakeholders.
ISO/IEC 9126-1 Software Quality Model: functionality, reliability,
usability, efficiency, maintainability, and portability.
4. Goals of Testing
The main goal of testing is to find errors, not to prove their
absence
The large number of quality characteristics, potential input values, and
possible side conditions and processes makes it impractical to achieve
complete test coverage, especially in Web development, where testing
tends to be constrained by restricted resources and extreme time pressure.
A test run is successful if errors are detected. Otherwise, it is unsuccessful
and “a waste of time”.
Therefore, testing should adopt a risk-based approach:
Test first and with the greatest effort those critical parts of an
application where the most dangerous errors are still undetected
A further aim of testing is to bring risks to light, not simply to
demonstrate conformance to stated requirements.
Test as early as possible: errors introduced in early development
phases are harder to localize and more expensive to fix in later
phases.
5. Test Levels
Unit tests
Test the smallest testable units (classes, Web pages, etc.) independently of one
another.
Performed by the developer during implementation.
Integration tests
Evaluate the interaction between distinct and separately tested units once they
have been integrated.
Performed by a tester, a developer, or both jointly.
System tests
Test the complete, integrated system.
Typically performed by a specialized test team.
Acceptance tests
Evaluate the system with the client in a "realistic" environment, i.e., with real
conditions and real data.
Beta tests
Let friendly users work with early versions of a product to get early feedback.
Beta tests are unsystematic tests which rely on the number and “malevolence”
of potential users.
6. Fitting Testing in the Development Process
Planning: Defines the quality goals, the general testing strategy, the test plans for all
test levels, the metrics and measuring methods, and the test environment.
Preparing: Involves selecting the testing techniques and tools and specifying the test
cases (including the test data).
Performing: Prepares the test infrastructure, runs the test cases, and then
documents and evaluates the results.
Reporting: Summarizes the test results and produces the test reports.
7. Web Testing: A Road Map
[Figure: road map of Web testing, ordered along a user-to-technology axis:
Content Testing, Interface Testing, Navigation Testing, Component Testing,
Configuration Testing, Security Testing, and Performance Testing.]
8. Content Testing: Main Objectives
Uncover syntactic errors in text-based documents, graphical
representations, and other media
Ex: typos, grammar mistakes
Uncover semantic errors in any content object presented as
navigation occurs
Ex: errors in the accuracy or completeness of information
Find errors in the organization or structure of content that is
presented to the end-user.
Ex: cross-referencing errors
9. Content Semantics Testing: A Check List
Is the information factually accurate?
Is the information concise and to the point?
Is the layout of the content object easy for the user to understand?
Can information embedded within a content object be found easily?
Have proper references been provided for all information derived from other
sources?
Is the information presented consistent internally and consistent with
information presented in other content objects?
Is the content offensive, misleading, or does it open the door to litigation?
Does the content infringe on existing copyrights or trademarks?
Does the content contain internal links that supplement existing content?
Are the links correct?
Does the aesthetic style of the content conflict with the aesthetic style of the
interface?
10. Interface Testing: Main Objectives
Find errors related to specific interface mechanisms
Interface mechanisms: links, forms, client scripts, etc.
Find errors in the way the interface implements the semantics of
navigation, WebApp functionality or content display
Identify usability pitfalls
Uncover interface errors due to different client-side configurations
11. Interface Testing: Strategy
1. Interface features are tested to ensure that design rules, aesthetics, and
related visual content are available for the user without error.
Interface features: fonts, colors, frames, images, borders, tables, etc.
2. Individual interface mechanisms are unit tested.
3. Each interface mechanism is tested within the context of a use-case for a
specific user category (integration testing).
4. The complete interface is tested against selected use-cases to uncover
errors in the semantics of the interface (system testing).
In parallel, usability-related issues are addressed (Usability Testing).
In parallel, the interface is tested within a variety of environments (e.g.,
browsers) to ensure that it will be compatible (Compatibility Testing).
12. Interface Testing: A Check List for Interface Mechanisms
Links: basically, uncover bad/broken internal/external links (see the
link-checker sketch after this list)
Forms: check that
labels correctly identify input fields
mandatory fields are identified visually for the user
no input data is lost in transmission
appropriate defaults are defined
browser functions (e.g. “back” arrow) do not corrupt input data
form-validating scripts work properly and provide meaningful messages
Client-side scripting
Dynamic HTML
Pop-up windows
Streaming content
Cookies
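As a concrete illustration of the link checks above, the sketch below fetches a single page and reports the links on it that cannot be retrieved. It uses only the Python standard library; the page URL is an assumption, and a production checker would also need rate limiting and handling of anchors and login-protected pages:

    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    class LinkCollector(HTMLParser):
        """Collects the href targets of all <a> tags on a page."""
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def find_broken_links(page_url):
        """Returns the links on page_url that cannot be fetched."""
        html = urllib.request.urlopen(page_url).read().decode("utf-8", "replace")
        collector = LinkCollector()
        collector.feed(html)
        broken = []
        for link in collector.links:
            target = urljoin(page_url, link)
            if not target.startswith(("http://", "https://")):
                continue  # skip mailto:, javascript:, etc.
            try:
                urllib.request.urlopen(target, timeout=10)
            except Exception:  # HTTP errors and network failures alike
                broken.append(target)
        return broken

    print(find_broken_links("https://example.org/"))  # hypothetical page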
13. Interface Testing: Usability Tests
Designed by the development team … executed by end-users
Usability Testing process:
1. Define a set of usability testing categories and identify goals for each.
2. Design tests that will enable each goal to be evaluated.
3. Select participants who will conduct the tests.
4. Instrument participants' interaction with the WebApp while testing is conducted.
5. Develop a mechanism for assessing the usability of the WebApp.
Usability Test Levels:
Usability of a specific interface mechanism (e.g., a form) can be assessed.
Usability of a complete Web page (encompassing interface mechanisms, data
objects and related functions) can be evaluated.
Usability of the complete WebApp can be considered.
14. Interface Testing: Usability Issues
Interactivity
Are interaction mechanisms (pull-down menus, forms) easy to grasp and use?
Layout
Are links, menus, content placed in a manner that allows finding them quickly?
Aesthetics
Do users “feel comfortable” with layout, color, typeface, …?
Display Characteristics
Does the WebApp make optimal use of screen size and resolution?
Time Sensitivity
Are features, functions and content used or acquired in a timely manner?
Personalization
Does the WebApp tailor itself to the specific needs of different users?
Accessibility
Does the WebApp conform to the W3C's Web Content Accessibility Guidelines?
15. Interface Testing: Compatibility Tests
Compatibility tests first define a set of "commonly encountered" client-side
computing configurations and their variants according to
different computing platforms
typical display devices
browsers available
likely Internet connection speeds
Then derive a series of interface tests to uncover errors or execution
problems that can be traced to configuration differences.
Browser testing:
Is the WebApp's state managed correctly, or could inconsistent states occur
when navigating directly to a page, e.g. by using the "Back" button?
Can a (dynamically generated) Web page be bookmarked during a
transaction, and can users navigate to that page later without having to enter a
user name and password to log in?
Can users open the WebApp in several browser windows concurrently?
How does the Web application react when cookies or script languages are
deactivated?
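Parts of browser testing can be automated with a browser-driving tool such as Selenium WebDriver. The sketch below runs the same check in two browsers; the start URL and the expected page title are assumptions, and the browsers must be installed locally:

    from selenium import webdriver

    START_URL = "https://example.org/app"  # hypothetical WebApp entry page

    # Run the same check against several browser configurations.
    for make_driver in (webdriver.Firefox, webdriver.Chrome):
        driver = make_driver()
        try:
            driver.get(START_URL)
            assert "Login" in driver.title, driver.title  # assumed page title
            # Simulate the "Back" button and check no error page appears.
            driver.back()
            assert "error" not in driver.title.lower()
        finally:
            driver.quit()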
16. Navigation Testing: Main Objectives
Ensure that the mechanisms that allow the WebApp user to travel
through the WebApp are all functional (Navigation Syntax Tests)
Validate that each navigation semantic unit can be achieved by the
appropriate user category (Navigation Semantics Tests)
Navigation Semantic Unit (NSU): A set of information and
related navigation structures that collaborate in the fulfillment of
a subset of related user requirements.
Each NSU is defined by a set of navigation paths that connect
navigation nodes (Web pages, content objects, functionalities).
Each NSU implements one, sometimes more, use cases.
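A navigation semantics test can follow one navigation path of an NSU programmatically and assert that every node on the path is reachable in order. A minimal sketch with the requests library; the URLs and the content markers that identify each node are assumptions:

    import requests

    # One navigation path of a hypothetical "browse catalog" NSU.
    PATH = [
        ("https://example.org/", "Welcome"),
        ("https://example.org/catalog", "Catalog"),
        ("https://example.org/catalog/item/42", "Add to cart"),
    ]

    def test_nsu_path_is_reachable():
        session = requests.Session()  # keeps cookies along the whole path
        for url, marker in PATH:
            response = session.get(url, timeout=10)
            assert response.status_code == 200, url
            assert marker in response.text, url  # the expected node was reached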
17. Testing Navigation Syntax: A Check List for Nav. Mechanisms
Navigation links—these mechanisms include internal links within the
WebApp, external links to other WebApps, and anchors within a specific
Web page.
Redirects—these links come into play when a user requests a non-existent
URL or selects a link whose destination has been removed or whose name
has changed.
Bookmarks—although bookmarks are a browser function, the WebApp
should be tested to ensure that a meaningful page title can be extracted as
the bookmark is created.
Frames and framesets—tested for correct content, proper layout and sizing,
download performance, and browser compatibility
Site maps—Each site map entry should be tested to ensure that the link
takes the user to the proper content or functionality.
Internal search engines—Search engine testing validates the accuracy and
completeness of the search, the error-handling properties of the search
engine, and advanced search features
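The redirect entry of this checklist translates directly into an automated test: request a URL that is known not to exist and assert that the WebApp answers with a redirect or a custom error page rather than a bare server default. A sketch with requests; the URL and the error-page text are assumptions:

    import requests

    def test_missing_url_is_handled_gracefully():
        response = requests.get("https://example.org/no/such/page",
                                allow_redirects=False, timeout=10)
        # Acceptable outcomes: a redirect to an error page, or a custom 404.
        assert response.status_code in (301, 302, 404), response.status_code
        if response.status_code == 404:
            # Assumed marker of the WebApp's own error page.
            assert "Page not found" in response.text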
18. Testing Navigation Semantics: Relevant Questions (1)
Is the NSU achieved in its entirety without error?
Is every navigation node (defined for an NSU) reachable within the
context of the navigation paths defined for the NSU?
If the NSU can be achieved using more than one navigation path,
has every relevant path been tested?
If guidance is provided by the user interface to assist in navigation,
are directions correct and understandable as navigation proceeds?
Is there a mechanism (other than the browser 'back' arrow) for
returning to the preceding navigation node and to the beginning of
the navigation path?
Do mechanisms for navigation within a large navigation node (i.e., a
long web page) work properly?
If a function is to be executed at a node and the user chooses not to
provide input, can the remainder of the NSU be completed?
19. Testing Navigation Semantics: Relevant Questions (2)
If a function is executed at a node and an error in function
processing occurs, can the NSU be completed?
Is there a way to discontinue the navigation before all nodes have
been reached, but then return to where the navigation was
discontinued and proceed from there?
Is every node reachable from the site map? Are node names
meaningful to end-users?
If a node within an NSU is reached from some external source, is it
possible to proceed to the next node on the navigation path? Is it
possible to return to the previous node on the navigation path?
Does the user understand his location within the content
architecture as the NSU is executed?
20. Component Testing
Focuses on a set of tests that attempt to uncover errors in WebApp
functions
Conventional black-box and white-box test case design methods
can be used at each architectural layer (presentation, domain, data
access)
Form data can be exploited systematically to find errors (see the
sketch at the end of this slide):
Missing/incomplete data
Type conversion problems
Value boundary violations
Fake data
Etc.
Database testing is often an integral part of the component-testing
regime
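The systematic exploitation of form data mentioned above can be written as a table-driven unit test. The sketch below assumes a hypothetical validate_quantity function in a shop.forms module, with an assumed accepted range of 1 to 1000:

    import unittest

    from shop.forms import validate_quantity  # hypothetical validation function

    class QuantityFieldTest(unittest.TestCase):
        """Table-driven form-data tests: missing data, type errors, boundaries."""

        def test_invalid_inputs_are_rejected(self):
            for bad in ("", None, "abc", "-1", "0", "1001", "1e99"):
                with self.subTest(value=bad):
                    self.assertFalse(validate_quantity(bad))

        def test_boundary_values_are_accepted(self):
            for good in ("1", "500", "1000"):  # assumed range 1..1000
                with self.subTest(value=good):
                    self.assertTrue(validate_quantity(good))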
21. Configuration Testing: Server-Side Issues
Is the WebApp fully compatible with the server OS?
Are system files, directories, and related system data created correctly
when the WebApp is operational?
Do system security measures (e.g., firewalls or encryption) allow the
WebApp to execute and service users without interference or performance
degradation?
Has the WebApp been tested with the distributed server configuration (if
one exists) that has been chosen?
Is the WebApp properly integrated with database software? Is the WebApp
sensitive to different versions of database software?
Do server-side WebApp scripts execute properly?
Have system administrator errors been examined for their effect on
WebApp operations?
If proxy servers are used, have differences in their configuration been
addressed with on-site testing?
22. Configuration Testing: Client-Side Issues
Hardware—CPU, memory, storage and printing devices
Operating systems—Linux, Macintosh OS, Microsoft Windows, a
mobile-based OS
Browser software—Internet Explorer, Mozilla/Netscape, Opera, Safari, and others
User interface components—ActiveX, Java applets and others
Plug-ins—QuickTime, RealPlayer, and many others
Connectivity—cable, DSL, regular modem, T1
23. Security Testing
Designed to probe vulnerabilities of
the client-side environment,
the network communications that occur as data are passed from
client to server and back again,
and the server-side environment.
On the client side, vulnerabilities can often be traced to pre-existing
bugs in browsers, e-mail programs, or communication software.
On the network infrastructure and on the server side (at host level
and at WebApp level): review the DSBW Unit on WebApp Security.
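One small WebApp-level check that is easy to automate is verifying the flags of the session cookie. A sketch with requests; the login URL is an assumption, and the HttpOnly lookup relies on how the cookie jar stores non-standard attributes:

    import requests

    def test_session_cookie_flags():
        response = requests.get("https://example.org/login", timeout=10)
        for cookie in response.cookies:
            assert cookie.secure, cookie.name  # only ever sent over HTTPS
            # HttpOnly keeps the cookie out of reach of client-side scripts.
            assert cookie.has_nonstandard_attr("HttpOnly"), cookie.name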
24. Performance Testing: Main Questions
Does the server response time degrade to a point where it is
noticeable and unacceptable?
At what point (in terms of users, transactions or data loading) does
performance become unacceptable?
What system components are responsible for performance
degradation?
What is the average response time for users under a variety of
loading conditions?
Does performance degradation have an impact on system security?
Is WebApp reliability or accuracy affected as the load on the system
grows?
What happens when loads that are greater than maximum server
capacity are applied?
25. Performance Testing: Load Tests
A load test verifies whether or not the system meets the required
response times and the required throughput.
Steps:
1. Determine load profiles (what access types, how many visits per
day, at what peak times, how many visits per session, how many
transactions per session, etc.) and the transaction mix (which
functions shall be executed with which percentage).
2. Determine the target values for response times and throughput
(in normal operation and at peak times, for simple or complex
accesses, with minimum, maximum, and average values).
3. Run the tests, generating the workload with the transaction mix
defined in the load profile, and measure the response times and
the throughput.
4. Evaluate the results and identify potential bottlenecks.
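Steps 3 and 4 can be sketched with a simple workload generator. The illustration below uses a thread pool to simulate concurrent users against a single URL; the URL and load-profile values are assumptions, and a real load test would drive the full transaction mix:

    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "https://example.org/catalog"  # hypothetical page from the mix
    USERS, REQUESTS_PER_USER = 20, 10    # assumed load profile

    def one_user(_):
        times = []
        for _ in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            requests.get(URL, timeout=30)
            times.append(time.perf_counter() - start)
        return times

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=USERS) as pool:
        samples = [t for user in pool.map(one_user, range(USERS)) for t in user]
    elapsed = time.perf_counter() - start

    # Step 4: compare the measurements against the target values.
    print(f"throughput: {len(samples) / elapsed:.1f} requests/s")
    print(f"average response time: {sum(samples) / len(samples):.3f} s")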
26. Performance Testing: Stress Tests
A stress test verifies whether or not the system reacts in a controlled way in
“stress situations”, which are simulated by applying extreme conditions,
such as unrealistic overload or heavily fluctuating load.
The test is aimed at answering the questions:
Does the server degrade 'gently' or does it shut down as capacity is
exceeded?
Does server software generate “server not available” messages? More
generally, are users aware that they cannot reach the server?
Are transactions lost as capacity is exceeded?
Is data integrity affected as capacity is exceeded?
Under what load conditions does the server environment fail? How does
failure manifest itself? Are automated notifications sent to technical
support staff at the server site?
If the system does fail, how long will it take to come back on-line?
Are certain WebApp functions (e.g., compute-intensive functionality,
data streaming capabilities) discontinued as capacity reaches the 80 or
90% level?
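A stress situation can be approximated by ramping the load well past the expected capacity and watching how behaviour degrades. A sketch built on top of a load generator like the one on the previous slide; the load levels and the 50% error threshold are arbitrary assumptions:

    def stress_ramp(run_load, levels=(10, 50, 100, 200, 400)):
        """Apply increasing load levels and report when behaviour degrades.

        run_load(n) is assumed to run a load test with n concurrent users
        and return (error_rate, average_response_time).
        """
        for users in levels:
            error_rate, avg_time = run_load(users)
            print(f"{users:4d} users: {error_rate:5.1%} errors, {avg_time:.2f}s avg")
            if error_rate > 0.5:
                # Capacity exceeded: now check the questions above (lost
                # transactions, data integrity, notifications, recovery time).
                break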
27. Performance Testing: Interpreting Graphics
[Graph: throughput plotted against increasing load.]
Load: the number of requests that arrive at the system per time unit.
Throughput: the number of requests served per time unit.
SLA: Service Level Agreement.
28. Test Automation
Automation can significantly increase the efficiency of testing and enables
new types of tests that also increase the scope (e.g. different test objects
and quality characteristics) and depth of testing (e.g. large amounts and
combinations of input data).
Test automation brings the following benefits:
Running automated regression tests on new versions of a WebApp
makes it possible to detect defects caused by side effects on
unchanged functionality.
Various test methods and techniques would be difficult or impossible to
perform manually; for example, load and stress testing requires
simulating a large number of concurrent users.
Automation makes it possible to run more tests in less time and, thus, to
run the tests more often, leading to greater confidence in the system
under test.
Web Site Test Tools: http://www.softwareqatest.com/qatweb1.html
29. References
R. S. Pressman, D. Lowe: Web Engineering. A Practitioner's
Approach. McGraw-Hill, 2008. Chapter 15.
G. Kappel et al.: Web Engineering. John Wiley & Sons, 2006.
Chapter 7.