Assignment 1 Week 2.docx
Assignment 1: Topic Selection
Software Engineering: The Autotest Framework
Jessica Hill Scott
Dr. Teresa Wilburn
RES 531: Research Methods
January 12, 2014
Topic Selection: Software Engineering: Automated
Testing/Programs That Test Themselves
Topic Description:
Software testing is any activity aimed at evaluating an attribute
or capability of a program or system and determining that it
meets its required results. Although crucial to software quality
and widely deployed by programmers and testers, software
testing still remains an art, due to limited understanding of the
principles of software. The difficulty in software testing stems
from the complexity of software: we cannot completely test a
program with moderate complexity. Testing is more than just
debugging. The purpose of testing can be quality assurance,
verification and validation, or reliability estimation. Testing can
be used as a generic metric as well. Correctness testing and
reliability testing are two major areas of testing. Software
testing is a trade-off between budget, time and quality (Hetzel).
Automated testing is a
widely used phrase. To understand what it entails, it is
necessary to distinguish several increasingly ambitious levels of
automation. What is best automated
today is test execution. In a project that has generated thousands
of test cases, running them manually would be tedious,
especially as testing campaigns occur repeatedly. For example,
it is customary to run extensive tests before every release.
Traditionally, testers wrote scripts to run the tests.
A related goal, also addressed by
some of today’s tools, is regression testing. It is a common
phenomenon of software development that some corrected faults
reappear in later versions, indicating that the software has
partly “regressed.” A project should retain any test that failed at
any stage of its history, then passed after the fault was
corrected; test campaigns should run all such tests to spot cases
of regression. Automated tools
should provide resilience. A large test suite is likely to contain
some test cases that, in a particular execution, crash the
program. Resilience means that the process may continue
anyway with the remaining cases.
One of the most tedious aspects of testing is test case
generation. With modern computers we can run very large
numbers of test cases. Usually, developers or testers have to
devise them; this approach, limited by people’s time, does not
scale up. Commonly used frameworks
mostly address the first three goals: test execution, regression
testing, and resilience. They do not address the most labor-
intensive tasks: preparing test cases, possibly in a minimized
form, and interpreting test results. Without progress on these
issues, testing confronts a paradox: While the growth of
computing power should enable us to perform ever more
exhaustive tests, these manual activities dominate the process;
they limit its practical effectiveness and prevent scaling it up.
The AutoTest framework
includes traditional automation but particularly innovates on
test case generation, oracles, and minimization. It has already
uncovered many faults in released software and routinely finds
new ones when given classes to analyze.
Reason for choosing topic:
I chose this topic because testing is directly related to my job.
It is something I do on a daily basis. Some of the testing I
perform is manual and some is automated. I want to give insight
on how automating test cases can benefit a company. Automated
software testing has long been considered critical for software
organizations.
Why this topic is important to me:
Automating test cases is important because it can save the
company time and money, improve accuracy, and increase test
coverage. Automation also does what manual testing cannot; it
helps the developers and testers and ultimately improves team
morale.
References:
Hetzel, William C. The Complete Guide to Software Testing,
2nd ed. Wellesley, Mass.: QED Information Sciences, 1988.
ISBN 0894352423.
programs_that_test_themselves.pdf
Computer, Cover Feature
Published by the IEEE Computer Society 0018-9162/09/$26.00
Programs That Test Themselves

Bertrand Meyer, ETH Zurich and Eiffel Software
Arno Fiva, Ilinca Ciupa, Andreas Leitner, and Yi Wei, ETH Zurich
Emmanuel Stapf, Eiffel Software

The AutoTest framework automates the software testing process
by relying on programs that contain the instruments of their
own verification, in the form of contract-oriented
specifications of classes and their individual routines.

… to avoid incidents by warning the users of needed maintenance
actions. This self-testing capability is an integral part of the
design of such artifacts.
The lesson that their builders have learned is to design for
testability. This concept was not always understood: With cars,
for example, we used to have no clue (save for the oil gauge)
that major mechanical trouble might be imminent; if we wanted to
know more, we would take our car to a mechanic who would check
every component from scratch, not knowing what actually happened
during operation. Today’s cars, in contrast, are filled with
sensors …
September 2009
in later versions, indicating that the software has partly
“regressed.” A project should retain any test that failed
at any stage of its history, then passed after the fault was
corrected; test campaigns should run all such tests to spot
cases of regression.
Automated tools should provide resilience. A large test
suite is likely to contain some test cases that, in a particular
execution, crash the program. Resilience means that the
process may continue anyway with the remaining cases.
One of the most tedious aspects of testing is test case
generation. With modern computers we can run very
large numbers of test cases. Usually, developers or testers
have to devise them; this approach, limited by people’s
time, does not scale up. The AutoTest tools complement
such manual test cases with automatic tests exercising
the software with values generated by algorithms. Object-
oriented programming increases the difficulty because it
requires not only elementary values such as integers but
also objects.
Test oracles represent another challenge. A test run is
only useful if we know whether it passed or failed; an oracle
is a mechanism to determine this. Here too a manual pro-
cess does not scale up. Approaches such as JUnit include
oracles in test cases through such instructions as “assert
(success_criterion),” where “assert” is a general mechanism
that reports failure if the success_criterion does not hold.
This automates the application of oracles, but not their
preparation: The tester must still devise an assert for every
test. AutoTest’s approach removes this requirement by relying
on contracts already present in the code.
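To make the contrast with per-test asserts concrete, here is a minimal Python sketch (hypothetical; AutoTest itself works on Eiffel contracts, not on Python) in which a routine’s precondition and postcondition double as the oracle, so no assert needs to be written per test:

```python
def contract(pre=None, post=None):
    """Attach a precondition and postcondition to a routine; both are
    checked on every call, so the contract doubles as the test oracle."""
    def wrap(routine):
        def checked(*args):
            if pre is not None and not pre(*args):
                raise AssertionError("precondition violated")
            result = routine(*args)
            if post is not None and not post(result, *args):
                raise AssertionError("postcondition violated")
            return result
        return checked
    return wrap

@contract(pre=lambda x: x >= 0,
          post=lambda r, x: abs(r * r - x) < 1e-6)
def square_root(x):
    return x ** 0.5
```

Any call, whether issued by a person or by a generation tool, is checked against the same specification.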
Another candidate for automation is minimization. It is
desirable to retain and replay any test that ever failed. The
failure may, however, have happened after a long execu-
tion exercising many instructions that are irrelevant to the
failures. Retaining them would make regression testing too
slow. Minimization means replacing a test case, whenever
possible, with a simplified one producing a failure that
evidences the same fault.
Commonly used frameworks mostly address the first
three goals: test execution, regression testing, and re-
silience. They do not address the most labor-intensive
tasks: preparing test cases, possibly in a minimized form,
and interpreting test results. Without progress on these
issues, testing confronts a paradox: While the growth of
computing power should enable us to perform ever more
Extraction is that some of the most important test
cases are not devised as such: They occur when a
developer tries the program informally during development,
but then its execution fails. The failure is
interesting, in particular for future regression testing,
but usually it is not remembered: The developer fixes
the problem and moves on. From such failures, Test
Extraction automatically creates test cases, which can
be replayed in subsequent test campaigns.
• Integration of Manual Tests: supports the development
and management of manually produced tests. Unlike
Test Generation and Test Extraction, this functionality
relies on state-of-the-art techniques and includes no
major innovation, but it ensures a smooth interaction
of the automatic mechanisms with existing practices
by ensuring all tests are managed in the same way
regardless of their origin—generated, extracted, or
manual.
These mechanisms, initially developed for research
purposes at ETH Zurich, have now been integrated into
the EiffelStudio environment and are available both as an
open source download (http://eiffelstudio.origo.ethz.ch)
and commercially. Research continues on the underlying
theory and methods (http://se.ethz.ch/research/autotest).
Our working definition of testing focuses on one essen-
tial aspect: To test a program is to try to make it fail.1 Other
definitions include more lofty goals, such as “provid[ing]
information about the quality of the product or service”
(http://en.wikipedia.org/wiki/Software_testing). But in
practice, the crucial task is to uncover failures of execu-
tion, which in IEEE-standard terminology2 reflect faults
in the program, themselves the result of mistakes in the
developer’s thinking. AutoTest helps provoke failures and
manage information about the corresponding faults.
‘AUTOMATED TESTING’
“Automated testing” is a widely used phrase. To under-
stand what it entails, it is necessary to distinguish several
increasingly ambitious levels of automation.
What is best automated today is test execution. In a proj-
ect that has generated thousands of test cases, running
them manually would be tedious, especially as testing
campaigns occur repeatedly—for example, it is customary
to run extensive tests before every release. Traditionally,
testers wrote scripts to run the tests. The novelty is the
spread of frameworks such as JUnit (www.junit.org) that
avoid project-specific scripts. This widely influential de-
velopment has markedly improved testing practice, but it
only automates a specific task.
A related goal, also addressed by some of today’s tools,
is regression testing. It is a common phenomenon of soft-
ware development that some corrected faults reappear
Contract” sidebar describes the use of contracts in more
detail.
In the traditional Eiffel process, developers write pro-
grams annotated with contracts, then manually run these
programs, relying on the contracts to check the execu-
tions’ correctness. AutoTest’s Test Generation component
adds many more such executions by generating test cases
automatically.
Execution will, on entry to a routine r, evaluate r’s pre-
condition and the class invariant; on exit, it evaluates r’s
postcondition and the invariant. For correct software, such
evaluations always yield true, with no other consequence;
but an evaluation to false, known as a contract violation,
signals a flaw:3
• A precondition violation signals a possible fault in the
client (the routine that called r).
• A postcondition or invariant violation signals a possible
fault in the supplier (r itself).
If the call is a result of automatic test generation, the
interpretation of the first case is more subtle:
• If the tool directly issued the call to r, this is a problem
with the tool’s generation strategy, not the software
under test; the test case should be ignored. Testing
strategies should minimize such spurious occurrences.
• If another routine performed the call, the caller did
not observe r’s specification, signaling a fault in that
routine.
The benefit of using contracts as oracles is that the soft-
ware is tested as it is. Other tools using contracts often
require software that has been specially prepared for
testing. With Eiffel or Spec# (http://research.microsoft.
com/SpecSharp)—and JML, the Java Modeling Language, if
used to write code rather than to instrument existing Java
code—contracts are there from the start.
In practice, no special skill is required of programmers
using Design by Contract. Although the approach can be
extended to full formal specifications, most contracts
in common usage state simple properties: A variable is
positive, two references point to the same object, a field
is not void. In addition, contracts are not just a theoretical
possibility; programmers use them. Analysis of a large
body of Eiffel code, proprietary and open source, indicates
widespread contract use, accounting for 1.5 to 7 percent
of lines.4
In such a context, writing simple contracts becomes as
natural as any other programming task.
Not all failures result from explicit contract violations;
another typical case is arithmetic overflow. AutoTest re-
cords all failures in the same way. Unlike many static
analysis tools, AutoTest produces no false alarms: Every
exhaustive tests, these manual activities dominate the
process; they limit its practical effectiveness and prevent
scaling it up.
The AutoTest framework includes traditional automa-
tion but particularly innovates on test case generation,
oracles, and minimization. It has already uncovered many
faults in released software and routinely finds new ones
when given classes to analyze.
CONTRACTS AS ORACLES
AutoTest exercises software as it is, without instrumen-
tation. In particular, its approach does not require writing
oracles.
What makes this possible is that the software under
test consists of classes with contracts: Routines may
include preconditions and postconditions; classes may
include invariants. In contract-supporting languages
such as Eiffel, contracts are Boolean expressions of the
underlying programming language, and hence can be
evaluated during execution; this provides the basis of
the contract-based approach to testing. The “Design by
Design by Contract1 is a mechanism pioneered by Eiffel that
characterizes every software element by answering three
questions:
• What does it expect?
• What does it guarantee?
• What does it maintain?
Answers take the form of preconditions, postconditions, and
invariants. For example, starting a car has the precondition that
the ignition is turned on and the postcondition that the engine is
running. The invariant, applying to all operations of the class
CAR, includes such properties as “dashboard controls are
illuminated if and only if ignition is on.”
With Design by Contract, such properties are not expressed in
separate requirements or design documents but become part of
the software; languages such as Eiffel and Spec#, and language
extensions such as JML, include syntax—keywords such as
require, ensure, and invariant—to state contracts.
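The sidebar’s car example can be transcribed loosely into executable form. The sketch below is illustrative only: it emulates Eiffel’s require, ensure, and invariant keywords with explicit checks in Python.

```python
class Car:
    """Loose transcription of the sidebar's CAR contracts."""
    def __init__(self):
        self.ignition_on = False
        self.engine_running = False
        self.dashboard_lit = False

    def invariant(self):
        # "Dashboard controls are illuminated if and only if ignition is on."
        return self.dashboard_lit == self.ignition_on

    def turn_ignition_on(self):
        self.ignition_on = True
        self.dashboard_lit = True
        assert self.invariant()

    def start(self):
        assert self.ignition_on       # require: ignition is turned on
        self.engine_running = True
        assert self.engine_running    # ensure: the engine is running
        assert self.invariant()       # invariant re-checked on exit
```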
Applications cover many software tasks: analysis, to make
sure requirements are precise yet abstract; design and imple-
mentation, to obtain software with fewer faults since it is built
to a precise specification; automatic documentation, through
tools extracting the contracts; support for managers, enabling
them to understand program essentials free from implementa-
tion details; better control over language mechanisms such as
inheritance and exceptions; and, with runtime contract moni-
toring, improvements in testing and debugging, which AutoTest
takes further.
Reference
1. B. Meyer, “Applying ‘Design by Contract,’” Computer, Oct.
1992, pp. 40-51.
Design by Contract
• New objects diversify the pool.
• Creating a new object every time would restrict tests
to youthful object structures. For example, a newly
created list object represents a list with zero elements
or one element; realistic testing needs lists with many
elements, obtained by creating a list then repeatedly
calling insertion procedures.
When the decision is to create an object, this object
should satisfy the class invariant. AutoTest relies on the
violation it reports reflects a fault in either the implementa-
tion or the contract.
TEST GENERATION
There has been considerable research on test generation
from specifications. The “Using Specifications for Test Case
Generation: A Short Survey” sidebar highlights some key
aspects of this research.
The Test Generation part of AutoTest is a push-button
testing framework. The only information it requires is a set
of classes to be tested. The tool takes care of the rest by au-
tomating three of the key tasks cited earlier:
• To generate tests, it creates instances of the classes
and calls their routines with various arguments.
• To determine success or failure, AutoTest uses the
classes’ contracts as oracles.
• The tool produces minimized versions of failed tests
for regression testing.
An important property for users is that the environment
will treat all tests in the same way, regardless of their origin
(generated, manual, or extracted); this applies in particular
to regression testing.
Figure 1 shows the principal steps for testing a set of
classes:
• Generate instances of the classes under test.
• Select some of these objects for testing.
• Select arguments for the features to be called.
• Run the tests.
• Assess the outcome: pass or fail, applying the contracts
as oracles.
• Log results and failure-reproducing test cases.
• Construct a minimized form of every logged test and
add it to the regression suite.
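The steps above can be condensed into a toy push-button loop. Everything here is illustrative (the Counter class, the 0.25 creation frequency, the value set); the real tool is far more elaborate, but the shape is the same: generate objects, call routines with generated values, and let in-code contracts act as the oracle.

```python
import random

def autotest(cls, n_tests=200, seed=1):
    """Toy random-testing loop with contracts as the oracle."""
    rng = random.Random(seed)
    pool = []       # object pool, diversified by occasional fresh instances
    failures = []   # failure-reproducing (routine, argument) pairs
    for _ in range(n_tests):
        if not pool or rng.random() < 0.25:   # preset creation frequency
            pool.append(cls())
        obj = rng.choice(pool)
        routines = [m for m in dir(obj)
                    if not m.startswith("_") and callable(getattr(obj, m))]
        routine = rng.choice(routines)
        arg = rng.choice([0, 1, -1, rng.randint(-10**6, 10**6)])
        try:
            getattr(obj, routine)(arg)        # contracts checked inside
        except AssertionError:
            failures.append((routine, arg))
    return failures

class Counter:
    """Toy class under test; its postcondition exposes a seeded bug."""
    def __init__(self):
        self.value = 0
    def add(self, n):
        old = self.value
        self.value = old + abs(n)             # bug: should be old + n
        assert self.value == old + n          # postcondition as oracle
```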
The test-generation strategies involve numerous choices
controlled by parameters to AutoTest. Extensive experimen-
tation has produced default values for all these parameters.
Obtaining objects and other values
The unit of testing is a routine call of the form target.
routine (arguments). It requires at least one object, the
target; the arguments may include other objects and primi-
tive values.
To obtain test inputs, AutoTest maintains an object
pool. Whenever it needs an object of a type T, it decides
whether to create a new instance of T or draw from the
pool. Creation is necessary if the pool does not contain
an instance of T; but even if it does, AutoTest will, with a
preset frequency (one of the tool’s parameters), create an
object and add it to the pool. An effective strategy needs
both possibilities:
The goal of automating testing based on specification is an
active research topic.
• Robert V. Binder (Testing Object-Oriented Systems: Models,
Patterns and Tools, Addison-Wesley, 1999) emphasizes contracts
as oracles.
• Dennis Peters and David Parnas (“Using Test Oracles Gener-
ated from Program Documentation,” IEEE Trans. Software
Eng., Mar. 1998, pp. 161-173) use oracles derived from speci-
fications, separate from the program.
• The jmlunit script pioneered some of the ideas described
in this article, in particular, postconditions as oracles and
the observation that a test that directly violates a precon-
dition does not signal a fault. In jmlunit as described by
Yoonsik Cheon and Gary T. Leavens (“A Simple and Practi-
cal Approach to Unit Testing: The JML and JUnit Way,”
ECOOP 2002—Object-Oriented Programming, LNCS 2374,
Springer, 2002, pp. 1789-1901), test cases remain the user’s
responsibility.
• Korat (C. Boyapati, S. Khurshid, and D. Marinov, “Korat:
Automated Testing Based on Java Predicates,” Proc. 2002
ACM SIGSOFT Int’l Symp. Software Testing and Analysis,
ACM
Press, 2002, pp. 123-133) is an automated testing framework
that uses some of the same concepts as AutoTest; to gener-
ate objects it does not use creation procedures but fills
object fields and discards the result if it violates the invari-
ant. Using creation procedures seems preferable.
• DSD-Crasher (C. Csallner and Y. Smaragdakis, “DSD-Crasher:
A Hybrid Analysis Tool for Bug Finding,” ACM Trans. Soft-
ware Eng. and Methodology, Apr. 2008, vol. 17, no. 2, art. 8)
infers contracts from executions, then statically explores
paths under the resulting restricted input domain, and gen-
erates test cases to verify the results.
• Debra Richardson, Owen O’Malley, and C. Tittle (“Approaches
to Specification-Based Testing,” ACM SIGSOFT Software Eng.
Notes, Dec. 1989, pp. 86-96) emphasize extending existing
implementation-based testing to use specifications.
• Alexandre K. Petrenko (“Specification Based Testing:
Towards Practice,” Perspectives of System Informatics, LNCS
2244, Springer, 2001, pp. 287-300) surveys existing
approaches.
• A. Jefferson Offutt, Yiwei Xiong, and Shaoying Liu (“Criteria
for Generating Specification-Based Tests,” Proc. 5th Int’l
Congress Eng. of Complex Computer Systems, IEEE CS Press,
1999, pp. 119-129) discuss generating test inputs from state-
based specifications.
Using Specifications for Test Case Generation: A Short Survey
Adaptive random testing and object distance
To improve on purely random strategies, adaptive
random testing (ART)5 attempts to space out values evenly
across their domains. This applies in particular to integers.
In object-oriented programming, many interesting inputs
are objects, with no immediate notion of “evenly spaced
out.” We introduced object distance6 to extend ART by en-
suring that a set of objects is representative. The distance
between objects o1 and o2 is a normalized weighted sum
of three properties:
• distance between the types, based on their distance
in the inheritance graph and the number of distinct
features;
• distance between the immediate values of the objects
(primitive values or references); and
• for matching fields, object distance computed recursively
with an attenuation factor.
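A drastically simplified rendition of this distance might look as follows. The weights, the 0/1 type distance, and the normalization are assumptions for illustration; the published definition uses inheritance-graph distance and feature counts.

```python
def object_distance(o1, o2, attenuation=0.5, depth=0, max_depth=3):
    """Toy object distance: normalized sum of type distance, field-value
    distance, and attenuated recursive distance over matching fields."""
    if depth > max_depth:
        return 0.0
    # type distance: 0 for identical types, 1 otherwise (simplification)
    type_d = 0.0 if type(o1) is type(o2) else 1.0
    fields1 = getattr(o1, "__dict__", {})
    fields2 = getattr(o2, "__dict__", {})
    shared = set(fields1) & set(fields2)
    value_d = 0.0
    for f in shared:
        v1, v2 = fields1[f], fields2[f]
        if isinstance(v1, (int, float)) and isinstance(v2, (int, float)):
            value_d += abs(v1 - v2) / (1.0 + abs(v1 - v2))   # normalized
        elif hasattr(v1, "__dict__") and hasattr(v2, "__dict__"):
            # matching object fields: recurse with attenuation
            value_d += attenuation * object_distance(v1, v2,
                                                     attenuation, depth + 1)
        elif v1 != v2:
            value_d += 1.0
    if shared:
        value_d /= len(shared)
    return (type_d + value_d) / 2.0

class Point:
    """Toy class used to illustrate the distance."""
    def __init__(self, x, y):
        self.x, self.y = x, y
```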
Our measurements show that ART with object distance
uncovers new faults but generally does not find faults
faster than the basic random strategy, and misses some
faults found by this strategy. It thus complements rather
than replaces the basic random strategy.
Minimization
AutoTest preserves all failed tests, automatic or manual,
for replay in regression testing.
Preserving the entire original scenario is generally im-
practical, since the execution may involve many irrelevant
instructions. AutoTest’s minimization algorithm attempts
to derive a shorter scenario that still triggers the failure.
The idea is to retain only the instructions that involve the
target and arguments of the failing routine. Having found
such a candidate, AutoTest executes it to check that it re-
produces the failure; if it does not, AutoTest retains the
original. While theoretically not complete, the algorithm
is sound since its resulting scenario always triggers the
same failure. In practice it is near-complete, often reducing
scenario size by several orders of magnitude.7
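The soundness argument above can be sketched directly: keep only the instructions involving the failing call’s target, re-execute, and fall back to the original scenario if the failure is not reproduced. All names below are illustrative, not AutoTest’s actual representation.

```python
def minimize(instructions, target, reproduces_failure):
    """instructions: list of (objects_involved, action) pairs.  Keep only
    those touching the failing call's target, then confirm by re-execution;
    sound (the result always triggers the failure) but not complete."""
    candidate = [ins for ins in instructions if target in ins[0]]
    return candidate if reproduces_failure(candidate) else instructions

# Illustrative scenario: the failure needs only the two instructions on "a".
SCENARIO = [
    ({"a"}, "create a"),
    ({"b"}, "create b"),
    ({"b"}, "grow b"),       # irrelevant to the failure
    ({"a"}, "poke a"),       # the failing call
]

def reproduces(seq):
    actions = [action for _, action in seq]
    return "create a" in actions and "poke a" in actions
```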
Boolean queries
A promising strategy, comparable to techniques used for
model checking, follows from the observation that classes
often possess a set of argument-less Boolean-valued queries
on the state: “is_overdraft” for a bank account; “is_empty”
for any container structure; “after,” stating that the cursor is
past the last element, for a list with cursors. We investigated
a Boolean query conjecture:8 The argument-less Boolean
queries of a well-written class yield a partition of the cor-
responding object state space that helps testing strategies.
The rationale for this conjecture is that such queries
characterize the most important divisions of an object’s
possible states: An account is overdraft or not, it is open
normal mechanism for creating instances, satisfying the
invariant: creation procedures (constructors). The steps
are as follows:
• Choose a creation procedure (constructor).
• Choose arguments, if needed, with the strategies defined
below for routine calls. Some of these arguments
may be objects, requiring recursive application of the
strategy (selection from pool or creation).
• Create the object and call the procedure.
Any object this algorithm creates at any stage is added
to the pool, contributing to diversification. Any failure
of these operations is logged, even if the operation is not
explicitly part of the requested test. The purpose of test-
ing is to cause failures; it does not matter how: The end
justifies the means.
Besides objects, a call may need primitive values of
types such as INTEGER or CHARACTER. The current strat-
egy uses
• distinguished values preset for each type such as, for
integers: 0, minimum and maximum integers, ±1,
and so on; and
• other values from the range, selected at random.
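The two-part strategy just listed can be sketched as follows. The particular constants, the 32-bit range, and the 0.5 frequency are assumptions for illustration, not AutoTest’s actual defaults.

```python
import random

INT_MIN, INT_MAX = -2**31, 2**31 - 1   # assuming a 32-bit INTEGER type

DISTINGUISHED = {
    int: [0, 1, -1, INT_MIN, INT_MIN + 1, INT_MAX, INT_MAX - 1],
    str: ["", "a", "\n"],
}

def pick_value(typ, rng, p_distinguished=0.5):
    """With a preset frequency pick a distinguished boundary value,
    otherwise draw a random value from the type's range."""
    if rng.random() < p_distinguished:
        return rng.choice(DISTINGUISHED[typ])
    if typ is int:
        return rng.randint(INT_MIN, INT_MAX)
    return "".join(chr(rng.randint(32, 126))
                   for _ in range(rng.randint(0, 8)))
```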
This approach may appear simplistic. We are indeed
investigating more advanced policies. We have learned,
however, that in devising testing strategies sophisticated
ideas do not necessarily outperform simpler approaches.1
The main measure of effectiveness for a testing strate-
gy—at least if we do not rank faults by risk level, but treat
all faults as equally important—is the fault count function
fc(t), the number of faults found in t seconds of testing. A
“smart” strategy’s ability to find more faults or find them
faster can be outweighed by a longer setup time. It is es-
sential to submit any idea, however attractive, to objective
evaluation.
Figure 1. Test Generation’s automated testing process.
Figure 2a shows the state after a failure in a bank ac-
count class, with an incorrect implementation of “deposit”
causing a postcondition violation when a user attempts to
withdraw $100 from an account with a balance of $500.
The lower part of the figure shows the source code of the
routine “withdraw,” containing an erroneous postcondition
tagged “withdrawn”: The plus should have been a minus.
Execution causes the postcondition violation shown at the
top part of the figure. The message is the normal EiffelStu-
dio reaction to a postcondition violation, with the debugger
showing the call stack.
Test Extraction’s innovation is to turn this failure au-
tomatically into a test case. Figure 2b shows an example
of an extracted test, including the different components
necessary to reproduce the original exception: “test_withdraw”
calls the routine “withdraw,” and “context” describes
the target object’s state.
Subsequent test executions will display the status of the
extracted test, which initially fails, as shown in Figure 2c.
Once the postcondition has been corrected, the test will
pass and the status will turn green.
Minimization allows AutoTest to record and replay many
such violations. The key idea is that it is not necessary to
replay the program execution as it actually happened; as
any failure is the result of calling a routine on a certain
object in a certain object structure, it suffices to record
that structure and, when replaying, to call the routine on
the target object.
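The record-and-replay idea can be sketched in Python. Everything here is hypothetical (the Account class reproduces the paper’s erroneous “withdrawn” postcondition; the snapshot mechanism is a stand-in for AutoTest’s recorded object structure): on failure, record the target’s state, the routine, and the arguments; replaying re-creates that structure and re-issues the single call rather than re-running the whole original execution.

```python
import copy

class ExtractedTest:
    """Snapshot of a failing call: target state, routine name, arguments."""
    def __init__(self, context, routine_name, args):
        self.context = context
        self.routine_name = routine_name
        self.args = args

    def replay(self, cls):
        """Re-create the recorded object structure and re-issue the call."""
        target = cls.__new__(cls)
        target.__dict__.update(copy.deepcopy(self.context))
        try:
            getattr(target, self.routine_name)(*self.args)
            return "pass"
        except AssertionError:
            return "fail"

def extract_on_failure(obj, routine_name, *args):
    """Run one call; on a contract violation, return an extracted test."""
    snapshot = copy.deepcopy(obj.__dict__)
    try:
        getattr(obj, routine_name)(*args)
        return None
    except AssertionError:
        return ExtractedTest(snapshot, routine_name, args)

class Account:
    """Toy account whose 'withdrawn' postcondition has the paper's bug."""
    def __init__(self, balance=0):
        self.balance = balance
    def withdraw(self, amount):
        old = self.balance
        self.balance = old - amount
        assert self.balance == old + amount, "withdrawn"  # + should be -
```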
As software evolves, a test may become inapplicable.
To address this situation Test Extraction will check, before
replaying the test, that both the object’s invariant and the
routine’s precondition hold. If either does not, it would
make no sense to run the test; Test Extraction marks it
invalid.10
EXAMPLE SESSION WITH AUTOTEST
Originally an independent tool, AutoTest is now simply
the testing part of the EiffelStudio environment. To start
the following example session, just launch EiffelStudio.
While the functionalities are the same across all supported
platforms, the user interface, shown for Windows in the
screenshots in Figure 3, will have a different look and feel
on, for example, Linux, Solaris, or Mac OS X.
To perform automatic tests on the application class
BANK_ACCOUNT and the library classes STRING and
or closed, it bears interest or not. Combining them
yields a representative partition of the space set,
containing dramatically fewer elements. With a
typical class, considering all possible instance
states is intractable, but combining n Boolean
queries yields 2^n possibilities, or abstract query
states; in our experience, n is seldom more than
10—for example, only 25 percent of the 217 classes
in the EiffelBase 6.4 library have more than 10
argument-less Boolean queries. The algorithm may limit
this number further by considering only combinations
that satisfy the invariant.
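The abstract-state idea can be sketched concretely. The class and its two queries below are illustrative: n argument-less Boolean queries partition the object state space into at most 2^n abstract states, and Boolean query coverage is the fraction of those states a test suite reaches.

```python
class AccountState:
    """Toy class with two argument-less Boolean queries."""
    def __init__(self, balance, is_open):
        self.balance = balance
        self._open = is_open
    def is_overdraft(self):
        return self.balance < 0
    def is_open(self):
        return self._open

QUERIES = ("is_overdraft", "is_open")

def abstract_state(obj):
    """The tuple of Boolean query values: one of at most 2^n states."""
    return tuple(getattr(obj, q)() for q in QUERIES)

def bqc(objects):
    """Boolean query coverage of the set of objects a suite reached."""
    reached = {abstract_state(o) for o in objects}
    return len(reached) / 2 ** len(QUERIES)
```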
The conjecture suggests looking for a test suite that maximizes Boolean query coverage (BQC): the percentage of abstract states exercised. While this strategy is not yet a standard component of AutoTest, our experiments suggest that it may be useful. It involves trimming abstract query states through a constraint solver, then using a theorem prover for clauses involving noninteger queries. In experiments so far, the strategy yields a BQC close to 100 percent with minimal invariant adaptation; routine coverage increases from about 85 percent for basic AutoTest to 99 or 100 percent, and the number of faults found increases significantly.
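As a metric, BQC is simple to state: the fraction of invariant-satisfying abstract states that a suite actually reaches. The following Python sketch is illustrative only; the state encoding and function names are hypothetical, not AutoTest's.

```python
# Illustrative sketch of Boolean query coverage (BQC) as defined above:
# the percentage of invariant-satisfying abstract query states that a
# test suite actually exercises during its runs.

def bqc(exercised_states, all_states):
    """exercised_states, all_states: collections of hashable abstract states."""
    reachable = set(all_states)
    hit = set(exercised_states) & reachable
    return 100.0 * len(hit) / len(reachable)

# Abstract states encoded as (is_open, is_overdrawn) tuples.
all_states = [(True, True), (True, False), (False, False)]
suite_states = [(True, False), (False, False)]   # states seen while testing
print(bqc(suite_states, all_states))             # about 66.7 percent
```

A BQC-maximizing generator would then bias object creation toward the states in `all_states` that the suite has not yet hit.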
Test generation results
Table 1 shows results of applying Test Generation (no
BQC) to the EiffelBase9 and Gobo (www.gobosoft.com)
data structure and algorithm libraries, widely used in
operational applications, and to an experimental library
providing complete specifications.
These results are typical of many more experiments. As the tested classes have different semantics and sizes in terms of various code metrics, the experiments appear representative of many problem domains. Since AutoTest is a unit testing tool and was used for this purpose in the experiments, we do not claim that these results are representative of the performance of contract-based random testing for entire applications or software systems.
TEST EXTRACTION
During development, programmers routinely execute
the program to check that it proceeds as expected. They
generally do not think of these executions as formal test
cases. If results are wrong or the execution otherwise fails,
they fix the problem and return to development; off goes
a potentially interesting test, which could have benefited
future regression testing. The programmers could create
a test case, but most of the time they will not find the task
worth the time—after all, they did correct the problem, or
at least they addressed the symptoms.
Test Extraction will create the test for developers and
give it the same status as any other manual or generated
test. Figure 2 provides an example.
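The capture-and-replay idea behind Test Extraction can be sketched as follows. This is an illustrative Python sketch of the mechanism described above, not AutoTest's implementation; all names (call_and_extract, replay, the toy deposit routine) are hypothetical.

```python
import copy

# Illustrative sketch of Test Extraction as described above: when a routine
# call fails during development, record enough of the call to replay it
# later as a regression test, instead of letting the scenario be lost.

extracted_tests = []   # the project's growing pool of extracted tests

def call_and_extract(routine, target, *args):
    snapshot = copy.deepcopy(target)      # state needed to replay the call
    try:
        return routine(target, *args)
    except Exception as exc:
        # Off goes a potentially interesting test -- keep it instead.
        extracted_tests.append((routine, snapshot, args, type(exc)))
        raise

def replay(test):
    routine, snapshot, args, expected_exc = test
    try:
        routine(copy.deepcopy(snapshot), *args)
    except expected_exc:
        return True        # original failure reproduced
    return False           # fault no longer manifests

def deposit(account, amount):
    if amount < 0:
        raise ValueError("negative deposit")   # stands in for a contract violation
    account["balance"] += amount

account = {"balance": 100}
try:
    call_and_extract(deposit, account, -5)
except ValueError:
    pass
print(replay(extracted_tests[0]))   # True: the failure is reproducible
```

Once recorded, such a test has the same status as any manual or generated test and can run in every future regression campaign.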
Table 1. Test generation results.

Tested library          Faults   Percent failing routines   Percent failed tests
EiffelBase              127      6.4 (127/1,984)            3.8 (1,513/39,615)
Gobo libraries          26       4.4 (26/585)               3.7 (2,928/79,886)
Specification library   72       14.1 (72/510)              49.6 (12,860/25,946)
COVER FEATURE, Computer, September 2009
Figure 2. Test Extraction example: (a) catching a contract violation, (b) turning this failure automatically into a test case, and (c) using the extracted test to reproduce the original exception.
To perform automatic tests on the application class BANK_ACCOUNT and the library classes STRING and LINKED_LIST, launch the "New Eiffel test" wizard, as shown in Figure 3a. In the first pane, choose the radio button labeled "Synthesized test using AutoTest." The last wizard window will ask you to specify AutoTest parameters, such as the classes to be tested and how long random testing should be performed. AutoTest will test the classes listed and any others on which they depend directly or indirectly.
Figure 3. Example session using AutoTest: (a) "New Eiffel test" wizard, (b) sample AutoTest statistics, and (c) minimized witness.
The development benefited from discussions with numerous
people, in particular Gary Leavens, Peter Müller, Manuel Oriol,
Alexander Pretschner, and Andreas Zeller. Bernd Schoeller
suggested the use of Boolean queries to reduce state spaces,
which Lisa (Ling) Liu studied experimentally. Test Extraction,
as developed by Andreas Leitner, was originally called CDD
(Contract-Driven Development). We presented an earlier version of this article, on Test Generation only, at SOFSEM 2007: B. Meyer et al., "Automatic Testing of Object-Oriented Software," Proc. 33rd Conf. Current Trends in Theory and Practice of Software Development, LNCS 4362, Springer, 2007, pp. 114-129.
Design by Contract is a trademark of Eiffel Software.
References
1. B. Meyer, "Seven Principles of Software Testing," Computer, Aug. 2008, pp. 99-101.
2. IEEE Std. 610.12-1990, IEEE Standard Glossary of Software Eng. Terminology, IEEE, 1990.
3. B. Meyer, Object-Oriented Software Construction, 2nd ed., Prentice Hall, 1997.
4. P. Chalin, "Are Practitioners Writing Contracts?," Rigorous Eng. Fault-Tolerant Systems, LNCS 4157, Springer, 2006, pp. 100-113.
5. T.Y. Chen, H. Leung, and I. Mak, "Adaptive Random Testing," Proc. 9th Asian Computing Science Conf. (Asian 04), LNCS 3321, Springer, 2004, pp. 320-329.
6. I. Ciupa et al., "ARTOO: Adaptive Random Testing for Object-Oriented Software," Proc. 30th Int'l Conf. Software Eng. (ICSE 08), ACM Press, 2008, pp. 71-80.
7. I. Ciupa et al., "On the Predictability of Random Tests for Object-Oriented Software," Proc. 2008 Int'l Conf. Software Testing, Verification, and Validation (ICST 08), IEEE CS Press, 2008, pp. 72-81.
8. I. Ciupa et al., "Experimental Assessment of Random Testing for Object-Oriented Software," Proc. 2007 Int'l Symp. Software Testing and Analysis (ISSTA 07), ACM Press, 2007, pp. 84-94.
9. B. Meyer, Reusable Software: The Base Object-Oriented Component Libraries, Prentice Hall, 1994.
10. A. Leitner, "Contract Driven Development = Test Driven Development - Writing Test Cases," Proc. 6th Joint Meeting of the European Software Eng. Conf. and the ACM SIGSOFT Symp. Foundations of Software Eng. (ESEC-FSE 07), ACM Press, 2007, pp. 425-434.
11. L. Liu, B. Meyer, and B. Schoeller, "Using Contracts and Boolean Queries to Improve the Quality of Automatic Test Generation," Tests and Proofs, LNCS 4454, Springer, 2007, pp. 114-130.
12. A. Leitner et al., "Efficient Unit Test Case Minimization," Proc. 22nd IEEE/ACM Int'l Conf. Automated Software Eng. (ASE 07), ACM Press, 2007, pp. 417-420.
13. I. Ciupa et al., "Finding Faults: Manual Testing vs. Random+ Testing vs. User Reports," Proc. 19th Int'l Symp. Software Reliability Eng. (ISSRE 08), IEEE Press, 2008, pp. 157-166.
Bertrand Meyer is Professor of Software Engineering at
ETH Zurich (Swiss Federal Institute of Technology), Zurich,
Switzerland, and cofounder and Chief Architect of Eiffel
Software, based in Santa Barbara, Calif. His latest book is
Touch of Class: An Introduction to Programming Well.
By default, AutoTest will report result statistics, contract violations, and other failures in HTML, as in Figure 3b. All three classes under test are marked in red, indicating that at least one feature test triggered a failure in each. Expanding the tree node shows the offending features: for BANK_ACCOUNT, "default_create," "balance," and "deposit" were successful (green), but "withdraw" had failures. Clicking it displays the failure details. This includes a witness for each failure: a test scenario, generated by the tool, which triggers the failure. Scrolling shows the first witness's offending instructions.
The witness reproduces the postcondition failure (resulting from a fault planted for illustration) encountered using Test Extraction. This means that AutoTest found the same erroneous postcondition as the manual process.
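Figure 3c refers to a "minimized witness." One simple way to shrink a witness is to greedily drop instructions while the failure still reproduces; the sketch below is illustrative only, and the actual AutoTest minimizer (reference 12) is more sophisticated. All names here are hypothetical.

```python
# A witness is a generated instruction sequence that triggers a failure.
# Simple minimization: greedily drop instructions whose deletion still
# preserves the failure, keeping only the instructions that matter.

def minimize(witness, still_fails):
    """Remove instructions whose deletion preserves the failure."""
    i = 0
    while i < len(witness):
        candidate = witness[:i] + witness[i + 1:]
        if still_fails(candidate):
            witness = candidate     # instruction i was irrelevant
        else:
            i += 1                  # instruction i is needed; keep it
    return witness

# Toy failure: the scenario fails iff it creates an account and overdraws it.
def still_fails(instrs):
    return "create_account" in instrs and "withdraw_too_much" in instrs

witness = ["create_account", "deposit", "set_owner", "withdraw_too_much"]
print(minimize(witness, still_fails))   # ['create_account', 'withdraw_too_much']
```

A short witness is far easier for a developer to read and debug than the raw random scenario that first exposed the fault.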
Figure 3c shows a real fault, in the routine "adapt" of the library class STRING. The seldom-used "adapt" serves to initialize a string from a manifest value "Some Characters," as an instance not of STRING but of some descendant MY_STRING. The witness reveals that "adapt" is missing a precondition requiring a nonvoid argument. Without it, "adapt" accepts a void but passes it on to "share," which demands a nonvoid argument. The fault, since corrected, was first uncovered by AutoTest.
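The shape of this fault translates directly into other languages: a routine without a nonvoid precondition lets the bad value travel deep into the implementation, where the resulting failure blames the wrong routine. The following Python analogue is illustrative only; the names mimic, but are not, the Eiffel routines.

```python
# Python analogue of the "adapt" fault described above: a routine without
# a nonvoid (non-None) precondition accepts None and passes it on to an
# inner routine that requires a real value. Names are hypothetical.

def share(other):
    # Inner routine: demands a nonvoid argument.
    assert other is not None, "share: nonvoid argument required"
    return list(other)

def adapt_missing_precondition(s):
    return share(s)            # None slips through, fails deep inside 'share'

def adapt_fixed(s):
    # The corrected version states its own precondition at the boundary.
    assert s is not None, "adapt: nonvoid argument required"
    return share(s)

try:
    adapt_missing_precondition(None)
except AssertionError as exc:
    print(exc)    # blame lands on 'share', far from the real mistake
```

With the precondition in place, the violation is reported at the caller's boundary, which is exactly the diagnostic the contract is meant to provide.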
We have used the AutoTest framework
to perform large-scale experiments,11-13 totaling tens of thousands of hours of CPU time, that investigate such questions as: How does the number of faults found by random testing evolve over time? Are more faults uncovered as contract violations or through other exceptions? How predictable is random testing? Are there more faults in the contracts or in the implementation? How do uncovered faults compare to those found by manual testing and by software users?
The AutoTest tools provide significant functional help. In addition, they yield a better understanding of the challenges and benefits of tests. Testing will never be an exact science; it is an imperfect approach that becomes useful when more ambitious techniques such as static analysis and proofs let us down. If we cannot guarantee the absence of faults, we can at least try to find as many as possible, and make the findings part of a project's knowledge base forever, replaying the scenarios before every new release to ensure old faults don't creep back in. While more modest than full verification, this goal is critical for practical software development.
Acknowledgments
The original idea for AutoTest came from discussions with
Xavier Rousselot. Karine Bezault started the first version. Per Madsen provided useful suggestions on state partitioning.
Bertrand Meyer's latest book, Touch of Class: An Introduction to Programming Well (Springer, 2009), is based on the introductory programming course at ETH. He is a Fellow of the ACM and president of Informatics Europe. Contact him at [email protected] ethz.ch.
Arno Fiva was an engineer at Eiffel Software and is now completing his MS at the Chair of Software Engineering at ETH Zurich. His research focuses on automated software testing. Contact him at [email protected]
Ilinca Ciupa was a research assistant at the Chair of Software Engineering at ETH Zurich, where she received a PhD in automated software testing. She is now an automation engineer at Phonak in Switzerland. Contact her at ilinca.[email protected]
Andreas Leitner was a researcher at the Chair of Software Engineering at ETH Zurich, where he received a PhD in automated software testing, and is now an engineer at Google's Zurich office. Contact him at [email protected] google.com.
Yi Wei was an engineer at Eiffel Software and is now a research assistant at the Chair of Software Engineering at ETH Zurich, where he is working toward his PhD in automated software testing and self-repairing programs. Wei received an MS in engineering from Wuhan University in China. Contact him at [email protected]
Emmanuel Stapf is a senior software developer at Eiffel Software, where he leads the EiffelStudio development team. His research interests include compiler development, integrated development environments, and testing. Stapf received an engineer's degree from ENSEEIHT in Toulouse, France. Contact him at [email protected]
Week 3 Assignment 2 - Context of the Problem (Re-
submission).docx
1
Assignment 2: Context of the Problem
Jessica Scott
Dr. Teresa Wilburn
RES 531: Research Methods
January 26, 2014
Describe the history of the problem and why it is a problem:
Automated testing is a contemporary concept in software testing. Software testers subject every new development project to automatic tests. New software used in automobiles, machinery, and industrial plant processes has auto test frameworks. Auto test frameworks use programs with built-in software for self-verification. They auto-detect complications in the main software that could be sources of system failures and inconvenient operations. Improving the integration of the testing tools relieves the test engineer of various manual tasks, so that his or her time and skills can be better allocated, for example, to making model-based testing (including the creation of adequate models) an integral part of the development and test lifecycle (http://nl.atos.net). In most instances, personnel operating machines detect complications in tools only after they cause defects. Correcting mechanical complications in engineering machines and systems necessitates design for testability.
Identify where and for whom it is a problem:
Past engineering innovations, machines, and systems lacked design for testability. Early machines easily failed to function efficiently because of mechanical complications. Test engineers used manual testing techniques, but defects still existed in most software (Mikhail, Berndt & Kandel, 2010). Manual testing involved test engineers counterchecking any defect that could exist in the software. Test engineers used various combinations of inputs to detect faults in the software (Moreira & Werkmann, 2010). Engineers recorded observed results and compared them against the expected outcome. This necessitated the invention of auto test.
Automating tests is the best way to increase the effectiveness, efficiency, and coverage of software testing. Automated testing provides various benefits to developers: coverage to detect bugs and errors, both early and later during development, which significantly reduces the cost of failure; time savings through repeatability and earlier verification; and improved resource productivity. Implementing test automation in the mobile application development process is the best way to gain these benefits and migrate development to effective use of resources and time (http://testdroid.com). Automated testing tools can play back recorded actions and improve coverage of the testing process. The tools compare actual results to the expected results. Repeating automatic tests is easier than manual testing. Automated testing, therefore, is successful in enhancing development projects. It saves time and improves the accuracy of development projects (www.infosys.com).
Auto test software test engineers apply a collection of three complementary functionalities. The first functionality of auto test is test generation. Test generation creates and runs test cases. Automatic test cases do not require the intervention of test engineers. They detect mechanical complications in engineering machines and systems. The second functionality is test extraction. Test extraction automatically turns failed executions into test cases, and it produces test cases even in subsequent failures. The third functionality of auto test is the integration of manual tests: auto test software should be compatible with manual testing tools.
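The first of these functionalities, test generation, can be illustrated with a short sketch: draw random inputs, call the routine only when its precondition holds, and report a fault whenever the postcondition fails. This is an illustrative Python sketch under stated assumptions, not any framework's actual algorithm; the routine and contract names are hypothetical, and a fault is planted on purpose.

```python
import random

# Illustrative sketch of contract-based random test generation: random
# inputs are drawn, the routine is called only when its precondition
# holds, and a fault is reported when the postcondition is violated.

def withdraw(balance, amount):
    return balance - amount * 2          # planted fault: doubles the amount

def pre(balance, amount):
    return 0 <= amount <= balance        # contract: cannot overdraw

def post(balance, amount, result):
    return result == balance - amount    # contract: exact deduction

def generate_tests(routine, pre, post, trials=1000, seed=7):
    rng = random.Random(seed)            # fixed seed: reproducible witnesses
    failures = []
    for _ in range(trials):
        balance = rng.randint(0, 100)
        amount = rng.randint(-10, 110)
        if not pre(balance, amount):
            continue                     # input outside the contract
        result = routine(balance, amount)
        if not post(balance, amount, result):
            failures.append((balance, amount, result))
    return failures

failures = generate_tests(withdraw, pre, post)
print(len(failures) > 0)   # True: random testing finds the planted fault
```

Each recorded failure is a witness: a concrete input that reproduces the contract violation without any intervention by a test engineer.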
Identify the purpose of your research on this problem:
Automatic testing is a means to improve the efficiency of engineering systems and machines. Initially, test engineers conducted manual testing of new software. Testing every new product, machine, or system is costly. Software developers expressed the necessity to develop programs that would conduct the testing automatically; therefore, automatic testing was developed.
Increased complexity of programs necessitates the application of integrated tests. The usability and quality of final test programs are factors of consideration in automatic test development. Market changes demand the development of quality software products. Time and again, forward-thinking
companies of all sizes and industries have successfully
demonstrated the power of automated functional testing to
thoroughly test, rapidly develop and reduce the cost of
delivering high-quality applications. Yet 85% of organizations
that attempt automation fail (http://www.borland.com). Changes
in the development cycle further complicate the role of software
developers working on auto tests. A central challenge is the
necessity to detect faults in automated software testing tools.
References:
Graham, D., & Fewster, M. (2012). Experiences of Test Automation: Case Studies of Software Test Automation. Upper Saddle River, NJ: Addison-Wesley.
Helppi, V. (2013). Best Practice #1: Increase Efficiency and Productivity with Test Automation. Retrieved February 16, 2014, from http://testdroid.com/testdroid/5851/increase-efficiency-and-productivity-with-test-automation.
Mikhail, R. F., Berndt, D. J., & Kandel, A. (2010). Automated Database Applications Testing: Specification Representation for Automated Reasoning. Singapore: World Scientific.
Moreira, J., & Werkmann, H. (2010). An Engineer's Guide to Automated Testing of High-Speed Interfaces. Norwood: Artech House.
How to Successfully Automate the Functional Testing Process. Retrieved February 16, 2014, from http://www.borland.com/_images/Silk-Test_WP_How-to-successfully-automate-the-functional-testing-process_tcm32-205735.pdf.
Integrating Model-based Testing in a Project's Tool Landscape. Retrieved February 16, 2014, from http://nl.atos.net/content/dam/nl/documents/atos-wp-modelbasedtesting-tam.pdf.
Realizing Efficiency & Effectiveness in Software Testing through a Comprehensive Metrics Model. Retrieved February 16, 2014, from http://www.infosys.com/engineering-services/white-papers/Documents/comprehensive-metrics-model.pdf.
Week 4 Assignment 3 - The Problem Statement.docx
1
Assignment 3: The Problem Statement
Why Automated Software Testing is Preferred over Manual
Jessica Scott
Dr. Teresa Wilburn
RES 531: Research Methods
February 2, 2014
Why Automated Software Testing is Preferred over Manual
Testing of software has become a vital requirement for all computer users in the modern world due to the increasing amount of fake and fabricated software that has flooded the computer software market. Fake software has become so common that all computer software users are compelled to test their software to ensure that it is genuine. Consequently, there has been an increase in the number of software tests carried out by computer software firms. This has led to the introduction of test automation, which is meant to enhance software testing. Test automation not only increases the speed of software testing but also adds tests that would not be possible with manual testing (Hayes, p. 241, 2004). It has transformed software testing by automating some laborious testing tasks and introducing new and better ones. Therefore, this proposal will address the topic of automated testing and why it is preferred to manual testing.
References:
Hayes, L. G. (2004). The automated testing handbook.
Richardson, TX: Software Testing Institute.
Week 6 Assignment 4 - Research Questions.docx
1
Assignment 4: Research Questions
Jessica Scott
Dr. Teresa Wilburn
RES 531: Research Methods
February 16, 2014
The main purpose of the study is to find out why automated software testing is preferred over manual testing. To guide this paper toward that objective, the researcher is obliged to hinge the study on a reasonable hypothesis. Both automated and manual testing approaches are tailored to meet the demands of computing and processing, albeit in different ways. This implies that the two testing methodologies should be given equal consideration in any research comparing them (Hayes, 2004).
The objectives of this study are designed to ensure that the purpose of the study is met. This includes consideration of the research hypothesis and the desired results of the study. The purpose of the study is quite broad; however, it can be achieved through the following goals and objectives:
· To determine the major differences between automated and manual testing;
· To find out the major factors considered in evaluating which, between manual and automated testing, would be most preferable in handling different situations; and
· To determine how manual and automated testing perform with regard to time consumed, initial and long-term investment in human resources, reliability, and programming.
Since test automation is an investment, it is rare that the testing effort will take less time or resources in the current release. Sometimes there is a perception that automation is easier than testing manually; it actually makes the effort more complex, since there is now another software development effort. Automated testing does not replace good test planning, the writing of test cases, or much of the manual testing effort (www.methodsandtools.com). This cannot be verified without clear-cut questions on the suitability of the two approaches; effectively, the necessity for research questions on the same cannot be ignored.
A number of research questions could be developed to help
define and guide the research into meeting its purposes. The
following research questions would apply to this study:
1. What are the major differences between automated and
manual testing? How are these differences reflected in the
results or end of a testing practice?
2. What are the major factors considered in evaluating which,
between manual and automated testing, would be most
preferable in handling different situations?
3. How do manual and automated testing perform with regard to time consumed, initial and long-term investment in human resources, reliability, and programming?
The above research questions would be very instrumental in
delivering on the purposes of the research. The first question
seeks to build the ground for the research. For purposes of
getting to understand what automated and manual testing are,
the first question would bring the reader and the researcher to
terms with various elements regarding testing processes and
desired results. With the answering of this question, the reader
would get to understand and appreciate the role played by the
assumptions made on the preference of automated testing over
manual testing (Dustin, Garrett & Gauf, 2009).
The second question seeks to develop the framework from which analysis of the main purpose of the research can take place. In this regard, the study uses qualitative approaches to find out the main aspects users consider to constitute an effective testing mechanism (Leedy & Ormrod, 2013).
The third question sums up the research through direct analysis of a number of factors considered in computing and testing. These factors (time consumed, initial and long-term investment in human resources, reliability, and programming) are considered sufficient to test the assumption that automated testing is preferred by users over manual testing. By evaluating these factors, the research would establish the edge that each testing approach has with regard to the main functions of testing and the desired results.
References:
Hayes, L. G. (2004). The automated testing handbook. Richardson, TX: Software Testing Institute.
Leedy, P. D., & Ormrod, J. E. (2013). Practical research: Planning and design. Boston: Pearson.
Dustin, E., Garrett, T., & Gauf, B. (2009). Implementing automated software testing: How to save time and lower costs while raising quality. Upper Saddle River, NJ: Addison-Wesley.
Zallar, K. (n.d.). Practical Experience in Automated Testing. Retrieved February 15, 2014, from http://www.methodsandtools.com/archive/archive.php?id=33.
Week 7 Assignment 5- Significance of the Study.docx
1
Assignment 5: Significance of the Study
Jessica Scott
Dr. Teresa Wilburn
RES 531: Research Methods
February 23, 2014
This study is of great significance as far as the acquisition of knowledge on automated and manual testing methods is concerned. Through this study, the researcher would appreciate the role of testing methodologies and how they can be used differently by both individuals and organizations. By carrying out this study, the benefits of the automated testing approach and its comparison with the manual testing method would be clearly defined. Appreciation of the findings from this study would inform the decisions of the researcher, together with those of other individuals, with regard to the choice of testing methods.
This study will be of great help to IT experts and technological innovators. Automation is indeed one of the most important tools used in facilitating effective and efficient operations. Whereas automated testing is good for business operations, technologists can always find ways to develop better automated testing systems. This paper would provide good ground for designers to develop better systems, through acknowledging the gap between the desired characteristics of testing methodologies and the existing designs (Leedy & Ormrod, 2013). Even with increased automation, the design of new automated systems must ensure that testing processes keep improving. This significance is in recognition that technology is never static.
Researchers on testing methods would greatly benefit from this study. The need for secondary information on this research topic can never be underestimated. Whereas primary research is preferred for giving credible information about any research topic, including that of testing methods, secondary sources of information are good for making direct and indirect inferences from past findings (Hayes, 2004). This study will in effect provide a framework to which other studies can refer, and against which they can compare and contrast their findings.
References:
Hayes, L. G. (2004). The automated testing handbook.
Richardson, TX: Software Testing Institute.
Leedy, P. D., & Ormrod, J. E. (2013). Practical research:
Planning and design. Boston: Pearson.
Week 8 Assignment 6 Research Design & Methodology.docx
1
Assignment 6: Research Design & Methodology
Jessica Scott
Dr. Teresa Wilburn
RES 531: Research Methods
March 2, 2014
Qualitative research is a type of research characterized by its aims, which correspond to understanding some aspect of social life, and by its methods, which yield words rather than numbers. It is a widely used approach, although it has setbacks: the samples used may be small and not necessarily representative of the broader population, which makes it difficult to know how far the results can be generalized. The results may also lack rigor, and it can be complicated to tell how far the findings depend on the researcher's opinion. Consider, for example, the question of whether people want to lobby for better access to health care in a location where user fees have been introduced. A cross-sectional survey would tell the researcher that a set percentage of the population does not have access to care; a qualitative method can answer such a question through interviews or focus groups. Qualitative methods become vital when little is known about the topic to be researched, and they help in generating the hypothesis behind the study (Ericsson & Simon, 1984).
The nature of testing technology provides room for either qualitative or quantitative approaches to research design. In this research, qualitative approaches would be preferred. In this regard, descriptions of the interrelations between usage of manual and automated software testing would be prioritized over the frequencies (quantitative data) provided by various measures of accuracy and efficiency. Effectively, the study would be built around the need to get a comprehensive understanding of the uses and gratifications of users of manual and automated testing, rather than being inclined toward statistical figures, as would have been the case with a quantitative study. Description would be given priority over frequency. Whereas the efficiency of a testing method lies in how the method's design helps realize the goals of testing, such efficiencies can be noted through analysis of the descriptions given by users of these testing methods.
The sampling methodology for this study was a stratified random sample of residents in these regions who are consumers of both automated and manual testing methodologies. By opting for stratified sampling, the researcher improved the representativeness of the sample by reducing sampling error. Buber, Gadner & Richards (2004) are of the opinion that random sampling is the best method for obtaining a representative sample. This does not imply that such a methodology guarantees a 100% representative sample; however, random sampling provides a higher probability of accurate representation, particularly where the targeted population or audience has common or shared characteristics.
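Stratified sampling with proportional allocation, as described above, can be sketched in a few lines. The sketch below is illustrative; the strata names and sizes are hypothetical, not the study's actual firms or respondent counts.

```python
import random

# Illustrative sketch of stratified random sampling with proportional
# allocation: sample from each stratum in proportion to its size rather
# than from the pooled population, reducing sampling error.

def stratified_sample(strata, total_n, seed=42):
    """strata: dict mapping stratum name -> list of members."""
    rng = random.Random(seed)
    population = sum(len(members) for members in strata.values())
    sample = []
    for name, members in strata.items():
        k = round(total_n * len(members) / population)   # proportional share
        sample.extend((name, m) for m in rng.sample(members, k))
    return sample

# Hypothetical strata: three firms of different sizes.
strata = {
    "firm_A": [f"A{i}" for i in range(60)],
    "firm_B": [f"B{i}" for i in range(30)],
    "firm_C": [f"C{i}" for i in range(30)],
}
sample = stratified_sample(strata, total_n=12)
print(len(sample))   # 12 respondents: 6 from A, 3 from B, 3 from C
```

Because every stratum is guaranteed its proportional share, no firm can be accidentally over- or under-represented the way it could be under simple random sampling of the pooled population.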
The research would rely on the use of questionnaires. A total of 120 respondents were given questionnaires, answering a number of questions regarding their experiences in the use of both manual and automated testing methods. Qualification as a respondent required having used both manual and automated testing methodologies. For purposes of efficiency, the study would concentrate on employees from top research and consultancy firms in the US. The choice of firms was based on their transitions from manual to automated systems. Questionnaires would be issued to twenty respondents identified through stratified sampling in these organizations.
The analysis and conclusions drawn from the research would rely on both primary and secondary data. Whereas the researcher takes the primary data as envisaged by the administration of questionnaires, the efficiency of the research can only be guaranteed where the findings and results from the questionnaires can be supported by various theories and concepts. Consistency of the questionnaire results with the theoretical underpinnings would, to a great extent, help in arriving at reasonable conclusions on the suitability of the automated and manual testing methods (Wertz, Charmaz, McMullen, Josselson, Anderson, & McSpadden, 2011). The results would be valid if the questions answered in the questionnaire are reasonable and have the lowest levels of conflict.
References:
Ericsson, K. A., & Simon, H. A. (1984). Protocol analysis: Verbal reports as data. Cambridge, MA: MIT Press.
Buber, R., Gadner, J., & Richards, L. (2004). Applying qualitative methods to marketing management research. UK: Palgrave Macmillan, pp. 141-156.
Wertz, F. J., Charmaz, K., McMullen, L. M., Josselson, R., Anderson, R., & McSpadden, E. (2011). Five ways of doing qualitative analysis: Phenomenological psychology, grounded theory, discourse analysis, narrative research, and intuitive inquiry. New York, NY: Guilford Press.
Week 8 Assignment 7 Organization of the Study.docx
1
Assignment 7: Organization of the Study
Jessica Scott
Dr. Teresa Wilburn
RES 531: Research Methods
March 9, 2014
Organization of the Study:
The need for a research paper that is well designed and allows for ease in interrelating information is what informs the design of this research topic. Whereas this research is to take the formal outline of a research paper, it would also be designed to be contingent on the research purposes and objectives. Effectively, all the topics and subtopics used in the research would be chosen with the intention of dealing with all matters concerning automated and manual testing. The researcher intends to use the following outline in the development of the final research paper. The final paper has three chapters: an Introduction, a Summary, and Conclusions.
Chapter One: Introduction
This chapter provides the basics of the research in terms of the
subject or topic and the trigger for researching on the topic. The
introduction provides a concise statement of the purpose of the
study, research sub questions, and the methodology used. Other
than this, the introduction would also have the justification for
the study and the significance of the same.
Chapter Two: Summary
In this chapter, the researcher would provide the most important substantive details of the research conducted in the previous research chapters. A DRP summary must convey a very clear and positive impression, providing a cogent grasp of the research
interest and findings. Through the research summary, a reader
would understand the major motivations and concepts being
advanced by the research. It is also through the summary that
the analysis of the research will take place.
Chapter Three: Conclusions
This chapter will present research conclusions based on the
consolidated summary of the analyses and findings as reported
in the summary section. The conclusions made would rely on
the inferences made to primary research, secondary research and
theoretical underpinnings of the research topic.
DRPManual.pdf
Directed Research Project
Manuscript Guidebook and Project Format
Grading
LRC Collection of Directed Research Projects
Information Literacy and Library Use
Directed Research and Human Subjects Policy Statement
Section 3: Stages of DRP Development
Research Question and Subquestions
Research Proposal (Chapter One)
Chapter 1: Introduction
Chapter 2: Literature Review
The Research Chapters
Summary and Conclusions Chapter
Draft of the Directed Research Project
The Final Project
The Defense
Section 4: The Directed Research Project Proposal
The Directed Research Project Proposal Flow Chart
Section 5: Characteristics of Research
Section 6: Planning and Designing the Research Proposal
Key Questions for Planning and Designing the DRP Proposal
Components of the DRP Proposal
Context of the Problem
Statement of the Problem
Research Question/Hypothesis and Subquestions/Subhypotheses
Significance of the Study
Research Design and Methodology
Premises of the Qualitative and Quantitative Research
Organization of the Study
Proposed Reference List
Section 7: Writing the Research Chapters
Citing the Literature
Research Sampling
Analysis and Findings
Section 8: The Final Chapter – Summary and Conclusions
Section 9: DRP Format Requirements
Section 10: Certificate of Approval Form
The research project will be monitored by a supervising faculty member and must be defended by the student in an oral examination. The oral defense may be conducted in a conference-style meeting of student, instructor, and second reader or technical advisor. A second type of defense allows students to present a synopsis of their projects during one of the last two scheduled class meetings. Students are encouraged to discuss the project with an instructor or Academic Advisor early in their programs. Students may not fulfill the directed research requirement by completing another course.
LEARNING OUTCOMES
Upon completion of the Directed Research Project, the student
will be able to:
1. Design, conduct, analyze, interpret, apply and write original
research studies applicable
to academic course content and/or the professional work
environment.
2. Present research results in a clear, organized and effective
oral delivery.
3. Identify and use major reference tools appropriately.
SECTION 2: Introductory Guidelines
INTRODUCTION
The Directed Research Project (DRP) is designed as a vehicle
for the graduate student to complete
a research project in his/her field of major concentration. THE
DRP IS NOT A TERM PAPER. The
research project is monitored through its completion by a
supervising seminar professor and, in some
instances, an additional faculty technical advisor. Students must
defend the completed DRP in a
meeting attended by the seminar professor and technical advisor
(if applicable).
REQUIRED REFERENCES
Leedy, P. D., & Ormrod, J. E. (2010). Practical research: Planning and design (9th ed.). New Jersey: Prentice Hall.
Strayer University. (2009). Directed research project: Manuscript guidebook and project format. Washington, D.C.: Author.
American Psychological Association. (2010). Publication manual of the American Psychological Association (6th ed.). Washington, D.C.: Author.
Note to Instructors: The American Psychological Association allows one desk copy per instructor; if you have ever received one before, a second copy cannot be obtained. Copies should be requested on an individual, not institutional, basis.
Maimon, E., Peritz, J., & Yancey, K. (2010). A writer's resource (3rd ed.). New York, NY: McGraw-Hill.
TEACHING STRATEGIES
The course will be conducted as an independent research
project, which will be monitored by
the instructor. The initial class sessions will be used to assist
the students to define their research
problems, develop their research proposals (Chapter 1 of the
DRP), and initiate their research
efforts. Subsequent Individual Project Review meetings between
the supervising faculty member
and each student will help address any individual concerns or
problems the student might be
having, and monitor the project’s progress. Instructors will
establish progress milestones and
requirements for draft writings to help the students in managing
their research projects. The final
DRP report will be defended by the student in a presentation to
the instructor, as a minimum,
with possible participation by a technical advisor and/or other
class members.
PEER REVIEWER/TECHNICAL ADVISOR
In those rare instances when the project is out of the scope of
the expertise of the instructor, a
technical advisor may be required to assist the instructor in
guiding and assessing the student’s
project.
CERTIFICATION AND ASSESSMENT FORM
Both the Supervising Instructor and the Peer
Reviewer/Technical Advisor (if applicable) will
also complete a certification and an assessment form that helps
to assess student learning.
The completed assessment form goes to the appropriate
Department Chair and the Office of
Institutional Research for compilation and analysis.
CLASS SIZE
Limit class size to fifteen (15).
COURSE REQUIREMENTS
Students are required to identify a problem within their major
fields that the research will intend
to solve. Chapter 1 will be written by the student detailing what
the completed DRP/research
will entail. This is to be submitted to the seminar professor by
the deadline prescribed by the
professor. Individual project reviews will be conducted with the
seminar professor. Every student
is expected to meet the scheduled times to review his/her
progress and to finalize the problem
statement and research questions. The help of a technical advisor
may be solicited at the discretion
and approval of the seminar professor.
REGISTRATION REQUIREMENTS AND RECOMMENDATIONS
Students must meet three basic requirements prior to registering
for the DRP. These are noted
below. Two additional student status criteria are strongly
recommended to promote academic
success.
1. Required: Prior attendance in DRP Orientation (DRP 999). Students will not be allowed to register for the DRP without completion of this orientation session.
2. Required: Completion of at least 45 quarter hours of graduate study creditable to the Master's degree.
3. Required: Completion of RES 531, which is a prerequisite for all 590 DRP courses.
4. Recommended: Cumulative GPA of at least 3.00, with no pending "I" grades.
5. Recommended: Concurrent registration with, at most, one other graduate course.
GRADING
The nature of this course precludes written examinations as a
means of determining student
achievement. Therefore, the DRP and its defense, along with
student attendance, will determine
the final grade.
1. To achieve an “A” grade, the DRP must be excellent in
content (both factual and
grammatical) and in presentation (both written and oral). The
student must have met
all the draft deadlines, and the final manuscript must have been
submitted by the last
scheduled class. Excellent DRPs that are submitted after the end
of the quarter in which
they are started are not likely to be awarded an “A”. Only
selected “A” graded projects
will be included in the Learning Resources Center (LRC)
collection at the Wilkes Library.
2. The DRP is not a term paper; it is more than a term paper.
3. The DRP must be the student’s original work. Plagiarism will
result in an “F” for the
course and possible disciplinary action, which may include
expulsion from the program.
LRC COLLECTION OF DIRECTED RESEARCH PROJECTS
Directed Research Projects may be recommended by the Dean of
the School to be submitted to
Strayer University’s Wilkes Library as part of its Directed
Research Project Collection. These
projects will be made available to other University students as a
demonstration of the University’s
expectations in completion of the Directed Research Project.
Although the basic criterion for
inclusion is an earned “A” grade, all aspects of a project will be
reviewed to judge the work’s
value to Strayer’s collection. Major points that will be screened
by seminar professors are
presentation (organized and professional approach),
grammar/mechanics, and content (factual
and analytical material). Students who would like to view bound
hard copies of previously
submitted Directed Research Projects can do so by showing
their IDs to their LRC Managers and
requesting copies from the Wilkes Library. These DRPs are for
in-library use only. Online model
DRPs are also available on the Strayer website under Learning
Resources Center (LRC).
INFORMATION LITERACY AND LIBRARY USE
All students taking the DRP are encouraged to visit their LRC to tour the facility and be
shown the online features. In addition, students may review the
collection of Directed Research
Projects that have been identified as those successfully
demonstrating the expected standards of
the DRP. Students may use this collection to extract content and
examine the format. Besides
LRC and Wilkes Library resources, use of institutional
collections such as Library of Congress
and large university libraries is recommended. Strayer
University has a consortium agreement
with the University of Alabama in Huntsville to assist with DRP
research. Resources may be
obtained in both hard copy and electronic formats. This
collection can be accessed through
Strayer University’s website: www.strayer.edu under Current
Students/Learning Resource Center.
DIRECTED RESEARCH AND HUMAN SUBJECTS POLICY STATEMENT
The Directed Research Project (DRP) is intended to be
completed within a single quarter. Given
this time constraint, it would be difficult to collect data and
institute a review process to ensure
compliance with standards for gathering data from human
subjects to effectively contribute
toward and support the research project.
In view of these considerations, DRP research must be
restricted to projects that do not gather
primary data from individuals. This would include information
gathered through questionnaires,
tests, surveys, observations, or interviews.
This restriction only affects the use of persons as primary
sources. Any data that is available in
the public domain, such as information published by federal,
state or local governments and
various research organizations, colleges or universities may be
used. Likewise, information from
published sources may be freely used.
SECTION 3: Stages of DRP Development
RESEARCH QUESTION AND SUBQUESTIONS
Each student is required to identify a problem within his/her
major field which the research
will intend to address. A research question proposal, written by
the student detailing what the
contemplated DRP will entail, is submitted to the seminar
professor for approval.
The research question and subquestions proposal consists of:
1. Topic
2. Statement of the problem
3. Specific research question and subquestions to be addressed
RESEARCH PROPOSAL (CHAPTER 1)
Chapter 1 provides an introduction to the DRP. This chapter
serves as the student’s research
proposal.
CHAPTER 1: INTRODUCTION
After the research question and research subquestions are
approved, the student develops the
complete introduction for the professor’s review and approval.
This provides the reader with
a summary of the candidate’s research. In it, the researcher
outlines the research problem, the
research questions that need to be addressed to resolve this
problem, methods the researcher
has chosen to gather data to answer the research questions, and
possible implications of
resolving the research problem. Thus, Chapter 1 consists of:
1. Context of the problem (background information and
introduction to the problem)
2. Statement of the problem
3. Specific research question and subquestions to address the
problem
4. Significance of the study (Why is this study important? Who
will benefit?)
5. Research design and methodology (How will this research be
conducted?) This section
is used to describe and justify the research methodology used
for collecting the data to
answer the candidate’s research questions. Note the guidelines
below.*
6. Organization of the study
7. Tentative Reference List
*Note: Guidelines for #5—Research Design and Methodology
It is generally required that a university establish an internal
review board (IRB) to scrutinize all
proposed research studies that involve human subjects. This is
necessary to assure that legal and
ethical procedures are followed. The manual entitled Research
and Strategic Communication
(Ormrod & Leedy, 2010, p. 104) reiterates the requirement that
any college, university or
research institution will have an internal review board to protect
human subjects and their
privacy.
The two basic techniques for gathering data are primary and
secondary. Primary research
techniques are when information is obtained directly from a
person or his/her private records.
Primary research could involve interviews; surveys;
questionnaires; school, employment, and
health records; tests; observations; or any method that puts the
researcher in direct contact with
personal information related to the subject.
Secondary research information is obtained from primary
research sources. It could involve
scholarly books; peer reviewed journals; unpublished research
papers; information from federal,
state, county, or local governments; numerical data that is
available from government sources;
any data that has been published by private or nonprofit
organizations (this includes not for
profit organizations); and information available from a college,
university, or trade school.
Essentially, any information available in the public domain may
be analyzed in either a verbal
format or by applying statistical tests to numerical data to arrive
at a research conclusion.
Only the secondary research techniques and secondary data can
be used in the Directed Research
Project until Strayer establishes an Internal Review Board.
CHAPTER 2: REVIEW OF THE LITERATURE
In this chapter, the candidate reviews the main bodies of
existing knowledge and literature that
relate to addressing the research problem. It is during this
review that the candidate refines
the research questions that form the basis for his or her research
project. A DRP student is
expected to read, evaluate and synthesize at least twenty (20)
sources of literature relevant to
his/her research problem. While these sources will probably not
comprise a comprehensive
coverage of the available literature, they should reflect a
representative sampling of current and/
or classic findings and texts. This literature review is not an
annotated bibliography. Rather, the
review of the literature is used to examine relevant scholarly
sources and connections between
these sources with respect to analysis of factors such as the
following: comparisons, contrasts,
consistencies, inconsistencies, strengths, weaknesses,
reliability, validity, significance, limitations,
positions (and relation to the student’s perspective), theoretical
approaches, and/or research
methods.
Writing a review of the related literature takes planning and
organization, and the researcher
must emphasize the relationship of the literature to his/her
research topic. According to the
University of Toronto Writing Support website
(http://www.utoronto.ca/writing), the literature
review should “be organized around and related directly to the
thesis or research question…,
synthesize results into a summary of what is and is not known,
identify areas of controversy in
the literature, and formulate questions that need further
research.” Various general approaches
can be used to select and organize information for a literature
review. One common method is to
review the literature historically/chronologically. Through this
approach, the DRP student might
identify common threads or trends. Another option is to employ
an issue-oriented approach
through exploration of specific themes, conflicts, or debates.
Research methods, theories, or
content related standards (e.g., legal standards, international
regulations, ethical guidelines, etc.)
could also be applied as criteria for analysis and organizational
frameworks for presentation of
the literature.
THE RESEARCH CHAPTERS
Each research subquestion in the statement of the problem
becomes a separate chapter in
the body of the work. In other words, Research Subquestion 1
becomes Chapter 3, Research
Subquestion 2 becomes Chapter 4, etc. Major concepts from
each subquestion should be
reflected in short chapter titles.
In these chapters, the researcher lays out the data gathered via
the research methodology
described in Chapter 1 in a form easily accessible to the reader.
Any analyses presented in these
chapters relate only to relationships between the data and the
research methodology.
SUMMARY AND CONCLUSIONS CHAPTER
In this final chapter, the research process is concluded. The
researcher describes how the research
problem is resolved through ways that the researcher’s findings
answer the research subquestions
of Chapter 1. It is in this chapter that the contributions to
knowledge, in the realm of theory, are
fully developed and described. This chapter also contains a
discussion of the limitations of the
analysis and suggestions for future research.
DRAFT OF THE DIRECTED RESEARCH PROJECT
The draft consists of the entire research project, including the
prefatory pages, introduction,
content chapters, summary, conclusion and bibliography. The
draft becomes the final project
once the student incorporates the professor’s proposed changes
and revisions upon completion of
the defense.
THE FINAL PROJECT
The final copy consists of:
PRELIMINARY PAGES (each separate; reference A Writer’s
Resource or the APA
Publication Manual for examples)
Title Page (required)
Approval Page (optional)
Abstract (required)
Acknowledgements (optional)
Table of Contents with page references, including preliminary
pages
List of tables with titles and page references
List of illustrations with titles and page references, including
figures, maps, etc.
INDIVIDUAL CHAPTERS
REFERENCES
APPENDIXES
Note: The final document should be at least forty (40) pages,
excluding appendixes.
THE DEFENSE
The defense takes place at a time specified by the seminar
professor. At the discretion of the
DRP professor, this oral defense can be conducted one-on-one
with the student, or presented to
the professor and other invited faculty members, or attended by
seminar classmates and their
guests. A successful defense requires completion of all DRP
chapters according to the design in
this manual. The conclusion must address the research question,
and it must be justified by the
research findings reported in the summary section.
SECTION 4: The Directed Research Project Proposal
THE DRP PROPOSAL FLOW CHART
1. Context of the Problem
2. Statement of the Problem
3. Primary Research Question or Hypothesis and Subquestions or Subhypotheses
4. Significance of the Study
5. Research Design and Methodology
6. Organization of the Study
7. Prospective References
SECTION 5: Characteristics of Research
The research project focuses on a question in which the
researcher intentionally sets out to
enhance an understanding of a phenomenon and expects to
communicate what was discovered
to the larger community. Leedy and Ormrod (2010) advise:
“Research is the systematic process
of collecting and analyzing information or data in order to
increase our understanding of the
phenomenon about which we are concerned or interested” (p. 2).
CHARACTERISTICS OF RESEARCH
1. Research originates with a question or problem.
2. Research requires clear articulation of a goal.
3. Research requires a specific plan for proceeding.
4. Research usually divides the principal problem into more manageable subproblems.
5. Research is guided by the specific research problem, question, or hypothesis.
6. Research accepts certain critical assumptions.
7. Research requires the collection and interpretation of data in an attempt to resolve the problem that initiated the research.
8. Research is, by its nature, cyclical or, more exactly, helical.
(Leedy & Ormrod, 2010, pp. 2-3)
SECTION 6: Planning and Designing the Research Proposal
The DRP proposal provides the framework whereby the central
research problem can be
subjectively or objectively advanced. Leedy & Ormrod (2010)
list the following among the key
questions for planning and designing the DRP proposal:
KEY QUESTIONS FOR PLANNING AND DESIGNING THE DRP PROPOSAL
Purpose: What does the researcher want to know, and why does the researcher want to know? What does the researcher want to be able to decide or offer as a result of the research? Why?
Target Audience: Who will be interested in this research when it is completed? Is anyone providing funding, other resources, or support?
Data Needs and Collection: What kinds of information or data (published information, publicly available numerical data, literature reviews, published interviews, documents, historical records, videotapes, annotated bibliographies, and/or other secondary sources) are needed to conduct an analysis, draw conclusions, and make decisions or recommendations? How can this data or information be collected? What are the identifiable resources available to support information or data collection?
Data Analysis: What methodology seems most appropriate for analysis and interpretation of the data? The researcher must select an approach that is relevant to both the research question and available secondary source data, such as a content analysis of written documents, historical trend analysis, correlational study, case study analysis, theory development, or analysis of conceptual representations.
Time Line: When is the information or data needed? When must it be collected? How does data collection fit in with the overall research project time line?
Significance: Why or how is the study important? Who or what will benefit from the research and work-product? Why?
The design of the DRP provides the overall structure for the
procedures the DRP student follows,
the information and data that the DRP student collects, and the
information or data analysis
the DRP student conducts. Simply put, the research design is
the most significant part of the
DRP proposal. Once a supervising faculty member approves the
proposal, it becomes the DRP
Chapter 1 – Introduction.
COMPONENTS OF THE DRP PROPOSAL
The seven parts of the DRP proposal include the context of the
problem, statement of the
problem, research question and subquestions or hypothesis and
subhypotheses, significance of
the study, research design and methodology, organization of the
study, and prospective reference
list.
Step 1. Context of the Problem sets up the research statement
with background, purpose and
perhaps some support from the literature or acceptable literature
alternatives. It is here that the DRP problem or issue is discussed, along with a brief explanation of what the completed research work-product will most likely contain.
Step 2. Statement of the Problem. The DRP research statement
of the problem is a three-part
statement: an introductory sentence, a problem sentence, and a
transition/closing sentence.
Introductory Sentence: The first sentence introduces the topic of
the research problem that is of
primary interest to the DRP student.
Example:
“Organizational Behavior touts itself as a field that extracts its
contents
from various social sciences.”
The Problem Sentence: The second sentence presents the
structure from which the research
question will be derived.
Example:
“A review of academic and professional journals reveals no
studies
illustrating the Organizational Behavior/Social Science
linkage.”
The Transition/Closing Sentence: The third sentence is a
transition or closing sentence.
Example:
“Universities use an Organizational Behavior interdisciplinary
approach to educate business professionals about behaviors
occurring
within organizations and the Organizational Behavior/Social
Science
relationship.”
Anyone with or without expertise in this intended research area
of interest can immediately
understand where the DRP research effort is headed and why.
This provides a basis for how the
DRP student will relate the DRP research conclusion back to the
statement of the problem, and
either the primary research question or hypothesis as the
research moves forward.
Step 3. Research Question/Hypothesis and
Subquestions/Subhypotheses. The research question
or hypothesis is derived from the statement of the problem. This
provides a clear basis for the
research to be done. The research question/hypothesis can be
broken into applicable manageable
subquestions or subhypotheses.
Example:
Research Question and Subquestions
The purpose of this research is to determine the following:
How do
universities use Organizational Behavior’s interdisciplinary
approach to
educate business professionals about behaviors within
organizations and
the Organizational Behavior/Social Science relationship? To
answer this
question, the following subquestions will be addressed:
1. What is Organizational Behavior’s core body of knowledge
and
interdisciplinary approach? (Qualitative)
2. What Social Science concepts influence the Organizational
Behavior
field’s core body of knowledge and the correlation between
them?
(Quantitative)
3. How are business professionals educated about behaviors
occurring in
organizations? (Qualitative)
Step 4. Significance of the Study. The Significance of the Study
section is the researcher’s
opportunity to explain why the research problem under study is
significant in theory and/or
practice. The following example of a declaration of significance
may be helpful:
Example:
Significance of the Study
This case study is important because it recognizes the value and
benefits
of conducting e-business on the WWW.
The study will help clarify the nature of warranted change and
how a
significant segment of the corporate structure communicates
strategically
in business and the professions. This research is also of
importance
because it will add to the growing base of knowledge about e-
business
and the WWW global market place. A third consideration of the
significance is that much more can be learned about what
companies can
do to be successful and to circumvent initial failure in the first
place. It is
expected that insights will be gained regarding management and
the need
for effective strategic communication.
To the extent this study reveals how e-business can be
successful,
corporate management may or may not need to be concerned
with
whether or not organizational policy changes are necessary, or
whether
the phenomena are matters of environmental business changes
of the
day; then the study will have contributed to a better
understanding that
is unique to the larger WWW e-business community.
Step 5. Research Design and Methodology. There are two kinds
of DRP research design:
qualitative and quantitative. The first sentence of the section
explains which kind of design the
student will use.
Qualitative research focuses on understanding phenomena,
rather than predicting as in the
application of traditional quantitative or statistical research.
The methodology section describes the procedures the DRP will
follow, describes the information
and/or data that the student will collect, and describes how the
student will develop conclusions
to address the purpose of the study.
PREMISES OF THE QUALITATIVE AND QUANTITATIVE RESEARCH
Research Definition
Qualitative: A formal systematic, realistic and consistent subjective strategy for obtaining information about a targeted research group or individual situation that can be used to describe life experiences and give them meaning. Qualitative researchers explain the complexity of their data using a literary (or written) style.
Quantitative: A formal systematic, realistic and consistent objective strategy for obtaining information about a targeted research population. A method used to describe, test relationships, and examine cause and effect relationships. Quantitative researchers typically use descriptive and inferential statistics to summarize their data.
Research Goal
Qualitative: The realistic goal is to clearly identify a primary research question to answer and gain an understanding and insight by