3. Why Evaluate Projects?
• Promotional: To encourage more use
• Scholarly: To confirm or create scientific
knowledge
• Pragmatic: To know what works and what fails
• Ethical: To ensure appropriateness & justify its
use or its budget
• Medicolegal: To reduce liability risks
Friedman & Wyatt (2006)
4. Complexity of Evaluation in Informatics
[Figure: three overlapping domains: Medicine & Health Care; Evaluation Methodology; Information Systems]
Friedman & Wyatt (2006)
8. Health IT as Healthcare Interventions
• Donabedian’s Model: Structure → Processes → Outcomes
Donabedian (1966), Friedman & Wyatt (2006)
9. Class Exercise
• Can you provide some examples of
measures in each aspect of Donabedian’s
model that help evaluate health IT
project success?
10. A Mindset for Evaluation
• Tailor the study to the problem
• Collect data useful for making decisions
• Look for intended and unintended effects
• Study the resource while it is under development and
after it is deployed
• Study the resource in the lab and in the field
• Go beyond the developer’s point of view
• Take the environment into account
• Let the key issues emerge over time
• Be methodologically catholic (i.e., broad in methods) and eclectic
Friedman & Wyatt (2006)
11. Evaluation vs. Traditional Research
• Different goals
• Who sets the agenda (clients vs. evaluators)
• Evaluation actively seeks unanticipated effects as well
as anticipated ones
• Both laboratory and in-situ studies are important in evaluation
• Evaluations often employ multiple data-collection paradigms
Friedman & Wyatt (2006)
12. Evaluation Approaches
• Objectivist vs. Subjectivist approaches
• Objectivist characteristics
– Information resources, users, and processes can be measured
– Rational persons should agree on important measures and
desirable outcomes
– It is possible to disprove a hypothesis, but never to fully prove
one
– Quantitative measurement is superior to, and more precise than,
qualitative methods
– We can assess which resource is superior through comparisons
Friedman & Wyatt (2006)
13. Evaluation Approaches
• Objectivist vs. Subjectivist approaches
• Subjectivist characteristics
– What is observed depends fundamentally on the observer
– Context is crucial
– Legitimately different perspectives can exist on what constitutes
desirable outcomes
– Verbal description can be highly illuminating
– Evaluation is viewed as an exercise in argument, rather than
demonstration
Friedman & Wyatt (2006)
14. Objectivist Approaches
Objectivist
• Comparison-Based Approach
• Objectives-Based Approach (against stated goals)
• Decision-Facilitation Approach (evaluation to resolve
issues important for decision-making for further
development)
• Goal-Free Approach (purposefully blinded to intended
effects)
Friedman & Wyatt (2006)
15. Subjectivist Approaches
Subjectivist
• Quasi-Legal Approach (e.g. a mock trial)
• Art Criticism Approach
• Professional Review Approach (e.g. site visit by
experienced peers)
• Responsive/Illuminative Approach (derived from
ethnography)
Friedman & Wyatt (2006)
16. Objectivist Studies
• Measurement studies
– “Studies undertaken to develop and refine methods for making
measurements”
– E.g. development and validation of measurement methods,
tools, questionnaires
• Demonstration studies
– Studies that use measurement “methods to address questions of
direct importance in informatics”
– Descriptive studies (no independent variables)
– Comparative studies (investigator creates a contrasting set of
conditions, as in experiments & quasi-experiments)
– Correlational studies (explore hypothesized relationships among
variables that were not manipulated; see the sketch below)
Friedman & Wyatt (2006)
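A correlational demonstration study can be illustrated in a few lines of code. A minimal sketch, assuming hypothetical per-clinician data (alert-override rate and documentation time, neither manipulated by the investigator):

```python
import numpy as np

# Hypothetical per-clinician measurements (neither variable is manipulated):
# weekly CPOE alert-override rate and mean documentation time in minutes.
override_rate = np.array([0.42, 0.55, 0.31, 0.67, 0.48, 0.59, 0.38, 0.71])
doc_minutes = np.array([12.1, 14.8, 10.9, 16.2, 13.0, 15.1, 11.5, 17.0])

# Pearson correlation: strength of the hypothesized linear relationship.
r = np.corrcoef(override_rate, doc_minutes)[0, 1]
print(f"Pearson r = {r:.2f}")  # correlation, not causation
```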
17. Study Designs
• Experiments
– Randomized controlled trials (see the randomization sketch below)
• Quasi-Experiments
– Non-randomized interventions
– Investigator still controls assignment of subjects to
interventions but not through randomization
• Observational Studies
– Investigator has no control over assignment of
subjects into groups
Friedman & Wyatt (2006)
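What separates a true experiment from the other designs is investigator-controlled random assignment. A minimal sketch of simple randomization into intervention and control arms, using hypothetical subject IDs:

```python
import random

subjects = [f"subj-{i:03d}" for i in range(1, 21)]  # hypothetical subject IDs

rng = random.Random(42)   # fixed seed so the allocation is reproducible
shuffled = subjects[:]
rng.shuffle(shuffled)

# Split the shuffled list in half: first half intervention, second half control.
half = len(shuffled) // 2
arms = {"intervention": shuffled[:half], "control": shuffled[half:]}
for arm, members in arms.items():
    print(arm, members)
```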
22. Observational Studies
• Cohort studies
– Observe subjects with different exposures over time and
compare outcomes
• Case-control studies
– Compare subjects with the outcome of interest (cases) and without it
(controls) retrospectively to determine differences in exposure; see
the odds-ratio sketch below
• Cross-sectional studies
– Measure exposure and outcome in a defined population at a single
point in time
Mann (2003)
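For a case-control study, the standard effect measure is the odds ratio computed from the exposure-by-outcome 2×2 table. A worked sketch with hypothetical counts:

```python
# Hypothetical 2x2 table for a case-control study:
#                 cases   controls
# exposed           a=30      b=20
# unexposed         c=10      d=40
a, b, c, d = 30, 20, 10, 40

# Odds ratio: odds of exposure among cases / odds of exposure among controls.
odds_ratio = (a * d) / (b * c)
print(f"OR = {odds_ratio:.2f}")  # OR = 6.00
```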
27. Threats to Internal Validity: Biases
• Assessment bias
• Allocation and recruitment bias
• The Hawthorne Effect (the tendency for humans to
improve their performance if they know it is being
studied)
• Data collection biases
– Checklist effect
– Data completeness effect (more complete data in intervention cases
than controls)
– Feedback effect
– Carryover effect (spillover effect)
– Placebo effect
– Second-look bias
Friedman & Wyatt (2006)
30. Threats to External Validity
• Study generalizability
– Sample representativeness
– Intervention (including implementation strategies)
– Context
• Developers as evaluators
Friedman & Wyatt (2006)
31. Making Conclusions
• Internal and external validity
• Correlation vs. causation
• Acknowledgement of study limitations
• Anticipated vs. unanticipated effects
• Lessons learned
32. Special Study Methods Used in Informatics
• Surveys
– Study design: Cross-sectional vs. longitudinal
– Subjects
– Sampling methods
• Census
• Random sampling (simple, stratified, cluster; see the
stratified-sampling sketch below)
• Nonprobability sampling (purposive sampling,
quota sampling, etc.)
– Sampling frame
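To make one of the sampling methods above concrete, here is a minimal sketch of stratified random sampling: a fixed fraction is drawn from each stratum of a hypothetical sampling frame (clinician roles), so every stratum is represented:

```python
import random

# Hypothetical sampling frame, stratified by clinician role.
frame = {
    "physician":  [f"md-{i}" for i in range(100)],
    "nurse":      [f"rn-{i}" for i in range(300)],
    "pharmacist": [f"ph-{i}" for i in range(50)],
}

rng = random.Random(7)
fraction = 0.10  # sample 10% of each stratum

# Draw a simple random sample within each stratum.
sample = {
    stratum: rng.sample(members, max(1, round(fraction * len(members))))
    for stratum, members in frame.items()
}
for stratum, picks in sample.items():
    print(stratum, len(picks))
```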
36. Special Study Methods Used in Informatics
• Time and Motion Studies
(Time-Motion Studies)
• Economic Analysis (worked sketch after this list)
– Cost-effectiveness analysis
– Cost-benefit analysis
– Cost-utility analysis
– Economic impact analysis
– Return on investment analysis
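The arithmetic behind two of these analyses, sketched with hypothetical figures: return on investment relates net benefit to cost, and an incremental cost-effectiveness ratio (ICER) relates incremental cost to incremental effect:

```python
# Hypothetical figures for a health IT project.
cost_new, effect_new = 500_000.0, 120.0   # dollars; adverse events prevented
cost_old, effect_old = 300_000.0, 70.0
annual_benefit = 650_000.0                # dollars saved per year by new system

# Return on investment: net benefit relative to the money put in.
roi = (annual_benefit - cost_new) / cost_new
print(f"ROI = {roi:.0%}")                 # 30%

# Incremental cost-effectiveness ratio: extra dollars per extra unit of effect.
icer = (cost_new - cost_old) / (effect_new - effect_old)
print(f"ICER = ${icer:,.0f} per event prevented")  # $4,000
```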
37. Special Study Methods Used in Informatics
• Qualitative Studies
– Interviews
– Focus groups
– Usability evaluations
– Content analysis
38. Special Study Methods Used in Informatics
• Software Testing & Evaluation
Methodology
• Testing Levels
– Unit testing (example after this list)
– Integration testing
– System testing
– System integration testing
http://en.wikipedia.org/wiki/Software_testing
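As a minimal illustration of the lowest level, unit testing, the sketch below exercises a small hypothetical function in isolation with Python's built-in unittest framework:

```python
import unittest

def bmi(weight_kg: float, height_m: float) -> float:
    """Hypothetical unit under test: body mass index."""
    if height_m <= 0:
        raise ValueError("height must be positive")
    return weight_kg / height_m ** 2

class TestBmi(unittest.TestCase):
    def test_typical_value(self):
        self.assertAlmostEqual(bmi(70, 1.75), 22.86, places=2)

    def test_rejects_zero_height(self):
        with self.assertRaises(ValueError):
            bmi(70, 0)

if __name__ == "__main__":
    unittest.main()
```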
41. Project Evaluation as Part of Project’s KM
[Figure: Nonaka SECI model mapped onto project phases: before & after project kick-off and during project planning; during implementation, near go-live & post go-live; training; after action review (AAR) / postmortem meeting and project evaluation]
Image source: Senoo et al. (2007) http://dx.doi.org/10.1108/14601060710776725
42. “Half the money I spend on
advertising is wasted; the trouble is
I don't know which half.”
-- John Wanamaker
http://www.quotationspage.com/quote/1992.html, http://en.wikipedia.org/wiki/John_Wanamaker
43. References
• DeLone WH, McLean ER. Information systems success: the quest for the
dependent variable. Inform Syst Res. 1992 Mar;3(1):60-95.
• DeLone WH, McLean ER. The DeLone and McLean model of information
systems success: a ten-year update. J Manage Inform Syst. 2003
Spring;19(4):9-30.
• Dillman DA, Smyth JD, Christian LM. Internet, mail, and mixed-mode
surveys: the tailored design method. 3rd ed. Hoboken (NJ): Wiley; 2008.
512 p.
• Donabedian A. Evaluating the quality of medical care. Milbank Mem Q.
1966;44:166-206.
• Friedman CP, Wyatt JC. Evaluation methods in biomedical informatics. 2nd
ed. New York (NY): Springer; 2006. 386 p.
• Harris AD, McGregor JC, Perencevich EN, Furuno JP, Zhu J, Peterson DE,
Finkelstein J. The use and interpretation of quasi-experimental studies in
medical informatics. J Am Med Inform Assoc. 2006 Jan-Feb;13(1):16-23.
44. References
• Mann CJ. Observational research methods. Research design II: cohort,
cross sectional, and case-control studies. Emerg Med J. 2003;20:54-60.
• Office of Management and Budget, Office of Information and Regulatory
Affairs, Statistical Policy Office. Statistical policy working paper 31:
Measuring and reporting sources of error in surveys. 2001 Jul.