This document provides an overview of monitoring and evaluation systems for health programs. It discusses the purpose and value of M&E, including providing evidence for decision making, organizational learning, and accountability. Key concepts around monitoring, evaluation, and operational research are defined. Principles of integrated design, unbiased measurement, and local capacity building for evaluation are covered. The document also presents examples of research questions and indicators for evaluating health programs from various perspectives.
Monitoring and Evaluation Framework for Health Programs
1. 7/6/2017
1
Construct and Conduct of Monitoring and
Evaluation Systems for Health Programs
Anbrasi Edward and Jennifer Winestock
Christina Bowles
Christian Connections for International Health
Pre-conference Workshop July 13, 2017
Learning Objectives
Module 1: Describe the purpose, characteristics and value
of monitoring and evaluation measures in healthcare
Module 2: Illustrate conceptual models and performance
evaluation frameworks for healthcare systems
Module 3: Principles and concepts for conducting contextual
evaluations
Module 4: Review strengths and weaknesses of popular
program evaluation models
Module 5: Case Study - Designing a Monitoring and
Evaluation framework for an MNCH Program
Module 1
Describe the Purpose, Characteristics and Value of Monitoring
and Evaluation Measures in Healthcare
Anbrasi Edward
Christian Connections for International Health
Pre-conference Workshop July 13, 2017
Power of Measuring Results
If you do not measure results, you cannot tell success from failure
If you cannot see success, you cannot reward it
If you cannot reward success, you are probably rewarding failure
If you cannot see success, you cannot learn from it
If you cannot recognize failure, you cannot correct it
If you can demonstrate results, you can win public support
Osborne & Gaebler 1992
Purpose of M&E Systems
Provides government officials, development managers, and civil society with
evidence from investments, improving service delivery, planning and
resource allocation, and demonstrating results as part of accountability to
key stakeholders
System reengineering
Rewarding and recognizing
Organizational learning
Focus on results
USAID CLA
Collaborating, Learning, Adapting
facilitating collaboration internally and with external stakeholders
feeding new learning, innovations and performance information back into the
strategy to inform funding allocations, program design and project
management
translating new learning, as well as information about changing conditions,
into iterative strategic and programmatic adjustments
catalyzing collaborative learning, systemic analysis and problem solving
among communities and institutions to develop and implement programs
that are more effective at achieving results.
Rationale for M&E
Relevant and high-quality evaluation is an important tool to track progress,
results and effectiveness of programs
Can help explain why programs are succeeding or failing, and can provide
recommendations for how best to adapt to improve performance
Along with monitoring, evaluation contributes evidence to improve strategic
planning, project design, make mid course corrections and resource
decisions, and they are part of a greater body of knowledge and learning
To improve the quality of evaluations: plan in advance, select appropriate
designs, answer a focused set of questions, and encourage
evaluations conducted by external experts
Defining Project Evaluation
“the systematic collection of information about the activities, characteristics,
and outcomes of programs, for use by people to reduce uncertainties,
improve effectiveness, and make decisions”
Patton, 2008
“systematic collection and analysis of information about the characteristics
and outcomes of strategies, projects, and activities as a basis for judgments
to improve effectiveness, and/or to inform decisions about current and future
programming”
USAID
“If you can’t measure it, you can’t manage it” W Edwards Deming
“If you can’t measure it, you can’t improve it” Peter Drucker
Evaluation
systematic collection and analysis of data to assess the conceptualization,
design, implementation, and utility of programs
Requires data collection at multiple points in time (baseline and end of
project) in order to demonstrate change
Longer time frame (3-5 years)
By examining longer-term results and identifying how and why activities
succeeded, failed, or changed, evaluation informs the design of future
projects
Outcome or impact evaluations assess program achievements
and effects
Evaluation data may include some routine monitoring data but require a
higher degree of rigor to collect and are gathered by an evaluator
external to the implementing partnership.
Rossi, PH, and HE Freeman. 1993. Evaluation: A Systematic Approach (5th ed.). Newbury Park, CA: Sage Publications, Inc.
Monitoring - USAID
Performance monitoring “ongoing and systematic collection of performance
indicator data and other quantitative or qualitative information to reveal whether
implementation is on track and whether expected results are being achieved”
periodic recurring task that begins at the planning stages of the project
objective of improving project design and functioning
Involves ongoing data collection and analysis, provides indications of progress and
achievement of goals at regular intervals, and measures project outputs
Data is routinely collected by the implementing partner, and used to inform decisions
about the program direction on an ongoing basis and report progress to
stakeholders.
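The "is implementation on track?" judgment above is usually operationalized by comparing routinely collected indicator actuals against period targets. A minimal sketch (the indicator name, targets, and 90% threshold are illustrative assumptions, not from any USAID guidance):

```python
# Sketch: flagging whether a performance indicator is on track.
# All names, targets, and the 90% threshold are hypothetical.

def on_track(actual, target, threshold=0.9):
    """An indicator is 'on track' if the actual reaches at least
    `threshold` (here 90%) of the period target."""
    return actual >= threshold * target

# Quarterly values for a hypothetical monitoring indicator:
# number of women receiving timely CHW visits.
quarters = [
    {"period": "Q1", "actual": 450, "target": 500},
    {"period": "Q2", "actual": 380, "target": 500},
]

status = {q["period"]: on_track(q["actual"], q["target"]) for q in quarters}
# Q1 meets 90% of target (450 >= 450); Q2 falls short (380 < 450)
```

In practice the threshold and reporting frequency come from the project's performance management plan; the point here is only that monitoring is a routine, rule-based comparison rather than a one-off study.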
Context monitoring: "systematic collection of information about conditions and
external factors relevant to the implementation and performance of a project
and activities"
includes information about local conditions that may directly affect implementation
and performance (other projects operating within the same sector) or external factors
that may indirectly affect implementation and performance (such as macro-economic,
social, or political conditions). Used to monitor assumptions and risks
Complementary Monitoring
where results are difficult to predict due to dynamic contexts or
unclear cause-and-effect relationships
where traditional monitoring methods may not suffice
measure unintended results, perspectives, and a wide range of other
factors
World Bank M&E Are Synergistic
Monitoring
“A continuing function that uses systematic collection of data on specified
indicators to provide management and the main stakeholders of an ongoing
development intervention with indications of the extent of progress and
achievement of objectives and progress in the use of allocated funds”
embodies the regular tracking of inputs, activities, outputs, outcomes and
impacts
Evaluation (more detailed, time-consuming, resource-intensive)
"process of determining the worth or significance of a project to determine
the relevance of objectives, the efficacy of design and implementation, the
efficiency of resource use, and the sustainability of results. An evaluation
should (enable) the incorporation of lessons learned into the decision-making
process of both partner and donor"
Operations Research
“a process, a way of identifying and solving program
problems. The goal of operations research is to increase the
efficiency, effectiveness, and quality of services delivered
by providers, and the availability, accessibility, and
acceptability of services desired by users.”
Fisher, A. and J. Foreit. 2012. Designing HIV/AIDS Intervention Studies: An Operations Research Handbook. Washington, DC: Population Council.
Elements of Monitoring, Evaluation, and Operations Research

Purpose
- Monitoring: Tracking progress, program management/decision-making, program improvement
- Evaluation: Evaluation of outcomes, success of program
- Operations Research: Answers specific questions regarding implementation

Who performs the activity
- Monitoring: In-country implementing partners (IPs)
- Evaluation: Formative evaluation (IPs); outcome/impact evaluation (external organization)
- Operations Research: Either in-country IPs or evaluation partners

Rigor
- Monitoring: Less rigorous than evaluation; data collected on a routine basis according to established data collection and reporting systems and processes
- Evaluation: Very thorough, including systematic collection of baseline and final data using sample survey methods
- Operations Research: Very thorough

Examples of questions answered
- Monitoring: At each point in time, what is the extent of women's engagement as measured by participation in the program and level of knowledge on care-giving and care-seeking?
- Evaluation: What is the impact of MAMA on the increase in the demand/use of health services by women?
- Operations Research: Financial viability/sustainability analysis, cost-benefit studies, cost-effectiveness of alternate technologies/approaches

Who most uses the results
- Monitoring: In-country IPs
- Evaluation: Resource document for IPs for formative evaluation and evaluation partner for outcome/impact evaluation; global community, including ministries, academia, etc.
- Operations Research: To be done by external organizations; this M&E plan does not address operations research issues
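The operations-research column above mentions cost-effectiveness of alternate technologies or approaches; the core arithmetic is a cost-effectiveness ratio (cost per unit of outcome). A minimal sketch with invented figures (the approach names, costs, and outcome counts are all hypothetical):

```python
# Sketch: comparing cost-effectiveness of two hypothetical delivery
# approaches. All costs and outcome counts are invented for illustration.

approaches = {
    "CHW home visits":  {"cost": 120_000, "outcomes": 800},
    "mHealth messages": {"cost": 45_000,  "outcomes": 500},
}

# Cost-effectiveness ratio: dollars spent per outcome achieved
cer = {name: a["cost"] / a["outcomes"] for name, a in approaches.items()}
# CHW home visits: 150.0 per outcome; mHealth messages: 90.0 per outcome

most_cost_effective = min(cer, key=cer.get)  # "mHealth messages"
```

A real cost-effectiveness study would also discount costs, handle uncertainty, and define the outcome measure carefully (e.g. DALYs averted); this only shows the ratio the table is referring to.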
Evaluation Principles - USAID
Integrated into the Design of Strategies, Projects, and Activities. Planning for and
identifying key evaluation questions at the outset will both improve the quality of
strategy development and project design and guide data collection during
implementation
Unbiased in Measurement and Reporting. Not subject to the perception or reality of
biased measurement or reporting due to conflicts of interest or other factors.
Relevant. Address the most important and relevant questions about strategies,
projects, or activities.
Based on Best Methods. Generate the highest quality and most credible evidence
that corresponds to the questions being asked, taking into consideration time, budget,
and other practical considerations.
Oriented toward Reinforcing Local Capacity. Consistent with institutional aims of local
ownership through respectful engagement of all partners, including local beneficiaries,
while leveraging and building local evaluation capacity.
Transparent. Shared as widely as possible, with a commitment to full and active
disclosure.
Results Framework (diagram not reproduced from the original slides)
Research Questions
Perspective of Female Participants
Has content for dissemination of appropriate evidence-based (maternal and
infant) priority health information been developed?
Has there been sufficient coverage of the target population? Are women
continuing in the program? Is information being delivered on a timely basis
through CHWs? Has women’s access to quality health information
increased?
Has there been sufficient engagement of the target population through
mHealth messages? Has their awareness of health needs and available
health services increased? Do they express an intention to adopt healthy
behaviors? Has there been an increase in their knowledge on MCH over
time?
Has there been an increase in the demand for health services among
women? How is this demand translated into greater service utilization, both
use of preventive health services and care-seeking behavior at the household
level?
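Several of the questions above (sufficient coverage of the target population, whether women are continuing in the program) reduce to simple coverage and retention indicators computed from enrolment data. A minimal sketch with hypothetical figures (the population sizes are invented, not MAMA data):

```python
# Sketch: coverage and retention indicators from hypothetical
# program enrolment data. All figures are illustrative.

target_population = 10_000   # eligible women in the catchment area
ever_enrolled = 6_500        # women ever registered in the program
still_active = 5_200         # women still receiving messages

coverage = ever_enrolled / target_population   # share of target reached
retention = still_active / ever_enrolled       # share of enrollees continuing
# coverage = 0.65, retention = 0.80
```

Defining the denominators (who counts as "eligible", what counts as "active") is usually the hard part and should be fixed in the M&E plan before data collection starts.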
Questions on Program Partnerships
Have a successful partnership, business model and functioning
system been developed?
Has the policy environment in the country improved?
Is there increased knowledge exchange within the community?
What is the extent of scale-up of the program within the country and
expansion to other countries?
Are other organizations (outside of the implementer group) actively
endorsing and promoting the service?
Evidence of Program Investments
Scale: Is the program reaching the expected number of target women?
Sustainability: Is the program sustainable in the long run? Does it have
strong partnerships and operate in a supportive policy and socio-economic
environment that encourages expansion?
Impact/effect: Has the program achieved the envisioned increase in
women's care-giving and care-seeking behavior?
Wikidal and Martin: Project Management Workshop, CSU, 2009
M&E Resources

USAID Monitoring Toolkit
https://usaidlearninglab.org/evaluation?tab=1

USAID Performance Indicator Reference Sheet
https://usaidlearninglab.org/sites/default/files/resource/files/cleared_-_mt_-_f_and_usaid_ref_sheet_cross_walk.pdf

Global Monitoring and Evaluation Framework
http://www.unfoundation.org/what-we-do/issues/global-health/mobile-health-for-development/mama-meplan.pdf

World Bank Ten Steps to a Results Based Monitoring and Evaluation System
https://www.oecd.org/dac/peer-reviews/World%20bank%202004%2010_Steps_to_a_Results_Based_ME_System.pdf

World Bank Monitoring and Evaluation: Some Tools, Methods and Approaches
http://siteresources.worldbank.org/EXTEVACAPDEV/Resources/4585672-1251481378590/MandE_tools_methods_approaches.pdf

World Bank The Monitoring and Evaluation Handbook for Business Environment Reform
http://www.publicprivatedialogue.org/monitoring_and_evaluation/M&E%20Handbook%20July%2016%202008.pdf

World Bank Impact Evaluation Toolkit
http://siteresources.worldbank.org/INTIMPEVALTK/Resources/IE_Toolkit_2012.08.21_ENG.pdf

WHO Global Reference List of 100 Core Indicators
http://www.who.int/healthinfo/indicators/2015/en/

MAMA Global Monitoring and Evaluation Framework – USAID
http://www.mhealthknowledge.org/sites/default/files/MAMA_Global_MEPlan_FINAL_all_0.pdf