2. Definitions
Monitoring is the routine reporting of data on program implementation and performance.
Evaluation is the periodic assessment of a program's impact at the population level, and of its value.
3. Objectives of M&E
Informing budget allocation decisions
Supporting government planning at the national, sub-national, local and sectoral levels
Helping to manage programs and the provision of public services, as an element of any results-based management system
Accountability of local governments and central agencies
Countries choose one or all of these roles for their M&E systems
4. Monitoring
Tracks progress towards the agreed results in the logframe matrix
Checks whether the assumptions made and risks identified at the design stage are still valid or need to be reviewed
Allows us and implementing partners to make mid-course corrections as an integral part of programme management
6. And Why Do We Need It?
Regular and systematic assessment of progress
Continued review of partners' capacity development needs
Improves results-based reporting on achievements
Strengthens teamwork and ownership of the UNDAF among implementing partners
Feeds into evaluation and real-time learning
7. What is Evaluation?
A systematic, impartial assessment
External, separate from programme management
Determines whether results made a worthwhile contribution to national development priorities
Criteria: relevance, efficiency, effectiveness, impact, sustainability
Three key functions:
Programme improvement
Accountability
Organisational learning
8. And why do we need to do it?
Whether we are Doing the Right Things
– Relevance/rationale/justification
– Client satisfaction
Whether we are Doing it Right
– Effectiveness/coherence
– Efficiency: optimizing resources
– Sustainability
– Impact
Whether there are Better Ways of Doing it
– Alternatives
– Good practices
– Lessons learned
– Improved positioning to influence next development planning
framework
9. Monitoring vs. Evaluation
Monitoring:
Systematic and ongoing
During programme implementation
Tracks activities and progress, with frequency according to demand/need
For short-term corrective action
Accountability for implementation
Contributes to evaluation
Conducted by insiders
Asks: are we doing things right?
Evaluation:
Systematic and periodic
During and after programme implementation
Judges the merit, value or worth of a programme/project against evaluation criteria (relevance, effectiveness, impact)
For decision-making about future programmes
Accountability for results
For office and organizational learning
Conducted by impartial outsiders
Asks: did we do the right things?
10. M&E System: Tools
Results Matrix
M&E Plan
Reliable data systems
Annual Review [and report]
Single progress report per cycle
Evaluation
11. What should an M&E System Measure?
Indicative example: the results chain (see the sketch following this slide)
Inputs: financial, human and material resources – e.g. spending on primary education
Activities: tasks undertaken to transform inputs into outputs – e.g. building of schools; distribution of textbooks
Outputs: goods and services produced – e.g. number of schools built; textbooks distributed
Outcomes: access to, use of, and satisfaction with services – e.g. school enrollment rates
Impact: effects on dimensions of well-being – e.g. improved literacy
Source: Adapted from ADB (2006) Introduction to Results Management, p. 7, and World Bank (2001) PRSP Sourcebook, p. 108.
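The following minimal sketch (Python, purely illustrative; the class name, field names and printed format are assumptions, not part of the source) shows one way to encode the results chain above so that each level carries its definition and an example indicator:

```python
from dataclasses import dataclass

@dataclass
class ChainLevel:
    """One level of the results chain: its name, definition, and an example measure."""
    name: str
    definition: str
    example: str

# Education example from the slide, encoded level by level (illustrative only).
results_chain = [
    ChainLevel("Inputs", "Financial, human and material resources",
               "Spending on primary education"),
    ChainLevel("Activities", "Tasks undertaken to transform inputs into outputs",
               "Building of schools; distribution of textbooks"),
    ChainLevel("Outputs", "Goods and services produced",
               "Number of schools built; textbooks distributed"),
    ChainLevel("Outcomes", "Access to, use of, and satisfaction with services",
               "School enrollment rates"),
    ChainLevel("Impact", "Effects on dimensions of well-being",
               "Improved literacy"),
]

for level in results_chain:
    print(f"{level.name}: {level.definition} -> e.g. {level.example}")
```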
12. What is an Indicator?
Indicators are measures used to demonstrate changes over time; they point to results, enable us to be "watchdogs", and are essential instruments for monitoring and evaluation.
14. Indicators – there are many calculation types (see the sketch below)
1. Count – no denominator; the numerator is the number of events, observations or individuals (a frequency)
2. Proportion – the numerator is part of the denominator; expressed per 100 (%), 1,000, 10,000 or 100,000
3. Ratio – the numerator is not part of the denominator; compares two different numerators
4. Rate – a proportion with a time dimension: the number of events during a specific period
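As a minimal sketch of these four calculation types (Python; the function names and all figures are invented for illustration and are not from the source):

```python
def count(events):
    """Count: a simple frequency, with no denominator."""
    return len(events)

def proportion(numerator, denominator, per=100):
    """Proportion: the numerator is part of the denominator; expressed per 100 (%), 1,000, etc."""
    return numerator / denominator * per

def ratio(numerator_a, numerator_b):
    """Ratio: compares two different numerators; the numerator is not part of the denominator."""
    return numerator_a / numerator_b

def rate(events_in_period, population, per=1000):
    """Rate: events during a specific period, expressed per 1,000 (or another base) of the population."""
    return events_in_period / population * per

# Hypothetical figures, for illustration only.
print(count(["training A", "training B", "training C"]))  # 3 trainings held
print(proportion(120, 400))                                # 30.0% of surveyed households reached
print(ratio(105, 100))                                     # 1.05 boys enrolled per girl enrolled
print(rate(18, 2500, per=1000))                            # 7.2 infant deaths per 1,000 live births in the period
```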
16. I. Systems classification
INPUT
monitors affordability of resources
measures availability/quality of resources
PROCESS
monitors activities that are carried out
measures accessibility of services – coverage and quality
OUTPUT
monitors results of activities
measures acceptability – use, change, performance, coverage and quality
OUTCOME / IMPACT
monitors changes in the health status of populations
measures appropriateness – effectiveness, efficiency, equity, sustainability
18. Quantitative indicators can be expressed in a number of ways, depending on the data involved and its use, including whole numbers, decimals, ratios, fractions, percentages and monetary values; quantitative factors can always be expressed as a number. Qualitative indicators, on the other hand, are expressed either as independent statements or in relative terms such as "good," "better," and "best."
Examples of quantitative indicators include:
Number of people attending a training
Number of community organizations
Rate of HIV infection
Average rice harvest per hectare
Cost of transport to market
Increase in household income
Infant mortality rate
19. Qualitative indicators
The term "qualitative indicator" combines two important research concepts. Qualitative and quantitative information are the two types of discoverable information. Quantitative information is generally the easiest to understand and manipulate, since it is based on numbers and hard facts.
20. Qualitative or Performance Indicators (change outcomes)
1. Greater freedom of expression
2. Ease of access to a facility
3. Participation in youth groups
4. Participation levels in sports
5. Increased hopes of the people for the betterment of democratic systems
6. Women's participation in decision making
7. Improved working relations among staff
8. Level of satisfaction with services
21. Some examples of outcome indicators
Outcome | Indicator(s)
Improved road network | % of roads in good condition; average travel time
Improved public sector performance | % of population satisfied with public services; tax collection (% of GDP)
Improved health services | Utilization rate of healthcare centers
Increased tourism | Number of tourists visiting the region
Source: ADB (2005) Practice Note on Results-based Country Strategies and Programs, Annex 2.
22. Examples of data and sources…
Type | Indicator | Instrument | Agency | Level
Input | Public expenditures by category | Budget documents; actual expenditure data; expenditure tracking surveys | Ministries of finance and planning; sectoral ministries; auditing agencies | National and various sub-national levels
Output | Outputs from public expenditure: infrastructure and services | Administrative records and management information systems (MIS) | Sectoral ministries; project implementation units; local administrations | National and various sub-national levels; facilities (schools, clinics, etc.)
Source: Excerpt from World Bank, PRSP Sourcebook, Ch. 3, Table 3.2.
23. Some basics for a good M&E system…
In addition to the technical requirements of M&E, the institutional side is equally important – i.e., creating and sustaining demand, and ensuring the utilization of M&E information
Effective coordination among the agencies involved in M&E is essential (carrots, sticks, and sermons)
24. Incentives are key: carrots, sticks, sermons
Carrots:
Conduct "How are we doing?" team meetings
Awards or prizes for managing for results
Staff incentives, e.g. recruitment, promotion
Output- or outcome-based performance triggers
Sticks:
Highlight good/bad results (using M&E)
Set performance targets
Require performance "exception reporting"
Include information on results when appraising managers
Sermons:
High-level statements of endorsement
Awareness-raising seminars
Pilot rapid evaluations and impact evaluations
Highlight examples of useful, influential M&E
Source: Keith Mackay, How to Build M&E Systems to Support Better Government, World Bank, 2007.
25. Some basics for a good M&E system (2)
Participation of stakeholders – promotes learning, nurtures demand and ownership, and lends credibility to the M&E process
Capacity building for M&E; build on existing systems
Feedback mechanisms and dissemination
26. Building capacity in M&E
• Training for M&E practitioners and managers in
government
• Introductory training in M&E for program
managers and staff
• Training for potential external evaluators of
programs
27. M&E tools developed
Annual Program Evaluation
Menu of Evaluations
Models and Matrices of Pilot Programs
Logical Framework (MML): objective selection, indicators, goals
Monitoring
Logical Framework at the program level, and now at the sectoral level
Performance Indicators
28. Conclusions
It is important to understand the political and institutional context in order to advance the construction of M&E systems
There is no single way to build these systems, and the process is gradual
Incentives are necessary to produce M&E information
More than the capacity to manage data, constant leadership from the government is required to check whether it is offering effective services to its citizens
Both supply and demand are required: the capacity inside the country to design, implement, and use M&E
29. More types
Direct indicators correspond precisely to results at any performance level.
Indirect or "proxy" indicators demonstrate the change or results when direct measures are not feasible.
30. References
World Bank website on Building Government M&E Systems: www.worldbank.org/ieg/ecd/
World Bank website on Tools for Evidence-Based Policy Making: www.worldbank.org/toolsforevidencebasedpolicy
OPCS Colombia Information, Monitoring, and Evaluation Loan
World Bank, Influential Evaluations: Evaluations that Improved Performance and Impacts of Development Programs, 2004. Available in Spanish: http://www.worldbank.org/ieg/ecd/
World Bank, Monitoring & Evaluation: Some Tools, Methods and Approaches, 2004. Available in Spanish: http://www.worldbank.org/ieg/ecd/me_tools_and_approaches.html
Ernesto May and others (eds.), Towards the Institutionalization of Monitoring and Evaluation Systems in Latin America and the Caribbean: Proceedings of a World Bank/Inter-American Development Bank Conference, World Bank, 2006. Available in Spanish: http://www.worldbank.org/ieg/ecd/docs/proceedings_la_eng.pdf