Appendix 2
United States Agency for International Development
Performance Monitoring and Evaluation TIPS
ABOUT TIPS
These TIPS provide practical advice and suggestions to USAID managers on issues related to
performance monitoring and evaluation. This publication is a supplemental reference to the
Automated Directives System (ADS) Chapter 203.
PERFORMANCE MONITORING & EVALUATION
TIPS
CONDUCTING A PARTICIPATORY EVALUATION
NUMBER 1
2011 Printing
USAID is promoting participation in all aspects of its development work. This TIPS outlines how to conduct a participatory evaluation.
Participatory evaluation provides for active in-
volvement in the evaluation process of those
with a stake in the program: providers, part-
ners, customers (beneficiaries), and any other
interested parties. Participation typically takes
place throughout all phases of the evaluation:
planning and design; gathering and analyzing the
data; identifying the evaluation findings, conclu-
sions, and recommendations; disseminating re-
sults; and preparing an action plan to improve
program performance.
CHARACTERISTICS OF
PARTICIPATORY
EVALUATION
Participatory evaluations typically share several
characteristics that set them apart from traditional
evaluation approaches. These include:
Participant focus and ownership. Partici-
patory evaluations are primarily oriented to
the information needs of program stakehold-
ers rather than of the donor agency. The donor
agency simply helps the participants conduct
their own evaluations, thus building their own-
ership and commitment to the results and fa-
cilitating their follow-up action.
Scope of participation. The range of partici-
pants included and the roles they play may vary.
For example, some evaluations may target only
program providers or beneficiaries, while oth-
ers may include the full array of stakeholders.
Participant negotiations. Participating
groups meet to communicate and negotiate to
reach a consensus on evaluation findings, solve
problems, and make plans to improve perfor-
mance.
Diversity of views. Views of all participants are
sought and recognized. More powerful stake-
holders allow participation of the less powerful.
Learning process. The process is a learn-
ing experience for participants. Emphasis is on
identifying lessons learned that will help partici-
pants improve program implementation, as well
as on assessing whether targets were achieved.
Flexible design. While some preliminary
planning for the evaluation may be necessary,
design issues are decided (as much as possible)
in the participatory process. Generally, evalua-
tion questions and data collection and analysis
methods are determined by the participants,
not by outside evaluators.
Empirical orientation. Good participatory
evaluations are based on empirical data. Typi-
cally, rapid appraisal techniques are used to de-
termine what happened and why.
Use of facilitators. Participants actually con-
duct the evaluation, not outside evaluators as is
traditional. However, one or more outside ex-
perts usually serve as facilitator—that is, pro-
vide supporting roles as mentor, trainer, group
processor, negotiator, and/or methodologist.
WHY CONDUCT A
PARTICIPATORY
EVALUATION?
Experience has shown that participatory evalu-
ations improve program performance. Listening
to and learning from program beneficiaries, field
staff, and other stakeholders who know why a
program is or is not working is critical to mak-
ing improvements. Also, the more these insid-
ers are involved in identifying evaluation ques-
tions and in gathering and analyzing data, the
more likely they are to use the information to
improve performance. Participatory evaluation
empowers program providers and beneficiaries
to act on the knowledge gained.
Advantages to participatory evaluations are
that they:
• Examine relevant issues by involving key
players in evaluation design
• Promote participants’ learning about the
program and its performance and enhance
their understanding of other stakeholders’
points of view
• Improve participants’ evaluation skills
• Mobilize stakeholders, enhance teamwork, and build shared commitment to act on evaluation recommendations
• Increase likelihood that evaluation informa-
tion will be used to improve performance
But there may be disadvantages. For example,
participatory evaluations may
• Be viewed as less objective because program
staff, customers, and other stakeholders
with possible vested interests participate
• Be less useful in addressing highly technical
aspects
• Require considerable time and resources to
identify and involve a wide array of stakehold-
ers
• Take participating staff away from ongoing
activities
• Be dominated and misused by some stake-
holders to further their own interests
STEPS IN CONDUCTING A
PARTICIPATORY
EVALUATION
Step 1: Decide if a participatory evalu-
ation approach is appropriate. Participatory
evaluations are especially useful when there are
questions about implementation difficulties or
program effects on beneficiaries, or when infor-
mation is wanted on stakeholders’ knowledge
of program goals or their views of progress.
Traditional evaluation approaches may be more
suitable when there is a need for independent
outside judgment, when specialized information
is needed that only technical experts can pro-
vide, when key stakeholders don’t have time to
participate, or when such serious lack of agree-
ment exists among stakeholders that a collab-
orative approach is likely to fail.
Step 2: Decide on the degree of partici-
pation. What groups will participate and what
roles will they play? Participation may be broad,
with a wide array of program staff, beneficiaries,
partners, and others. It may, alternatively, tar-
get one or two of these groups. For example,
if the aim is to uncover what hinders program
implementation, field staff may need to be in-
volved. If the issue is a program’s effect on lo-
cal communities, beneficiaries may be the most
appropriate participants. If the aim is to know
if all stakeholders understand a program’s goals
and view progress similarly, broad participation
may be best. Roles may range from serving as
a resource or informant to participating fully in
some or all phases of the evaluation.
Step 3: Prepare the evaluation scope of
work. Consider the evaluation approach—the
basic methods, schedule, logistics, and funding.
Special attention should go to defining roles of
the outside facilitator and participating stake-
holders. As much as possible, decisions such as
the evaluation questions to be addressed and
the development of data collection instruments
and analysis plans should be left to the partici-
patory process rather than be predetermined
in the scope of work.
Step 4: Conduct the team planning meet-
ing. Typically, the participatory evaluation pro-
cess begins with a workshop of the facilitator
and participants. The purpose is to build con-
sensus on the aim of the evaluation; refine the
scope of work and clarify roles and responsi-
bilities of the participants and facilitator; review
the schedule, logistical arrangements, and agen-
da; and train participants in basic data collec-
tion and analysis. Assisted by the facilitator, par-
ticipants identify the evaluation questions they
want answered. The approach taken to identify
questions may be open ended or may stipulate
broad areas of inquiry. Participants then select
appropriate methods and develop data-gather-
ing instruments and analysis plans needed to
answer the questions.
Step 5: Conduct the evaluation. Participa-
tory evaluations seek to maximize stakehold-
ers’ involvement in conducting the evaluation
in order to promote learning. Participants de-
fine the questions, consider the data collection
skills, methods, and commitment of time and la-
bor required. Participatory evaluations usually
use rapid appraisal techniques, which are sim-
pler, quicker, and less costly than conventional
sample surveys. They include methods such as
those in the box below. Typically, facilitators are
skilled in these methods, and they help train
and guide other participants in their use.
Step 6: Analyze the data and build con-
sensus on results. Once the data are gath-
ered, participatory approaches to analyzing
and interpreting them help participants build a
common body of knowledge. Once the analysis
is complete, facilitators work with participants
to reach consensus on findings, conclusions, and
recommendations. Facilitators may need to ne-
gotiate among stakeholder groups if disagree-
ments emerge. Developing a common under-
standing of the results, on the basis of empirical
evidence, becomes the cornerstone for group
commitment to a plan of action.
Step 7: Prepare an action plan. Facilitators
work with participants to prepare an action
plan to improve program performance. The
knowledge shared by participants about a pro-
gram’s strengths and weaknesses is turned into
action. Empowered by knowledge, participants
become agents of change and apply the lessons
they have learned to improve performance.
WHAT'S DIFFERENT ABOUT PARTICIPATORY EVALUATIONS?

Participatory Evaluation
• participant focus and ownership of evaluation
• broad range of stakeholders participate
• focus is on learning
• flexible design
• rapid appraisal methods
• outsiders are facilitators

Traditional Evaluation
• donor focus and ownership of evaluation
• stakeholders often don't participate
• focus is on accountability
• predetermined design
• formal methods
• outsiders are evaluators
Rapid Appraisal Methods
Key informant interviews. This in-
volves interviewing 15 to 35 individuals
selected for their knowledge and experi-
ence in a topic of interest. Interviews are
qualitative, in-depth, and semistructured.
They rely on interview guides that list
topics or open-ended questions. The in-
terviewer subtly probes the informant to
elicit information, opinions, and experi-
ences.
Focus group interviews. In these,
8 to 12 carefully selected participants
freely discuss issues, ideas, and experi-
ences among themselves. A modera-
tor introduces the subject, keeps the
discussion going, and tries to prevent
domination of the discussion by a few
participants. Focus groups should be
homogeneous, with participants of simi-
lar backgrounds as much as possible.
Community group interviews.
These take place at public meetings
open to all community members. The pri-
mary interaction is between the partici-
pants and the interviewer, who presides
over the meeting and asks questions,
following a carefully prepared question-
naire.
Direct observation. Using a detailed
observation form, observers record what
they see and hear at a program site. The
information may be about physical sur-
roundings or about ongoing activities,
processes, or discussions.
Minisurveys. These are usually
based on a structured questionnaire with
a limited number of mostly close-ended
questions. They are usually adminis-
tered to 25 to 50 people. Respondents
may be selected through probability or
nonprobability sampling techniques, or
through “convenience” sampling (inter-
viewing stakeholders at locations where
they’re likely to be, such as a clinic for
a survey on health care programs). The
major advantage of minisurveys is that
the data can be collected and analyzed
within a few days. It is the only rapid ap-
praisal method that generates quantita-
tive data.
Case studies. Case studies record
anecdotes that illustrate a program’s
shortcomings or accomplishments. They
tell about incidents or concrete events,
often from one person’s experience.
Village imaging. This involves
groups of villagers drawing maps or dia-
grams to identify and visualize problems
and solutions.
Selected Further Reading
Aaker, Jerry and Jennifer Shumaker. 1994. Looking Back and Looking Forward: A Participatory Approach to Evaluation. Heifer Project International. P.O. Box 808, Little Rock, AR 72203.

Aubel, Judi. 1994. Participatory Program Evaluation: A Manual for Involving Program Stakeholders in the Evaluation Process. Catholic Relief Services. USCC, 1011 First Avenue, New York, NY 10022.

Freeman, Jim. 1994. Participatory Evaluations: Making Projects Work. Dialogue on Development Technical Paper No. TP94/2. International Centre, The University of Calgary.

Feuerstein, Marie-Therese. 1991. Partners in Evaluation: Evaluating Development and Community Programmes with Participants. TALC, Box 49, St. Albans, Herts AL1 4AX, United Kingdom.

Guba, Egon and Yvonna Lincoln. 1989. Fourth Generation Evaluation. Sage Publications.

Pfohl, Jake. 1986. Participatory Evaluation: A User's Guide. PACT Publications. 777 United Nations Plaza, New York, NY 10017.

Rugh, Jim. 1986. Self-Evaluation: Ideas for Participatory Evaluation of Rural Community Development Projects. World Neighbors Publication.
1996, Number 2
CONDUCTING KEY INFORMANT INTERVIEWS
TIPS
Performance Monitoring and Evaluation
USAID Center for Development Information and Evaluation
What Are Key Informant Interviews?
They are qualitative, in-depth interviews of 15 to 35 people selected
for their first-hand knowledge about a topic of interest. The inter-
views are loosely structured, relying on a list of issues to be dis-
cussed. Key informant interviews resemble a conversation among
acquaintances, allowing a free flow of ideas and information. Inter-
viewers frame questions spontaneously, probe for information, and
take notes, which are elaborated on later.
When Are Key Informant Interviews Appropriate?
This method is useful in all phases of development activities—
identification, planning, implementation, and evaluation. For ex-
ample, it can provide information on the setting for a planned activ-
ity that might influence project design. Or, it could reveal why
intended beneficiaries aren’t using services offered by a project.
Specifically, it is useful in the following situations:
1. When qualitative, descriptive information is sufficient for deci-
sion-making.
2. When there is a need to understand motivation, behavior, and
perspectives of our customers and partners. In-depth interviews
of program planners and managers, service providers, host
government officials, and beneficiaries concerning their attitudes
and behaviors about a USAID activity can help explain its
successes and shortcomings.
3. When a main purpose is to generate recommendations. Key
informants can help formulate recommendations that can im-
prove a program’s performance.
4. When quantitative data collected through other methods need to
be interpreted. Key informant interviews can provide the how
and why of what happened. If, for example, a sample survey
showed farmers were failing to make loan repayments, key
informant interviews could uncover the reasons.
USAID reengineering emphasizes listening to and consulting with customers, partners and other stakeholders as we undertake development activities. Rapid appraisal techniques offer systematic ways of getting such information quickly and at low cost. This Tips advises how to conduct one such method—key informant interviews.
PN-ABS-541
5. When preliminary information is needed to
design a comprehensive quantitative study.
Key informant interviews can help frame the
issues before the survey is undertaken.
Advantages and Limitations
Advantages of key informant interviews include:
• they provide information directly from
knowledgeable people
• they provide flexibility to explore new ideas
and issues not anticipated during planning
• they are inexpensive and simple to conduct
Some disadvantages:
• they are not appropriate if quantitative data are
needed
• they may be biased if informants are not
carefully selected
• they are susceptible to interviewer biases
• it may be difficult to prove validity of
findings
Once the decision has been made to conduct key
informant interviews, following the step-by-step
advice outlined below will help ensure high-
quality information.
Steps in Conducting the Interviews
Step 1. Formulate study questions.
These relate to specific concerns of the study.
Study questions generally should be limited to five
or fewer.
Step 2. Prepare a short interview guide.
Key informant interviews do not use rigid ques-
tionnaires, which inhibit free discussion. However,
interviewers must have an idea of what questions
to ask. The guide should list major topics and
issues to be covered under each study question.
Because the purpose is to explore a few issues in
depth, guides are usually limited to 12 items.
Different guides may be necessary for interview-
ing different groups of informants.
Step 3. Select key informants.
The number should not normally exceed 35. It is
preferable to start with fewer (say, 25), since often
more people end up being interviewed than is
initially planned.
Key informants should be selected for their spe-
cialized knowledge and unique perspectives on a
topic. Planners should take care to select infor-
mants with various points of view.
Selection consists of two tasks: First, identify the
groups and organizations from which key infor-
mants should be drawn—for example, host gov-
ernment agencies, project implementing agencies,
contractors, beneficiaries. It is best to include all
major stakeholders so that divergent interests and
perceptions can be captured.
Second, select a few people from each category
after consulting with people familiar with the
groups under consideration. In addition, each
informant may be asked to suggest other people
who may be interviewed.
Step 4. Conduct interviews.
Establish rapport. Begin with an explanation of
the purpose of the interview, the intended uses of
the information and assurances of confidentiality.
Often informants will want assurances that the
interview has been approved by relevant officials.
Except when interviewing technical experts,
questioners should avoid jargon.
Sequence questions. Start with factual questions.
Questions requiring opinions and judgments
should follow. In general, begin with the present
and move to questions about the past or future.
Phrase questions carefully to elicit detailed infor-
mation. Avoid questions that can be answered by a
simple yes or no. For example, questions such as
“Please tell me about the vaccination campaign”
are better than “Do you know about the vaccina-
tion campaign?”
Use probing techniques. Encourage informants to
detail the basis for their conclusions and recom-
mendations. For example, an informant’s com-
ment, such as “The water program has really
changed things around here,” can be probed for
more details, such as “What changes have you
noticed?” “Who seems to have benefitted most?”
“Can you give me some specific examples?”
Maintain a neutral attitude. Interviewers should be
sympathetic listeners and avoid giving the impres-
sion of having strong views on the subject under
discussion. Neutrality is essential because some
informants, trying to be polite, will say what they
think the interviewer wants to hear.
Minimize translation difficulties. Sometimes it is
necessary to use a translator, which can change the
dynamics and add difficulties. For example,
differences in status between the translator and
informant may inhibit the conversation. Often
information is lost during translation. Difficulties
can be minimized by using translators who are not
known to the informants, briefing translators on
the purposes of the study to reduce misunderstand-
ings, and having translators repeat the informant’s
comments verbatim.
Step 5. Take adequate notes.
Interviewers should take notes and develop them
in detail immediately after each interview to
ensure accuracy. Use a set of common subheadings
for interview texts, selected with an eye to the
major issues being explored. Common subhead-
ings ease data analysis.
Step 6. Analyze interview data.
Interview summary sheets. At the end of each
interview, prepare a 1-2 page interview summary
sheet reducing information into manageable
themes, issues, and recommendations. Each
summary should provide information about the
key informant’s position, reason for inclusion in
the list of informants, main points made, implica-
tions of these observations, and any insights or
ideas the interviewer had during the interview.
Descriptive codes. Coding involves a systematic
recording of data. While numeric codes are not
appropriate, descriptive codes can help organize
responses. These codes may cover key themes,
concepts, questions, or ideas, such as
sustainability, impact on income, and participation
of women. A usual practice is to note the codes or
categories on the left-hand margins of the inter-
view text. Then a summary lists the page numbers
where each item (code) appears. For example,
women’s participation might be given the code
“wom–par,” and the summary sheet might indicate
it is discussed on pages 7, 13, 21, 46, and 67 of the
interview text.
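As an illustration only (not part of the original TIPS), the short script below shows one way such a code index might be built, assuming the interview texts have been typed up with page markers and the marginal codes entered on lines beginning with "CODE:". The file name and markup convention are hypothetical.

import re
from collections import defaultdict
from pathlib import Path

# Assumed (hypothetical) markup: pages separated by lines such as "== page 7 ==",
# and margin codes typed on lines such as "CODE: wom-par, sustainability".
PAGE_MARKER = re.compile(r"^== page (\d+) ==")
CODE_MARKER = re.compile(r"^CODE:\s*(.+)")

def build_code_index(transcript: Path) -> dict:
    """Map each descriptive code (e.g. 'wom-par') to the pages where it appears."""
    index = defaultdict(list)
    current_page = 1
    for line in transcript.read_text(encoding="utf-8").splitlines():
        if m := PAGE_MARKER.match(line):
            current_page = int(m.group(1))
        elif m := CODE_MARKER.match(line):
            for code in m.group(1).split(","):
                code = code.strip()
                if code and current_page not in index[code]:
                    index[code].append(current_page)
    return dict(index)

if __name__ == "__main__":
    # Prints, for example: {'wom-par': [7, 13, 21, 46, 67], 'sustainability': [3, 12]}
    print(build_code_index(Path("interview_01.txt")))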
Categories and subcategories for coding (based on
key study questions, hypotheses, or conceptual
frameworks) can be developed before interviews
begin, or after the interviews are completed.
Precoding saves time, but the categories may not
be appropriate. Postcoding helps ensure empiri-
cally relevant categories, but is time consuming. A
compromise is to begin developing coding catego-
ries after 8 to 10 interviews, as it becomes appar-
ent which categories are relevant.
Storage and retrieval. The next step is to develop a
simple storage and retrieval system. Access to a
computer program that sorts text is very helpful.
Relevant parts of interview text can then be orga-
nized according to the codes. The same effect can
be accomplished without computers by preparing
folders for each category, cutting relevant com-
ments from the interview and pasting them onto
index cards according to the coding scheme, then
filing them in the appropriate folder. Each index
card should have an identification mark so the
comment can be attributed to its source.
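The same storage-and-retrieval idea can be sketched in a few lines of code rather than folders and index cards; this is illustrative only and assumes the coded excerpts have been entered in a simple file with columns for code, informant ID, and excerpt (all file and column names here are hypothetical).

import csv
from collections import defaultdict
from pathlib import Path

def file_excerpts(records_csv: Path, out_dir: Path) -> None:
    """Sort coded interview excerpts into one text file per category (code),
    keeping the informant ID so each comment can be attributed to its source."""
    by_code = defaultdict(list)
    with records_csv.open(newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # expected columns: code, informant_id, excerpt
            by_code[row["code"]].append((row["informant_id"], row["excerpt"]))

    out_dir.mkdir(parents=True, exist_ok=True)
    for code, items in by_code.items():
        lines = [f"[{informant}] {text}" for informant, text in items]
        (out_dir / f"{code}.txt").write_text("\n".join(lines), encoding="utf-8")

if __name__ == "__main__":
    file_excerpts(Path("coded_excerpts.csv"), Path("by_category"))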
Presentation of data. Visual displays such as
tables, boxes, and figures can condense informa-
tion, present it in a clear format, and highlight
underlying relationships and trends. This helps
communicate findings to decision-makers more
clearly, quickly, and easily. Three examples (Tables 1
through 3) illustrate how data from key informant
interviews might be displayed.
Table 1. Problems Encountered in Obtaining Credit

Female Farmers:
1. Collateral requirements
2. Burdensome paperwork
3. Long delays in getting loans
4. Land registered under male's name
5. Difficulty getting to bank location

Male Farmers:
1. Collateral requirements
2. Burdensome paperwork
3. Long delays in getting loans
Step 7. Check for reliability and validity.
Key informant interviews are susceptible to error,
bias, and misinterpretation, which can lead to
flawed findings and recommendations.
Check representativeness of key informants. Take
a second look at the key informant list to ensure no
significant groups were overlooked.
For further information on this topic, contact Annette Binnendijk, CDIE Senior Evaluation Advisor, via phone (703) 875-4235, fax (703) 875-4866, or e-mail.
Copies of TIPS can be ordered from the Development Information Services Clearinghouse by calling (703) 351-4006 or by faxing (703) 351-4039. Please refer to the PN number. To order via the Internet, address a request to docorder@disc.mhs.compuserve.com
Table 3. Recommendations for Improving Training

Recommendation (number of informants)
Develop need-based training courses (39)
Develop more objective selection procedures (20)
Plan job placement after training (11)
Table 2. Impacts on Income of a
Microenterprise Activity
“In a survey I did of the participants last year, I
found that a majority felt their living condi-
tions have improved.”
—university professor
“I have doubled my crop and profits this year
as a result of the loan I got.”
—participant
“I believe that women have not benefitted as
much as men because it is more difficult for us
to get loans.”
—female participant
Assess reliability of key informants. Assess infor-
mants’ knowledgeability, credibility, impartiality,
willingness to respond, and presence of outsiders
who may have inhibited their responses. Greater
weight can be given to information provided by
more reliable informants.
Check interviewer or investigator bias. One’s own
biases as an investigator should be examined,
including tendencies to concentrate on information
that confirms preconceived notions and hypoth-
eses, seek consistency too early and overlook
evidence inconsistent with earlier findings, and be
partial to the opinions of elite key informants.
Check for negative evidence. Make a conscious
effort to look for evidence that questions prelimi-
nary findings. This brings out issues that may have
been overlooked.
Get feedback from informants. Ask the key infor-
mants for feedback on major findings. A summary
report of the findings might be shared with them,
along with a request for written comments. Often a
more practical approach is to invite them to a
meeting where key findings are presented and ask
for their feedback.
Selected Further Reading
These tips are drawn from Conducting Key Infor-
mant Interviews in Developing Countries, by
Krishna Kumar (AID Program Design and Evalua-
tion Methodology Report No. 13. December 1986.
PN-AAX-226).
PERFORMANCE MONITORING & EVALUATION
TIPS
PREPARING AN EVALUATION STATEMENT OF WORK
NUMBER 3, 2ND EDITION, 2010
ABOUT TIPS
These TIPS provide practical advice and suggestions to USAID managers on issues related to
performance management and evaluation. This publication is a supplemental reference to the
Automated Directives System (ADS) Chapter 203.
PARTICIPATION IS KEY
Use a participatory process to ensure
resulting information will be relevant
and useful. Include a range of staff
and partners that have an interest in
the evaluation to:
• Participate in planning meetings and review the SOW;
• Elicit input on potential evaluation questions; and
• Prioritize and narrow the list of questions as a group.
WHAT IS AN
EVALUATION
STATEMENT OF
WORK (SOW)?
The statement of work (SOW) is
viewed as the single most critical
document in the development of
a good evaluation. The SOW
states (1) the purpose of an
evaluation, (2) the questions that
must be answered, (3) the
expected quality of the evaluation
results, (4) the expertise needed
to do the job and (5) the time
frame and budget available to
support the task.
WHY IS THE SOW IMPORTANT?
The SOW is important because it
is a basic road map of all the
elements of a well-crafted
evaluation. It is the substance of
a contract with external
evaluators, as well as the
framework for guiding an internal
evaluation team. It contains the
information that anyone who
implements the evaluation needs
to know about the purpose of the
evaluation, the background and
history of the program being
evaluated, and the
issues/questions that must be
addressed. Writing a SOW is
about managing the first phase of
the evaluation process. Ideally,
the writer of the SOW will also
exercise management oversight
of the evaluation process.
PREPARATION – KEY
ISSUES
BALANCING FOUR
DIMENSIONS
A well drafted SOW is a critical
first step in ensuring the
credibility and utility of the final
evaluation report. Four key
dimensions of the SOW are
interrelated and should be
balanced against one another
(see Figure 1):
• The number and complexity of the evaluation questions that need to be addressed;
• Adequacy of the time allotted to obtain the answers;
• Availability of funding (budget) to support the level of evaluation design and rigor required; and
• Availability of the expertise needed to complete the job.
The development of the SOW is
an iterative process in which the
writer has to revisit, and
sometimes adjust, each of these
dimensions. Finding the
appropriate balance is the main
challenge faced in developing any
SOW.
ADVANCE PLANNING
It is a truism that good planning
is a necessary – but not the only –
condition for success in any
enterprise. The SOW preparation
process is itself an exercise in
careful and thorough planning.
The writer must consider several
principles when beginning the
process.
• As USAID and other donors place more emphasis on rigorous impact evaluation, it is essential that evaluation planning form an integral part of the initial program or project design. This includes factoring in baseline data collection, possible comparison or 'control' site selection, and the preliminary design of data collection protocols and instruments. Decisions about evaluation design must be reflected in implementation planning and in the budget.
• There will always be unanticipated problems and opportunities that emerge during an evaluation. It is helpful to build in ways to accommodate necessary changes.
• The writer of the SOW is, in essence, the architect of the evaluation. It is important to commit adequate time and energy to the task.
• Adequate time is required to gather information and to build productive relationships with stakeholders (such as program sponsors, participants, or partners) as well as the evaluation team, once selected.
• The sooner that information can be made available to the evaluation team, the more efficient they can be in providing credible answers to the important questions outlined in the SOW.
• The quality of the evaluation is dependent on providing quality guidance in the SOW.
WHO SHOULD BE INVOLVED?
Participation in all or some part of
the evaluation is an important
decision for the development of
the SOW. USAID and evaluation
experts strongly recommend that
evaluations maximize stakeholder
participation, especially in the
initial planning process.
Stakeholders may encompass a
wide array of persons and
institutions, including policy
makers, program managers,
implementing partners, host
country organizations, and
beneficiaries. In some cases,
stakeholders may also be
involved throughout the
evaluation and with the
dissemination of results. The
benefits of stakeholder
participation include the
following:
• Learning across a broader group of decision-makers, thus increasing the likelihood that the evaluation findings will be used to improve development effectiveness;
• Acceptance of the purpose and process of evaluation by those concerned;
• A more inclusive and better focused list of questions to be answered;
• Increased acceptance and ownership of the process, findings and conclusions; and
• Increased possibility that the evaluation will be used by decision makers and other stakeholders.
USAID operates in an increasingly
complex implementation world
with many players, including
other USG agencies such as the
Departments of State, Defense,
Justice and others. If the activity
engages other players, it is
important to include them in the
process.
Within USAID, there are useful
synergies that can emerge when
the SOW development process is
inclusive. For example, a SOW
that focuses on civil society
advocacy might benefit from
input by those who are experts in
rule of law.
Participation by host government
and local organizational leaders
and beneficiaries is less common
among USAID supported
evaluations. It requires sensitivity
and careful management;
however, the benefits to
development practitioners can be
substantial.
Participation of USAID managers
in evaluations is an increasingly
common practice and produces
many benefits. To ensure against
bias or conflict of interest, the
USAID manager's role can be
limited to participating in the fact
finding phase and contributing to
the analysis. However, the final
responsibility for analysis,
conclusions and
recommendations will rest with
the independent members and
team leader.
THE ELEMENTS OF A
GOOD EVALUATION
SOW
1. DESCRIBE THE ACTIVITY,
PROGRAM, OR PROCESS TO BE
EVALUATED
Be as specific and complete as
possible in describing what is to
be evaluated. The more
information provided at the
outset, the more time the
evaluation team will have to
develop the data needed to
answer the SOW questions.
If the USAID manager does not
have the time and resources to
bring together all the relevant
information needed to inform the
evaluation in advance, the SOW
might require the evaluation
team to submit a document
review as a first deliverable. This
will, of course, add to the amount
of time and budget needed in the
evaluation contract.
2. PROVIDE A BRIEF
BACKGROUND
Give a brief description of the
context, history and current status
of the activities or programs,
names of implementing agencies
and organizations involved, and
other information to help the
evaluation team understand
background and context. In
addition, this section should state
the development hypothesis(es)
and clearly describe the program
(or project) theory that underlies
the program's design. USAID
activities, programs and
strategies, as well as most
policies, are based on a set of “if-
then” propositions that predict
how a set of interventions will
produce intended results. A
development hypothesis is
generally represented in a results
framework (or sometimes a
logical framework at the project
level) and identifies the causal
relationships among various
objectives sought by the program
(see TIPS 13: Building a Results
Framework). That is, if one or
more objectives are achieved,
then the next higher order
objective will be achieved.
Whether the development
hypothesis is the correct one, or
whether it remains valid at the
time of the evaluation, is an
important question for most
evaluation SOWs to consider.
3. STATE THE PURPOSE AND
USE OF THE EVALUATION
Why is an evaluation needed?
FIGURE 2. ELEMENTS OF A
GOOD EVALUATION SOW
1. Describe the activity, program, or
process to be evaluated
2. Provide a brief background on the
development hypothesis and its
implementation
3. State the purpose and use of the
evaluation
4. Clarify the evaluation questions
5. Identify the evaluation method(s)
6. Identify existing performance
information sources, with special
attention to monitoring data
7. Specify the deliverable(s) and the
timeline
8. Identify the composition of the
evaluation team (one team
member should be an evaluation
specialist) and participation of
customers and partners
9. Address schedule and logistics
10. Clarify requirements for reporting
and dissemination
11. Include a budget
The clearer the purpose, the more likely it is that the evaluation will produce credible and useful findings, conclusions and recommendations. In defining the purpose, several questions should be considered:
• Who wants the information? Will higher level decision makers be part of the intended audience?
• What do they want to know?
• For what purpose will the information be used?
• When will it be needed?
• How accurate must it be?
ADS 203.3.6.1 identifies a number
of triggers that may inform the
purpose and use of an evaluation,
as follows:
• A key management decision is required for which there is inadequate information;
• Performance information indicates an unexpected result (positive or negative) that should be explained (such as gender differential results);
• Customer, partner, or other informed feedback suggests that there are implementation problems, unmet needs, or unintended consequences or impacts;
• Issues of impact, sustainability, cost-effectiveness, or relevance arise;
• The validity of the development hypotheses or critical assumptions is questioned, for example, due to unanticipated changes in the host country environment; and
• Periodic portfolio reviews have identified key questions that need to be answered or require consensus.
4. CLARIFY THE EVALUATION
QUESTIONS
The core element of an
evaluation SOW is the list of
questions posed for the
evaluation. One of the most
common problems with
evaluation SOWs is that they
contain a long list of poorly
defined or “difficult to answer”
questions given the time, budget
and resources provided. While a
participatory process ensures
wide ranging input into the initial
list of questions, it is equally
important to reduce this list to a
manageable number of key
questions. Keeping in mind the
relationship between budget,
time, and expertise needed, every
potential question should be
thoughtfully examined by asking
a number of questions.
• Is this question of essential importance to the purpose and the users of the evaluation?
• Is this question clear, precise and 'researchable'?
• What level of reliability and validity is expected in answering the question?
• Does determining an answer to the question require a certain kind of experience and expertise?
• Are we prepared to provide the management commitment, time and budget to secure a credible answer to this question?
If these questions can be
answered yes, then the team
probably has a good list of
questions that will inform the
evaluation team and drive the
evaluation process to a successful
result.
5. IDENTIFY EVALUATION
METHODS
The SOW manager has to decide whether the evaluation design and methodology should be specified in the SOW (see USAID ADS 203.3.6.4 on Evaluation Methodologies). This
depends on whether the writer
has expertise, or has internal
access to evaluation research
knowledge and experience. If so,
and the writer is confident of the
'on the ground' conditions that
will allow for different evaluation
designs, then it is appropriate to
include specific requirements in
the SOW.
If the USAID SOW manager does
not have the kind of evaluation
experience needed, especially for
more formal and rigorous
evaluations, it is good practice to:
1) require that the team (or
bidders, if it is contracted out)
include a description of (or
approach for developing) the
proposed research design and
methodology, or 2) require a
detailed design and evaluation
plan to be submitted as a first
deliverable. In this way, the SOW
manager benefits from external
evaluation expertise. In either
case, the design and
methodology should not be
finalized until the team has an
opportunity to gather detailed
information and discuss final
issues with USAID.
The selection of the design and
data collection methods must be
a function of the type of
evaluation and the level of
statistical and quantitative data
confidence needed. If the project
is selected for a rigorous impact
evaluation, then the design and
methods used will be more
sophisticated and technically
complex. If external assistance is
necessary, the evaluation SOW
will be issued as part of the initial
RFP/RFA (Request for Proposal or
Request for Application)
solicitation process. All methods
and evaluation designs should be
as rigorous as reasonably
possible. In some cases, a rapid
appraisal is sufficient and
appropriate (see TIPS 5: Using
Rapid Appraisal Methods). At the
other extreme, planning for a
sophisticated and complex
evaluation process requires
greater up-front investment in
baselines, outcome monitoring
processes, and carefully
constructed experimental or
quasi-experimental designs.
6. IDENTIFY EXISTING
PERFORMANCE INFORMATION
Identify the existence and
availability of relevant
performance information sources,
such as performance monitoring
systems and/or previous
evaluation reports. Including a
summary of the types of data
available, the timeframe, and an
indication of their quality and
reliability will help the evaluation
team to build on what is already
available.
7. SPECIFY DELIVERABLES
AND TIMELINE
The SOW must specify the
products, the time frame, and the
content of each deliverable that is
required to complete the
evaluation contract. Some SOWs
simply require delivery of a draft
evaluation report by a certain
date. In other cases, a contract
may require several deliverables,
such as a detailed evaluation
design, a work plan, a document
review, and the evaluation report.
The most important deliverable is
the final evaluation report. TIPS
17: Constructing an Evaluation
Report provides a suggested
outline of an evaluation report
that may be adapted and
incorporated directly into this
section.
The evaluation report should
differentiate between findings,
conclusions, and
recommendations, as outlined in
Figure 3. As evaluators move
beyond the facts, greater
interpretation is required. By
ensuring that the final report is
organized in this manner,
decision makers can clearly
understand the facts on which the
evaluation is based. In addition,
it facilitates greater
understanding of where there
might be disagreements
concerning the interpretation of
those facts. While individuals
may disagree on
recommendations, they should
not disagree on the basic facts.
Another consideration is whether
a section on “lessons learned”
should be included in the final
report. A good evaluation will
produce knowledge about best
practices, point out what works,
what does not, and contribute to
the more general fund of tested
experience on which other
program designers and
implementers can draw.
Because unforeseen obstacles
may emerge, it is helpful to be as
realistic as possible about what
can be accomplished within a
given time frame. Also, include
some wording that allows USAID
and the evaluation team to adjust
schedules in consultation with the
USAID manager should this be
necessary.
8. DISCUSS THE COMPOSITION
OF THE EVALUATION TEAM
USAID evaluation guidance for
team selection strongly
recommends that at least one
team member have credentials
and experience in evaluation
design and methods. The team
leader must have strong team
management skills, and sufficient
experience with evaluation
standards and practices to ensure
a credible product. The
appropriate team leader is a
person with whom the SOW
manager can develop a working
partnership as the team moves
through the evaluation research
design and planning process.
He/she must also be a person
who can deal effectively with
senior U.S. and host country
officials and other leaders.
Experience with USAID is often an
important factor, particularly for
management focused
evaluations, and in formative
evaluations designed to establish
the basis for a future USAID
program or the redesign of an
existing program. If the
evaluation entails a high level of
complexity, survey research and
other sophisticated methods, it
may be useful to add a data
collection and analysis expert to
the team.
Generally, evaluation skills will be
supplemented with additional
subject matter experts. As the
level of research competence
increases in many countries
where USAID has programs, it
makes good sense to include
local collaborators, whether
survey research firms or
independents, to be full members
of the evaluation team.
9. ADDRESS SCHEDULING,
LOGISTICS AND OTHER
SUPPORT
Good scheduling and effective
local support contribute greatly
to the efficiency of the evaluation
team. This section defines the
time frame and the support
structure needed to answer the
evaluation questions at the
required level of validity. For
evaluations involving complex
designs and sophisticated survey
research data collection methods,
the schedule must allow enough
time, for example, to develop
sample frames, prepare and
pretest survey instruments,
training interviewers, and analyze
data. New data collection and
analysis technologies can
accelerate this process, but need
to be provided for in the budget.
In some cases, an advance trip to
the field by the team leader
and/or methodology expert may
be justified where extensive
pretesting and revision of
instruments is required or when
preparing for an evaluation in
difficult or complex operational
environments.
Adequate logistical and
administrative support is also
essential. USAID often works in
countries with poor infrastructure,
frequently in conflict/post-conflict
environments where security is an
issue. If the SOW requires the
team to make site visits to distant
or difficult locations, such
planning must be incorporated
into the SOW.
Particularly overseas, teams often
rely on local sources for
administrative support, including
scheduling of appointments,
finding translators and
interpreters, and arranging
transportation. In many countries
where foreign assistance experts
have been active, local consulting
firms have developed this kind of
expertise. Good interpreters are
in high demand, and are essential
to any evaluation team's success,
especially when using qualitative
data collection methods.
10. CLARIFY REQUIREMENTS
FOR REPORTING AND
DISSEMINATION
Most evaluations involve several
phases of work, especially for
more complex designs. The
SOW can set up the relationship
between the evaluation team, the
USAID manager and other
stakeholders. If a working group
was established to help define
the SOW questions, continue to
use the group as a forum for
interim reports and briefings
provided by the evaluation team.
The SOW should specify the
timing and details for each
briefing session. Examples of
what might be specified include:
• Due dates for draft and final reports;
• Dates for oral briefings (such as a mid-term and final briefing);
• Number of copies needed;
• Language requirements, where applicable;
• Formats and page limits;
• Requirements for datasets, if primary data has been collected;
• A requirement to submit all evaluations to the Development Experience Clearinghouse for archiving - this is the responsibility of the evaluation contractor; and
• Other needs for communicating, marketing and disseminating results that are the responsibility of the evaluation team.
The SOW should specify when
working drafts are to be
submitted for review, the time
frame allowed for USAID review
and comment, and the time
frame to revise and submit the
final report.
11. INCLUDE A BUDGET
With the budget section, the
SOW comes full circle. As stated,
budget considerations have to be
part of the decision making
process from the beginning.
The budget is a product of the
questions asked, human
resources needed, logistical and
administrative support required,
and the time needed to produce
a high quality, rigorous and
useful evaluation report in the
most efficient and timely manner.
It is essential for contractors to
understand the quality, validity
and rigor required so they can
develop a responsive budget that
will meet the standards set forth
in the SOW.
For more information:
TIPS publications are available online at [insert website].
Acknowledgements:
Our thanks to those whose experience and insights helped shape this publication including USAID's
Office of Management Policy, Budget and Performance (MPBP). This publication was written by Richard
Blue, Ph.D. of Management Systems International.
Comments regarding this publication can be directed to:
Gerald Britan, Ph.D.
Tel: (202) 712-1158
gbritan@usaid.gov
Contracted under RAN-M-00-04-00049-A-FY0S-84
Integrated Managing for Results II
USAID's reengineering guidance encourages the use of rapid, low cost methods for collecting information on the performance of our development activities. Direct observation, the subject of this Tips, is one such method.
PN-ABY-208
1996, Number 4
Performance Monitoring and Evaluation
TIPS
USAID Center for Development Information and Evaluation
USING DIRECT OBSERVATION TECHNIQUES
What is Direct Observation?
Most evaluation teams conduct some fieldwork, observing what's actually going on at
assistance activity sites. Often, this is done informally, without much thought to the
quality of data collection. Direct observation techniques allow for a more systematic,
structured process, using well-designed observation record forms.
Advantages and Limitations
The main advantage of direct observation is that an event, institution, facility, or
process can be studied in its natural setting, thereby providing a richer understanding
of the subject.
For example, an evaluation team that visits microenterprises is likely to better
understand their nature, problems, and successes after directly observing their
products, technologies, employees, and processes, than by relying solely on
documents or key informant interviews. Another advantage is that it may reveal
conditions, problems, or patterns many informants may be unaware of or unable to
describe adequately.
On the negative side, direct observation is susceptible to observer bias. The very act
of observation also can affect the behavior being studied.
When Is Direct Observation Useful?
Direct observation may be useful:
• When performance monitoring data indicate results are not being accomplished as planned, and when implementation problems are suspected, but not understood. Direct observation can help identify whether the process is poorly implemented or required inputs are absent.
• When details of an activity's process need to be assessed, such as whether tasks are being implemented according to standards required for effectiveness.
• When an inventory of physical facilities and inputs is needed and not available from existing sources.
• When interview methods are unlikely to elicit needed information accurately or reliably, either because the respondents don't know or may be reluctant to say.
OBSERVATION OF GROWTH
MONITORING SESSION
Name of the Observer
Date
Time
Place
Was the scale set to 0 at the beginning of the growth
session?
Yes______ No ______
How was age determined?
By asking______
From growth chart_______
Other_______
When the child was weighed, was it stripped to
practical limit?
Yes______ No______
Was the weight read correctly?
Yes______No______
Process by which weight and age transferred to record
Health Worker wrote it_____
Someone else wrote it______ Other______
Did Health Worker interpret results for the mother?
Yes_______No_______
Steps in Using Direct Observation
The quality of direct observation can be improved by
following these steps.
Step 1. Determine the focus
Because of typical time and resource constraints, direct
observation has to be selective, looking at a few activities,
events, or phenomena that are central to the evaluation
questions.
For example, suppose an evaluation team intends to study a
few health clinics providing immunization services for
children. Obviously, the team can assess a variety of
areas—physical facilities and surroundings, immunization
activities of health workers, recordkeeping and managerial
services, and community interactions. The team should
narrow its focus to one or two areas likely to generate the
most useful information and insights.
Next, break down each activity, event, or phenomenon into
subcomponents. For example, if the team decides to look at
immunization activities of health workers, prepare a list of
the tasks to observe, such as preparation of vaccine,
consultation with mothers, and vaccine administration.
Each task may be further divided into subtasks; for
example, administering vaccine likely includes preparing
the recommended doses, using the correct administration
technique, using sterile syringes, and protecting vaccine
from heat and light during use.
If the team also wants to assess physical facilities and
surroundings, it will prepare an inventory of items to be
observed.
Step 2. Develop direct observation forms
The observation record form should list the items to be
observed and provide spaces to record observations. These
forms are similar to survey questionnaires, but
investigators record their own observations, not
respondents' answers.
Observation record forms help standardize the observation
process and ensure that all important items are covered.
They also facilitate better aggregation of data gathered
from various sites or by various investigators. An excerpt from a direct observation form used in a study of primary health care in the Philippines is shown in the box above.
When preparing direct observation forms, consider the following:
1. Identify in advance the possible response categories for
each item, so that the observer can answer with a simple
yes or no, or by checking the appropriate answer. Closed
response categories help minimize observer variation, and
therefore improve the quality of data.
2. Limit the number of items in a form. Forms should normally not exceed 40–50 items. If necessary, it is better to use two or more smaller forms than a single large one that runs several pages.
3. Provide adequate space to record additional observations for which response categories were not determined.
4. Use of computer software designed to create forms can
be very helpful. It facilitates a neat, unconfusing form that
can be easily completed.
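Purely as an illustration of point 1 above, and not something the original TIPS prescribes, a closed-category observation form can also be represented as structured data so that completed records are checked automatically against the allowed answers. The items and answers below are hypothetical, echoing the growth-monitoring excerpt.

# A minimal sketch: form items with fixed response categories, plus a check that
# a completed record uses only the allowed answers.
FORM = {
    "scale_set_to_zero": ["yes", "no"],
    "age_determined_by": ["asking", "growth chart", "other"],
    "weight_read_correctly": ["yes", "no"],
}

def validate(record: dict) -> list:
    """Return a list of problems (missing items or out-of-category answers)."""
    problems = []
    for item, allowed in FORM.items():
        answer = record.get(item)
        if answer is None:
            problems.append(f"{item}: no observation recorded")
        elif answer not in allowed:
            problems.append(f"{item}: '{answer}' is not one of {allowed}")
    return problems

if __name__ == "__main__":
    print(validate({"scale_set_to_zero": "yes", "age_determined_by": "guess"}))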
Step 3. Select the sites
Once the forms are ready, the next step is to decide where
the observations will be carried out and whether it will be
based on one or more sites.
A single site observation may be justified if a site can be
treated as a typical case or if it is unique. Consider a
situation in which all five agricultural extension centers
established by an assistance activity have not been
performing well. Here, observation at a single site may be
justified as a typical case. A single site observation may
also be justified when the case is unique; for example, if
only one of five centers had been having major problems,
and the purpose of the evaluation is trying to discover why.
However, single site observations should be avoided generally, because cases the team assumes to be typical or unique may not be. As a rule, several sites are necessary to obtain a reasonable understanding of a situation.
In most cases, teams select sites based on experts' advice. The investigator develops criteria for selecting sites, then relies on the judgment of knowledgeable people. For example, if a team evaluating a family planning project decides to observe three clinics—one highly successful, one moderately successful, and one struggling clinic—it may request USAID staff, local experts, or other informants to suggest a few clinics for each category. The team will then choose three after examining their recommendations. Using more than one expert reduces individual bias in selection.
Alternatively, sites can be selected based on data from performance monitoring. For example, activity sites (clinics, schools, credit institutions) can be ranked from best to worst based on performance measures, and then a sample drawn from them.
Step 4. Decide on the best timing
Timing is critical in direct observation, especially when events are to be observed as they occur. Wrong timing can distort findings. For example, rural credit organizations receive most loan applications during the planting season, when farmers wish to purchase agricultural inputs. If credit institutions are observed during the nonplanting season, an inaccurate picture of loan processing may result.
People and organizations follow daily routines associated with set times. For example, credit institutions may accept loan applications in the morning; farmers in tropical climates may go to their fields early in the morning and return home by noon. Observation periods should reflect work rhythms.
Step 5. Conduct the field observation
Establish rapport. Before embarking on direct observation,
a certain level of rapport should be established with the
people, community, or organization to be studied. The
presence of outside observers, especially if officials or
experts, may generate some anxiety among those being
observed. Often informal, friendly conversations can
reduce anxiety levels.
Also, let them know the purpose of the observation is not to
report on individuals' performance, but to find out what
kind of problems in general are being encountered.
Allow sufficient time for direct observation. Brief visits can be deceptive partly because people tend to behave differently in the presence of observers. It is not uncommon, for example, for health workers to become more caring or for extension workers to be more persuasive when being watched. However, if observers stay for relatively longer periods, people become less self-conscious and gradually start behaving naturally. It is essential to stay at least two or three days on a site to gather valid, reliable data.
Use a team approach. If possible, two observers should observe together. A team can develop more comprehensive, higher quality data, and avoid individual bias.
Train observers. If many sites are to be observed, nonexperts can be trained as observers, especially if observation forms are clear, straightforward, and mostly closed-ended.
Step 6. Complete forms
Take notes as inconspicuously as possible. The best time for recording is during observation. However, this is not always feasible because it may make some people self-conscious or disturb the situation. In these cases, recording should take place as soon as possible after observation.
Step 7. Analyze the data
Data from closed-ended questions on the observation form can be analyzed using basic procedures such as frequency counts and cross-tabulations. Statistical software packages such as SAS or SPSS facilitate such analysis and data display.
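The same tabulations can be produced in any statistical package. As a minimal sketch, the example below uses Python and pandas with hypothetical observation items and values (not drawn from any actual evaluation):

```python
# Illustrative only: tabulating closed-ended observation items.
# Column names and values are hypothetical.
import pandas as pd

obs = pd.DataFrame({
    "site_performance": ["high", "high", "medium", "low", "low", "low"],
    "explained_dosage": ["yes",  "no",   "yes",    "no",  "no",  "yes"],
})

# Frequency count for a single observation item.
print(obs["explained_dosage"].value_counts())

# Cross-tabulation of the item against the site performance category.
print(pd.crosstab(obs["site_performance"], obs["explained_dosage"]))
```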
Direct Observation of Primary
Health Care Services in the Philippines
An example of structured direct observation was an
effort to identify deficiencies in the primary health
care system in the Philippines. It was part of a
larger, multicountry research project, the Primary
Health Care Operations Research Project (PRICOR).
The evaluators prepared direct observation forms
covering the activities, tasks, and subtasks health
workers must carry out in health clinics to
accomplish clinical objectives. These forms were
closed-ended and in most cases observations could
simply be checked to save time. The team looked at
18 health units from a "typical" province, including
samples of units that were high, medium and low
performers in terms of key child survival outcome
indicators.
The evaluation team identified and quantified many
problems that required immediate government
attention. For example, in 40 percent of the cases
where followup treatment was required at home,
health workers failed to tell mothers the timing and
amount of medication required. In 90 percent of
cases, health workers failed to explain to mothers the
results of child weighing and growth plotting, thus
missing the opportunity to involve mothers in the
nutritional care of their child. Moreover, numerous
errors were made in weighing and plotting.
This case illustrates that use of closed-ended
observation instruments promotes the reliability and
consistency of data. The findings are thus more
credible and likely to influence program managers to
make needed improvements.
CDIE's TIPS series provides advice and suggestions to
USAID managers on how to plan and conduct
performance monitoring and evaluation activities.
They are supplemental references to the reengineering
automated directives system (ADS), chapter 203. For
further information, contact Annette Binnendijk, CDIE
Senior Evaluation Advisor, phone (703) 875–4235, fax
(703) 875–4866, or e-mail. Tips can be ordered from
the Development Information Services Clearinghouse
by calling (703) 351-4006 or by faxing (703) 351–4039.
Please refer to the PN number. To order via Internet,
address requests to
docorder@disc.mhs.compuserve.com
Analysis of any open-ended interview questions can also provide extra richness of understanding and insights. Here, use of database management software with text storage capabilities, such as dBase, can be useful.
Step 8. Check for reliability and validity
Direct observation techniques are susceptible to error and bias that can affect reliability and validity. These can be minimized by following some of the procedures suggested, such as checking the representativeness of the sample of sites selected; using closed-ended, unambiguous response categories on the observation forms; recording observations promptly; and using teams of observers at each site.
Selected Further Reading
Information in this Tips is based on "Rapid Data Collection
Methods for Field Assessments" by Krishna Kumar, in
Team Planning Notebook for Field-Based Program
Assessments (USAID PPC/CDIE, 1991).
For more on direct observation techniques applied to the
Philippines health care system, see Stewart N. Blumenfeld,
Manuel Roxas, and Maricor de los Santos, "Systematic
Observation in the Analysis of Primary Health Care
Services," in Rapid Appraisal Methods, edited by Krishna
Kumar (The World Bank, 1993).
PERFORMANCE MONITORING & EVALUATION
TIPS
USING RAPID APPRAISAL METHODS
NUMBER 5, 2ND EDITION, 2010
ABOUT TIPS
These TIPS provide practical advice and suggestions to USAID managers on issues related to performance
monitoring and evaluation. This publication is a supplemental reference to the Automated Directive
System (ADS) Chapter 203.
WHAT IS RAPID
APPRAISAL?
Rapid Appraisal (RA) is an approach
that draws on multiple evaluation
methods and techniques to quickly,
yet systematically, collect data when
time in the field is limited. RA
practices are also useful when there
are budget constraints or limited
availability of reliable secondary
data. For example, time and budget
limitations may preclude the option
of using representative sample
surveys.
BENEFITS – WHEN TO USE
RAPID APPRAISAL
METHODS
Rapid appraisals are quick and can
be done at relatively low cost.
Rapid appraisal methods can help
gather, analyze, and report relevant
information for decision-makers
within days or weeks. This is not
possible with sample surveys. RAs
can be used in the following cases:
• for formative evaluations, to make
mid-course corrections in project
design or implementation when
customer or partner feedback
indicates a problem (See ADS
203.3.6.1);
• when a key management decision
is required and there is inadequate
information;
• for performance monitoring, when
data are collected and the
techniques are repeated over time
for measurement purposes;
• to better understand the issues
behind performance monitoring
data; and
• for project pre-design assessment.
LIMITATIONS – WHEN
RAPID APPRAISALS ARE
NOT APPROPRIATE
Findings from rapid appraisals may
have limited reliability and validity,
and cannot be generalized to the
larger population. Accordingly,
rapid appraisal should not be the
sole basis for summative or impact
evaluations. Data can be biased and
inaccurate unless multiple methods
are used to strengthen the validity
of findings and careful preparation is
undertaken prior to beginning field
work.
WHEN ARE RAPID
APPRAISAL
METHODS
APPROPRIATE?
Choosing between rapid appraisal methods and more time-consuming methods, such as sample surveys, should depend on balancing several factors, listed below.
• Purpose of the study. The importance and nature of the decision depending on it.
• Confidence in results. The accuracy, reliability, and validity of findings needed for management decisions.
• Time frame. When a decision must be made.
• Resource constraints (budget).
• Evaluation questions to be answered (see TIPS 3: Preparing an Evaluation Statement of Work).
USE IN TYPES OF
EVALUATION
Rapid appraisal methods are often
used in formative evaluations.
Findings are strengthened when
evaluators use triangulation
(employing more than one data
collection method) as a check on
the validity of findings from any one
method.
Rapid appraisal methods are also
used in the context of summative
evaluations. The data from rapid
appraisal methods and techniques
complement the use of quantitative
methods such as surveys based on
representative sampling. For
example, a randomized survey of smallholder farmers may tell you that farmers have a difficult time selling their goods at market, but may not provide the details of why this is occurring. A researcher could then use interviews with farmers to determine the details necessary to construct a more complete theory of why it is difficult for smallholder farmers to sell their goods.
KEY PRINCIPLES
FOR ENSURING
USEFUL RAPID
APPRAISAL DATA
COLLECTION
No set of rules dictates which
methods and techniques should be
used in a given field situation;
however, a number of key principles
can be followed to ensure the
collection of useful data in a rapid
appraisal.
• Preparation is key. As in any
evaluation, the evaluation design
and selection of methods must
begin with a thorough
understanding of the evaluation
questions and the client’s needs
for evaluative information. The
client’s intended uses of data must
guide the evaluation design and
the types of methods that are
used.
• Triangulation increases the validity
of findings. To lessen bias and
strengthen the validity of findings
from rapid appraisal methods and
techniques, it is imperative to use
multiple methods. In this way,
data collected using one method
can be compared to that collected
using other methods, thus giving a
researcher the ability to generate
valid and reliable findings. If, for
example, data collected using Key
Informant Interviews reveal the
same findings as data collected
from Direct Observation and
Focus Group Interviews, there is
less chance that the findings from
the first method were due to
researcher bias or due to the
findings being outliers. Table 1
summarizes common rapid
appraisal methods and suggests
how findings from any one
method can be strengthened by
the use of other methods.
COMMON RAPID
APPRAISAL
METHODS
INTERVIEWS
This method involves one-on-one
interviews with individuals or key
informants selected for their
knowledge or diverse views.
Interviews are qualitative, in-depth
and semi-structured. Interview
guides are usually used and
questions may be further framed
during the interview, using subtle
probing techniques. Individual
interviews may be used to gain information on a general topic but cannot provide the in-depth inside knowledge on evaluation topics that key informants may provide.
MINISURVEYS
A minisurvey consists of interviews with five to fifty individuals, usually selected using non-probability sampling (sampling in which respondents are chosen based on their understanding of issues related to a purpose or specific questions, usually used when sample sizes are small and time or access to areas is limited). Structured questionnaires are used with a limited number of closed-ended questions. Minisurveys generate quantitative data that can often be collected and analyzed quickly.
FOCUS GROUPS
The focus group is a gathering of a homogeneous group of five to twelve participants to discuss issues and experiences among themselves. Focus groups are used to test an idea or to get reactions on specific topics. A moderator introduces the topic, stimulates and focuses the discussion, and prevents domination of the discussion by a few, while another evaluator documents the conversation.
EVALUATION METHODS
COMMONLY USED IN RAPID
APPRAISAL
• Interviews
• Community Discussions
• Exit Polling
• Transect Walks
• Focus Groups
• Minisurveys
• Community Mapping
• Secondary Data Collection
• Group Discussions
• Customer Service Surveys
• Direct Observation
COMMUNITY DISCUSSIONS
This method takes place at a public meeting that is open to all community members; it can be successfully moderated with as many as 100 or more people. The primary interaction is between the participants, while the moderator leads the discussion and asks questions following a carefully prepared interview guide.
GROUP DISCUSSIONS
This method involves the selection
of approximately five participants
who are knowledgeable about a
given topic and are comfortable
enough with one another to freely
discuss the issue as a group. The
moderator introduces the topic and
keeps the discussion going while
another evaluator records the
discussion. Participants talk among each other rather than respond directly to the moderator.
DIRECT OBSERVATION
Teams of observers record what
they hear and see at a program site
using a detailed observation form.
Observation may be of the physical
surrounding or of ongoing activities,
processes, or interactions.
COLLECTING SECONDARY
DATA
This method involves the on-site
collection of existing secondary
data, such as export sales, loan
information, health service statistics,
etc. These data are an important augmentation to information collected using qualitative methods such as interviews, focus groups, and community discussions. The evaluator must be able to quickly determine the validity and reliability of the data (see TIPS 12: Indicator and Data Quality).
TRANSECT WALKS
The transect walk is a participatory approach in which the evaluator asks a selected community member to walk with him or her, for example, through the center of town, from one end of a village to the other, or through a market. The evaluator asks the individual, usually a key informant, to point out and discuss important sites, neighborhoods, businesses, etc., and to discuss related issues.
COMMUNITY MAPPING
Community mapping is a technique that requires the participation of residents of a program site. It can be used to help locate natural resources, routes, service delivery points, regional markets, trouble spots, etc., on a map of the area, or to use residents' feedback to drive the development of a map that includes such information.
THE ROLE OF TECHNOLOGY IN RAPID APPRAISAL
Certain equipment and technologies can aid the rapid collection of data and help to decrease the incidence of errors. These include, for example, handheld computers or personal digital assistants (PDAs) for data input, cellular phones, digital recording devices for interviews, videotaping and photography, and the use of geographic information systems (GIS) data and aerial photographs.
Table 1. COMMON RAPID APPRAISAL METHODS
(Each entry below gives, in order: the method, what it is useful for providing, an example, advantages, limitations, and further references.)
INDIVIDUAL INTERVIEWS
Interviews − A general overview of
the topic from
someone who has a
broad knowledge and
in-depth experience
and understanding
(key informant) or in-
depth information on
a very specific topic or
subtopic (individual)
− Suggestions and
recommendations to
improve key aspects
of a program
Key informant:
Interview with
program
implementation
director
Interview with
director of a regional
trade association
Individual:
Interview with an
activity manager within
an overall
development program
Interview with a local
entrepreneur trying to
enter export trade
− Provides in-depth,
inside information
on specific issues
from the
individual's
perspective and
experience
− Flexibility permits
exploring
unanticipated
topics
− Easy to administer
− Low cost
− Susceptible to
interviewer and
selection biases
− Individual
interviews lack the
broader
understanding and
insight that a key
informant can
provide
TIPS No. 2,
Conducting Key
Informant Interviews
K. Kumar, Conducting
Key Informant Surveys
in Developing
Countries, 1986
Bamberger, Rugh, and
Mabry, Real World
Evaluation, 2006
UNICEF Website: M&E
Training Modules:
Overview of RAP
Techniques
Minisurveys − Quantitative data on
narrowly focused
questions, for a
relatively
homogeneous
population, when
representative
sampling is not
possible or required
− Quick data on
attitudes, beliefs,
behaviors of
beneficiaries or
partners
− A customer service
assessment
− Rapid exit interviews
after voting
− Quantitative data
from multiple
respondents
− Low cost
− Findings are less
generalizable than
those from sample
surveys unless the
universe of the
population is
surveyed
TIPS No. 9,
Conducting Customer
Service Assessments
K. Kumar, Conducting
Mini Surveys in
Developing Countries,
1990
Bamberger, Rugh, and
Mabry, RealWorld
Evaluation, 2006 on
purposeful sampling
GROUP INTERVIEWS
Focus Groups − Customer views on
services, products,
benefits
− Information on
implementation
problems
− Suggestions and
recommendations for
improving specific
activities
− Discussion on
experience related
to a specific program
intervention
− Effects of a new
business regulation
or proposed price
changes
− Group discussion
may reduce
inhibitions,
allowing free
exchange of ideas
− Low cost
− Discussion may be
dominated by a
few individuals
unless the process
is facilitated/
managed well
TIPS No. 10,
Conducting Focus
Group Interviews
K. Kumar, Conducting
Group Interviews in
Developing Countries,
1987
T. Greenbaum,
Moderating Focus
Groups: A Practical
Guide for Group
Facilitation, 2000
Group
Discussions
− Understanding of
issues from different
perspectives and
experiences of
participants from a
specific subpopulation
− Discussion with
young women on
access to prenatal
and infant care
− Discussion with
entrepreneurs about
export regulations
− Small group size
allows full
participation
− Allows good
understanding of
specific topics
− Low cost
− Findings cannot be
generalized to a
larger population
Bamberger, Rugh, and
Mabry, RealWorld
Evaluation, 2006
UNICEF Website: M&E
Training Modules:
Community Meetings
Community
Discussions
− Understanding of an
issue or topic from a
wide range of
participants from key
evaluation sites within
a village, town, city, or
city neighborhood
− A Town Hall
meeting
− Yields a wide
range of opinions
on issues
important to
participants
− A great deal of
information can be
obtained at one
point in time
− Findings cannot be
generalized to
larger population
or to
subpopulations of
concern
− Larger groups
difficult to
moderate
Bamberger, Rugh, and
Mabry, RealWorld
Evaluation, 2006
UNICEF Website: M&E
Training Modules:
Community Meetings
ADDITIONAL COMMONLY USED TECHNIQUES
Direct
Observation
− Visual data on physical
infrastructure,
supplies, conditions
− Information about an
agency’s or business’s
delivery systems,
services
− Insights into behaviors
or events
− Market place to
observe goods being
bought and sold,
who is involved,
sales interactions
− Confirms data
from interviews
− Low cost
− Observer bias
unless two to
three evaluators
observe same
place or activity
TIPS No. 4, Using
Direct Observation
Techniques
WFP Website:
Monitoring & Evaluation
Guidelines: What Is
Direct Observation and
When Should It Be Used?
Collecting
Secondary
Data
− Validity to findings
gathered from
interviews and group
discussions
− Microenterprise
bank loan info.
− Value and volume of
exports
− Number of people
served by a health
clinic, social service
provider
− Quick, low cost
way of obtaining
important
quantitative data
− Must be able to
determine
reliability and
validity of data
TIPS No. 12,
Guidelines for
Indicator and Data
Quality
PARTICIPATORY TECHNIQUES
Transect
Walks
− Important visual and
locational information
and a deeper
understanding of
situations and issues
− Walk with key
informant from one
end of a village or
urban neighborhood
to another, through
a market place, etc.
− Insider's viewpoint
− Quick way to find
out location of
places of interest
to the evaluator
− Low cost
− Susceptible to
interviewer and
selection biases
Bamberger, Rugh, and
Mabry, Real World
Evaluation, 2006
UNICEF Website: M&E
Training Modules:
Overview of RAP
Techniques
Community
Mapping
− Info. on locations
important for data
collection that could
be difficult to find
− Quick comprehension
on spatial location of
services/resources in a
region which can give
insight to access issues
− Map of village and
surrounding area
with locations of
markets, water and
fuel sources, conflict
areas, etc.
− Important
locational data
when there are no
detailed maps of
the program site
− Rough locational
information
Bamberger, Rugh, and
Mabry, Real World
Evaluation, 2006
UNICEF Website: M&E
Training Modules:
Overview of RAP
Techniques
References Cited
M. Bamberger, J. Rugh, and L. Mabry, Real World Evaluation. Working Under Budget, Time, Data, and Political
Constraints. Sage Publications, Thousand Oaks, CA, 2006.
T. Greenbaum, Moderating Focus Groups: A Practical Guide for Group Facilitation. Sage Publications, Thousand Oaks,
CA, 2000.
K. Kumar, “Conducting Mini Surveys in Developing Countries,” USAID Program Design and Evaluation Methodology
Report No. 15, 1990 (revised 2006).
K. Kumar, “Conducting Group Interviews in Developing Countries,” USAID Program Design and Evaluation
Methodology Report No. 8, 1987.
K. Kumar, “Conducting Key Informant Interviews in Developing Countries,” USAID Program Design and Evaluation
Methodology Report No. 13, 1989.
For more information:
TIPS publications are available online at [insert website].
Acknowledgements:
Our thanks to those whose experience and insights helped shape this publication including USAID’s Office of
Management Policy, Budget and Performance (MPBP). This publication was authored by Patricia Vondal, Ph.D., of
Management Systems International.
Comments regarding this publication can be directed to:
Gerald Britan, Ph.D.
Tel: (202) 712-1158
gbritan@usaid.gov
Contracted under RAN-M-00-04-00049-A-FY0S-84
Integrated Managing for Results II
PERFORMANCE MONITORING & EVALUATION
TIPS
SELECTING PERFORMANCE INDICATORS
NUMBER 6, 2ND EDITION, 2010
ABOUT TIPS
These TIPS provide practical advice and suggestions to USAID managers on issues related to
performance monitoring and evaluation. This publication is a supplemental reference to the
Automated Directive System (ADS) Chapter 203.
WHAT ARE
PERFORMANCE
INDICATORS?
Performance indicators define a
measure of change for the
results identified in a Results
Framework (RF). When well-
chosen, they convey whether
key objectives are achieved in a
meaningful way for
performance management.
While a result (such as an
Assistance Objective or an
Intermediate Result) identifies
what we hope to accomplish,
indicators tell us by what
standard that result will be
measured. Targets define
whether there will be an
expected increase or decrease,
and by what magnitude (for further information, see TIPS 13: Building a Results Framework and TIPS 8: Baselines and Targets).
Indicators may be quantitative
or qualitative in nature.
Quantitative indicators are
numerical: an example is a
person’s height or weight. On
the other hand, qualitative
indicators require subjective
evaluation. Qualitative data are
sometimes reported in
numerical form, but those
numbers do not have arithmetic
meaning on their own. Some
examples are a score on an
institutional capacity index or
progress along a milestone
scale. When developing
quantitative or qualitative
indicators, the important point
is that the indicator be
constructed in a way that
permits consistent
measurement over time.
USAID has developed many
performance indicators over the
years. Some examples include
the dollar value of non-
traditional exports, private
investment as a percentage of
gross domestic product,
contraceptive prevalence rates,
child mortality rates, and
progress on a legislative reform
index.
Selecting an optimal set of indicators
to track progress against key results
lies at the heart of an effective
performance management system.
This TIPS provides guidance on how to
select effective performance
indicators.
WHY ARE
PERFORMANCE
INDICATORS
IMPORTANT?
Performance indicators provide
objective evidence that an
intended change is occurring.
Performance indicators lie at
the heart of developing an
effective performance
management system – they
define the data to be collected
and enable actual results
achieved to be compared with
planned results over time.
Hence, they are an
indispensable management tool
for making evidence-based
decisions about program
strategies and activities.
Performance indicators can also
be used:
• To assist managers in focusing on the achievement of development results.
• To provide objective evidence that results are being achieved.
• To orient and motivate staff and partners toward achieving results.
• To communicate USAID achievements to host country counterparts, other partners, and customers.
• To more effectively report results achieved to USAID's stakeholders, including the U.S. Congress, the Office of Management and Budget, and citizens.
FOR WHAT RESULTS
ARE PERFORMANCE
INDICATORS
REQUIRED?
THE PROGRAM LEVEL
USAID's ADS requires that at least one indicator be chosen for each result in the Results Framework in order to measure progress (see ADS 203.3.3.1). This includes the Assistance Objective (the highest-level objective in the Results Framework) as well as supporting Intermediate Results (IRs); AOs and IRs are also termed impacts and outcomes, respectively, in other systems (see TIPS 13: Building a Results Framework). Note that some Results Frameworks incorporate IRs from other partners if those results are important for USAID to achieve the AO; if such IRs are included, it is recommended that they be monitored, although less rigorous standards apply. These indicators should be included in the Mission or Office Performance Management Plan (PMP) (see TIPS 8: Preparing a PMP).
PROJECT LEVEL
AO teams are required to
collect data regularly for
projects and activities, including
inputs, outputs, and processes,
to ensure they are progressing
as expected and are
contributing to relevant IRs and
AOs. These indicators should
be included in a project-level
monitoring and evaluation
(M&E) plan. The M&E plan
should be integrated in project
management and reporting
systems (e.g., quarterly, semi-
annual, or annual reports).
TYPES OF
INDICATORS IN
USAID SYSTEMS
Several different types of
indicators are used in USAID
systems. It is important to
understand the different roles
and functions of these
indicators so that managers can
construct a performance
management system that
effectively meets internal
management and Agency
reporting needs.
CUSTOM INDICATORS
Custom Indicators are
performance indicators that
reflect progress within each
unique country or program
context. While they are useful
for managers on the ground,
they often cannot be
aggregated across a number of
programs like standard
indicators.
Example: Progress on a
milestone scale reflecting
legal reform and
implementation to ensure
credible elections, as follows:
• Draft law is developed in consultation with non-governmental organizations (NGOs) and political parties.
• Public input is elicited.
• Draft law is modified based on feedback.
• The secretariat presents the draft to the Assembly.
• The law is passed by the Assembly.
• The appropriate government body completes internal policies or regulations to implement the law.
The example above would differ
for each country depending on
its unique process for legal
reform.
STANDARD INDICATORS
Standard indicators are used
primarily for Agency reporting
purposes. Standard indicators
produce data that can be
aggregated across many
programs. Optimally, standard
indicators meet both Agency
reporting and on-the-ground
management needs. However,
in many cases, standard
indicators do not substitute for
performance (or custom
indicators) because they are
designed to meet different
needs. There is often a tension
between measuring a standard
across many programs and
selecting indicators that best
reflect true program results and
that can be used for internal
management purposes.
Example: Number of Laws or
Amendments to Ensure
Credible Elections Adopted
with USG Technical
Assistance.
In comparing the standard
indicator above with the
previous example of a custom
indicator, it becomes clear that
the custom indicator is more
likely to be useful as a
management tool, because it
provides greater specificity and
is more sensitive to change.
Standard indicators also tend to
measure change at the output
level, because they are precisely
the types of measures that are,
at face value, more easily
aggregated across many
programs, as the following
example demonstrates.
Example: The number of
people trained in policy and
regulatory practices.
CONTEXTUAL INDICATORS
Contextual indicators are used
to understand the broader
environment in which a
program operates, to track
assumptions, or to examine
externalities that may affect
success, failure, or progress.
They do not represent program
performance, because the
indicator measures very high-
level change.
Example: Score on the
Freedom House Index or
Gross Domestic Product
(GDP).
This sort of indicator may be
important to track to
understand the context for
USAID programming (e.g. a
severe drop in GDP is likely to
affect economic growth
programming), but represents a
level of change that is outside
the manageable interest of
program managers. In most
cases, it would be difficult to
say that USAID programming
has affected the overall level of
freedom within a country or
GDP (given the size of most
USAID programs in comparison
to the host country economy,
for example).
PARTICIPATION IS ESSENTIAL
Experience suggests that
participatory approaches are an
essential aspect of developing and
maintaining effective performance
management systems. Collaboration
with development partners
(including host country institutions,
civil society organizations (CSOs),
and implementing partners) as well
as customers has important benefits.
It allows you to draw on the experience of others, builds buy-in to achieving results and meeting targets, and provides an opportunity to ensure that systems are as streamlined and practical as possible.
INDICATORS AND DATA—SO
WHAT’S THE DIFFERENCE?
Indicators define the particular
characteristic or dimension that will
be used to measure change. Height
is an example of an indicator.
The data are the actual
measurements or factual information
that result from the indicator. Five
feet seven inches is an example of
data.
WHAT ARE USAID’S
CRITERIA FOR
SELECTING
INDICATORS?
USAID policies (ADS 203.3.4.2)
identify seven key criteria to
guide the selection of
performance indicators:
• Direct
• Objective
• Useful for Management
• Attributable
• Practical
• Adequate
• Disaggregated, as necessary
These criteria are designed to
assist managers in selecting
optimal indicators. The extent
to which performance
indicators meet each of the
criteria must be consistent with
the requirements of good
management. As managers
consider these criteria, they
should use a healthy measure
of common sense and
reasonableness. While we
always want the "best"
indicators, there are inevitably
trade-offs among various
criteria. For example, data for
the most direct or objective
indicators of a given result
might be very expensive to
collect or might be available
too infrequently. Table 1
includes a summary checklist
that can be used during the
selection process to assess
these trade-offs.
Two overarching factors
determine the extent to which
performance indicators function
as useful tools for managers
and decision-makers:
• The degree to which performance indicators accurately reflect the process or phenomenon they are being used to measure.
• The level of comparability of performance indicators over time: that is, can we measure results in a consistent and comparable manner over time?
1. DIRECT
An indicator is direct to the
extent that it clearly measures
the intended result. This
criterion is, in many ways, the
most important. While this may
appear to be a simple concept,
it is one of the more common
problems with indicators.
Indicators should either be
widely accepted for use by
specialists in a subject area,
exhibit readily understandable
face validity (i.e., be intuitively
understandable), or be
supported by research.
Managers should place greater
confidence in indicators that are
direct. Consider the following
example:
Result: Increased
Transparency of Key Public
Sector Institutions
Indirect Indicator: Passage
of the Freedom of
Information Act (FOIA)
Direct Indicator: Progress
on a milestone scale
demonstrating enactment
and enforcement of policies
that require open hearings
The passage of FOIA, while an
important step, does not
actually measure whether a
target institution is more
transparent. The better
example outlined above is a
more direct measure.
Level
Another dimension of whether
an indicator is direct relates to
whether it measures the right
level of the objective. A
common problem is that there
is often a mismatch between
the stated result and the
indicator. The indicator should
not measure a higher or lower
level than the result.
For example, if a program
measures improved
management practices through
the real value of agricultural
production, the indicator is
measuring a higher-level effect
than is stated (see Figure 1).

Figure 1. Levels
− Result: Increased Production / Indicator: Real value of agricultural production
− Result: Improved Management Practices / Indicator: Number and percent of farmers using a new technology
− Result: Improved Knowledge and Awareness / Indicator: Number and percent of farmers who can identify five out of eight steps for implementing a new technology
Understanding levels is rooted
in understanding the
development hypothesis
inherent in the Results
Framework (see TIPS 13:
Building a Results Framework).
Tracking indicators at each level
facilitates better understanding
and analysis of whether the
development hypothesis is
working. For example, if
farmers are aware of how to
implement a new technology,
but the number or percent that
actually use the technology is
not increasing, there may be
other issues that need to be
addressed. Perhaps the
technology is not readily
available in the community, or
there is not enough access to
credit. This flags the issue for
managers and provides an
opportunity to make
programmatic adjustments.
Proxy Indicators
Proxy indicators are linked to
the result by one or more
assumptions. They are often
used when the most direct
indicator is not practical (e.g.,
data collection is too costly or
the program is being
implemented in a conflict zone).
When proxies are used, the
relationship between the
indicator and the result should
be well-understood and clearly
articulated. The more
assumptions the indicator is
based upon, the weaker the
indicator. Consider the
following examples:
Result: Increased Household
Income
Proxy Indicator: Dollar
value of household
expenditures
The proxy indicator above
makes the assumption that an
increase in income will result in
increased household
expenditures; this assumption is
well-grounded in research.
Result: Increased Access to
Justice
Proxy Indicator: Number of
new courts opened
The indicator above is based on
the assumption that physical
access to new courts is the
fundamental development
problem—as opposed to
corruption, the costs associated
with using the court system, or
lack of knowledge of how to
obtain legal assistance and/or
use court systems. Proxies can
be used when assumptions are
clear and when there is research
to support that assumption.
2. OBJECTIVE
An indicator is objective if it is
unambiguous about 1) what is
being measured and 2) what
data are being collected. In
other words, two people should
be able to collect performance
information for the same
indicator and come to the same
conclusion. Objectivity is
critical to collecting comparable
data over time, yet it is one of
the most common problems
noted in audits. As a result,
pay particular attention to the
definition of the indicator to
ensure that each term is clearly
defined, as the following
examples demonstrate:
Poor Indicator: Number of
successful firms
Objective Indicator:
Number of firms with an
annual increase in revenues
of at least 5%
The better example outlines the
exact criteria for how "successful" is defined and
ensures that changes in the
data are not attributable to
differences in what is being
counted.
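As a minimal sketch of why such a definition yields consistent data, the indicator "number of firms with an annual increase in revenues of at least 5%" can be computed mechanically once the criterion is written down. The firm names and revenue figures below are hypothetical:

```python
# Illustrative only: computing an objectively defined indicator.
# Firm names and revenue figures are hypothetical.
revenues = {
    "Firm A": {"base_year": 100_000, "report_year": 112_000},
    "Firm B": {"base_year": 250_000, "report_year": 255_000},
    "Firm C": {"base_year": 80_000,  "report_year": 86_000},
}

threshold = 0.05  # the 5% growth criterion written into the indicator definition

qualifying = [
    name for name, r in revenues.items()
    if (r["report_year"] - r["base_year"]) / r["base_year"] >= threshold
]

# Two data collectors applying the same definition should reach the same count.
print(len(qualifying), qualifying)
```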
Objectivity can be particularly
challenging when constructing
qualitative indicators. Good
qualitative indicators permit
regular, systematic judgment
about progress and reduce
subjectivity (to the extent
possible). This means that
there must be clear criteria or
protocols for data collection.
3. USEFUL FOR
MANAGEMENT
An indicator is useful to the
extent that it provides a
meaningful measure of change
over time for management
decision-making. One aspect of
usefulness is to ensure that the
indicator is measuring the "right change" in order to achieve
development results. For
example, the number of
meetings between Civil Society
Organizations (CSOs) and
government is something that
can be counted but does not
necessarily reflect meaningful
change. By selecting indicators,
managers are defining program
success in concrete ways.
Managers will focus on
achieving targets for those
indicators, so it is important to
consider the intended and
unintended incentives that
performance indicators create.
As a result, the system may
need to be fine-tuned to ensure
that incentives are focused on
achieving true results.
A second dimension is whether
the indicator measures a rate of
change that is useful for
management purposes. This
means that the indicator is
constructed so that change can
be monitored at a rate that
facilitates management actions
(such as corrections and
improvements). Consider the
following examples:
Result: Targeted legal
reform to promote
investment
Less Useful for
Management: Number of
laws passed to promote
direct investment.
More Useful for
Management: Progress
toward targeted legal reform
based on the following
stages:
Stage 1. Interested groups
propose that legislation is
needed on issue.
Stage 2. Issue is introduced
in the relevant legislative
committee/executive
ministry.
Stage 3. Legislation is
drafted by relevant
committee or executive
ministry.
Stage 4. Legislation is
debated by the legislature.
Stage 5. Legislation is
passed by full approval
process needed in legislature.
Stage 6. Legislation is
approved by the executive
branch (where necessary).
Stage 7. Implementing
actions are taken.
Stage 8. No immediate need
identified for amendments to
the law.
The less useful example may be
useful for reporting; however, it
is so general that it does not
provide a good way to track
progress for performance
management. The process of
passing or implementing laws is
a long-term one, so that over
the course of a year or two the
AO team may only be able to
report that one or two such
laws have passed when, in
reality, a high degree of effort is
invested in the process. In this
case, the more useful example
better articulates the important
steps that must occur for a law
to be passed and implemented
and facilitates management
decision-making. If there is a
problem in meeting interim
milestones, then corrections
can be made along the way.
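One way to picture how a milestone scale surfaces interim progress is a small tracking sketch. The stage labels paraphrase the example above, and the completion status is invented purely for illustration:

```python
# Illustrative only: recording progress on a milestone (stage) scale so that
# interim movement is visible between reporting periods.
stages = [
    "Interested groups propose legislation on the issue",
    "Issue introduced in the relevant committee or ministry",
    "Legislation drafted",
    "Legislation debated by the legislature",
    "Legislation passed by the legislature",
    "Legislation approved by the executive branch",
    "Implementing actions taken",
    "No immediate need identified for amendments",
]

completed_through = 3  # hypothetical: stages 1-3 reached this reporting period

for number, stage in enumerate(stages, start=1):
    status = "done" if number <= completed_through else "pending"
    print(f"Stage {number}: {stage} [{status}]")

print(f"Milestone score: {completed_through} of {len(stages)} stages")
```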
4. ATTRIBUTABLE
An indicator is attributable if it
can be plausibly associated with
USAID interventions. The
concept of "plausible association" has been used in
USAID for some time. It does
not mean that X input equals Y
output. Rather, it is based on
the idea that a case can be
made to other development
practitioners that the program
has materially affected
identified change. It is
important to consider the logic
behind what is proposed to
ensure attribution. If a Mission
is piloting a project in three
schools, but claims national
level impact in school
completion, this would not pass
the common sense test.
Consider the following
examples:
Result: Improved Budgeting
Capacity
Less Attributable: Budget
allocation for the Ministry of
Justice (MOJ)
More Attributable: The
extent to which the budget
produced by the MOJ meets
established criteria for good
budgeting
If the program works with the
Ministry of Justice to improve
budgeting capacity (by
providing technical assistance
on budget analysis), the quality
of the budget submitted by the
MOJ may improve. However, it
is often difficult to attribute
changes in the overall budget
allocation to USAID
interventions, because there are
a number of externalities that
affect a country’s final budget –
much like in the U.S. For
example, in tough economic
times, the budget for all
government institutions may
decrease. A crisis may emerge
that requires the host country
to reallocate resources. The
better example above is more
attributable (and directly linked)
to USAID’s intervention.
5. PRACTICAL
A practical indicator is one for
which data can be collected on a
timely basis and at a reasonable
cost. There are two dimensions
that determine whether an
indicator is practical. The first is
time and the second is cost.
Time
Consider whether resulting data
are available with enough
frequency for management
purposes (i.e., timely enough to
correspond to USAID
performance management and
reporting purposes). Second,
examine whether data are
current when available. If
reliable data are available each
year, but the data are a year
old, then it may be problematic.
Cost
Performance indicators should
provide data to managers at a
cost that is reasonable and
appropriate as compared with
the management utility of the
data. As a very general rule of
thumb, it is suggested that
between 5% and 10% of
program or project resources
be allocated for monitoring and
evaluation (M&E) purposes.
However, it is also important to
consider priorities and program
context. A program would
likely be willing to invest more
resources in measuring changes
that are central to decision-
making and less resources in
measuring more tangential
results. A more mature
program may have to invest
more in demonstrating higher-
level changes or impacts as
compared to a new program.
6. ADEQUATE
Taken as a group, the indicator
(or set of indicators) should be
sufficient to measure the stated
result. In other words, they
should be the minimum
number necessary and cost-
effective for performance
management. The number of
indicators required to
adequately measure a result
depends on 1) the complexity
of the result being measured, 2)
the amount of information
needed to make reasonably
confident decisions, and 3) the
level of resources available.
Too many indicators create
information overload and
become overly burdensome to
maintain. Too few indicators
are also problematic, because
the data may only provide a
partial or misleading picture of
performance. The following
demonstrates how one
indicator can be adequate to
measure the stated objective:
Result: Increased Traditional
Exports in Targeted Sectors
Adequate Indicator: Value
of traditional exports in
targeted sectors
In contrast, an objective
focusing on improved maternal
health may require two or three
indicators to be adequate. A
general rule of thumb is to
select between two and three
performance indicators per
result. If many more indicators
are needed to adequately cover
the result, then it may signify
that the objective is not
properly focused.
7. DISAGGREGATED, AS
NECESSARY
The disaggregation of data by
gender, age, location, or some
other dimension is often
important from both a
management and reporting
point of view. Development
programs often affect
population cohorts or
institutions in different ways.
For example, it might be
important to know to what
extent youth (up to age 25) or
Aesculaepius Aesculaepius
Aesculaepius
 
Management in global enviornment1
Management in global enviornment1Management in global enviornment1
Management in global enviornment1
 

Dernier

Call Girls Service in Bommanahalli - 7001305949 with real photos and phone nu...
Call Girls Service in Bommanahalli - 7001305949 with real photos and phone nu...Call Girls Service in Bommanahalli - 7001305949 with real photos and phone nu...
Call Girls Service in Bommanahalli - 7001305949 with real photos and phone nu...narwatsonia7
 
Book Call Girls in Yelahanka - For 7001305949 Cheap & Best with original Photos
Book Call Girls in Yelahanka - For 7001305949 Cheap & Best with original PhotosBook Call Girls in Yelahanka - For 7001305949 Cheap & Best with original Photos
Book Call Girls in Yelahanka - For 7001305949 Cheap & Best with original Photosnarwatsonia7
 
Kolkata Call Girls Services 9907093804 @24x7 High Class Babes Here Call Now
Kolkata Call Girls Services 9907093804 @24x7 High Class Babes Here Call NowKolkata Call Girls Services 9907093804 @24x7 High Class Babes Here Call Now
Kolkata Call Girls Services 9907093804 @24x7 High Class Babes Here Call NowNehru place Escorts
 
Call Girls Whitefield Just Call 7001305949 Top Class Call Girl Service Available
Call Girls Whitefield Just Call 7001305949 Top Class Call Girl Service AvailableCall Girls Whitefield Just Call 7001305949 Top Class Call Girl Service Available
Call Girls Whitefield Just Call 7001305949 Top Class Call Girl Service Availablenarwatsonia7
 
Call Girls Kanakapura Road Just Call 7001305949 Top Class Call Girl Service A...
Call Girls Kanakapura Road Just Call 7001305949 Top Class Call Girl Service A...Call Girls Kanakapura Road Just Call 7001305949 Top Class Call Girl Service A...
Call Girls Kanakapura Road Just Call 7001305949 Top Class Call Girl Service A...narwatsonia7
 
VIP Call Girls Mumbai Arpita 9910780858 Independent Escort Service Mumbai
VIP Call Girls Mumbai Arpita 9910780858 Independent Escort Service MumbaiVIP Call Girls Mumbai Arpita 9910780858 Independent Escort Service Mumbai
VIP Call Girls Mumbai Arpita 9910780858 Independent Escort Service Mumbaisonalikaur4
 
Bangalore Call Girls Marathahalli 📞 9907093804 High Profile Service 100% Safe
Bangalore Call Girls Marathahalli 📞 9907093804 High Profile Service 100% SafeBangalore Call Girls Marathahalli 📞 9907093804 High Profile Service 100% Safe
Bangalore Call Girls Marathahalli 📞 9907093804 High Profile Service 100% Safenarwatsonia7
 
High Profile Call Girls Jaipur Vani 8445551418 Independent Escort Service Jaipur
High Profile Call Girls Jaipur Vani 8445551418 Independent Escort Service JaipurHigh Profile Call Girls Jaipur Vani 8445551418 Independent Escort Service Jaipur
High Profile Call Girls Jaipur Vani 8445551418 Independent Escort Service Jaipurparulsinha
 
Glomerular Filtration rate and its determinants.pptx
Glomerular Filtration rate and its determinants.pptxGlomerular Filtration rate and its determinants.pptx
Glomerular Filtration rate and its determinants.pptxDr.Nusrat Tariq
 
Russian Call Girls in Pune Riya 9907093804 Short 1500 Night 6000 Best call gi...
Russian Call Girls in Pune Riya 9907093804 Short 1500 Night 6000 Best call gi...Russian Call Girls in Pune Riya 9907093804 Short 1500 Night 6000 Best call gi...
Russian Call Girls in Pune Riya 9907093804 Short 1500 Night 6000 Best call gi...Miss joya
 
Call Girls Service Chennai Jiya 7001305949 Independent Escort Service Chennai
Call Girls Service Chennai Jiya 7001305949 Independent Escort Service ChennaiCall Girls Service Chennai Jiya 7001305949 Independent Escort Service Chennai
Call Girls Service Chennai Jiya 7001305949 Independent Escort Service ChennaiNehru place Escorts
 
Call Girl Koramangala | 7001305949 At Low Cost Cash Payment Booking
Call Girl Koramangala | 7001305949 At Low Cost Cash Payment BookingCall Girl Koramangala | 7001305949 At Low Cost Cash Payment Booking
Call Girl Koramangala | 7001305949 At Low Cost Cash Payment Bookingnarwatsonia7
 
Book Call Girls in Kasavanahalli - 7001305949 with real photos and phone numbers
Book Call Girls in Kasavanahalli - 7001305949 with real photos and phone numbersBook Call Girls in Kasavanahalli - 7001305949 with real photos and phone numbers
Book Call Girls in Kasavanahalli - 7001305949 with real photos and phone numbersnarwatsonia7
 
97111 47426 Call Girls In Delhi MUNIRKAA
97111 47426 Call Girls In Delhi MUNIRKAA97111 47426 Call Girls In Delhi MUNIRKAA
97111 47426 Call Girls In Delhi MUNIRKAAjennyeacort
 
Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...
Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...
Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...narwatsonia7
 
call girls in Connaught Place DELHI 🔝 >༒9540349809 🔝 genuine Escort Service ...
call girls in Connaught Place  DELHI 🔝 >༒9540349809 🔝 genuine Escort Service ...call girls in Connaught Place  DELHI 🔝 >༒9540349809 🔝 genuine Escort Service ...
call girls in Connaught Place DELHI 🔝 >༒9540349809 🔝 genuine Escort Service ...saminamagar
 
Call Girls Electronic City Just Call 7001305949 Top Class Call Girl Service A...
Call Girls Electronic City Just Call 7001305949 Top Class Call Girl Service A...Call Girls Electronic City Just Call 7001305949 Top Class Call Girl Service A...
Call Girls Electronic City Just Call 7001305949 Top Class Call Girl Service A...narwatsonia7
 
Call Girls Thane Just Call 9910780858 Get High Class Call Girls Service
Call Girls Thane Just Call 9910780858 Get High Class Call Girls ServiceCall Girls Thane Just Call 9910780858 Get High Class Call Girls Service
Call Girls Thane Just Call 9910780858 Get High Class Call Girls Servicesonalikaur4
 
Mumbai Call Girls Service 9910780858 Real Russian Girls Looking Models
Mumbai Call Girls Service 9910780858 Real Russian Girls Looking ModelsMumbai Call Girls Service 9910780858 Real Russian Girls Looking Models
Mumbai Call Girls Service 9910780858 Real Russian Girls Looking Modelssonalikaur4
 

Dernier (20)

Call Girls Service in Bommanahalli - 7001305949 with real photos and phone nu...
Call Girls Service in Bommanahalli - 7001305949 with real photos and phone nu...Call Girls Service in Bommanahalli - 7001305949 with real photos and phone nu...
Call Girls Service in Bommanahalli - 7001305949 with real photos and phone nu...
 
Book Call Girls in Yelahanka - For 7001305949 Cheap & Best with original Photos
Book Call Girls in Yelahanka - For 7001305949 Cheap & Best with original PhotosBook Call Girls in Yelahanka - For 7001305949 Cheap & Best with original Photos
Book Call Girls in Yelahanka - For 7001305949 Cheap & Best with original Photos
 
Kolkata Call Girls Services 9907093804 @24x7 High Class Babes Here Call Now
Kolkata Call Girls Services 9907093804 @24x7 High Class Babes Here Call NowKolkata Call Girls Services 9907093804 @24x7 High Class Babes Here Call Now
Kolkata Call Girls Services 9907093804 @24x7 High Class Babes Here Call Now
 
Call Girls Whitefield Just Call 7001305949 Top Class Call Girl Service Available
Call Girls Whitefield Just Call 7001305949 Top Class Call Girl Service AvailableCall Girls Whitefield Just Call 7001305949 Top Class Call Girl Service Available
Call Girls Whitefield Just Call 7001305949 Top Class Call Girl Service Available
 
Call Girls Kanakapura Road Just Call 7001305949 Top Class Call Girl Service A...
Call Girls Kanakapura Road Just Call 7001305949 Top Class Call Girl Service A...Call Girls Kanakapura Road Just Call 7001305949 Top Class Call Girl Service A...
Call Girls Kanakapura Road Just Call 7001305949 Top Class Call Girl Service A...
 
VIP Call Girls Mumbai Arpita 9910780858 Independent Escort Service Mumbai
VIP Call Girls Mumbai Arpita 9910780858 Independent Escort Service MumbaiVIP Call Girls Mumbai Arpita 9910780858 Independent Escort Service Mumbai
VIP Call Girls Mumbai Arpita 9910780858 Independent Escort Service Mumbai
 
Bangalore Call Girls Marathahalli 📞 9907093804 High Profile Service 100% Safe
Bangalore Call Girls Marathahalli 📞 9907093804 High Profile Service 100% SafeBangalore Call Girls Marathahalli 📞 9907093804 High Profile Service 100% Safe
Bangalore Call Girls Marathahalli 📞 9907093804 High Profile Service 100% Safe
 
High Profile Call Girls Jaipur Vani 8445551418 Independent Escort Service Jaipur
High Profile Call Girls Jaipur Vani 8445551418 Independent Escort Service JaipurHigh Profile Call Girls Jaipur Vani 8445551418 Independent Escort Service Jaipur
High Profile Call Girls Jaipur Vani 8445551418 Independent Escort Service Jaipur
 
Glomerular Filtration rate and its determinants.pptx
Glomerular Filtration rate and its determinants.pptxGlomerular Filtration rate and its determinants.pptx
Glomerular Filtration rate and its determinants.pptx
 
sauth delhi call girls in Bhajanpura 🔝 9953056974 🔝 escort Service
sauth delhi call girls in Bhajanpura 🔝 9953056974 🔝 escort Servicesauth delhi call girls in Bhajanpura 🔝 9953056974 🔝 escort Service
sauth delhi call girls in Bhajanpura 🔝 9953056974 🔝 escort Service
 
Russian Call Girls in Pune Riya 9907093804 Short 1500 Night 6000 Best call gi...
Russian Call Girls in Pune Riya 9907093804 Short 1500 Night 6000 Best call gi...Russian Call Girls in Pune Riya 9907093804 Short 1500 Night 6000 Best call gi...
Russian Call Girls in Pune Riya 9907093804 Short 1500 Night 6000 Best call gi...
 
Call Girls Service Chennai Jiya 7001305949 Independent Escort Service Chennai
Call Girls Service Chennai Jiya 7001305949 Independent Escort Service ChennaiCall Girls Service Chennai Jiya 7001305949 Independent Escort Service Chennai
Call Girls Service Chennai Jiya 7001305949 Independent Escort Service Chennai
 
Call Girl Koramangala | 7001305949 At Low Cost Cash Payment Booking
Call Girl Koramangala | 7001305949 At Low Cost Cash Payment BookingCall Girl Koramangala | 7001305949 At Low Cost Cash Payment Booking
Call Girl Koramangala | 7001305949 At Low Cost Cash Payment Booking
 
Book Call Girls in Kasavanahalli - 7001305949 with real photos and phone numbers
Book Call Girls in Kasavanahalli - 7001305949 with real photos and phone numbersBook Call Girls in Kasavanahalli - 7001305949 with real photos and phone numbers
Book Call Girls in Kasavanahalli - 7001305949 with real photos and phone numbers
 
97111 47426 Call Girls In Delhi MUNIRKAA
97111 47426 Call Girls In Delhi MUNIRKAA97111 47426 Call Girls In Delhi MUNIRKAA
97111 47426 Call Girls In Delhi MUNIRKAA
 
Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...
Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...
Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...
 
call girls in Connaught Place DELHI 🔝 >༒9540349809 🔝 genuine Escort Service ...
call girls in Connaught Place  DELHI 🔝 >༒9540349809 🔝 genuine Escort Service ...call girls in Connaught Place  DELHI 🔝 >༒9540349809 🔝 genuine Escort Service ...
call girls in Connaught Place DELHI 🔝 >༒9540349809 🔝 genuine Escort Service ...
 
Call Girls Electronic City Just Call 7001305949 Top Class Call Girl Service A...
Call Girls Electronic City Just Call 7001305949 Top Class Call Girl Service A...Call Girls Electronic City Just Call 7001305949 Top Class Call Girl Service A...
Call Girls Electronic City Just Call 7001305949 Top Class Call Girl Service A...
 
Call Girls Thane Just Call 9910780858 Get High Class Call Girls Service
Call Girls Thane Just Call 9910780858 Get High Class Call Girls ServiceCall Girls Thane Just Call 9910780858 Get High Class Call Girls Service
Call Girls Thane Just Call 9910780858 Get High Class Call Girls Service
 
Mumbai Call Girls Service 9910780858 Real Russian Girls Looking Models
Mumbai Call Girls Service 9910780858 Real Russian Girls Looking ModelsMumbai Call Girls Service 9910780858 Real Russian Girls Looking Models
Mumbai Call Girls Service 9910780858 Real Russian Girls Looking Models
 

USAID TIPS Series

The more these insiders are involved in identifying evaluation questions and in gathering and analyzing data, the more likely they are to use the information to improve performance. Participatory evaluation empowers program providers and beneficiaries to act on the knowledge gained.

Advantages of participatory evaluations are that they:

• Examine relevant issues by involving key players in evaluation design
• Promote participants' learning about the program and its performance, and enhance their understanding of other stakeholders' points of view
• Improve participants' evaluation skills
• Mobilize stakeholders, enhance teamwork, and build shared commitment to act on evaluation recommendations
• Increase the likelihood that evaluation information will be used to improve performance

But there may be disadvantages. For example, participatory evaluations may:

• Be viewed as less objective because program staff, customers, and other stakeholders with possible vested interests participate
• Be less useful in addressing highly technical aspects
• Require considerable time and resources to identify and involve a wide array of stakeholders
• Take participating staff away from ongoing activities
• Be dominated and misused by some stakeholders to further their own interests

STEPS IN CONDUCTING A PARTICIPATORY EVALUATION

Step 1: Decide if a participatory evaluation approach is appropriate. Participatory evaluations are especially useful when there are questions about implementation difficulties or program effects on beneficiaries, or when information is wanted on stakeholders' knowledge of program goals or their views of progress. Traditional evaluation approaches may be more suitable when there is a need for independent outside judgment, when specialized information is needed that only technical experts can provide, when key stakeholders don't have time to participate, or when such serious lack of agreement exists among stakeholders that a collaborative approach is likely to fail.

Step 2: Decide on the degree of participation. What groups will participate and what roles will they play? Participation may be broad, with a wide array of program staff, beneficiaries, partners, and others. It may, alternatively, target one or two of these groups. For example, if the aim is to uncover what hinders program implementation, field staff may need to be involved. If the issue is a program's effect on local communities, beneficiaries may be the most appropriate participants. If the aim is to know whether all stakeholders understand a program's goals and view progress similarly, broad participation may be best. Roles may range from serving as a resource or informant to participating fully in some or all phases of the evaluation.

Step 3: Prepare the evaluation scope of work. Consider the evaluation approach: the basic methods, schedule, logistics, and funding. Special attention should go to defining the roles of the outside facilitator and participating stakeholders. As much as possible, decisions such as the evaluation questions to be addressed and the development of data collection instruments and analysis plans should be left to the participatory process rather than be predetermined in the scope of work.

Step 4: Conduct the team planning meeting. Typically, the participatory evaluation process begins with a workshop of the facilitator and participants. The purpose is to build consensus on the aim of the evaluation; refine the scope of work and clarify roles and responsibilities of the participants and facilitator; review the schedule, logistical arrangements, and agenda; and train participants in basic data collection and analysis. Assisted by the facilitator, participants identify the evaluation questions they want answered. The approach taken to identify questions may be open-ended or may stipulate broad areas of inquiry.
Participants then select appropriate methods and develop the data-gathering instruments and analysis plans needed to answer the questions.

Step 5: Conduct the evaluation. Participatory evaluations seek to maximize stakeholders' involvement in conducting the evaluation in order to promote learning. Participants define the questions and consider the data collection skills, methods, and commitment of time and labor required. Participatory evaluations usually use rapid appraisal techniques, which are simpler, quicker, and less costly than conventional sample surveys. They include methods such as those in the box below. Typically, facilitators are skilled in these methods, and they help train and guide other participants in their use.

Step 6: Analyze the data and build consensus on results. Once the data are gathered, participatory approaches to analyzing and interpreting them help participants build a common body of knowledge. Once the analysis is complete, facilitators work with participants to reach consensus on findings, conclusions, and recommendations. Facilitators may need to negotiate among stakeholder groups if disagreements emerge. Developing a common understanding of the results, on the basis of empirical evidence, becomes the cornerstone for group commitment to a plan of action.

Step 7: Prepare an action plan. Facilitators work with participants to prepare an action plan to improve program performance. The knowledge shared by participants about a program's strengths and weaknesses is turned into action. Empowered by knowledge, participants become agents of change and apply the lessons they have learned to improve performance.

WHAT'S DIFFERENT ABOUT PARTICIPATORY EVALUATIONS?

Participatory Evaluation
• participant focus and ownership of the evaluation
• a broad range of stakeholders participate
• focus is on learning
• flexible design
• rapid appraisal methods
• outsiders are facilitators

Traditional Evaluation
• donor focus and ownership of the evaluation
• stakeholders often don't participate
• focus is on accountability
• predetermined design
• formal methods
• outsiders are evaluators
Rapid Appraisal Methods

Key informant interviews. This involves interviewing 15 to 35 individuals selected for their knowledge and experience in a topic of interest. Interviews are qualitative, in-depth, and semistructured. They rely on interview guides that list topics or open-ended questions. The interviewer subtly probes the informant to elicit information, opinions, and experiences.

Focus group interviews. In these, 8 to 12 carefully selected participants freely discuss issues, ideas, and experiences among themselves. A moderator introduces the subject, keeps the discussion going, and tries to prevent domination of the discussion by a few participants. Focus groups should be homogeneous, with participants of similar backgrounds as much as possible.

Community group interviews. These take place at public meetings open to all community members. The primary interaction is between the participants and the interviewer, who presides over the meeting and asks questions, following a carefully prepared questionnaire.

Direct observation. Using a detailed observation form, observers record what they see and hear at a program site. The information may be about physical surroundings or about ongoing activities, processes, or discussions.

Minisurveys. These are usually based on a structured questionnaire with a limited number of mostly closed-ended questions. They are usually administered to 25 to 50 people. Respondents may be selected through probability or nonprobability sampling techniques, or through "convenience" sampling (interviewing stakeholders at locations where they're likely to be, such as a clinic for a survey on health care programs). The major advantage of minisurveys is that the data can be collected and analyzed within a few days. It is the only rapid appraisal method that generates quantitative data.

Case studies. Case studies record anecdotes that illustrate a program's shortcomings or accomplishments. They tell about incidents or concrete events, often from one person's experience.

Village imaging. This involves groups of villagers drawing maps or diagrams to identify and visualize problems and solutions.

Selected Further Reading

Aaker, Jerry and Jennifer Shumaker. 1994. Looking Back and Looking Forward: A Participatory Approach to Evaluation. Heifer Project International. P.O. Box 808, Little Rock, AR 72203.

Aubel, Judi. 1994. Participatory Program Evaluation: A Manual for Involving Program Stakeholders in the Evaluation Process. Catholic Relief Services. USCC, 1011 First Avenue, New York, NY 10022.

Freeman, Jim. 1994. Participatory Evaluations: Making Projects Work. Dialogue on Development Technical Paper No. TP94/2. International Centre, The University of Calgary.

Feurstein, Marie-Therese. 1991. Partners in Evaluation: Evaluating Development and Community Programmes with Participants. TALC, Box 49, St. Albans, Herts AL1 4AX, United Kingdom.
Guba, Egon and Yvonna Lincoln. 1989. Fourth Generation Evaluation. Sage Publications.

Pfohl, Jake. 1986. Participatory Evaluation: A User's Guide. PACT Publications. 777 United Nations Plaza, New York, NY 10017.

Rugh, Jim. 1986. Self-Evaluation: Ideas for Participatory Evaluation of Rural Community Development Projects. World Neighbors Publication.
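The minisurvey described above is the only rapid appraisal method that yields quantitative data. As a minimal illustration (not part of the original TIPS), the short Python sketch below tallies closed-ended responses from a hypothetical minisurvey of 30 respondents; the question wording and data are invented for the example.

```python
from collections import Counter

# Hypothetical closed-ended responses from a minisurvey of 30 clinic visitors.
# Each answer is to the question: "Did you receive the service you came for today?"
responses = ["yes"] * 19 + ["no"] * 8 + ["partially"] * 3

def summarize(answers):
    """Return counts and percentages for each answer category."""
    counts = Counter(answers)
    total = len(answers)
    return {answer: (count, round(100 * count / total, 1))
            for answer, count in counts.most_common()}

if __name__ == "__main__":
    for answer, (count, pct) in summarize(responses).items():
        print(f"{answer:>10}: {count:2d} respondents ({pct}%)")
```

Even this simple tabulation is enough to report, for example, that 19 of 30 respondents (63.3 percent) answered yes.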
1996, Number 2
CONDUCTING KEY INFORMANT INTERVIEWS
TIPS: Performance Monitoring and Evaluation
USAID Center for Development Information and Evaluation

USAID reengineering emphasizes listening to and consulting with customers, partners, and other stakeholders as we undertake development activities. Rapid appraisal techniques offer systematic ways of getting such information quickly and at low cost. This TIPS advises how to conduct one such method: key informant interviews.

What Are Key Informant Interviews?

They are qualitative, in-depth interviews of 15 to 35 people selected for their first-hand knowledge about a topic of interest. The interviews are loosely structured, relying on a list of issues to be discussed. Key informant interviews resemble a conversation among acquaintances, allowing a free flow of ideas and information. Interviewers frame questions spontaneously, probe for information, and take notes, which are elaborated on later.

When Are Key Informant Interviews Appropriate?

This method is useful in all phases of development activities: identification, planning, implementation, and evaluation. For example, it can provide information on the setting for a planned activity that might influence project design. Or, it could reveal why intended beneficiaries aren't using services offered by a project. Specifically, it is useful in the following situations:

1. When qualitative, descriptive information is sufficient for decision-making.

2. When there is a need to understand the motivation, behavior, and perspectives of our customers and partners. In-depth interviews of program planners and managers, service providers, host government officials, and beneficiaries concerning their attitudes and behaviors about a USAID activity can help explain its successes and shortcomings.

3. When a main purpose is to generate recommendations. Key informants can help formulate recommendations that can improve a program's performance.

4. When quantitative data collected through other methods need to be interpreted. Key informant interviews can provide the how and why of what happened. If, for example, a sample survey showed farmers were failing to make loan repayments, key informant interviews could uncover the reasons.

5. When preliminary information is needed to design a comprehensive quantitative study. Key informant interviews can help frame the issues before the survey is undertaken.

PN-ABS-541
Advantages and Limitations

Advantages of key informant interviews include:
• They provide information directly from knowledgeable people
• They provide flexibility to explore new ideas and issues not anticipated during planning
• They are inexpensive and simple to conduct

Some disadvantages:
• They are not appropriate if quantitative data are needed
• They may be biased if informants are not carefully selected
• They are susceptible to interviewer biases
• It may be difficult to prove the validity of findings

Once the decision has been made to conduct key informant interviews, following the step-by-step advice outlined below will help ensure high-quality information.

Steps in Conducting the Interviews

Step 1. Formulate study questions. These relate to specific concerns of the study. Study questions generally should be limited to five or fewer.

Step 2. Prepare a short interview guide. Key informant interviews do not use rigid questionnaires, which inhibit free discussion. However, interviewers must have an idea of what questions to ask. The guide should list major topics and issues to be covered under each study question. Because the purpose is to explore a few issues in depth, guides are usually limited to 12 items. Different guides may be necessary for interviewing different groups of informants.

Step 3. Select key informants. The number should not normally exceed 35. It is preferable to start with fewer (say, 25), since often more people end up being interviewed than is initially planned. Key informants should be selected for their specialized knowledge and unique perspectives on a topic. Planners should take care to select informants with various points of view.

Selection consists of two tasks. First, identify the groups and organizations from which key informants should be drawn, for example, host government agencies, project implementing agencies, contractors, and beneficiaries. It is best to include all major stakeholders so that divergent interests and perceptions can be captured. Second, select a few people from each category after consulting with people familiar with the groups under consideration. In addition, each informant may be asked to suggest other people who may be interviewed.

Step 4. Conduct interviews.

Establish rapport. Begin with an explanation of the purpose of the interview, the intended uses of the information, and assurances of confidentiality. Often informants will want assurances that the interview has been approved by relevant officials. Except when interviewing technical experts, questioners should avoid jargon.

Sequence questions. Start with factual questions. Questions requiring opinions and judgments should follow. In general, begin with the present and move to questions about the past or future.

Phrase questions carefully to elicit detailed information. Avoid questions that can be answered by a simple yes or no. For example, questions such as "Please tell me about the vaccination campaign" are better than "Do you know about the vaccination campaign?"

Use probing techniques. Encourage informants to detail the basis for their conclusions and recommendations.
For example, an informant's comment such as "The water program has really changed things around here" can be probed for more details, such as "What changes have you noticed?" "Who seems to have benefitted most?" "Can you give me some specific examples?"
Maintain a neutral attitude. Interviewers should be sympathetic listeners and avoid giving the impression of having strong views on the subject under discussion. Neutrality is essential because some informants, trying to be polite, will say what they think the interviewer wants to hear.

Minimize translation difficulties. Sometimes it is necessary to use a translator, which can change the dynamics and add difficulties. For example, differences in status between the translator and informant may inhibit the conversation. Often information is lost during translation. Difficulties can be minimized by using translators who are not known to the informants, briefing translators on the purposes of the study to reduce misunderstandings, and having translators repeat the informant's comments verbatim.

Step 5. Take adequate notes. Interviewers should take notes and develop them in detail immediately after each interview to ensure accuracy. Use a set of common subheadings for interview texts, selected with an eye to the major issues being explored. Common subheadings ease data analysis.

Step 6. Analyze interview data.

Interview summary sheets. At the end of each interview, prepare a one- to two-page interview summary sheet reducing the information into manageable themes, issues, and recommendations. Each summary should provide information about the key informant's position, the reason for inclusion in the list of informants, the main points made, the implications of these observations, and any insights or ideas the interviewer had during the interview.

Descriptive codes. Coding involves a systematic recording of data. While numeric codes are not appropriate, descriptive codes can help organize responses. These codes may cover key themes, concepts, questions, or ideas, such as sustainability, impact on income, and participation of women. A usual practice is to note the codes or categories in the left-hand margins of the interview text. Then a summary lists the page numbers where each item (code) appears. For example, women's participation might be given the code "wom-par," and the summary sheet might indicate it is discussed on pages 7, 13, 21, 46, and 67 of the interview text.

Categories and subcategories for coding (based on key study questions, hypotheses, or conceptual frameworks) can be developed before interviews begin, or after the interviews are completed. Precoding saves time, but the categories may not be appropriate. Postcoding helps ensure empirically relevant categories, but it is time consuming. A compromise is to begin developing coding categories after 8 to 10 interviews, as it becomes apparent which categories are relevant.

Storage and retrieval. The next step is to develop a simple storage and retrieval system. Access to a computer program that sorts text is very helpful. Relevant parts of the interview text can then be organized according to the codes. The same effect can be accomplished without computers by preparing folders for each category, cutting relevant comments from the interview and pasting them onto index cards according to the coding scheme, then filing them in the appropriate folder. Each index card should have an identification mark so the comment can be attributed to its source.

Presentation of data. Visual displays such as tables, boxes, and figures can condense information, present it in a clear format, and highlight underlying relationships and trends. This helps communicate findings to decision-makers more clearly, quickly, and easily.
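Before turning to the display examples, here is a minimal Python sketch of the storage-and-retrieval idea described above. The codes, informant identifiers, and comments are hypothetical; the point is simply that coded excerpts can be filed and pulled back by code, much like the folder-and-index-card system.

```python
from collections import defaultdict

# Each coded excerpt keeps an identification mark (informant id and page)
# so the comment can be attributed to its source.
excerpts = [
    {"code": "wom-par", "informant": "KI-03", "page": 7,
     "text": "Women find it harder to obtain loans than men."},
    {"code": "impact-income", "informant": "KI-07", "page": 12,
     "text": "I doubled my crop and profits this year."},
    {"code": "wom-par", "informant": "KI-11", "page": 21,
     "text": "Land is registered under the husband's name."},
]

def build_index(items):
    """File each excerpt under its descriptive code (one 'folder' per code)."""
    index = defaultdict(list)
    for item in items:
        index[item["code"]].append(item)
    return index

index = build_index(excerpts)

# Retrieve everything filed under the hypothetical code "wom-par".
for entry in index["wom-par"]:
    print(f'{entry["informant"]}, p.{entry["page"]}: {entry["text"]}')
```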
Three examples below illustrate how data from key informant interviews might be displayed.

Table 1. Problems Encountered in Obtaining Credit

Female Farmers
1. Collateral requirements
2. Burdensome paperwork
3. Long delays in getting loans
4. Land registered under male's name
5. Difficulty getting to bank location

Male Farmers
1. Collateral requirements
2. Burdensome paperwork
3. Long delays in getting loans
Table 2. Impacts on Income of a Microenterprise Activity

"In a survey I did of the participants last year, I found that a majority felt their living conditions have improved." (university professor)
"I have doubled my crop and profits this year as a result of the loan I got." (participant)
"I believe that women have not benefitted as much as men because it is more difficult for us to get loans." (female participant)

Table 3. Recommendations for Improving Training

Recommendation / Number of Informants
Develop need-based training courses: 39
Develop more objective selection procedures: 20
Plan job placement after training: 11

Step 7. Check for reliability and validity. Key informant interviews are susceptible to error, bias, and misinterpretation, which can lead to flawed findings and recommendations.

Check the representativeness of key informants. Take a second look at the key informant list to ensure no significant groups were overlooked.

Assess the reliability of key informants. Assess informants' knowledgeability, credibility, impartiality, willingness to respond, and the presence of outsiders who may have inhibited their responses. Greater weight can be given to information provided by more reliable informants.

Check interviewer or investigator bias. One's own biases as an investigator should be examined, including tendencies to concentrate on information that confirms preconceived notions and hypotheses, to seek consistency too early and overlook evidence inconsistent with earlier findings, and to be partial to the opinions of elite key informants.

Check for negative evidence. Make a conscious effort to look for evidence that questions preliminary findings. This brings out issues that may have been overlooked.

Get feedback from informants. Ask the key informants for feedback on major findings. A summary report of the findings might be shared with them, along with a request for written comments. Often a more practical approach is to invite them to a meeting where key findings are presented and ask for their feedback.

Selected Further Reading

These tips are drawn from Conducting Key Informant Interviews in Developing Countries, by Krishna Kumar (AID Program Design and Evaluation Methodology Report No. 13, December 1986, PN-AAX-226).

For further information on this topic, contact Annette Binnendijk, CDIE Senior Evaluation Advisor, via phone (703) 875-4235, fax (703) 875-4866, or e-mail. Copies of TIPS can be ordered from the Development Information Services Clearinghouse by calling (703) 351-4006 or by faxing (703) 351-4039. Please refer to the PN number. To order via the Internet, address a request to docorder@disc.mhs.compuserve.com

U.S. Agency for International Development, Washington, D.C. 20523
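Step 7 above suggests giving greater weight to information from more reliable informants, and Table 3 shows recommendations tallied by the number of informants who made them. The short Python sketch below combines the two ideas with invented data: it counts how many informants back each recommendation and also reports a simple reliability-weighted total. The records and weights are hypothetical illustrations, not part of the TIPS.

```python
from collections import defaultdict

# Hypothetical records: which recommendation each informant made and a
# rough reliability weight (e.g., 1.0 = highly reliable, 0.5 = less so).
records = [
    ("Develop need-based training courses", 1.0),
    ("Develop need-based training courses", 0.5),
    ("Develop more objective selection procedures", 1.0),
    ("Plan job placement after training", 0.5),
    ("Develop need-based training courses", 1.0),
]

counts = defaultdict(int)       # plain tally, as in Table 3
weighted = defaultdict(float)   # tally weighted by informant reliability

for recommendation, weight in records:
    counts[recommendation] += 1
    weighted[recommendation] += weight

for rec in sorted(counts, key=counts.get, reverse=True):
    print(f"{rec}: {counts[rec]} informants (weighted score {weighted[rec]:.1f})")
```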
PERFORMANCE MONITORING & EVALUATION TIPS
PREPARING AN EVALUATION STATEMENT OF WORK
NUMBER 3, 2ND EDITION, 2010

ABOUT TIPS

These TIPS provide practical advice and suggestions to USAID managers on issues related to performance management and evaluation. This publication is a supplemental reference to the Automated Directive System (ADS) Chapter 203.

PARTICIPATION IS KEY

Use a participatory process to ensure the resulting information will be relevant and useful. Include a range of staff and partners that have an interest in the evaluation to:
• Participate in planning meetings and review the SOW;
• Elicit input on potential evaluation questions; and
• Prioritize and narrow the list of questions as a group.

WHAT IS AN EVALUATION STATEMENT OF WORK (SOW)?

The statement of work (SOW) is viewed as the single most critical document in the development of a good evaluation. The SOW states (1) the purpose of an evaluation, (2) the questions that must be answered, (3) the expected quality of the evaluation results, (4) the expertise needed to do the job, and (5) the time frame and budget available to support the task.

WHY IS THE SOW IMPORTANT?

The SOW is important because it is a basic road map of all the elements of a well-crafted evaluation. It is the substance of a contract with external evaluators, as well as the framework for guiding an internal evaluation team. It contains the information that anyone who implements the evaluation needs to know about the purpose of the evaluation, the background and history of the program being evaluated, and the issues and questions that must be addressed. Writing a SOW is about managing the first phase of the evaluation process. Ideally, the writer of the SOW will also exercise management oversight of the evaluation process.
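To make the five elements listed above concrete, the sketch below represents a SOW skeleton as a small Python data structure and checks that every element has been filled in. The field names and sample values are hypothetical; this is only an illustration of the checklist idea, not a USAID template.

```python
# A hypothetical SOW skeleton covering the five elements a SOW states:
# purpose, questions, expected quality, expertise, and time frame / budget.
sow = {
    "purpose": "Assess mid-term progress of a (hypothetical) basic education activity",
    "evaluation_questions": [
        "Is the activity on track to meet its stated results?",
        "Which implementation constraints explain any shortfalls?",
    ],
    "expected_quality": "Findings must distinguish evidence from interpretation",
    "expertise_needed": ["evaluation specialist", "education sector specialist"],
    "time_frame_weeks": 8,
    "budget_usd": 150_000,
}

def missing_elements(document):
    """Return the SOW elements that are empty or absent."""
    required = ["purpose", "evaluation_questions", "expected_quality",
                "expertise_needed", "time_frame_weeks", "budget_usd"]
    return [field for field in required if not document.get(field)]

print("Missing elements:", missing_elements(sow) or "none")
```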
PREPARATION – KEY ISSUES

BALANCING FOUR DIMENSIONS

A well-drafted SOW is a critical first step in ensuring the credibility and utility of the final evaluation report. Four key dimensions of the SOW are interrelated and should be balanced against one another (see Figure 1):

• The number and complexity of the evaluation questions that need to be addressed;
• The adequacy of the time allotted to obtain the answers;
• The availability of funding (budget) to support the level of evaluation design and rigor required; and
• The availability of the expertise needed to complete the job.

The development of the SOW is an iterative process in which the writer has to revisit, and sometimes adjust, each of these dimensions. Finding the appropriate balance is the main challenge faced in developing any SOW.

ADVANCE PLANNING

It is a truism that good planning is a necessary, but not the only, condition for success in any enterprise. The SOW preparation process is itself an exercise in careful and thorough planning. The writer must consider several principles when beginning the process.

• As USAID and other donors place more emphasis on rigorous impact evaluation, it is essential that evaluation planning form an integral part of the initial program or project design. This includes factoring in baseline data collection, possible comparison or 'control' site selection, and the preliminary design of data collection protocols and instruments. Decisions about evaluation design must be reflected in implementation planning and in the budget.
• There will always be unanticipated problems and opportunities that emerge during an evaluation. It is helpful to build in ways to accommodate necessary changes.
• The writer of the SOW is, in essence, the architect of the evaluation. It is important to commit adequate time and energy to the task.
• Adequate time is required to gather information and to build productive relationships with stakeholders (such as program sponsors, participants, or partners) as well as the evaluation team, once selected.
• The sooner that information can be made available to the evaluation team, the more efficient they can be in providing credible answers to the important questions outlined in the SOW.
• The quality of the evaluation is dependent on providing quality guidance in the SOW.

WHO SHOULD BE INVOLVED?

Participation in all or some part of the evaluation is an important decision for the development of the SOW. USAID and evaluation experts strongly recommend that evaluations maximize stakeholder participation, especially in the initial planning process. Stakeholders may encompass a wide array of persons and institutions, including policy makers, program managers, implementing partners, host country organizations, and beneficiaries. In some cases, stakeholders may also be involved throughout the evaluation and with the dissemination of results. The benefits of stakeholder participation include the following:

• Learning across a broader group of decision-makers, thus increasing the likelihood that the evaluation findings will be used to improve development effectiveness;
• Acceptance of the purpose and process of evaluation by those concerned;
• A more inclusive and better focused list of questions to be answered;
• Increased acceptance and ownership of the process, findings, and conclusions; and
• Increased possibility that the evaluation will be used by decision makers and other stakeholders.
USAID operates in an increasingly complex implementation world with many players, including other USG agencies such as the Departments of State, Defense, Justice, and others. If the activity engages other players, it is important to include them in the process. Within USAID, there are useful synergies that can emerge when the SOW development process is inclusive. For example, a SOW that focuses on civil society advocacy might benefit from input by those who are experts in rule of law.

Participation by host government and local organizational leaders and beneficiaries is less common among USAID-supported evaluations. It requires sensitivity and careful management; however, the benefits to development practitioners can be substantial.

Participation of USAID managers in evaluations is an increasingly common practice and produces many benefits. To ensure against bias or conflict of interest, the USAID manager's role can be limited to participating in the fact-finding phase and contributing to the analysis. However, the final responsibility for analysis, conclusions, and recommendations will rest with the independent members and team leader.

THE ELEMENTS OF A GOOD EVALUATION SOW

FIGURE 2. ELEMENTS OF A GOOD EVALUATION SOW
1. Describe the activity, program, or process to be evaluated
2. Provide a brief background on the development hypothesis and its implementation
3. State the purpose and use of the evaluation
4. Clarify the evaluation questions
5. Identify the evaluation method(s)
6. Identify existing performance information sources, with special attention to monitoring data
7. Specify the deliverable(s) and the timeline
8. Identify the composition of the evaluation team (one team member should be an evaluation specialist) and participation of customers and partners
9. Address schedule and logistics
10. Clarify requirements for reporting and dissemination
11. Include a budget

1. DESCRIBE THE ACTIVITY, PROGRAM, OR PROCESS TO BE EVALUATED

Be as specific and complete as possible in describing what is to be evaluated. The more information provided at the outset, the more time the evaluation team will have to develop the data needed to answer the SOW questions. If the USAID manager does not have the time and resources to bring together all the relevant information needed to inform the evaluation in advance, the SOW might require the evaluation team to submit a document review as a first deliverable. This will, of course, add to the amount of time and budget needed in the evaluation contract.

2. PROVIDE A BRIEF BACKGROUND

Give a brief description of the context, history, and current status of the activities or programs, the names of implementing agencies and organizations involved, and other information to help the evaluation team understand background and context. In addition, this section should state the development hypothesis(es) and clearly describe the program (or project) theory that underlies the program's design. USAID activities, programs, and strategies, as well as most policies, are based on a set of "if-then" propositions that predict how a set of interventions will produce intended results. A development hypothesis is generally represented in a results framework (or sometimes a logical framework at the project level) and identifies the causal relationships among the various objectives sought by the program (see TIPS 13: Building a Results Framework). That is, if one or more objectives are achieved, then the next higher order objective will be achieved. Whether the development hypothesis is the correct one, or whether it remains valid at the time of the evaluation, is an important question for most evaluation SOWs to consider.
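The if-then logic of a development hypothesis described above can be sketched as a simple chain of objectives. The Python example below encodes a hypothetical results chain and walks up it; the objective names are invented for illustration and do not come from the TIPS.

```python
# A hypothetical results chain: each lower-level result is expected to
# contribute to the next higher order objective ("if X, then Y").
results_chain = [
    "Teachers trained in new curriculum",     # output
    "Improved classroom instruction",          # intermediate result
    "Improved student reading scores",         # intermediate result
    "Increased primary school completion",     # strategic objective
]

# Walk up the chain and print the if-then propositions it implies.
for lower, higher in zip(results_chain, results_chain[1:]):
    print(f"If '{lower}', then '{higher}'.")
```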
3. STATE THE PURPOSE AND USE OF THE EVALUATION

Why is an evaluation needed? The clearer the purpose, the more likely it is that the evaluation will produce credible and useful findings, conclusions, and recommendations. In defining the purpose, several questions should be considered.

• Who wants the information? Will higher level decision makers be part of the intended audience?
• What do they want to know?
• For what purpose will the information be used?
• When will it be needed?
• How accurate must it be?

ADS 203.3.6.1 identifies a number of triggers that may inform the purpose and use of an evaluation, as follows:

• A key management decision is required for which there is inadequate information;
• Performance information indicates an unexpected result (positive or negative) that should be explained (such as gender differential results);
• Customer, partner, or other informed feedback suggests that there are implementation problems, unmet needs, or unintended consequences or impacts;
• Issues of impact, sustainability, cost-effectiveness, or relevance arise;
• The validity of the development hypotheses or critical assumptions is questioned, for example, due to unanticipated changes in the host country environment; and
• Periodic portfolio reviews have identified key questions that need to be answered or require consensus.

4. CLARIFY THE EVALUATION QUESTIONS

The core element of an evaluation SOW is the list of questions posed for the evaluation. One of the most common problems with evaluation SOWs is that they contain a long list of poorly defined or "difficult to answer" questions given the time, budget, and resources provided. While a participatory process ensures wide-ranging input into the initial list of questions, it is equally important to reduce this list to a manageable number of key questions. Keeping in mind the relationship between budget, time, and expertise needed, every potential question should be thoughtfully examined by asking a number of questions.

• Is this question of essential importance to the purpose and the users of the evaluation?
• Is this question clear, precise, and researchable?
• What level of reliability and validity is expected in answering the question?
• Does determining an answer to the question require a certain kind of experience and expertise?
• Are we prepared to provide the management commitment, time, and budget to secure a credible answer to this question?

If these questions can be answered yes, then the team probably has a good list of questions that will inform the evaluation team and drive the evaluation process to a successful result.
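As a small illustration of the screening step above, the sketch below runs a list of candidate questions through the five checks as simple yes/no flags and keeps only those that pass all of them. The candidate questions and their flags are hypothetical.

```python
# Each candidate question carries yes/no answers to the five screening checks.
candidates = [
    {"question": "Did the activity increase smallholder incomes?",
     "essential": True, "researchable": True, "reliability_ok": True,
     "expertise_available": True, "time_and_budget_ok": True},
    {"question": "Is the program good?",
     "essential": True, "researchable": False, "reliability_ok": False,
     "expertise_available": True, "time_and_budget_ok": True},
]

CHECKS = ["essential", "researchable", "reliability_ok",
          "expertise_available", "time_and_budget_ok"]

def passes_screen(candidate):
    """A question stays on the list only if every check is answered yes."""
    return all(candidate[check] for check in CHECKS)

final_list = [c["question"] for c in candidates if passes_screen(c)]
print("Questions retained for the SOW:", final_list)
```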
5. IDENTIFY EVALUATION METHODS

The SOW manager has to decide whether the evaluation design and methodology should be specified in the SOW (see USAID ADS 203.3.6.4 on Evaluation Methodologies). This depends on whether the writer has expertise, or has internal access to evaluation research knowledge and experience. If so, and the writer is confident of the on-the-ground conditions that will allow for different evaluation designs, then it is appropriate to include specific requirements in the SOW. If the USAID SOW manager does not have the kind of evaluation experience needed, especially for more formal and rigorous evaluations, it is good practice to: 1) require that the team (or bidders, if it is contracted out) include a description of (or approach for developing) the proposed research design and methodology, or 2) require a detailed design and evaluation plan to be submitted as a first deliverable. In this way, the SOW manager benefits from external evaluation expertise.

In either case, the design and methodology should not be finalized until the team has an opportunity to gather detailed information and discuss final issues with USAID. The selection of the design and data collection methods must be a function of the type of evaluation and the level of statistical and quantitative data confidence needed. If the project is selected for a rigorous impact evaluation, then the design and methods used will be more sophisticated and technically complex. If external assistance is necessary, the evaluation SOW will be issued as part of the initial RFP/RFA (Request for Proposal or Request for Application) solicitation process.

All methods and evaluation designs should be as rigorous as reasonably possible. In some cases, a rapid appraisal is sufficient and appropriate (see TIPS 5: Using Rapid Appraisal Methods). At the other extreme, planning for a sophisticated and complex evaluation process requires greater up-front investment in baselines, outcome monitoring processes, and carefully constructed experimental or quasi-experimental designs.
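To give a feel for why quasi-experimental designs need the up-front investment in baselines and comparison sites mentioned above, the sketch below computes a simple difference-in-differences estimate from hypothetical baseline and endline values for a treatment site and a comparison site. The numbers are invented; a real impact evaluation would rest on properly sampled data and statistical testing.

```python
# Hypothetical average outcome (e.g., household income in USD) at baseline
# and endline for a treatment site and a comparison ("control") site.
treatment = {"baseline": 100.0, "endline": 140.0}
comparison = {"baseline": 98.0, "endline": 118.0}

def difference_in_differences(treat, comp):
    """Change in the treatment group minus change in the comparison group."""
    treat_change = treat["endline"] - treat["baseline"]
    comp_change = comp["endline"] - comp["baseline"]
    return treat_change - comp_change

estimate = difference_in_differences(treatment, comparison)
print(f"Treatment change: {treatment['endline'] - treatment['baseline']:.1f}")
print(f"Comparison change: {comparison['endline'] - comparison['baseline']:.1f}")
print(f"Difference-in-differences estimate of impact: {estimate:.1f}")
```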
6. IDENTIFY EXISTING PERFORMANCE INFORMATION

Identify the existence and availability of relevant performance information sources, such as performance monitoring systems and/or previous evaluation reports. Including a summary of the types of data available, the timeframe, and an indication of their quality and reliability will help the evaluation team to build on what is already available.

7. SPECIFY DELIVERABLES AND TIMELINE

The SOW must specify the products, the time frame, and the content of each deliverable that is required to complete the evaluation contract. Some SOWs simply require delivery of a draft evaluation report by a certain date. In other cases, a contract may require several deliverables, such as a detailed evaluation design, a work plan, a document review, and the evaluation report.

The most important deliverable is the final evaluation report. TIPS 17: Constructing an Evaluation Report provides a suggested outline of an evaluation report that may be adapted and incorporated directly into this section. The evaluation report should differentiate between findings, conclusions, and recommendations, as outlined in Figure 3. As evaluators move beyond the facts, greater interpretation is required. By ensuring that the final report is organized in this manner, decision makers can clearly understand the facts on which the evaluation is based. In addition, it facilitates greater understanding of where there might be disagreements concerning the interpretation of those facts. While individuals may disagree on recommendations, they should not disagree on the basic facts.

Another consideration is whether a section on "lessons learned" should be included in the final report. A good evaluation will produce knowledge about best practices, point out what works and what does not, and contribute to the more general fund of tested experience on which other program designers and implementers can draw.

Because unforeseen obstacles may emerge, it is helpful to be as realistic as possible about what can be accomplished within a given time frame. Also, include some wording that allows USAID and the evaluation team to adjust schedules in consultation with the USAID manager should this be necessary.

8. DISCUSS THE COMPOSITION OF THE EVALUATION TEAM

USAID evaluation guidance for team selection strongly recommends that at least one team member have credentials and experience in evaluation design and methods.
• 17. 6 and experience in evaluation design and methods. The team leader must have strong team management skills, and sufficient experience with evaluation standards and practices to ensure a credible product. The appropriate team leader is a person with whom the SOW manager can develop a working partnership as the team moves through the evaluation research design and planning process. He/she must also be a person who can deal effectively with senior U.S. and host country officials and other leaders. Experience with USAID is often an important factor, particularly for management-focused evaluations, and in formative evaluations designed to establish the basis for a future USAID program or the redesign of an existing program. If the evaluation entails a high level of complexity, survey research, and other sophisticated methods, it may be useful to add a data collection and analysis expert to the team. Generally, evaluation skills will be supplemented with additional subject matter experts. As the level of research competence increases in many countries where USAID has programs, it makes good sense to include local collaborators, whether survey research firms or independents, as full members of the evaluation team.

9. ADDRESS SCHEDULING, LOGISTICS AND OTHER SUPPORT
Good scheduling and effective local support contribute greatly to the efficiency of the evaluation team. This section defines the time frame and the support structure needed to answer the evaluation questions at the required level of validity. For evaluations involving complex designs and sophisticated survey research data collection methods, the schedule must allow enough time, for example, to develop sample frames, prepare and pretest survey instruments, train interviewers, and analyze data. New data collection and analysis technologies can accelerate this process, but need to be provided for in the budget. In some cases, an advance trip to the field by the team leader and/or methodology expert may be justified where extensive pretesting and revision of instruments is required or when preparing for an evaluation in difficult or complex operational environments. Adequate logistical and administrative support is also essential. USAID often works in countries with poor infrastructure, frequently in conflict/post-conflict environments where security is an issue. If the SOW requires the team to make site visits to distant or difficult locations, such planning must be incorporated into the SOW. Particularly overseas, teams often rely on local sources for administrative support, including scheduling of appointments, finding translators and interpreters, and arranging transportation. In many countries where foreign assistance experts have been active, local consulting firms have developed this kind of expertise. Good interpreters are in high demand, and are essential to any evaluation team's success, especially when using qualitative data collection methods.

10. CLARIFY REQUIREMENTS FOR REPORTING AND DISSEMINATION
Most evaluations involve several phases of work, especially for more complex designs. The SOW can set up the relationship between the evaluation team, the USAID manager, and other stakeholders. If a working group was established to help define the SOW questions, continue to use the group as a forum for interim reports and briefings provided by the evaluation team. The SOW should specify the timing and details for each briefing session.
Examples of what might be specified include:
• Due dates for draft and final reports;
• Dates for oral briefings (such as a mid-term and final briefing);
• Number of copies needed;
• Language requirements, where applicable;
• 18. 7
• Formats and page limits;
• Requirements for datasets, if primary data has been collected;
• A requirement to submit all evaluations to the Development Experience Clearinghouse for archiving (this is the responsibility of the evaluation contractor); and
• Other needs for communicating, marketing, and disseminating results that are the responsibility of the evaluation team.
The SOW should specify when working drafts are to be submitted for review, the time frame allowed for USAID review and comment, and the time frame to revise and submit the final report.

11. INCLUDE A BUDGET
With the budget section, the SOW comes full circle. As stated, budget considerations have to be part of the decision-making process from the beginning. The budget is a product of the questions asked, human resources needed, logistical and administrative support required, and the time needed to produce a high-quality, rigorous, and useful evaluation report in the most efficient and timely manner. It is essential for contractors to understand the quality, validity, and rigor required so they can develop a responsive budget that will meet the standards set forth in the SOW.

For more information: TIPS publications are available online at [insert website].
Acknowledgements: Our thanks to those whose experience and insights helped shape this publication, including USAID's Office of Management Policy, Budget and Performance (MPBP). This publication was written by Richard Blue, Ph.D., of Management Systems International.
Comments regarding this publication can be directed to: Gerald Britan, Ph.D. Tel: (202) 712-1158 gbritan@usaid.gov
Contracted under RAN-M-00-04-00049-A-FY0S-84 Integrated Managing for Results II
• 19. USAID's reengineering guidance encourages the use of rapid, low-cost methods for collecting information on the performance of our development activities. Direct observation, the subject of this Tips, is one such method.
PN-ABY-208, 1996, Number 4
Performance Monitoring and Evaluation TIPS
USAID Center for Development Information and Evaluation

USING DIRECT OBSERVATION TECHNIQUES

What is Direct Observation?
Most evaluation teams conduct some fieldwork, observing what's actually going on at assistance activity sites. Often, this is done informally, without much thought to the quality of data collection. Direct observation techniques allow for a more systematic, structured process, using well-designed observation record forms.

Advantages and Limitations
The main advantage of direct observation is that an event, institution, facility, or process can be studied in its natural setting, thereby providing a richer understanding of the subject. For example, an evaluation team that visits microenterprises is likely to better understand their nature, problems, and successes after directly observing their products, technologies, employees, and processes than by relying solely on documents or key informant interviews. Another advantage is that it may reveal conditions, problems, or patterns many informants may be unaware of or unable to describe adequately. On the negative side, direct observation is susceptible to observer bias. The very act of observation also can affect the behavior being studied.

When Is Direct Observation Useful?
Direct observation may be useful:
• When performance monitoring data indicate results are not being accomplished as planned, and when implementation problems are suspected, but not understood. Direct observation can help identify whether the process is poorly implemented or required inputs are absent.
• When details of an activity's process need to be assessed, such as whether tasks are being implemented according to standards required for effectiveness.
• When an inventory of physical facilities and inputs is needed and not available from existing sources.
• 20. 2
• When interview methods are unlikely to elicit the needed information accurately or reliably, either because the respondents don't know or may be reluctant to say.

Steps in Using Direct Observation
The quality of direct observation can be improved by following these steps.

Step 1. Determine the focus
Because of typical time and resource constraints, direct observation has to be selective, looking at a few activities, events, or phenomena that are central to the evaluation questions. For example, suppose an evaluation team intends to study a few health clinics providing immunization services for children. Obviously, the team can assess a variety of areas—physical facilities and surroundings, immunization activities of health workers, recordkeeping and managerial services, and community interactions. The team should narrow its focus to one or two areas likely to generate the most useful information and insights. Next, break down each activity, event, or phenomenon into subcomponents. For example, if the team decides to look at immunization activities of health workers, prepare a list of the tasks to observe, such as preparation of vaccine, consultation with mothers, and vaccine administration. Each task may be further divided into subtasks; for example, administering vaccine likely includes preparing the recommended doses, using the correct administration technique, using sterile syringes, and protecting vaccine from heat and light during use. If the team also wants to assess physical facilities and surroundings, it will prepare an inventory of items to be observed.

Step 2. Develop direct observation forms
The observation record form should list the items to be observed and provide spaces to record observations. These forms are similar to survey questionnaires, but investigators record their own observations, not respondents' answers. Observation record forms help standardize the observation process and ensure that all important items are covered. They also facilitate better aggregation of data gathered from various sites or by various investigators. An excerpt from a direct observation form used in a study of primary health care in the Philippines provides an illustration below.

OBSERVATION OF GROWTH MONITORING SESSION
Name of the Observer / Date / Time / Place
Was the scale set to 0 at the beginning of the growth session? Yes______ No______
How was age determined? By asking______ From growth chart______ Other______
When the child was weighed, was it stripped to practical limit? Yes______ No______
Was the weight read correctly? Yes______ No______
Process by which weight and age transferred to record: Health Worker wrote it______ Someone else wrote it______ Other______
Did Health Worker interpret results for the mother? Yes______ No______

When preparing direct observation forms, consider the following:
1. Identify in advance the possible response categories for each item, so that the observer can answer with a simple yes or no, or by checking the appropriate answer. Closed response categories help minimize observer variation, and therefore improve the quality of data.
2. Limit the number of items in a form. Forms should normally not exceed 40–50 items. If necessary, it is better to use two or more smaller forms than a single large one that runs several pages.
• 21. 3
3. Provide adequate space to record additional observations for which response categories were not determined.
4. Use of computer software designed to create forms can be very helpful. It facilitates a neat, unconfusing form that can be easily completed.

Step 3. Select the sites
Once the forms are ready, the next step is to decide where the observations will be carried out and whether they will be based on one or more sites. A single site observation may be justified if a site can be treated as a typical case or if it is unique. Consider a situation in which all five agricultural extension centers established by an assistance activity have not been performing well. Here, observation at a single site may be justified as a typical case. A single site observation may also be justified when the case is unique; for example, if only one of five centers had been having major problems, and the purpose of the evaluation is trying to discover why. However, single site observations should generally be avoided, because cases the team assumes to be typical or unique may not be. As a rule, several sites are necessary to obtain a reasonable understanding of a situation. In most cases, teams select sites based on experts' advice. The investigator develops criteria for selecting sites, then relies on the judgment of knowledgeable people. For example, if a team evaluating a family planning project decides to observe three clinics—one highly successful, one moderately successful, and one struggling clinic—it may request USAID staff, local experts, or other informants to suggest a few clinics for each category. The team will then choose three after examining their recommendations. Using more than one expert reduces individual bias in selection. Alternatively, sites can be selected based on data from performance monitoring. For example, activity sites (clinics, schools, credit institutions) can be ranked from best to worst based on performance measures, and then a sample drawn from them.

Step 4. Decide on the best timing
Timing is critical in direct observation, especially when events are to be observed as they occur. Wrong timing can distort findings. For example, rural credit organizations receive most loan applications during the planting season, when farmers wish to purchase agricultural inputs. If credit institutions are observed during the nonplanting season, an inaccurate picture of loan processing may result. People and organizations follow daily routines associated with set times. For example, credit institutions may accept loan applications in the morning; farmers in tropical climates may go to their fields early in the morning and return home by noon. Observation periods should reflect work rhythms.

Step 5. Conduct the field observation
Establish rapport. Before embarking on direct observation, a certain level of rapport should be established with the people, community, or organization to be studied. The presence of outside observers, especially if officials or experts, may generate some anxiety among those being observed. Often informal, friendly conversations can reduce anxiety levels. Also, let them know the purpose of the observation is not to report on individuals' performance, but to find out what kind of problems in general are being encountered.
Allow sufficient time for direct observation. Brief visits can be deceptive partly because people tend to behave differently in the presence of observers. It is not uncommon, for example, for health workers to become more caring or for extension workers to be more persuasive when being watched. However, if observers stay for relatively longer periods, people become less self-conscious and gradually start behaving naturally. It is essential to stay at least two or three days on a site to gather valid, reliable data.
Use a team approach. If possible, two observers should observe together. A team can develop more comprehensive, higher quality data, and avoid individual bias.
Train observers. If many sites are to be observed, nonexperts can be trained as observers, especially if observation forms are clear, straightforward, and mostly closed-ended.

Step 6. Complete forms
Take notes as inconspicuously as possible. The best time for recording is during observation. However, this is not always feasible because it may make some people self-conscious or disturb the situation. In these cases, recording should take place as soon as possible after observation.

Step 7. Analyze the data
Data from close-ended questions from the observation form can be analyzed using basic procedures such as frequency counts and cross-tabulations. Statistical software packages such as SAS or SPSS facilitate such statistical analysis and data display.
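As a purely illustrative sketch of the tabulation just described, the fragment below uses Python with the pandas library (rather than SAS or SPSS) to produce a frequency count and a cross-tabulation. The form items, sites, and values are hypothetical and are not drawn from the Philippines study.

# Illustrative only: summarizing hypothetical close-ended observation-form data.
import pandas as pd

# Each row is one observed session; columns mirror closed-ended form items (invented values).
records = pd.DataFrame({
    "site":              ["Clinic A", "Clinic A", "Clinic B", "Clinic B", "Clinic C", "Clinic C"],
    "scale_set_to_zero": ["Yes", "No", "Yes", "Yes", "No", "Yes"],
    "results_explained": ["No", "No", "Yes", "No", "No", "Yes"],
})

# Frequency count for a single item.
print(records["scale_set_to_zero"].value_counts())

# Cross-tabulation of one item against the observation site.
print(pd.crosstab(records["site"], records["results_explained"]))

The same counts could be produced in SAS, SPSS, or a spreadsheet; the point is simply that closed response categories make aggregation across sites and observers straightforward.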
• 22. 4
Analysis of any open-ended interview questions can also provide extra richness of understanding and insights. Here, use of database management software with text storage capabilities, such as dBase, can be useful.

Step 8. Check for reliability and validity
Direct observation techniques are susceptible to error and bias that can affect reliability and validity. These can be minimized by following some of the procedures suggested, such as checking the representativeness of the sample of sites selected; using closed-ended, unambiguous response categories on the observation forms; recording observations promptly; and using teams of observers at each site.

Direct Observation of Primary Health Care Services in the Philippines
An example of structured direct observation was an effort to identify deficiencies in the primary health care system in the Philippines. It was part of a larger, multicountry research project, the Primary Health Care Operations Research Project (PRICOR). The evaluators prepared direct observation forms covering the activities, tasks, and subtasks health workers must carry out in health clinics to accomplish clinical objectives. These forms were closed-ended and in most cases observations could simply be checked to save time. The team looked at 18 health units from a "typical" province, including samples of units that were high, medium, and low performers in terms of key child survival outcome indicators. The evaluation team identified and quantified many problems that required immediate government attention. For example, in 40 percent of the cases where followup treatment was required at home, health workers failed to tell mothers the timing and amount of medication required. In 90 percent of cases, health workers failed to explain to mothers the results of child weighing and growth plotting, thus missing the opportunity to involve mothers in the nutritional care of their child. Moreover, numerous errors were made in weighing and plotting. This case illustrates that use of closed-ended observation instruments promotes the reliability and consistency of data. The findings are thus more credible and likely to influence program managers to make needed improvements.

Selected Further Reading
Information in this Tips is based on "Rapid Data Collection Methods for Field Assessments" by Krishna Kumar, in Team Planning Notebook for Field-Based Program Assessments (USAID PPC/CDIE, 1991). For more on direct observation techniques applied to the Philippines health care system, see Stewart N. Blumenfeld, Manuel Roxas, and Maricor de los Santos, "Systematic Observation in the Analysis of Primary Health Care Services," in Rapid Appraisal Methods, edited by Krishna Kumar (The World Bank, 1993).

CDIE's Tips series provides advice and suggestions to USAID managers on how to plan and conduct performance monitoring and evaluation activities. They are supplemental references to the reengineering automated directives system (ADS), chapter 203. For further information, contact Annette Binnendijk, CDIE Senior Evaluation Advisor, phone (703) 875-4235, fax (703) 875-4866, or e-mail. Tips can be ordered from the Development Information Services Clearinghouse by calling (703) 351-4006 or by faxing (703) 351-4039. Please refer to the PN number. To order via Internet, address requests to docorder@disc.mhs.compuserve.com
• 23. PERFORMANCE MONITORING & EVALUATION TIPS
USING RAPID APPRAISAL METHODS
NUMBER 5, 2ND EDITION, 2010

ABOUT TIPS
These TIPS provide practical advice and suggestions to USAID managers on issues related to performance monitoring and evaluation. This publication is a supplemental reference to the Automated Directive System (ADS) Chapter 203.

WHAT IS RAPID APPRAISAL?
Rapid Appraisal (RA) is an approach that draws on multiple evaluation methods and techniques to quickly, yet systematically, collect data when time in the field is limited. RA practices are also useful when there are budget constraints or limited availability of reliable secondary data. For example, time and budget limitations may preclude the option of using representative sample surveys.

BENEFITS – WHEN TO USE RAPID APPRAISAL METHODS
Rapid appraisals are quick and can be done at relatively low cost. Rapid appraisal methods can help gather, analyze, and report relevant information for decision-makers within days or weeks. This is not possible with sample surveys. RAs can be used in the following cases:
• for formative evaluations, to make mid-course corrections in project design or implementation when customer or partner feedback indicates a problem (see ADS 203.3.6.1);
• when a key management decision is required and there is inadequate information;
• for performance monitoring, when data are collected and the techniques are repeated over time for measurement purposes;
• to better understand the issues behind performance monitoring data; and
• for project pre-design assessment.

LIMITATIONS – WHEN RAPID APPRAISALS ARE NOT APPROPRIATE
Findings from rapid appraisals may have limited reliability and validity, and cannot be generalized to the larger population. Accordingly, rapid appraisal should not be the sole basis for summative or impact evaluations. Data can be biased and inaccurate unless multiple methods are used to strengthen the validity of findings and careful preparation is undertaken prior to beginning field work.

WHEN ARE RAPID APPRAISAL METHODS APPROPRIATE?
Choosing between rapid appraisal methods and more time-consuming methods, such as sample surveys, for an assessment should depend on balancing several factors, listed below.
• Purpose of the study. The importance and nature of the decision depending on it.
• Confidence in results. The accuracy, reliability, and validity of
• 24. findings needed for management decisions.
• Time frame. When a decision must be made.
• Resource constraints (budget).
• Evaluation questions to be answered (see TIPS 3: Preparing an Evaluation Statement of Work).

USE IN TYPES OF EVALUATION
Rapid appraisal methods are often used in formative evaluations. Findings are strengthened when evaluators use triangulation (employing more than one data collection method) as a check on the validity of findings from any one method. Rapid appraisal methods are also used in the context of summative evaluations. The data from rapid appraisal methods and techniques complement the use of quantitative methods such as surveys based on representative sampling. For example, a randomized survey of smallholder farmers may tell you that farmers have a difficult time selling their goods at market, but may not provide you with the details of why this is occurring. A researcher could then use interviews with farmers to determine the details necessary to construct a more complete theory of why it is difficult for smallholder farmers to sell their goods.

KEY PRINCIPLES FOR ENSURING USEFUL RAPID APPRAISAL DATA COLLECTION
No set of rules dictates which methods and techniques should be used in a given field situation; however, a number of key principles can be followed to ensure the collection of useful data in a rapid appraisal.
• Preparation is key. As in any evaluation, the evaluation design and selection of methods must begin with a thorough understanding of the evaluation questions and the client's needs for evaluative information. The client's intended uses of data must guide the evaluation design and the types of methods that are used.
• Triangulation increases the validity of findings. To lessen bias and strengthen the validity of findings from rapid appraisal methods and techniques, it is imperative to use multiple methods. In this way, data collected using one method can be compared to that collected using other methods, thus giving a researcher the ability to generate valid and reliable findings. If, for example, data collected using Key Informant Interviews reveal the same findings as data collected from Direct Observation and Focus Group Interviews, there is less chance that the findings from the first method were due to researcher bias or due to the findings being outliers. Table 1 summarizes common rapid appraisal methods and suggests how findings from any one method can be strengthened by the use of other methods.

COMMON RAPID APPRAISAL METHODS

INTERVIEWS
This method involves one-on-one interviews with individuals or key informants selected for their knowledge or diverse views. Interviews are qualitative, in-depth, and semi-structured. Interview guides are usually used, and questions may be further framed during the interview, using subtle probing techniques. Individual interviews may be used to gain information on a general topic but cannot provide the in-depth inside knowledge on evaluation topics that key informants may provide.

MINISURVEYS
A minisurvey consists of interviews with five to fifty individuals, usually selected using non-probability sampling (sampling in which respondents are chosen based on their understanding of issues related to a purpose or specific questions, usually used when sample sizes are small and time or access to areas is limited). Structured questionnaires are used with a limited number of close-ended questions.
Minisurveys generate quantitative data that can often be collected and analyzed quickly.

EVALUATION METHODS COMMONLY USED IN RAPID APPRAISAL: • Interviews • Community Discussions • Exit Polling • Transect Walks (see p. 3) • Focus Groups • Minisurveys • Community Mapping • Secondary Data Collection • Group Discussions • Customer Service Surveys • Direct Observation

FOCUS GROUPS
The focus group is a gathering of a homogeneous body of five to twelve participants to discuss issues and experiences among themselves. These are used to test an idea or to get a reaction on specific topics. A moderator introduces the topic, stimulates and focuses the discussion, and prevents domination of the discussion by a few, while another evaluator documents the conversation.
• 25. 3
COMMUNITY DISCUSSIONS
This method takes place at a public meeting that is open to all community members; it can be successfully moderated with as many as 100 or more people. The primary interaction is between the participants, while the moderator leads the discussion and asks questions following a carefully prepared interview guide.

GROUP DISCUSSIONS
This method involves the selection of approximately five participants who are knowledgeable about a given topic and are comfortable enough with one another to freely discuss the issue as a group. The moderator introduces the topic and keeps the discussion going while another evaluator records the discussion. Participants talk among each other rather than respond directly to the moderator.

DIRECT OBSERVATION
Teams of observers record what they hear and see at a program site using a detailed observation form. Observation may be of the physical surroundings or of ongoing activities, processes, or interactions.

COLLECTING SECONDARY DATA
This method involves the on-site collection of existing secondary data, such as export sales, loan information, health service statistics, etc. These data are an important augmentation to information collected using qualitative methods such as interviews, focus groups, and community discussions. The evaluator must be able to quickly determine the validity and reliability of the data. (see TIPS 12: Indicator and Data Quality)

TRANSECT WALKS
The transect walk is a participatory approach in which the evaluator asks a selected community member to walk with him or her, for example, through the center of town, from one end of a village to the other, or through a market. The evaluator asks the individual, usually a key informant, to point out and discuss important sites, neighborhoods, businesses, etc., and to discuss related issues.

COMMUNITY MAPPING
Community mapping is a technique that requires the participation of residents on a program site. It can be used to help locate natural resources, routes, service delivery points, regional markets, trouble spots, etc., on a map of the area, or to use residents' feedback to drive the development of a map that includes such information.

THE ROLE OF TECHNOLOGY IN RAPID APPRAISAL
Certain equipment and technologies can aid the rapid collection of data and help to decrease the incidence of errors. These include, for example, handheld computers or personal digital assistants (PDAs) for data input, cellular phones, digital recording devices for interviews, videotaping and photography, and the use of geographic information systems (GIS) data and aerial photographs.
• 26. 4
Table 1. COMMON RAPID APPRAISAL METHODS
(For each method, the table lists what it is useful for providing, an example, advantages, limitations, and further references.)

INDIVIDUAL INTERVIEWS

Interviews
− Useful for providing: A general overview of the topic from someone who has a broad knowledge and in-depth experience and understanding (key informant), or in-depth information on a very specific topic or subtopic (individual); suggestions and recommendations to improve key aspects of a program.
− Example: Key informant: interview with program implementation director; interview with director of a regional trade association. Individual: interview with an activity manager within an overall development program; interview with a local entrepreneur trying to enter export trade.
− Advantages: Provides in-depth, inside information on specific issues from the individual's perspective and experience; flexibility permits exploring unanticipated topics; easy to administer; low cost.
− Limitations: Susceptible to interviewer and selection biases; individual interviews lack the broader understanding and insight that a key informant can provide.
− Further references: TIPS No. 2, Conducting Key Informant Interviews; K. Kumar, Conducting Key Informant Surveys in Developing Countries, 1986; Bamberger, Rugh, and Mabry, RealWorld Evaluation, 2006; UNICEF Website: M&E Training Modules: Overview of RAP Techniques.

Minisurveys
− Useful for providing: Quantitative data on narrowly focused questions, for a relatively homogeneous population, when representative sampling is not possible or required; quick data on attitudes, beliefs, and behaviors of beneficiaries or partners.
− Example: A customer service assessment; rapid exit interviews after voting.
− Advantages: Quantitative data from multiple respondents; low cost.
− Limitations: Findings are less generalizable than those from sample surveys unless the universe of the population is surveyed.
− Further references: TIPS No. 9, Conducting Customer Service Assessments; K. Kumar, Conducting Mini Surveys in Developing Countries, 1990; Bamberger, Rugh, and Mabry, RealWorld Evaluation, 2006, on purposeful sampling.

GROUP INTERVIEWS

Focus Groups
− Useful for providing: Customer views on services, products, and benefits; information on implementation problems; suggestions and recommendations for improving specific activities.
− Example: Discussion on experience related to a specific program intervention; effects of a new business regulation or proposed price changes.
− Advantages: Group discussion may reduce inhibitions, allowing free exchange of ideas; low cost.
− Limitations: Discussion may be dominated by a few individuals unless the process is facilitated/managed well.
− Further references: TIPS No. 10, Conducting Focus Group Interviews; K. Kumar, Conducting Group Interviews in Developing Countries, 1987; T. Greenbaum, Moderating Focus Groups: A Practical Guide for Group Facilitation, 2000.
• 27. 5
Group Discussions
− Useful for providing: Understanding of issues from the different perspectives and experiences of participants from a specific subpopulation.
− Example: Discussion with young women on access to prenatal and infant care; discussion with entrepreneurs about export regulations.
− Advantages: Small group size allows full participation; allows good understanding of specific topics; low cost.
− Limitations: Findings cannot be generalized to a larger population.
− Further references: Bamberger, Rugh, and Mabry, RealWorld Evaluation, 2006; UNICEF Website: M&E Training Modules: Community Meetings.

Community Discussions
− Useful for providing: Understanding of an issue or topic from a wide range of participants from key evaluation sites within a village, town, city, or city neighborhood.
− Example: A town hall meeting.
− Advantages: Yields a wide range of opinions on issues important to participants; a great deal of information can be obtained at one point in time.
− Limitations: Findings cannot be generalized to a larger population or to subpopulations of concern; larger groups are difficult to moderate.
− Further references: Bamberger, Rugh, and Mabry, RealWorld Evaluation, 2006; UNICEF Website: M&E Training Modules: Community Meetings.

ADDITIONAL COMMONLY USED TECHNIQUES

Direct Observation
− Useful for providing: Visual data on physical infrastructure, supplies, and conditions; information about an agency's or business's delivery systems and services; insights into behaviors or events.
− Example: Market place to observe goods being bought and sold, who is involved, and sales interactions.
− Advantages: Confirms data from interviews; low cost.
− Limitations: Observer bias unless two to three evaluators observe the same place or activity.
− Further references: TIPS No. 4, Using Direct Observation Techniques; WFP Website: Monitoring & Evaluation Guidelines: What Is Direct Observation and When Should It Be Used?

Collecting Secondary Data
− Useful for providing: Validity to findings gathered from interviews and group discussions.
− Example: Microenterprise bank loan information; value and volume of exports; number of people served by a health clinic or social service provider.
− Advantages: Quick, low-cost way of obtaining important quantitative data.
− Limitations: Must be able to determine reliability and validity of data.
− Further references: TIPS No. 12, Guidelines for Indicator and Data Quality.

PARTICIPATORY TECHNIQUES

Transect Walks
− Useful for providing: Important visual and locational information and a deeper understanding of situations and issues.
− Example: Walk with a key informant from one end of a village or urban neighborhood to another, through a market place, etc.
− Advantages: Insider's viewpoint; quick way to find out the location of places of interest to the evaluator; low cost.
− Limitations: Susceptible to interviewer and selection biases.
− Further references: Bamberger, Rugh, and Mabry, RealWorld Evaluation, 2006; UNICEF Website: M&E Training Modules: Overview of RAP Techniques.

Community Mapping
− Useful for providing: Information on locations important for data collection that could be difficult to find; quick comprehension of the spatial location of services/resources in a region, which can give insight into access issues.
− Example: Map of a village and surrounding area with locations of markets, water and fuel sources, conflict areas, etc.
− Advantages: Important locational data when there are no detailed maps of the program site.
− Limitations: Rough locational information.
− Further references: Bamberger, Rugh, and Mabry, RealWorld Evaluation, 2006; UNICEF Website: M&E Training Modules: Overview of RAP Techniques.
• 28. 6
References Cited
M. Bamberger, J. Rugh, and L. Mabry, RealWorld Evaluation: Working Under Budget, Time, Data, and Political Constraints. Sage Publications, Thousand Oaks, CA, 2006.
T. Greenbaum, Moderating Focus Groups: A Practical Guide for Group Facilitation. Sage Publications, Thousand Oaks, CA, 2000.
K. Kumar, "Conducting Mini Surveys in Developing Countries," USAID Program Design and Evaluation Methodology Report No. 15, 1990 (revised 2006).
K. Kumar, "Conducting Group Interviews in Developing Countries," USAID Program Design and Evaluation Methodology Report No. 8, 1987.
K. Kumar, "Conducting Key Informant Interviews in Developing Countries," USAID Program Design and Evaluation Methodology Report No. 13, 1989.

For more information: TIPS publications are available online at [insert website].
Acknowledgements: Our thanks to those whose experience and insights helped shape this publication, including USAID's Office of Management Policy, Budget and Performance (MPBP). This publication was authored by Patricia Vondal, Ph.D., of Management Systems International.
Comments regarding this publication can be directed to: Gerald Britan, Ph.D. Tel: (202) 712-1158 gbritan@usaid.gov
Contracted under RAN-M-00-04-00049-A-FY0S-84 Integrated Managing for Results II
• 29. 1 PERFORMANCE MONITORING & EVALUATION TIPS
SELECTING PERFORMANCE INDICATORS
NUMBER 6, 2ND EDITION, 2010

ABOUT TIPS
These TIPS provide practical advice and suggestions to USAID managers on issues related to performance monitoring and evaluation. This publication is a supplemental reference to the Automated Directive System (ADS) Chapter 203.

WHAT ARE PERFORMANCE INDICATORS?
Performance indicators define a measure of change for the results identified in a Results Framework (RF). When well-chosen, they convey whether key objectives are achieved in a meaningful way for performance management. While a result (such as an Assistance Objective or an Intermediate Result) identifies what we hope to accomplish, indicators tell us by what standard that result will be measured. Targets define whether there will be an expected increase or decrease, and by what magnitude (see footnote 1).

Indicators may be quantitative or qualitative in nature. Quantitative indicators are numerical: an example is a person's height or weight. On the other hand, qualitative indicators require subjective evaluation. Qualitative data are sometimes reported in numerical form, but those numbers do not have arithmetic meaning on their own. Some examples are a score on an institutional capacity index or progress along a milestone scale. When developing quantitative or qualitative indicators, the important point is that the indicator be constructed in a way that permits consistent measurement over time.

(Footnote 1: For further information, see TIPS 13: Building a Results Framework and TIPS 8: Baselines and Targets.)

USAID has developed many performance indicators over the years. Some examples include the dollar value of non-traditional exports, private investment as a percentage of gross domestic product, contraceptive prevalence rates, child mortality rates, and progress on a legislative reform index. Selecting an optimal set of indicators to track progress against key results lies at the heart of an effective performance management system. This TIPS provides guidance on how to select effective performance indicators.
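To make the arithmetic behind baselines and targets concrete, here is a minimal sketch in Python; the indicator and every number in it are invented for illustration and are not taken from USAID guidance.

# Illustrative only: comparing an actual indicator value against a baseline and target (all values invented).
baseline = 120.0   # hypothetical baseline-year value (e.g., millions of dollars of non-traditional exports)
target = 150.0     # hypothetical planned value for the target year
actual = 138.0     # hypothetical value actually measured in the target year

planned_change = target - baseline     # 30.0
achieved_change = actual - baseline    # 18.0
percent_of_planned = 100 * achieved_change / planned_change
print(f"{percent_of_planned:.0f}% of the planned change achieved")  # prints: 60% of the planned change achieved

Kept consistent from year to year, this kind of comparison is what allows actual results achieved to be weighed against planned results over time.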
• 30. 2 WHY ARE PERFORMANCE INDICATORS IMPORTANT?
Performance indicators provide objective evidence that an intended change is occurring. Performance indicators lie at the heart of developing an effective performance management system – they define the data to be collected and enable actual results achieved to be compared with planned results over time. Hence, they are an indispensable management tool for making evidence-based decisions about program strategies and activities. Performance indicators can also be used:
• To assist managers in focusing on the achievement of development results.
• To provide objective evidence that results are being achieved.
• To orient and motivate staff and partners toward achieving results.
• To communicate USAID achievements to host country counterparts, other partners, and customers.
• To more effectively report results achieved to USAID's stakeholders, including the U.S. Congress, Office of Management and Budget, and citizens.

FOR WHAT RESULTS ARE PERFORMANCE INDICATORS REQUIRED?

THE PROGRAM LEVEL
USAID's ADS requires that at least one indicator be chosen for each result in the Results Framework in order to measure progress (see ADS 203.3.3.1) (see footnote 2). This includes the Assistance Objective (the highest-level objective in the Results Framework) as well as supporting Intermediate Results (IRs) (see footnote 3). These indicators should be included in the Mission or Office Performance Management Plan (PMP) (see TIPS 8: Preparing a PMP).

PROJECT LEVEL
AO teams are required to collect data regularly for projects and activities, including inputs, outputs, and processes, to ensure they are progressing as expected and are contributing to relevant IRs and AOs. These indicators should be included in a project-level monitoring and evaluation (M&E) plan. The M&E plan should be integrated in project management and reporting systems (e.g., quarterly, semi-annual, or annual reports).

(Footnote 2: For further discussion of AOs and IRs (which are also termed impact and outcomes respectively in other systems), refer to TIPS 13: Building a Results Framework.)
(Footnote 3: Note that some results frameworks incorporate IRs from other partners if those results are important for USAID to achieve the AO. This is discussed in further detail in TIPS 13: Building a Results Framework. If these IRs are included, then it is recommended that they be monitored, although less rigorous standards apply.)

TYPES OF INDICATORS IN USAID SYSTEMS
Several different types of indicators are used in USAID systems. It is important to understand the different roles and functions of these indicators so that managers can construct a performance management system that effectively meets internal management and Agency reporting needs.

CUSTOM INDICATORS
Custom indicators are performance indicators that reflect progress within each unique country or program context. While they are useful for managers on the ground, they often cannot be aggregated across a number of programs like standard indicators. Example: Progress on a milestone scale reflecting legal reform and implementation to ensure credible elections, as follows:
• Draft law is developed in consultation with non-governmental organizations (NGOs) and political parties.
• Public input is elicited.
• 31. 3
• Draft law is modified based on feedback.
• The secretariat presents the draft to the Assembly.
• The law is passed by the Assembly.
• The appropriate government body completes internal policies or regulations to implement the law.
The example above would differ for each country depending on its unique process for legal reform.

STANDARD INDICATORS
Standard indicators are used primarily for Agency reporting purposes. Standard indicators produce data that can be aggregated across many programs. Optimally, standard indicators meet both Agency reporting and on-the-ground management needs. However, in many cases, standard indicators do not substitute for performance (or custom) indicators, because they are designed to meet different needs. There is often a tension between measuring a standard across many programs and selecting indicators that best reflect true program results and that can be used for internal management purposes. Example: Number of Laws or Amendments to Ensure Credible Elections Adopted with USG Technical Assistance. In comparing the standard indicator above with the previous example of a custom indicator, it becomes clear that the custom indicator is more likely to be useful as a management tool, because it provides greater specificity and is more sensitive to change. Standard indicators also tend to measure change at the output level, because they are precisely the types of measures that are, at face value, more easily aggregated across many programs, as the following example demonstrates. Example: The number of people trained in policy and regulatory practices.

CONTEXTUAL INDICATORS
Contextual indicators are used to understand the broader environment in which a program operates, to track assumptions, or to examine externalities that may affect success, failure, or progress. They do not represent program performance, because the indicator measures very high-level change. Example: Score on the Freedom House Index or Gross Domestic Product (GDP). This sort of indicator may be important to track to understand the context for USAID programming (e.g., a severe drop in GDP is likely to affect economic growth programming), but represents a level of change that is outside the manageable interest of program managers. In most cases, it would be difficult to say that USAID programming has affected the overall level of freedom within a country or GDP (given the size of most USAID programs in comparison to the host country economy, for example).

PARTICIPATION IS ESSENTIAL
Experience suggests that participatory approaches are an essential aspect of developing and maintaining effective performance management systems. Collaboration with development partners (including host country institutions, civil society organizations (CSOs), and implementing partners) as well as customers has important benefits. It allows you to draw on the experience of others, obtains buy-in to achieving results and meeting targets, and provides an opportunity to ensure that systems are as streamlined and practical as possible.

INDICATORS AND DATA—SO WHAT'S THE DIFFERENCE?
Indicators define the particular characteristic or dimension that will be used to measure change. Height is an example of an indicator. The data are the actual measurements or factual information that result from the indicator. Five feet seven inches is an example of data.
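One way to keep a milestone-scale custom indicator comparable over time is simply to record the highest stage completed at each reporting date. The sketch below is hypothetical: the stages paraphrase the election-law example above, and the reporting dates and values are invented for illustration.

# Illustrative only: tracking a hypothetical milestone-scale custom indicator over time.
stages = [
    "Draft law developed in consultation with NGOs and political parties",
    "Public input elicited",
    "Draft law modified based on feedback",
    "Secretariat presents the draft to the Assembly",
    "Law passed by the Assembly",
    "Implementing policies or regulations completed",
]

# Highest stage completed at each reporting date (invented values).
progress = {"2009": 1, "2010": 3, "2011": 4}

for year, completed in progress.items():
    print(f"{year}: stage {completed} of {len(stages)} - {stages[completed - 1]}")

As noted above, the stage number supports consistent reporting, but it is an ordinal score rather than a quantity that can meaningfully be summed or averaged across programs.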
• 32. 4 WHAT ARE USAID'S CRITERIA FOR SELECTING INDICATORS?
USAID policies (ADS 203.3.4.2) identify seven key criteria to guide the selection of performance indicators:
• Direct
• Objective
• Useful for Management
• Attributable
• Practical
• Adequate
• Disaggregated, as necessary
These criteria are designed to assist managers in selecting optimal indicators. The extent to which performance indicators meet each of the criteria must be consistent with the requirements of good management. As managers consider these criteria, they should use a healthy measure of common sense and reasonableness. While we always want the "best" indicators, there are inevitably trade-offs among various criteria. For example, data for the most direct or objective indicators of a given result might be very expensive to collect or might be available too infrequently. Table 1 includes a summary checklist that can be used during the selection process to assess these trade-offs. Two overarching factors determine the extent to which performance indicators function as useful tools for managers and decision-makers:
• The degree to which performance indicators accurately reflect the process or phenomenon they are being used to measure.
• The level of comparability of performance indicators over time: that is, can we measure results in a consistent and comparable manner over time?

1. DIRECT
An indicator is direct to the extent that it clearly measures the intended result. This criterion is, in many ways, the most important. While this may appear to be a simple concept, it is one of the more common problems with indicators. Indicators should either be widely accepted for use by specialists in a subject area, exhibit readily understandable face validity (i.e., be intuitively understandable), or be supported by research. Managers should place greater confidence in indicators that are direct. Consider the following example:
Result: Increased Transparency of Key Public Sector Institutions
Indirect Indicator: Passage of the Freedom of Information Act (FOIA)
Direct Indicator: Progress on a milestone scale demonstrating enactment and enforcement of policies that require open hearings
The passage of FOIA, while an important step, does not actually measure whether a target institution is more transparent. The better example outlined above is a more direct measure.

Level
Another dimension of whether an indicator is direct relates to whether it measures the right level of the objective. A common problem is that there is often a mismatch between the stated result and the indicator. The indicator should not measure a higher or lower level than the result. For example, if a program measures improved management practices through the real value of agricultural production, the indicator is measuring a higher-level effect than is stated (see Figure 1). Understanding levels is rooted in understanding the development hypothesis inherent in the Results Framework (see TIPS 13: Building a Results Framework). Tracking indicators at each level facilitates better understanding and analysis of whether the
• 33. 5 development hypothesis is working. For example, if farmers are aware of how to implement a new technology, but the number or percent that actually use the technology is not increasing, there may be other issues that need to be addressed. Perhaps the technology is not readily available in the community, or there is not enough access to credit. This flags the issue for managers and provides an opportunity to make programmatic adjustments.

Figure 1. Levels
RESULT: Increased Production — INDICATOR: Real value of agricultural production.
RESULT: Improved Management Practices — INDICATOR: Number and percent of farmers using a new technology.
RESULT: Improved Knowledge and Awareness — INDICATOR: Number and percent of farmers who can identify five out of eight steps for implementing a new technology.

Proxy Indicators
Proxy indicators are linked to the result by one or more assumptions. They are often used when the most direct indicator is not practical (e.g., data collection is too costly or the program is being implemented in a conflict zone). When proxies are used, the relationship between the indicator and the result should be well understood and clearly articulated. The more assumptions the indicator is based upon, the weaker the indicator. Consider the following examples:
Result: Increased Household Income
Proxy Indicator: Dollar value of household expenditures
The proxy indicator above makes the assumption that an increase in income will result in increased household expenditures; this assumption is well-grounded in research.
Result: Increased Access to Justice
Proxy Indicator: Number of new courts opened
The indicator above is based on the assumption that physical access to new courts is the fundamental development problem—as opposed to corruption, the costs associated with using the court system, or lack of knowledge of how to obtain legal assistance and/or use court systems. Proxies can be used when assumptions are clear and when there is research to support that assumption.

2. OBJECTIVE
An indicator is objective if it is unambiguous about 1) what is being measured and 2) what data are being collected. In other words, two people should be able to collect performance information for the same indicator and come to the same conclusion. Objectivity is critical to collecting comparable data over time, yet it is one of the most common problems noted in audits. As a result, pay particular attention to the definition of the indicator to ensure that each term is clearly defined, as the following examples demonstrate:
Poor Indicator: Number of successful firms
Objective Indicator: Number of firms with an annual increase in revenues of at least 5%
The better example outlines the exact criteria for how "successful" is defined and ensures that changes in the data are not attributable to differences in what is being counted. Objectivity can be particularly challenging when constructing qualitative indicators. Good qualitative indicators permit regular, systematic judgment about progress and reduce subjectivity (to the extent possible). This means that there must be clear criteria or protocols for data collection.

3. USEFUL FOR MANAGEMENT
An indicator is useful to the extent that it provides a
• 34. 6 meaningful measure of change over time for management decision-making. One aspect of usefulness is to ensure that the indicator is measuring the "right change" in order to achieve development results. For example, the number of meetings between Civil Society Organizations (CSOs) and government is something that can be counted but does not necessarily reflect meaningful change. By selecting indicators, managers are defining program success in concrete ways. Managers will focus on achieving targets for those indicators, so it is important to consider the intended and unintended incentives that performance indicators create. As a result, the system may need to be fine-tuned to ensure that incentives are focused on achieving true results. A second dimension is whether the indicator measures a rate of change that is useful for management purposes. This means that the indicator is constructed so that change can be monitored at a rate that facilitates management actions (such as corrections and improvements). Consider the following examples:
Result: Targeted legal reform to promote investment
Less Useful for Management: Number of laws passed to promote direct investment.
More Useful for Management: Progress toward targeted legal reform based on the following stages:
Stage 1. Interested groups propose that legislation is needed on an issue.
Stage 2. Issue is introduced in the relevant legislative committee/executive ministry.
Stage 3. Legislation is drafted by the relevant committee or executive ministry.
Stage 4. Legislation is debated by the legislature.
Stage 5. Legislation is passed by the full approval process needed in the legislature.
Stage 6. Legislation is approved by the executive branch (where necessary).
Stage 7. Implementing actions are taken.
Stage 8. No immediate need identified for amendments to the law.
The less useful example may be useful for reporting; however, it is so general that it does not provide a good way to track progress for performance management. The process of passing or implementing laws is a long-term one, so that over the course of a year or two the AO team may only be able to report that one or two such laws have passed when, in reality, a high degree of effort is invested in the process. In this case, the more useful example better articulates the important steps that must occur for a law to be passed and implemented and facilitates management decision-making. If there is a problem in meeting interim milestones, then corrections can be made along the way.

4. ATTRIBUTABLE
An indicator is attributable if it can be plausibly associated with USAID interventions. The concept of "plausible association" has been used in USAID for some time. It does not mean that X input equals Y output. Rather, it is based on the idea that a case can be made to other development practitioners that the program has materially affected the identified change. It is important to consider the logic behind what is proposed to ensure attribution. If a Mission is piloting a project in three schools, but claims national-level impact in school completion, this would not pass the common sense test. Consider the following examples:
Result: Improved Budgeting Capacity
Less Attributable: Budget allocation for the Ministry of Justice (MOJ)
More Attributable: The extent to which the budget produced by the MOJ meets
• 35. 7 established criteria for good budgeting.
If the program works with the Ministry of Justice to improve budgeting capacity (by providing technical assistance on budget analysis), the quality of the budget submitted by the MOJ may improve. However, it is often difficult to attribute changes in the overall budget allocation to USAID interventions, because there are a number of externalities that affect a country's final budget – much like in the U.S. For example, in tough economic times, the budget for all government institutions may decrease. A crisis may emerge that requires the host country to reallocate resources. The better example above is more attributable (and directly linked) to USAID's intervention.

5. PRACTICAL
A practical indicator is one for which data can be collected on a timely basis and at a reasonable cost. There are two dimensions that determine whether an indicator is practical. The first is time and the second is cost.
Time. Consider whether resulting data are available with enough frequency for management purposes (i.e., timely enough to correspond to USAID performance management and reporting purposes). Second, examine whether data are current when available. If reliable data are available each year, but the data are a year old, then it may be problematic.
Cost. Performance indicators should provide data to managers at a cost that is reasonable and appropriate as compared with the management utility of the data. As a very general rule of thumb, it is suggested that between 5% and 10% of program or project resources be allocated for monitoring and evaluation (M&E) purposes. However, it is also important to consider priorities and program context. A program would likely be willing to invest more resources in measuring changes that are central to decision-making and fewer resources in measuring more tangential results. A more mature program may have to invest more in demonstrating higher-level changes or impacts as compared to a new program.

6. ADEQUATE
Taken as a group, the indicator (or set of indicators) should be sufficient to measure the stated result. In other words, they should be the minimum number necessary and cost-effective for performance management. The number of indicators required to adequately measure a result depends on 1) the complexity of the result being measured, 2) the amount of information needed to make reasonably confident decisions, and 3) the level of resources available. Too many indicators create information overload and become overly burdensome to maintain. Too few indicators are also problematic, because the data may only provide a partial or misleading picture of performance. The following demonstrates how one indicator can be adequate to measure the stated objective:
Result: Increased Traditional Exports in Targeted Sectors
Adequate Indicator: Value of traditional exports in targeted sectors
In contrast, an objective focusing on improved maternal health may require two or three indicators to be adequate. A general rule of thumb is to select between two and three performance indicators per result. If many more indicators are needed to adequately cover the result, then it may signify that the objective is not properly focused.

7. DISAGGREGATED, AS NECESSARY
The disaggregation of data by gender, age, location, or some other dimension is often important from both a management and reporting point of view. Development programs often affect population cohorts or institutions in different ways.
For example, it might be important to know to what extent youth (up to age 25) or