By Leonard Oruko and Howard Elliott.
Presented at the ASTI-FARA conference "Agricultural R&D: Investing in Africa's Future: Analyzing Trends, Challenges, and Opportunities," Accra, Ghana, December 5-7, 2011. http://www.asti.cgiar.org/2011conf
The Role of Evaluation in Strengthening Agricultural R&D in Sub-Saharan Africa: Information, Instruments and Actors
1. Strengthening Ag. R&D in Africa: What Role for M&E?
Leonard Oruko and Howard Elliott
Presentation at the IFPRI-ASTI/FARA Conference, Accra, Ghana, 6 December 2011
2. Has research evaluation supported the case for Ag. R&D?
• Research evaluation has supported the case for R&D
– The tools and information developed for evidence-based policy were linked to the economic context of the day
• Research evaluation responded to the questions being asked
– Economic returns, welfare analysis, priority setting, and the funding of research (the basic arithmetic is sketched at the end of this slide)
– Concerns with poverty (well beyond producer and consumer surplus), NRM and sustainability (beyond production systems), and later climate change at increasing scale
• Impact assessment has had to balance the needs for accountability to funders versus learning and change by actors
– Economic returns, experiments and quasi-experiments (quantitative)
– Utilization-focused evaluation (qualitative)
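A minimal sketch of the returns-to-research arithmetic behind these studies (notation added here for illustration; it is not part of the original slides): with B_t and C_t the benefits and costs attributed to a research program in year t, T the evaluation horizon, and r the discount rate,

\[
\mathrm{NPV}(r) = \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^t},
\qquad
\mathrm{IRR} = r^{*} \ \text{such that} \ \mathrm{NPV}(r^{*}) = 0 .
\]

Economic surplus studies then decompose the annual benefits B_t into producer and consumer surplus.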
3. Why the change in focus in this presentation?
• Is the practice of M&E responding adequately to the changing imperatives?
• From "crystal ball gazing" to "planning for results" (Nin-Pratt?)
– Given the competing choices, where should we allocate our resources?
• From "implementation" to "delivering results"
– Operational management: timely availability of information, decision making, adjustment and adaptation
• From "returns on investment" to "achievements and lessons" (Fuglie; Nin-Pratt?)
– You gave us resources: what have we delivered, what have we learnt, how can we do it better? The learning and accountability agenda
– Are we showing objective evidence of achievement?
4. Some definitions
• Evaluation vs. assessment
– Evaluation: the systematic collection and analysis of information on the characteristics and outcomes of a program to inform decisions
– Assessment: an informal review
• Impact evaluations are based on models of cause and effect
– Measure the change in an outcome attributable to a defined intervention
– Require a credible and rigorously defined counterfactual (a common formulation is sketched below)
• Performance evaluation
– Descriptive and normative questions for operational decision making
– Informed by performance monitoring to identify near-term consequences of direct program activities
– Ordinarily lacks a rigorously defined counterfactual
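To make the counterfactual requirement concrete, one common formulation (illustrative only; the slides do not prescribe a particular estimator) is the difference-in-differences estimate: with \bar{Y} the mean outcome, T and C indexing the treated and comparison groups, and 0 and 1 the periods before and after the intervention,

\[
\widehat{\text{Impact}} = \left( \bar{Y}_{T,1} - \bar{Y}_{T,0} \right) - \left( \bar{Y}_{C,1} - \bar{Y}_{C,0} \right),
\]

which attributes the difference to the intervention only if the comparison group's trend is a credible stand-in for what would have happened to the treated group without it.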
5. Could this be the inherent challenge?
• Ex-ante impact evaluation
– What is the likely payoff to the proposed investment?
– How does this inform operations to generate tangible near-term results?
• Ex-post impact evaluation
– These are the actual returns on investment
– What were the conditioning factors?
– Can we scale these out?
• Operational M&E: in the very near term, demonstrate
– Results and clear progress
– Flexibility and adaptation to unforeseen challenges
– Address the "imperfect information problem"
(The slide diagram frames these strands against two purposes: accountability, and learning and performance improvement.)
6. Data
• What is the acceptable standard for good and credible data?
– Objective scientific enquiry: the desire to prove or disprove widely held beliefs that are based on some detectable distribution of personal experiences
– Reliability, validity, and timeliness to serve as a basis for objective evidence
– "The plural of anecdote is data": the careful compilation of "cases" that provide context for identifying causes of success and failure that can be widely generalized
• The quality of analysis is only as good as the data
• But you also need good, innovative analytical capacity
• Do we need additional investment to generate quality data?
7. Data screams loudest!!
• "Data suggests that between 1961 and 2007 the observed growth in agriculture in SSA is primarily from expanding area under cultivation"
• Chris, Catherine, and Tom will illustrate this
Source: De Janvry and Sadoulet (2010)
8. Evaluation metrics
• Demand for information defines the analytical agenda
– The CG Science Council advanced the refinement of approaches and methods
– The CAADP agenda is supporting convergence on ex-ante investment analysis; next-generation questions around moving from sector-wide to R&D-specific interventions
– Debate on approaches, "the pendulum syndrome"; scope for diverse theoretical constructs to inform the analytical agenda
• The call for rigor does not automatically prescribe quantification
– Advances in tools for establishing the counterfactual
– The choice of evaluation approach is informed by a variety of factors
– Rigorous evaluations have "longevity and long legs"
9. Metrics for near-term incremental changes
• On the highway to the big impacts are intermediate results
– Compared to the big results, there is greater diversity of opinion on these
– The CG Science Council championed this through the MTP
– A challenge for network and coordinating entities (SROs and FARA)
• Getting to a consensus on performance criteria
– Need for appropriate proxy indicators to define improvements in operational performance
– A variety of tools and approaches for tracking the indicators; look at the rich concepts and applications from management schools
– Embedded in program implementation; is it really necessary to define indicators a priori?
10. Improving M&E systems: Take-home message
• Generating objective evidence on performance
– Beyond a well-thought-out results framework (RF), the above is about analytics
– Adequate data are required for rigorous analysis
– Adequate analytical capacity is required for rigor
• Objective evidence informing review dialogue and learning
– The next quantum leap for R&D systems: learning for performance improvement
– Make greater use of ex-ante analysis to inform the operational management of research (baselines and targets)
• How do we organize ourselves to do this?
– Clear lessons from the CG Science Council on agricultural research
– Academia and think tanks are addressing the challenges of rigor
– Harnessing the existing capacity appears to be a coordination challenge
11. …take-home message
• Practitioners
– Apply the "Triple A" principle to indicators
– Help shape the analytical agenda around indicators
• Category 1 users (managing for results): information for decision making
– Programme staff: co-generation of performance and learning information
– Convening the review and learning processes
• Category 3 users (stewardship, oversight, beneficiary stakeholders)
– "Volatility in expectations": what results at what temporal scale?