Monitoring is the continuous collection of data on specified indicators to assess the implementation of a development intervention against activity schedules and the expenditure of allocated funds, and its progress and achievements against the intended outcome.
Evaluation is the periodic assessment of the design, implementation, outcome, and impact of a development intervention. It should assess the relevance and achievement of the intended outcome, implementation performance in terms of effectiveness and efficiency, and the nature, distribution, and sustainability of impact.
8. Evaluation Capacity Development: History of ADB Support. Build Post-Evaluation Capability (1990-1994); Establish Performance Management Systems (1995-1998); Build Monitoring and Evaluation Systems (1999 to date)
9. Evaluation Capacity Development: Lessons of Experience. The preconditions for success of evaluation capacity development are substantive government demand, the existence of a mandate by decree for evaluation, and stability in staffing such that a very high proportion of trained personnel remain in the tasks for which they were trained.
11. Outputs: Proficiency in M&E; Tools, Methods, & Approaches for M&E; Strategy & Policy Formulation for M&E; Research & Special Studies for M&E; Country Strategies for M&E; A Strategy for ECD. Key: M&E = monitoring and evaluation; ECD = evaluation capacity development
12. Outputs: Knowledge Sharing & Learning for M&E; Knowledge Sharing & Learning Platforms; Knowledge Networks; Partnership Arrangements with Evaluation Associations. Key: M&E = monitoring and evaluation
16. Technical Assistance Partnerships: ADB; Center for Development and Research in Evaluation; International Program for Development Evaluation Training; Asia-Pacific Finance and Development Center; Regional Cooperation and Poverty Reduction Fund of the People's Republic of China
17. Selected Evaluation Agencies
Tajikistan: Amirov Fakhriddin K., State Budget Department, Ministry of Finance
Lao PDR: Vixay Xaovana, Committee for Planning and Investment; Akhom Praseuth, Bank of Lao PDR; Bounthay Leuangvilay, Budget Department, Ministry of Finance
Cambodia: Hou Taing Eng, Ministry of Planning; Im Sour, Cambodian Rehabilitation and Development Board; Suon Sophal, Cambodian Investment Board; Lors Pinit, Department of Investment and Cooperation; Hay Sovuthea, Supreme National Economic Council
Malaysia: Arunaselam Rasappan and Mariappan Mahalingam, Center for Development and Research in Evaluation
Viet Nam: Tran Ngoc Lan and Nguyen Dang Binh, Ministry of Planning and Investment; Pham Thai Linh, Ministry of Natural Resources and Environment; Nguyen Trang Thu, National Academy of Public Administration
19. Deliverables: Training & Capacity Building; Research & Special Studies; Knowledge Sharing & Networking; Strategic Direction for Evaluation Capacity Development, leading to Strengthened Evaluation Capacity, Improved Service Delivery, and Poverty Reduction. http://www.adb.org/evaluation
20. Training Strategy
Stage A1, Apr 2008: Introductory Training in M&E for Country Trainers (ToT), CeDRE
Stage A2, Apr-Sept 08: In-Country Preliminary Preparatory Work by ToT Trainees, In-Country
Stage B1, June 2008: Policy Level M&E Training (Round 1), IPDET 1
Stage A3, Oct 2008: Intermediate M&E Training for Trainees, Lao PDR
Stage A4, Nov 08-Mar 09: In-Country Stage 1 Down-line Training by ToT Trainees, In-Country
Stage A5, Apr 2009: Advanced Level M&E Training for Trainees, Cambodia
Stage B2, June 2009: Policy Level M&E Training (Round 2), IPDET 2
Stage A6, Jul-Sept 09: In-Country Stage 2 Down-line Training by ToT Trainees, In-Country
Stage A7, Oct 2009: Final Wrap-Up Training & Certification of Country M&E Trainers, AFDC/ADB
Editor's notes
The basic element of monitoring and evaluation is that both activities hinge on the availability of a results chain. This simplified graphic shows that desired results are not activities but that they depend on activities being carried out. Therefore, good results chains are causally linked. There should be clear “if…then” connections. Good results boxes should also demonstrate change. Each box should describe how one hopes the relevant factor will change. They should be reasonably complete. And they should be simple.
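The "if...then" logic of a causally linked results chain can be sketched as a simple data structure. This is a minimal illustration only; the stage statements below are hypothetical examples, not taken from the ADB framework.

```python
# A minimal sketch of a results chain: each box describes a desired change,
# and each link encodes an "if...then" hypothesis connecting one level of
# results to the next. All statements here are illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ResultsBox:
    level: str                               # e.g., "outputs" or "outcome"
    statement: str                           # should describe a change, not an activity
    leads_to: Optional["ResultsBox"] = None  # the "then" side of the hypothesis

def build_chain(*boxes: ResultsBox) -> ResultsBox:
    """Causally link boxes in order and return the first (lowest) level."""
    for lower, higher in zip(boxes, boxes[1:]):
        lower.leads_to = higher
    return boxes[0]

def if_then_statements(chain: ResultsBox) -> list:
    """Render each causal link as an explicit 'if...then' connection."""
    out = []
    node = chain
    while node.leads_to:
        out.append(f"If {node.statement}, then {node.leads_to.statement}.")
        node = node.leads_to
    return out

chain = build_chain(
    ResultsBox("activities", "trainers are trained in M&E methods"),
    ResultsBox("outputs", "country M&E strategies are formulated"),
    ResultsBox("outcome", "evaluation capacity is strengthened"),
    ResultsBox("impact", "service delivery improves"),
)
for s in if_then_statements(chain):
    print(s)
```

Walking the chain makes the design testable: a box that cannot be phrased as the consequence of the box below it signals a gap in the causal logic.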
We monitor to track inputs and outputs and compare them to plan, and to (i) identify and address problems; (ii) ensure effective use of resources; (iii) enhance quality and learning to improve activities and services; (iv) strengthen accountability; (v) provide a management tool; (vi) inform future decisions; and (vii) provide data for the evaluation of a development intervention.
We evaluate to determine effectiveness, show impact, strengthen financial responses and accountability, promote a learning culture focused on service improvement, and encourage replication of successful development interventions.
Leveraging monitoring and evaluation, we are now in a position to ensure systematic reporting, communicate results and accountability, measure efficiency and effectiveness, provide information for improved decision making, optimize allocation of resources, and promote continuous learning and improvement.
We must recognize, however, that the degree of control over deliverables decreases as we move up the results chain and that, in parallel, the challenge of monitoring and evaluation increases as we do so.
Notwithstanding, monitoring and evaluation can make a powerful contribution. To recap, there must be a clear statement of a measurable objective; structured indicators for inputs, activities, outputs, outcome, and impact; baselines and a means of comparison against targets; and mechanisms for reporting and for the use of results in decision making. Where applicable, one should build into this a framework and methodology capable of establishing causation. In short, effective monitoring and evaluation systems must be developed if we are to manage for development results and effectiveness.
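As a concrete illustration of structured indicators compared against baselines and targets, here is a minimal sketch; the indicator names and figures are hypothetical, chosen only to show the mechanics.

```python
# A minimal sketch of indicator tracking: each indicator carries a baseline
# and a target, and progress is reported as the share of the
# baseline-to-target distance covered so far. All values are illustrative.

from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    level: str        # input, activity, output, outcome, or impact
    baseline: float
    target: float
    current: float

    def progress(self) -> float:
        """Fraction of the distance from baseline to target achieved so far."""
        span = self.target - self.baseline
        if span == 0:
            return 1.0
        return (self.current - self.baseline) / span

indicators = [
    Indicator("trainers certified", "output", baseline=0, target=40, current=25),
    Indicator("country M&E strategies adopted", "outcome", baseline=0, target=3, current=1),
]

for ind in indicators:
    print(f"{ind.name} ({ind.level}): {ind.progress():.0%} of target")
```

Reporting progress as a fraction of the baseline-to-target gap, rather than as a raw value, is what makes indicators at different levels comparable in a single report.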
Beginning in 1990, ADB’s approach to evaluation capacity development has reflected corporate shifts from benefit monitoring and evaluation and postevaluation to a framework embracing the project cycle. In the first phase, seven small-scale technical assistance projects built postevaluation capability (i.e., in Bangladesh, the People’s Republic of China, Nepal, Papua New Guinea, Philippines, Sri Lanka, and Thailand). In the second phase, five technical assistance projects established project performance management systems (i.e., in the People’s Republic of China, Nepal, Philippines, Sri Lanka, and Thailand). In the third phase, two technical assistance projects built monitoring and evaluation systems (i.e., in the People’s Republic of China and Philippines).
Other lessons were incorporated in the design of the regional technical assistance I am about to outline. They were: Monitoring and evaluation systems are a means to an end; benefits are obtained when results are used in decision making. It is advisable to locate responsibility for monitoring and evaluation near the head of an organization. Monitoring and evaluation systems should not become too complex or resource-intensive. Monitoring and evaluation systems encompass data collection in the field as well as aggregation and analysis by end users. Evaluation capacity development that concentrates on the oversight agency carries the risk that other entities may lack incentives to provide data and information. Case studies help to develop staff competency and confidence.
The Third International Roundtable on Managing for Development Results, held in February 2007 in Hanoi, was a milestone for aid effectiveness. It focused on building the capacity of countries to manage for results and on developing country-level and regional action plans. The priority on evaluation capacity development is reflected in the demand for knowledge products and services, e.g., those offered by IPDET, and in the growth of evaluation associations. In September 2007, ADB approved the first of a multiyear series of integrating instruments to develop regional capacity for monitoring and evaluation. The technical assistance is financed by the Government of the People’s Republic of China.
The countries involved in the first technical assistance are Cambodia, the Lao People's Democratic Republic, and Viet Nam. One set of activities toward the first output will raise proficiency through regional and national training-of-trainers in tools, methods, and approaches for monitoring and evaluation. SHIPDET is the regional training input to this. Another set will relate to consulting inputs in the field of strategy and policy formulation, including international training at IPDET. The second output will be accomplished by extensive support for the formulation of country strategies for monitoring and evaluation, conducted in-country and in sequential fashion, which will feed the justification, research, and analysis of options for a strategic direction for evaluation capacity development by ADB, as well as its delivery design.
Activities toward the third output will enhance selected knowledge sharing and learning platforms, extend to evaluation agency staff advice on new and existing knowledge networks on monitoring and evaluation, and promote and conclude partnership arrangements with interested evaluation associations.
Assumptions and risks identify conditions, external to a development intervention, that are needed to ensure that one level of performance indeed causes the next level of performance to happen. In successive slides, here are those identified for the technical assistance at the output, outcome, and impact levels.
Monitoring assumptions and risks is critical to project success. The environment is continually influencing the cause-effect hypotheses on which the development intervention is built. Implementers of a development intervention must ensure that such hypotheses continue to remain valid. Monitoring should be built into the intervention’s performance monitoring and management system. The performance indicators should be regularly monitored, and the assumptions on which they are built should be frequently checked and verified.
Given that planners must make assumptions at the design stage, the implementation arrangements for a development intervention must allow for incorrect assumptions. This can be done by highlighting key assumptions to monitor during the course of the project, suggesting ways of ensuring that an assumption turns out to be correct, indicating how the validity of the assumption can be monitored, and suggesting what action to take if an assumption is proving invalid.
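The handling of assumptions described above can be sketched as a simple register that pairs each assumption with how it is monitored and the action to take if it proves invalid. The entries below are illustrative assumptions, not those of the actual technical assistance.

```python
# A minimal sketch of an assumptions register for a development intervention:
# each assumption records how its validity is monitored and what to do if it
# fails. The example entry is hypothetical.

from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str
    level: str             # results-chain level the assumption connects to
    how_monitored: str
    action_if_invalid: str
    holds: bool = True     # updated during periodic checks

def review(register):
    """Flag assumptions that no longer hold, with the planned response."""
    return [
        (a.statement, a.action_if_invalid)
        for a in register
        if not a.holds
    ]

register = [
    Assumption(
        "trained staff remain in their M&E posts",
        level="outcome",
        how_monitored="annual staffing survey of participating agencies",
        action_if_invalid="train replacement focal persons",
    ),
]

# A periodic check finds the assumption no longer holds, triggering the response.
register[0].holds = False
for statement, action in review(register):
    print(f"Assumption failed: {statement} -> {action}")
```

Keeping the monitoring method and the fallback action next to each assumption makes the periodic verification described above a routine part of the performance monitoring system rather than an afterthought.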
This graphic illustrates the implementation and partnership arrangements for the technical assistance. The Center for Development and Research in Evaluation, Malaysia, represented here today, will coordinate, supervise, and monitor overall activities. It specializes in capacity development for public sector monitoring and evaluation and integrated performance management and has extensive international and regional experience, expertise, capacity, and commitment.
The assistance has a focus on countries of the Greater Mekong Subregion, which are at the forefront of ADB's work on regional cooperation and integration. With us today are representatives of key policy entities selected by the countries themselves. We also have a representative from the Central Asian republics, who we hope can provide an early link to a second phase to this technical assistance. Participants were selected on the basis of their being able to act as focal or resource persons for 3–5 years. We hope that this will indeed be the case.
The technical assistance will be implemented over two years from October 2007. Here is its indicative activities schedule.