Dr Ruth Mourik presented some of our conundrums with Subtask 3 (developing an evaluation tool for behaviour changers) at the IEA DSM Task 24 workshop in Oxford on 5 September 2014.
5. Life seemed easy…
What is it?
• Monitoring: measuring progress, achievements and the production of planned outputs
• Evaluation: a structured process of assessing success in meeting goals and reflecting on learnings
Why do it the way we do now?
• To establish the effect of policies
• To assess the need for improvements
• To assess value for money
• To contribute to the evidence base for the effectiveness of behavioural interventions at population level
11. It’s getting challenging…
• Evaluation team often not included in the design
• often not even part of the programme…
• often only a snapshot at the end or just after
• not longitudinal; sustainability/rebound often not assessed
• no insight into the formation of networks supporting lasting change
• large-scale M&E of actual behaviour too costly
• modelling or self-reported data (at best)
• ‘proxies’, such as savings or, even better, cost-effectiveness…
• proxies are NOT actual behaviour change; they only speak to value for money etc.
• no participatory process or feedback loops in traditional M&E
16. To make life more difficult..
Many of us increasingly value interventions that are
• tailored,
• multidisciplinary,
• varied,
• qualitative, iterative,
• systemic,
• and have outcomes beyond the duration of the project (and beyond energy, etc.).
And at the same time we judge the ‘behaviour’ of policymakers who demand simple, focused, quantitative and scaled-up evaluations that define success in efficiency and effectiveness terms.
But what could M&E look like that is relevant to end users, ‘cost effective’, doable, and focused on lasting actual behavioural change, the formation of networks, alignment, and the processes underpinning that change?
19. What now?
• No unified way of both designing and monitoring & evaluating interventions
• Different disciplinary approaches have different methods and foci of M&E, all pertinent to their own aims
• Perhaps more fruitful to focus on learning processes?
1. Single loop = instrumental, focused on short-term learning about effectiveness in meeting goals / outcome focused
2. Double loop = process oriented, focused on the how and why, long term
22. Way forward or dead end?
M&E of single-loop learning is doable and fine for low-hanging fruit and non-habitual change.
M&E of double-loop learning is much more difficult, but more relevant to our aims…? We want to focus on:
• Interaction
• Participation quality
• Learning by doing and doing by learning
• Aligning
• Iteration
• Can or should one central body do this?
• Or do we need user-generated content? A decentralised, collective, participatory M&E?