Programs can be designed to be more effective at producing positive change in settings that are complex adaptive systems. This presentation describes what we already know about what makes programs more likely to succeed in changing behaviour. Next, it explores the organisational blind spots and the aspects of human nature that prevent us from making better designs. Finally, it shows how evaluators can guide better program design using standard and emerging methods.
1. How to design programs that work
better in complex adaptive systems
Presented at Australasian Evaluation Society
Annual Meeting, Perth, September 2016
Ann Larson, PhD
ann.larson@socialdimensions.com.au
2. Presented as part of a session titled:
Effective proactive evaluation
How can the evidence-base influence the design of
interventions?
John Owen
Ann Larson
Rick Cummings
AES Annual Conference, Perth, 2016
3. Outline
1. Good design principles for creating change in
complex settings derived from evaluation findings
2. Reasons why so many designs do not incorporate
complexity-sensitive design features
3. Examples of good and bad strategies to incorporate
into designs
4. Last word on the role of evaluators
4. What works in creating positive
change in complex settings?
Evaluations tell us what really happened. The accumulation of
evaluation findings amounts to an evidence base that is more sensitive to
context than research findings.
5. • Obtain flexible, long-term funding – because change is not linear,
continuous or predictable
• Situate new behaviour in relevant history and in the salience of priorities
and concerns – because cultures and organisations remain
committed to their ways of working
• Build coalitions around a vision for change – because external
shocks will require new partners to support long term change
• Understand different actors’ motivations for behaviour change:
introduce accountability and incentives
• Start small, be flexible and experiment before attempting wide-
scale change
• Balance the need for fidelity with opportunities for lots of local
initiative to promote genuine institutionalisation
• Monitor, review and act in a timely manner to be adaptive
7. Organisational blind spots make these factors
difficult to integrate into designs
• Command and control culture of central planning
• Structural limitations of processing and responding
to large amounts of data with nuanced implications
• Epistemologies, especially in the health sector
8. Designing interventions to change
systems is more difficult when there
are profound distances between the
designer and the intended
beneficiaries – whether that distance
is geographic, cultural, economic,
religious or linguistic.
Evaluation methods can ‘translate’
beneficiaries’ voices so that decision
makers can understand them
And human nature is
also a barrier
10. • Careful planning does not reduce the likelihood of
programs encountering unexpected obstacles and
opportunities
• Reliance on monitoring and evaluation plans with
high-level annual reviews to guide program
implementation and oversight does not facilitate
timely adaptation when programs are not working
well
• Emphasis on celebrating success rather than learning
from failure makes it hard to recognise what needs
to be changed
• Use of short time frames and rigid budgets to reduce
risk actually makes it more difficult to achieve an
outcome
11. And examples of good design
trends that use evaluation skills and
knowledge
12. Invest heavily in regular
review, such as the reviews
employed by TAF/DFAT and
USAID’s active monitoring
… but this may require a large investment of
time and may be overly structured for
simple projects or projects that do not
usually encounter problems
13. Determine where the bottlenecks for
adoption are and delegate
responsibility at that level, giving lots of
local autonomy, coaching and fostering
self-organisation
May require tailoring approaches
for different areas or different
types of facilities
14. Learn from good pilots and use
the information to expand, while
continuing to provide necessary
support
One such example is on the next
two slides …
17. Payment by results gives
implementing agencies and communities
the responsibility to find their own
solutions by offering people incentives to
do what they want to do. This gives
implementers an incentive to
experiment until they find a successful
strategy.
Only works when there is untapped capacity and a meaningful
outcome to be achieved
Also requires investment in verification
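A minimal sketch of the payment logic may help. The function, numbers and verification rule below are invented for illustration (Python) and not drawn from any particular scheme: payment by results typically pays a fixed price per independently verified result, which is why verification needs its own investment.

def pbr_payment(reported_results, verified_results, price_per_result, budget_cap):
    """Pay only for results confirmed by independent verification,
    up to an agreed budget cap."""
    payable = min(reported_results, verified_results)
    return min(payable * price_per_result, budget_cap)

# Illustrative numbers only: the implementer reports 1,200 results,
# verification confirms 1,050, at $40 per verified result.
print(pbr_payment(1200, 1050, price_per_result=40, budget_cap=50_000))
# -> 42000; the 150 unverified results earn nothing

The gap between reported and verified results is where both the incentive to experiment and the temptation to over-report sit, hence the cost of verification.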
18. Modelling of likely scenarios
during design and at critical
junctures using:
• Human-centred design or
• Agent-based modelling or
• Complex system modelling
All approaches need insights from
evaluation.
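To make agent-based modelling concrete, here is a minimal sketch in Python. Everything in it – the peer network, the adoption threshold, the parameter values – is a hypothetical illustration, not a model of any particular program. Each agent adopts a new behaviour once enough of its peers have, a standard threshold model that shows why change in complex settings can stall or tip suddenly rather than grow linearly.

import random

def run_adoption_model(n_agents=200, n_peers=6, threshold=0.3,
                       seed_fraction=0.05, steps=30, rng_seed=1):
    """Threshold model: an agent adopts the new behaviour once the
    share of its peers who have already adopted reaches the threshold."""
    rng = random.Random(rng_seed)
    # Each agent is linked to a random set of peers (a crude social network).
    peers = [rng.sample(range(n_agents), n_peers) for _ in range(n_agents)]
    adopted = [rng.random() < seed_fraction for _ in range(n_agents)]
    history = []
    for _ in range(steps):
        nxt = adopted[:]
        for a in range(n_agents):
            if not adopted[a]:
                if sum(adopted[p] for p in peers[a]) / n_peers >= threshold:
                    nxt[a] = True
        adopted = nxt
        history.append(sum(adopted) / n_agents)
    return history

if __name__ == "__main__":
    for step, share in enumerate(run_adoption_model(), start=1):
        print(f"step {step:2d}: {share:.0%} of agents have adopted")

Re-running with slightly different thresholds or seed fractions is the point of the exercise: small changes in assumptions can flip the outcome between near-zero and near-universal adoption.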
19. A simpler approach is to model path
dependency, looking at the conditions
necessary for the design to work
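As a hedged sketch of that simpler approach (Python; the condition names and probabilities are invented, and treating the conditions as independent is a deliberate simplification): list the conditions that must all hold for the design to work and multiply them through.

# Hypothetical necessary conditions for a design to work, each with an
# assumed probability of holding in the implementation context.
conditions = {
    "funding is released on time": 0.90,
    "local staff are trained and retained": 0.80,
    "the target group perceives a benefit": 0.70,
    "supplies remain available": 0.85,
    "political support is sustained": 0.75,
}

p_path_holds = 1.0
for condition, p in conditions.items():
    p_path_holds *= p  # every condition is necessary, so multiply
    print(f"after '{condition}': {p_path_holds:.0%}")

print(f"\nChance the whole path holds: {p_path_holds:.0%}")

Five individually likely conditions leave roughly a one-in-three chance that the whole path holds, which is why a design should name these conditions and monitor them rather than assume them.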
20. Last words about the role of evaluators
They sentenced me to twenty years of boredom
For trying to change the system from within
Leonard Cohen
Evaluators do not need to become
implementers internal to an
organisation to promote designs
that are responsive to complex
contexts
As external actors, we can point to
evidence that flexible, adaptive
approaches are more likely to
achieve sustainable change for the
better.