Finding out what could go wrong before it does – Modelling Risk and Uncertainty
Slide 1
Finding out what could go wrong before it does
– Modelling Risk and Uncertainty
Bruce Edmonds
Centre for Policy Modelling
Manchester Metropolitan University
Slide 2
Classic Policy Modelling
Essential steps:
1. Decide on KPIs of policy success
2. List candidate policies
3. Predict impact of policies: cost and KPIs
4. Choose best policy
Sometimes this is embedded within a repeated
cycle of:
a) Decide on a policy (using steps 2-4 above)
b) Implement it
c) Evaluate the policy
Slide 3
Statistical Models
Approach:
1. Regress KPIs on known inputs
2. Choose inputs that maximise KPIs
3. Hence choose the policy that might most closely implement
those inputs
• Assumes a generic fixed relationship – average success
• Straightforward to do
• Requires enough data linking KPIs and inputs
• Candidate policies and regressed inputs may not be obviously relatable
• Not customisable to particular situations
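To make steps 1-3 concrete, here is a minimal sketch of regression-based policy choice in Python (all data, variable names and the candidate-input grid are invented for illustration; the slides do not prescribe an implementation):

```python
import numpy as np

# Hypothetical past episodes: columns are policy inputs
# (e.g. spend, enforcement level); kpi is the observed KPI.
X = np.array([[1.0, 0.2],
              [2.0, 0.5],
              [3.0, 0.1],
              [4.0, 0.9]])
kpi = np.array([1.1, 2.0, 2.6, 4.3])

# Step 1: regress the KPI on the known inputs (with an intercept).
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, kpi, rcond=None)

# Step 2: among feasible candidate input settings, pick those that
# maximise the predicted KPI under the fitted (fixed, generic) model.
candidates = np.array([[2.0, 0.2],
                       [3.0, 0.8],
                       [4.0, 0.5]])
preds = np.column_stack([np.ones(len(candidates)), candidates]) @ coef
print("inputs with highest predicted KPI:", candidates[np.argmax(preds)])
```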
Slide 4
Micro-Simulation Models
Approach:
1. Divide up the population into areas/groups
2. Choose simple statistical or other model for reaction
3. For each area/group regress/adjust model for their
own data
4. Maybe add some flows between areas/groups
5. Aggregate over areas/groups for overall assessment
• Requires detailed data for each area/group
• Good for heterogeneity of groups/areas
• Does not work so well when lots of interaction
between groups
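A minimal sketch of the per-group fit-and-aggregate structure (invented numbers and a deliberately simple linear response per group; flows between groups, step 4, are omitted):

```python
import numpy as np

# Each group's model is fitted to its own data (step 3), then the
# results are aggregated over groups for an overall assessment (step 5).
groups = {
    "urban": (np.array([1.0, 2.0, 3.0]), np.array([0.9, 2.1, 2.9])),
    "rural": (np.array([1.0, 2.0, 3.0]), np.array([0.5, 0.9, 1.6])),
}

policy_input = 2.5
total = 0.0
for name, (x, y) in groups.items():
    slope = x @ y / (x @ x)      # fit a per-group response slope
    contribution = slope * policy_input
    total += contribution        # aggregate over areas/groups
    print(f"{name}: {contribution:.2f}")
print(f"aggregate: {total:.2f}")
```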
Slide 5
Computable General Equilibrium Models
Approach:
1. Construct a simplified economic model of situation
with and without chosen policy
2. Calculate equilibrium without policy
3. Calculate equilibrium with policy
4. Compare the two equilibria and see if this represents
an improvement and how much of one
• Only simple models are calculable
• Uses strong economic assumptions
• The equilibrium is only one restricted and long-
term aspect of the outcomes
• Does not have a good predictive record
Slide 6
System Dynamics Models
Approach:
1. Build relationship between key variables using flow
and storage approach (maybe in a participatory way)
2. Add in equations and delays
3. Run simulated system with probable inputs
4. Evaluate the results somehow
• Good for dynamics with delayed feedback
• Does not deal with heterogeneity of actors
• ‘Touchy-feely’ judgment of outcomes
• Can look more real than the evidence warrants
• Not good at predicting outcome values
Slide 7
Simulation Models
Approach:
1. Build a simulation reflecting how parts of system relate
2. Adjust parameters reflecting particular situation/data
3. Check simulation by running it for a known situation where outcomes and data are known (validation)
4. Produce different variations of simulation to reflect each
policy to be tested
5. Run each variation many times and measure the outcomes
• Simulation only as strong as knowledge of system
• Might have many unknown parameters
• Never enough data to sufficiently validate
• Policies can be directly implemented
• Outcomes assessed in many different ways
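To illustrate steps 4-5, a minimal sketch of running each policy variation many times (the simulate() stand-in and all values are invented):

```python
import random
import statistics

def simulate(policy, seed):
    """Stand-in for a full simulation run, returning one outcome
    measure. A real model would evolve many interacting parts;
    this noisy toy response only shows the experiment structure."""
    rng = random.Random(seed)
    return policy["quota"] * (1.0 + rng.gauss(0, 0.3))

# Step 4: one simulation variation per candidate policy.
policies = {"low quota": {"quota": 10}, "high quota": {"quota": 40}}

# Step 5: run each variation many times and inspect the whole
# distribution of outcomes, not just its average.
for name, policy in policies.items():
    outcomes = [simulate(policy, seed) for seed in range(100)]
    print(name,
          "mean:", round(statistics.mean(outcomes), 1),
          "worst:", round(min(outcomes), 1))
```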
Slide 8
Some modelling tensions
[Diagram: trade-offs between precision (model not vague), generality of scope (works for different cases), lack of error (accuracy of results) and realism (reflects knowledge of processes); stats/regression models, economic models, scenarios and agent-based models each sit at different points in this space, with reality, and what is wanted for policy decisions, marked separately]
Slide 9
Problem 1: System Complexity
• There is no guarantee that a simple model will be adequate to represent complex social/ecological/economic/technical systems
• How the parts and actors interact and react might
be crucial to the outcomes (e.g. financial markets)
• We may not know which parts of the system are
crucial to the outcomes
• We may not fully understand how the parts
interact and react
• System and model are both too complex to fully
explore and understand in time available
Slide 10
Problems 2&3: Error and Uncertainty
• The values of many key parameters might be
unknown or only approximately known
• Data might be patchy and of poor quality
• Tiny changes in key factors or parameters might
have huge consequences for outcomes (the
‘butterfly effect’)
• Levels of error may be amplified by the system
(as in automated trading in financial markets)
• There may be processes that we do not even
know are important to the outcomes
Slide 11
Problem 4: Structural Change
• System evolves due to internal dynamics
• For example, innovations might occur
• System might have several very different
behavioural ‘phases’ (e.g. bull and bear markets)
which it shifts between
• The rules of the system might change rapidly…
• ...and well before any equilibrium is reached
• Rule-change might be linked to system state
• Different parts of the system might change in
different ways
Slide 12
Prediction
• Given all these difficulties, for many situations prediction is not only infeasible…
• ...but suggesting you can predict is dishonest
• and may give false comfort (e.g. Newfoundland
Cod Fisheries Collapse or 2007/8 financial crash)
• Most techniques only work in cases where:
1. There is lots of experience/data over many previous
episodes/cases
2. Nothing much changes (tomorrow similar to today)
• Often even approximate or probabilistic
prediction is infeasible and unhelpful
Slide 13
The key question….
How does one manage a system or
situation that is too complex to predict?
Slide 14
Lessons from robotics: Part I
Robotics in the 70s and 80s tried to (iteratively):
1. build a map of its situation (i.e. a predictive
model)
2. use this model to plan its best action
3. then try to do this action
4. check it was doing OK, then go back to (1)
But this did not work in any realistic situation:
• It was far too slow to react to its world
• to make useable predictions it had to make too
many dodgy assumptions about its world
Slide 15
Lessons from robotics: Part II
Rodney Brooks (1991) Intelligence without
representation. Artificial Intelligence, 47:139–160
A different approach:
1. Sense the world in rich fast ways
2. React to it quickly
3. Use a variety of levels of reaction
a. low-level, simple reactive strategies
b. switched by progressively higher-level ones
Do not try to predict the world, but react to it quickly
This worked much better.
Slide 16
Lessons from Weather Forecasting
• Taking measurements at a few places and trying to predict what will happen using simple, average-based models does not work well
• Understanding the weather improved with very
detailed simulations fed by rich and
comprehensive sensing of the system
• Even then, forecasters recognize that there is more than one possible outcome (using ensembles of specific outcomes)
• If these indicate a risk of severe weather they
issue a warning so mitigating measures can be
taken
Slide 17
Lessons from Radiation Levels
• The human body is a very complex system
• It has long been known that too much radiation
can cause severe illness or death in humans
• In the 30s & 40s it was assumed there was a
“safe” level of radiation
• However it was later discovered that any level of
radiation carried a risk of illness
• Including naturally occurring levels
• Although an increase in radiation might not seem
to affect many people, it did result in more
illnesses in some
Slide 18
Socio-Ecological Systems
• Are the combination of human society embedded
within an ecological system (SES)
• Many social and ecological systems are far too
complex to predict
• Their combination is doubly complex
• E.g. fisheries, deforestation, species extinctions
• Yet we still basically use the 1970s robotics
“predict and plan” approach to these…
• …as if we can plan optimum policies by
estimating/projecting future impact
Slide 19
Why simple models won’t work
• Simpler models do not necessarily get things
“roughly” right
• Simpler models are not more general
• They can also be very deceptive – especially with
regards to complex ways things can go wrong
• In complex systems the detailed interactions can
take outcomes ‘far from equilibrium’ and far from
average behaviour
• Sometimes, with complex systems, a simple
model that relies on strong assumptions can be
far worse than having no model at all
Slide 20
A Cautionary Tale
• On the 2nd July 1992, Canada’s fisheries minister placed a moratorium on all cod fishing off Newfoundland. That day 30,000 people lost their jobs.
• Scientists and the fisheries department throughout much of the 1980s estimated a 15% annual rate of growth in the stock (figures that were consistently disputed by inshore fishermen).
• The subsequent Harris Report (1992) said (among
many other things) that: “..scientists, lulled by false
data signals and… overconfident of the validity of
their predictions, failed to recognize the statistical
inadequacies in … [their] model[s] and failed to …
recognize the high risk involved with state-of-stock
advice based on … unreliable data series.”
Slide 21
What had gone wrong?
• “… the idea of a strongly rebuilding Northern cod
stock that was so powerful that it …[was]... read
back… through analytical models built upon
necessary but hypothetical assumptions about
population and ecosystem dynamics. Further, those
models required considerable subjective judgement
as to the choice of weighting of the input variables”
(Finlayson 1994, p.13)
• Finlayson concluded that the social dynamics
between scientists and managers were at play
• Scientists adapting to the wishes and worldview of
managers, managers gaining confidence in their
approach from the apparent support of science
Slide 22
Example 1: Fishing!
• …is a dynamic, spatial, individual-based ecological model
that has some of the complexity, adaptability and fragility
of observed ecological systems with emergent outcomes
• It evolves complex local food webs and endogenous shocks from invasive species, and is adaptive but unpredictable as to the eventual outcomes
• Into this the impact of humans can be imposed or even
agents representing humans ‘injected’ into the simulation
• The outcomes can be then analysed at a variety of levels
over long time scales, and under different scenarios
• Paper: Edmonds, B. (in press) A Socio-Ecological Test Bed. Ecological Complexity.
• Full details and code at: http://openabm.org/model/4204
Slide 23
In this version
• Plants and higher order entities (fish)
distinguished (no photosynthesizing herbivores!)
• First a rich competing plant ecology is evolved
• Then single fish are injected until fish take hold and evolve, until there is an ecology of many fish species; this is run for a while to allow ‘transients’ to pass
• This state is then frozen and saved
• From this point different ‘fishing’ policies are implemented and the simulations then run
• with the outcomes then analysed
Slide 24
The Model
• A wrapped 2D grid of well-mixed patches, each with:
– energy (transient)
– a bit string of characteristics
• Organisms are represented individually, each with its own characteristics, including:
– a bit string of characteristics
– energy
– position
[Diagram: each individual is represented separately within a well-mixed patch; there is a slow random rate of migration between patches]
Slide 25
Model sequence each simulation tick
1. Input energy. Energy is equally divided between patches.
2. Death. A life tax is subtracted, some individuals die, and age is incremented.
3. Initial seeding. Until a viable population is established, random new individuals are added.
4. Energy extraction from patch. The patch’s energy is divided among the individuals there whose bit-strings score positively when evaluated against the patch.
5. Predation. Each individual is randomly paired with a number of others on the patch; if it dominates them it gets a % of their energy and the other is removed.
6. Maximum store. Energy above a maximum level is discarded.
7. Birth. Those with energy > “reproduce-level” give birth to a new entity with the same bit-string as themselves, with a probability of mutation. The child has an energy of 1, taken from the parent.
8. Migration. Individuals randomly move to one of the 4 neighbouring patches.
9. Statistics. Various statistics are calculated.
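A minimal Python sketch of this tick sequence (this is not the released code; see the openabm.org link on Slide 22 for the full model. The parameter values, bit-string scoring and dominance test here are simplified stand-ins):

```python
import random

W, H, BITS = 10, 10, 8
INPUT_ENERGY, LIFE_TAX = 200.0, 0.1      # invented parameter values
REPRODUCE_LEVEL, MAX_STORE = 2.0, 5.0
MUT_PROB, MIGRATE_PROB = 0.05, 0.05

# Patches: a wrapped 2D grid, each with a characteristic bit-string.
patches = {(x, y): [random.randint(0, 1) for _ in range(BITS)]
           for x in range(W) for y in range(H)}
world = []  # individuals, each with its own bit-string, energy, position, age

def new_ind(pos, bits=None):
    return {"bits": bits or [random.randint(0, 1) for _ in range(BITS)],
            "pos": pos, "energy": 1.0, "age": 0}

def score(bits, other):
    # Stand-in for evaluating one bit-string against another.
    return sum(1 if a == b else -1 for a, b in zip(bits, other))

def tick():
    global world
    # 2. Death: life tax subtracted, some die, age incremented.
    for o in world:
        o["energy"] -= LIFE_TAX
        o["age"] += 1
    world = [o for o in world if o["energy"] > 0]
    # 3. Initial seeding until a viable population is established.
    if len(world) < 5:
        world.append(new_ind((random.randrange(W), random.randrange(H))))
    # 1 & 4. Input energy equally divided between patches, extracted by
    # the individuals there that score positively against the patch.
    share = INPUT_ENERGY / (W * H)
    for pos, pbits in patches.items():
        here = [o for o in world
                if o["pos"] == pos and score(o["bits"], pbits) > 0]
        for o in here:  # 6. maximum store enforced on the way in
            o["energy"] = min(MAX_STORE, o["energy"] + share / len(here))
    # 5. Predation: random pairing on the patch; the dominant individual
    # gets a share of the other's energy and the other is removed.
    eaten = set()
    for o in world:
        if id(o) in eaten:
            continue
        prey = [p for p in world if p is not o and id(p) not in eaten
                and p["pos"] == o["pos"]]
        if prey:
            p = random.choice(prey)
            if score(o["bits"], p["bits"]) > 0:  # crude dominance test
                o["energy"] = min(MAX_STORE, o["energy"] + 0.5 * p["energy"])
                eaten.add(id(p))
    world = [o for o in world if id(o) not in eaten]
    # 7. Birth: energy above "reproduce-level" -> mutated copy; the
    # child's energy of 1 is taken from the parent.
    for o in list(world):
        if o["energy"] > REPRODUCE_LEVEL:
            bits = [1 - b if random.random() < MUT_PROB else b
                    for b in o["bits"]]
            o["energy"] -= 1.0
            world.append(new_ind(o["pos"], bits))
    # 8. Migration: occasionally move to one of the 4 neighbours (wrapped).
    for o in world:
        if random.random() < MIGRATE_PROB:
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            o["pos"] = ((o["pos"][0] + dx) % W, (o["pos"][1] + dy) % H)
    return len(world)  # 9. statistics (just the population size here)

for t in range(100):
    population = tick()
print("population after 100 ticks:", population)
```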
Slide 26
First, evolve a rich mixed ecology
Evolve and save a suitably complex ecology with a balance of trophic layers.
[Chart (log population scale): the first successful plant, then herbivores appear, then carnivores appear; the simulation is “frozen” at the final state]
Slide 27
This version is designed to test possible outcomes of fishing policies
• Complex aquatic plant ecology evolved
• Herbivore fish injected into ecology and whole
system further evolved
• Once a complex ecology with higher-order
predators then system is fixed as starting point
• Different extraction (i.e. fishing) policies can be
enacted on top of this system:
– How many fish are extracted each time (either absolute numbers or as a proportion of existing numbers)
– Whether extraction is uniformly at random or patch-by-patch
– How many ‘reserves’ are kept
– Whether there is a minimum stock level below which no fishing occurs
Slide 28
Demonstration of the basic model
Slide 29
Typical Harvest Shape (last 100 ticks) for different catch levels over 20 different runs
[Chart: proportion of maximum harvest vs. catch level (per tick)]
Slide 30
Decide in your groups
1. Amount of fish extraction (quota) per tick, either:
• Absolute number (0-200)
• Percentage of existing stock (0-100%)
2. The way fish is extracted, either:
• Randomly over whole grid
• Random patch chosen and fished, then next until
quota for tick is reached
3. How many patches will be kept as reserves (not
fished)
4. When to start fishing (0-999 ticks)
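For concreteness, the four choices above could be encoded as a policy configuration along these lines (an illustrative sketch; the field names are invented, not those of the released model):

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class FishingPolicy:
    quota: float                                      # 1. amount per tick:
    quota_kind: Literal["absolute", "percent"]        #    0-200 fish, or 0-100%
    extraction: Literal["uniform", "patch-by-patch"]  # 2. how fish are taken
    reserves: int                                     # 3. patches kept unfished
    start_tick: int                                   # 4. when fishing starts (0-999)

policy = FishingPolicy(quota=25, quota_kind="absolute",
                       extraction="uniform", reserves=4, start_tick=200)
```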
Slide 31
Total Extinction Prob. & Av. Total Harvest (last 100 ticks) for different catch levels
[Chart: proportion of maximum vs. catch level (per tick)]
Slide 32
Num Fish (all species, 20 runs) – catch level 25
[Chart: total fish numbers over ticks 0-992; y-axis 0-6000]
Slide 33
Num Fish (all species, 20 runs) – catch level 35
[Chart: total fish numbers over ticks 0-992; y-axis 0-6000; annotation: “Catch target=30”]
Slide 34
Num Fish (all species, 20 runs) – catch level 50
[Chart: total fish numbers over ticks 0-992; y-axis 0-6000]
Slide 35
Average (over 20 runs) of fish at end of 5000 simulation ticks
[Chart: number of fish for different catch levels; x-axis 0-100, y-axis 0-5000]
Slide 36
Average (over 20 runs) of numbers of fish species at end of 5000 simulation ticks
[Chart: number of fish species with catch level; x-axis 0-100, y-axis 0-140]
Slide 37
Average Number of Species vs. Catch Level (from a different starting ecology)
[Chart: number of fish species; x-axis 0-40, y-axis 0-14]
Slide 38
Average Number of Species, Catch=20
[Chart: average number of species over time 0-1000, comparing “by patches” and “uniform” extraction; y-axis 0-35]
Slide 39
Average Number of Species, Catch=30
[Chart: average number of species over time 0-1000, comparing “by patches” and “uniform” extraction; y-axis 0-35]
Slide 40
Average Number of Species, Catch=40
[Chart: average number of species over time 0-1000, comparing “by patches” and “uniform” extraction; y-axis 0-35]
Slide 41
A risk-analysis approach
1. Give up on estimating future impact or “safe”
levels of exploitation
2. Make simulation models that include more of the
observed complication and complex interactions
3. Run these lots of times with various scenarios to
discover some of the ways in which things can
go surprisingly wrong (or surprisingly right)
4. Put in place sensors/measures that would give
us the earliest possible warning that these might
be occurring in real life
5. React quickly if these warnings emerge
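A minimal sketch of steps 3-5 (a stand-in depletion process with invented thresholds, not the Fishing! model itself):

```python
import random
import statistics

def run_scenario(quota, seed, ticks=200):
    # Stand-in for a full simulation run: a noisy stock-depletion
    # process, used only to show the structure of the experiment.
    rng = random.Random(seed)
    stock, path = 1000.0, []
    for _ in range(ticks):
        stock = max(stock + stock * 0.05 * rng.uniform(0.5, 1.5) - quota, 0.0)
        path.append(stock)
    return path

# Step 3: run lots of times under various scenarios, looking for the
# ways things can go surprisingly wrong (here: collapse to zero stock).
warning_levels = []
for quota in (20, 40, 60):
    runs = [run_scenario(quota, seed) for seed in range(50)]
    collapses = [p for p in runs if p[-1] == 0.0]
    print(f"quota {quota}: {len(collapses)}/50 runs collapsed")
    if collapses:
        # Step 4: a candidate early-warning measure, e.g. the stock level
        # 50 ticks before each collapse, averaged over collapsing runs.
        pre = [p[max(0, p.index(0.0) - 50)] for p in collapses]
        warning_levels.append(statistics.mean(pre))

# Step 5: monitor the real system and react if it crosses the warning level.
if warning_levels:
    print("warn if observed stock falls below:", round(min(warning_levels)))
```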
Slide 42
Example 2: Social Influence and
Domestic Water Demand
• Produced for the Environment Agency/DEFRA
• Part of a bigger project to predict future domestic
water demand in the UK given some different
future politico-economic scenarios and climate
change
• The rest of the project consisted of detailed statistical models to do the prediction
• This model was to examine the assumptions and
look at the envelope of possibilities
• Joint work with Olivier Barthelemy and Scott Moss
Slide 43
Monthly Water Consumption
[Histogram of observed REL_CHNG: Std. Dev = .17, Mean = .01, N = 81]
Slide 44
Relative Change in Monthly Consumption
[Time series of observed REL_CHNG, Jun 1994 - Feb 2001; y-axis -.6 to 1.0]
Slide 45
Purpose of the SI&DWD Model
• Not long-term prediction
• But to begin to understand the relationship of
socially-influenced consumer behaviour to
patterns of water demand
• By producing a representational agent model
amenable to fine-grained criticism
• And hence to suggest possible interactions
• So that these can be investigated/confirmed
• And this loop iterated
Slide 46
Model Structure - Overall Structure
[Diagram: Households (activity, frequency, volume) and a Policy Agent, together with the Ground (temperature, rainfall, sunshine), produce Aggregate Demand]
Slide 47
Model Structure - Microcomponents
• Each household has a variable number of micro-components (power showers etc.): bath, shower, power_shower, hand_dishwashing, clothes_hand_washing, washing_machine, toilets, sprinkler, other_garden_watering
• Actions are expressed by the frequency and volume of use of each microcomponent
• The AFV (activity/frequency/volume) distribution in the model was calibrated using data from the Three Valleys
Slide 48
Model Structure - Household
Distribution
• Households distributed randomly on a grid
• Each household can copy from a set of neighbours (currently those up to 4 units up, down, left and right from them)
• They decide which is the neighbour most similar
to themselves – this is the one they are most
likely to copy
• Depending on their evaluation of actions they
might adopt that neighbour’s actions
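A minimal sketch of this neighbour-selection rule (the grid layout, similarity measure and probabilistic adoption rule are invented stand-ins; the actual model evaluates actions via endorsements, as described on the following slides):

```python
import random

def similarity(a, b):
    # Stand-in similarity: fraction of matching habit flags.
    return sum(x == y for x, y in zip(a["habits"], b["habits"])) / len(a["habits"])

def neighbours(h, households, reach=4):
    # Those up to `reach` units up, down, left or right on the grid.
    x, y = h["pos"]
    return [o for o in households if o is not h and
            ((o["pos"][0] == x and abs(o["pos"][1] - y) <= reach) or
             (o["pos"][1] == y and abs(o["pos"][0] - x) <= reach))]

def maybe_copy(h, households, rng):
    cands = neighbours(h, households)
    if not cands:
        return
    # The most similar neighbour is the one most likely to be copied.
    best = max(cands, key=lambda o: similarity(h, o))
    if rng.random() < similarity(h, best):  # invented adoption rule
        h["habits"] = list(best["habits"])

rng = random.Random(1)
households = [{"pos": (rng.randrange(20), rng.randrange(20)),
               "habits": [rng.randint(0, 1) for _ in range(8)]}
              for _ in range(100)]
for h in households:
    maybe_copy(h, households, rng)
```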
Slide 49
An Example Social Structure
[Diagram: an example network of households that are globally biased, locally biased, or self biased]
Slide 50
Household Behaviour - Endorsements
• Action Endorsements: recentAction, neighbourhoodSourced, selfSourced, globallySourced, newAppliance, bestEndorsedNeighbourSourced
• 3 weights moderate the effective strengths of the neighbourhoodSourced, selfSourced and globallySourced endorsements, and hence the bias of households
• Households can thus be characterised as 3 types, influenced in different ways: global-, neighbourhood-, and self-sourced, depending on the dominant weight
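A minimal sketch of endorsement-weighted action choice (illustrative weights and action names; a generic endorsement-summing rule, not necessarily the exact scheme in the model):

```python
# Each action accumulates endorsement tags; a household's weights for
# the source types determine which action scores highest for it.
WEIGHTS = {"neighbourhoodSourced": 2.0,   # a neighbourhood-biased household
           "selfSourced": 1.0,
           "globallySourced": 0.5,
           "recentAction": 0.5,
           "newAppliance": 0.5,
           "bestEndorsedNeighbourSourced": 1.0}

def score(endorsements):
    # Sum the weights of all endorsements attached to the action.
    return sum(WEIGHTS.get(e, 0.0) for e in endorsements)

actions = {
    "water_garden_daily": ["selfSourced", "recentAction"],
    "water_garden_weekly": ["neighbourhoodSourced", "neighbourhoodSourced",
                            "bestEndorsedNeighbourSourced"],
}
best = max(actions, key=lambda a: score(actions[a]))
print("adopted:", best)  # -> water_garden_weekly for this household
```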
Slide 51
History of a particular action
from one agent’s point of view
Month 1: used, endorsed as self sourced
Month 2: endorsed as recent (from personal use) and neighbour
sourced (used by agent 27) and self sourced (remembered)
Month 3: endorsed as recent (from personal use) and neighbour
sourced (agent 27 in month 2).
Month 4: endorsed as neighbour sourced twice, used by agents 26 and
27 in month 3, also recent
Month 5: endorsed as neighbour sourced (agent 26 in month 4), also
recent
Month 6: endorsed as neighbour sourced (agent 26 in month 5)
Month 7: replaced by action 8472 (appeared in month 5 as neighbour
sourced, now endorsed 4 times, including by the most alike
neighbour – agent 50)
Slide 52
Policy Agent - Behaviour
• After the first month of dry conditions, suggests
AFV actions to all households
• These actions are then included in the list of those
considered by the households
• If the household’s weights predispose it, it may
decide to adopt these actions
• Some other neighbours might imitate these
actions etc.
Slide 53
Number of consecutive dry months in historical scenario
[Chart: consecutive dry months, Jan 1973 - Jan 1997; y-axis 0-9]
Slide 54
Simulated Monthly Water Consumption
[Histogram of simulated REL_CHNG: Std. Dev = .01, Mean = -.000, N = 325]
Slide 55
Monthly Water Consumption (again)
[Histogram of observed REL_CHNG, repeated from Slide 43: Std. Dev = .17, Mean = .01, N = 81]
Slide 56
Simulated Change in Monthly Consumption
[Time series of simulated REL_CHNG, Oct 1970 - Sep 1997; y-axis -.06 to .10]
Slide 57
Relative Change in Monthly Consumption (again)
[Time series of observed REL_CHNG, repeated from Slide 44: Jun 1994 - Feb 2001; y-axis -.6 to 1.0]
Slide 58
30% Neigh. biased, historical scenario, historical innov. dates
[Chart: aggregate demand series scaled so 1973=100, Jan 1973 - Jan 1997; y-axis 0-200]
Slide 59
80% Neigh. biased, historical scenario, historical innov. dates
[Chart: aggregate demand series scaled so 1973=100, Jan 1973 - Jan 1997; y-axis 0-200]
Slide 60
80% Neigh. biased, medium-high scenario, historical innov. dates
[Chart: aggregate demand series scaled so 1973=100, Jan 1973 - Jan 1997; y-axis 0-200]
Slide 61
What did the model tell us?
• That it is possible that social processes:
– can cause a high and unpredictable variety in patterns
of demand
– can ‘lock-in’ behavioural patterns and partially ‘insulate’ them from outside influence (droughts only occasionally had a permanent effect on patterns of consumption)
• and that the availability of new products could dominate effects from changing consumption habits
Slide 62
Conclusions of Example 2
• ABM can be used to construct fairly rich computational descriptions of socially-related phenomena, which can be used
– to replicate systems that analytic techniques can’t deal with
– to explore some of the possibilities
• especially those unpredictable but non-random possibilities caused by human behaviour
– as part of an iterative cycle of detailed criticism
• validatable by both data and expert opinion
– to inform and be informed by good observation
Slide 63
A central dilemma – what to trust?
Intuitions
A complex simulation
A policy maker
Slide 64
But the Modeller to Policy Actor interface is not easy
• Analysts/modellers and policy actors have
different: goals, language, methods, habits…
• Policy Actors will often want predictions –
certainty – even if the analysts know this is
infeasible
• Analysts will know how difficult the situation is to
understand and how much is unknown, and will
want to communicate their caveats (which often
get lost in the policy process)
• So discussion between them does not necessarily
go easily
Slide 65
Many views of a model (I)
- due to syntactic complexity
• The computational ‘distance’ between specification and outcomes means that there are (at least) two very different views of a simulation (consequences of complexity)
[Diagram: Specification → Simulation → Representation of Outcomes]
Slide 66
Many views of a model (II)
- understanding the simulation (consequences of complexity)
[Diagram: Specification → Simulation → Representations of Outcomes (I) and (II), which can be understood via analogies (Analogy 1, Analogy 2), theories (Theory 1, Theory 2) and summaries (Summary 1, Summary 2)]
Slide 67
Four Meanings (of the PM)
Research World
1. The researcher’s idea/intention for the PM
2. The fit of the PM with the evidence/data
The idea ↔ validation relation is extensively discussed within the research world
Policy World
3. The usefulness of the PM for decisions
4. The communicable story of the PM
The goal ↔ interpretation relation is extensively discussed within the policy world
Slide 68
Two Worlds
Research (the Modeller):
• Ultimate goal is agreement with the observed (truth)
• The modeller also has an idea of what the model is and how it works
Policy (the Policy Advisor):
• Ultimate goal is in the final outcomes (usefulness)
• Decisions justified by a communicable causal story
Policy Model:
• Labels/documentation may be different from all of the above!
Slide 69
Joining the Two Worlds
Empirical:
• Ultimate goal is agreement with the observed (truth)
• The modeller also has an idea of what the model is and how it works
Instrumental:
• Ultimate goal is in the final outcomes (usefulness)
• Decisions justified by a communicable causal story
Model:
• Labels/documentation may be different from all of the above!
[Diagram: a tighter loop between the empirical and instrumental sides, e.g. via participatory modelling]
Slide 70
Conclusions
• Complex systems cannot be relied upon to behave in regular ways
• Often averages, equilibria etc. are not very informative
• Future levels cannot meaningfully be predicted
• Simpler models may well make unreliable assumptions and not be representative
• Rather, complex models can be part of a risk analysis: identifying some of the ways in which things can go wrong, implementing measures to watch for these, then being able to react quickly to them (‘driving policy’)
• A tight measure-react loop can be essential for driving policy – modelling might help in this – but this is hard!
Slide 71
The End!
Bruce Edmonds:
http://bruce.edmonds.name
These Slides: http://slideshare.net/bruceedmonds
Centre for Policy Modelling: http://cfpm.org
Slide 72
Some Pitfalls in Model Construction
Pitfalls Part 1
Slide 73
Modelling Assumptions
• All models are built on assumptions, but…
• They have different origins and reliability, e.g.:
– Empirical evidence
– Other well-defined theory
– Expert Opinion
– Common-sense
– Tradition
– Stuff we had to assume to make the model possible
• Choosing assumptions is part of the art of
simulation but which assumptions are used
should be transparent and one should be honest
about their reliability – plausibility is not enough!
Slide 74
Theoretical Spectacles
• Our conceptions and models constrain:
1. how we look for evidence (e.g. where and what kinds)
2. what kind of models we develop
3. how we evaluate any results
• This is Kuhn’s “Theoretical Spectacles” (1962)
– e.g. continental drift
• This is MUCH stronger for a complex simulation
we have immersed ourselves in
• Try to remember that just because it is useful to think of the world through our models, this does not make them valid or reliable
Slide 75
Over-Simplified Models
• Although simple models have many pragmatic
advantages (easier to check, understand etc.)…
• If we have missed out key elements of what is being
modelled it might be completely wrong!
• Playing with simple models to inform formal and
intuitive understanding is an OK scientific practice
• …but it can be dangerous when informing policy
• Simple does not mean it is roughly correct, or more
general or gives us useful intuitions
• Need to accept that many modelling tasks requested
of us by policy makers are not wise to do with
restricted amounts of time/data/resources
Slide 76
Underestimating model limitations
• All models have limitations
• They are only good for certain things: a model
that explains well might not predict well
• They may well fail when applied in a different context than the one they were developed in
• Policy actors often do not want to know about
limitations and caveats
• Not only do we have to be 100% honest about
these limitations, but we also have to ensure that
these limitations are communicated with the
model
Slide 77
Not checking & testing a model
thoroughly
• Doh!
• Sometimes there is not a clear demarcation
between an exploratory phase of model
development and its application to serious
questions (whose answers will impact on others)
• Sometimes an answer is demanded before thorough testing and checking can be done – “It’s OK, I just want an approximate answer” :-/
• Sometimes researchers are not honest
• Depends on the potential harm if the model is
relied on (at all) and turns out to be wrong
Slide 78
Some Pitfalls in Model Application
Pitfalls Part 2
Slide 79
Insufficiently Validated Models
• One can not rely on a model until it has been
rigorously checked and tested against reality
• Plausibility is nowhere NEAR enough
• This needs to be on more than one case
• It’s better if this is done independently
• You can not validate a model using one set of
settings/cases then rely on it in another
• Validation usually takes a long time
• Iterated development and validation over many
cycles is better than one-off models (for policy)
Slide 80
Promising too much
• Modellers are in a position to see the potential of
their work, and so can tantalise others by
suggesting possible/future uses (e.g. in the
conclusions of papers or grant applications)
• They are tempted to suggest they can ‘predict’,
‘evaluate the impact of alternative polices’ etc.
• Especially with complex situations (that ABM is
useful for) this is simply deceptive
• ‘Giving a prediction to a policy maker is like giving
a sharp knife to a child’
Slide 81
The inherent plausibility of ABMs
• Due to the way ABMs map onto reality in a common-sense manner (e.g. people ↔ agents)…
• …visualisations of what is happening can be readily interpreted by non-modellers
• and hence given much greater credence than
they warrant (i.e. the extent of their validation)
• It is thus relatively easy to persuade using a good
ABM and visualisation
• Only we know how fragile they are, and need to
be especially careful about suggesting otherwise
Slide 82
Model Spread
• One of the big advantages of formal models is that
they can be passed around to be checked, played
with, extended, used etc.
• However once a model is out there, it might get
used for different purposes than intended
• e.g. the Black-Scholes model of derivative pricing
• Try to ensure a released model is packaged with
documentation that warns of its uses and
limitations
Slide 83
Narrowing the evidential base
• The case of the Newfoundland cod indicates how models can work to constrain the evidence base, thereby limiting decision making
• If a model is considered authoritative, then the
data it uses and produces can sideline other
sources of evidence
• Using a model rather than measuring lots of stuff
is cheap, but with obvious dangers
• Try to ensure models are used to widen the
possibilities considered, rather than limit them
Slide 84
Other/General Pitfalls
Pitfalls Part 3
Slide 85
Confusion over model purpose
• A model is not a picture of reality, but a tool
• A tool has a particular purpose
• A tool good for one purpose is probably not good
for another
• These include: prediction, explanation, as an
analogy, an illustration, a description, for theory
exploration, or for mediating between people
• Modellers should be 100% clear under which
purpose their model is to be judged
• Models need to be justified for each purpose
separately
Slide 86
When models are used out of the
context they were designed for
• Context matters!
• In each context there will be many
conditions/assumptions we are not even aware of
• A model designed in one context may fail for
subtle reasons in another (e.g. different ontology)
• Models generally need re-testing, re-validating
and often re-developing in new contexts
Slide 87
What models cannot reasonably do
• Many questions are beyond the realm of models and modellers, being essentially:
– ethical
– political
– social
– semantic
– symbolic
• Applying models to these (outside the walls of
our academic asylum) can confuse and distract
Slide 88
The uncertainty is too great
• The reliability of outcome values may be too low for the required purpose
• This can be due to data or model reasons
• Radical uncertainty is when it’s not a question of degree, but the situation might fundamentally change or be different from the model
• Error estimation is only valid in the absence of radical uncertainty (which is not the case in almost all ecological, technical or social simulations)
• Just got to be honest about this and not only
present ‘best case’ results
Slide 89
A false sense of security
• If the outcomes of a model give a false sense of certainty then the model can be worse than useless; positively damaging to policy
• Better to err on the side of caution and say there is no good model in this case
• Even if you are optimistic for a particular model
• Distinction here between probabilistic and
possibilistic views
Slide 90
Not more facts, but values!
• Sometimes it is not facts and projections that are
the issue but values
• However good models are, the ‘engineering’
approach to policy (enumerate policies, predict
impact of each, choose best policy) might be
inappropriate
• Modellers caught on the wrong side of history
may be blamed even though they were just doing
the technical parts