Evidence-based policy making…what type of evidence do we need?
Presentation by Mark Petticrew at the conference "Recherche interventionnelle contre le cancer : Réunir chercheurs, décideurs et acteurs de terrain" (Interventional research against cancer: bringing together researchers, decision-makers and practitioners), 17-18 November 2014, BnF, Paris
RI 2014 Conference: Talk by Mark PETTICREW (London School of Hygiene and Tropical Medicine)
1. Evidence-based policy making…what type
of evidence do we need?
Mark Petticrew
Faculty of Public Health and Policy
London School of Hygiene and Tropical Medicine
Improving health worldwide
www.lshtm.ac.uk
2. • We live in a world of evidence-based everything…and
everyone wants to be (seen to be) “evidence-based”
7. • The term “evidence” is problematic, by evidence we
often mean “trials”, and other sorts of evaluations of
policies, and we have often wrung our hands about the
“weak” public health evidence base...
• “There isn’t very much evidence, and what there is,
isn’t very good”
• So we need to increase the “flow” of evaluations of the
health effects of interventions (particularly those
outside the health sector)
8. The inverse evidence law
• The strongest evidence we have is often about risk-factor modification, and we have more, but "weaker", evidence about many of the wider social, economic and environmental determinants of health (including policies)
• E.g. The evidence on modal shift in transport - how to get
people to walk and cycle more...
9. Types of intervention
• “Health promotion” activities (Education campaigns; free
bikes)
• Engineering measures (Bicycle infrastructure; traffic
restraint)
• Financial incentives (voucher/fine to leave car at home)
• Providing alternative services (e.g. A new railway station)
• Complex urban transport policies
10. Study designs used to evaluate interventions to
bring about modal shift (Ogilvie et al., 2007)
Study design                                         N (studies)
Randomised controlled trial (individual-level)       3
Panel survey                                         13
Repeated cross-sectional survey (community-level)    17
Retrospective or after-only survey                   11
Case study / uncertain (city-level)                  20
12. Three public health interventions for
which there is “no good evidence”
• There have been 5 RCTs of the health effects of social
housing investment. They don’t show major significant
effects on health.
• Is social housing “ineffective”, and so should be
withdrawn?
• (I don't think so)
13. Two more interventions for which there
is no trial evidence (but which surely
‘work’)
• Zebra crossings: there are no trials, but there is
direct experiential evidence, and excellent
theory (“common sense”) that if you walk
directly into the traffic you will be knocked
down...there is no “equipoise”
• Gritting pavements. Not worth asking for
“perfect evidence”? However, if the question is
about the comparative effectiveness (different
“doses” of gritting, or salting vs gritting, or CBA
of gritting vs public warnings) then this may be
worth evaluating (though may not be ethical)
15. The myth of the single
“killer” study
• In public health there is rarely one single, killer study which
tells us definitively what to do (or stop doing)
• Good evidence-informed decisions draw on the wider range of prior evidence (including observational evidence) and theory, as well as what is known about causal mechanisms, along with judgements about the plausibility of effects across a range of outcomes
• “Best available evidence” may often be good enough
• …particularly given the need to act according to the
precautionary principle
16. The need for replication
• "Too many social scientists expect single experiments to settle issues once and for all. This may be a mistaken generalization from the history of great crucial experiments in physics and chemistry. In actuality the significant experiments in the physical sciences are replicated thousands of times, not only in deliberate replication efforts, but also as inevitable incidentals in successive experimentation and in utilizations of those many measurement devices (such as the galvanometer) that in their own operation embody the principles of classic experiments."
(Campbell, Reforms as Experiments, 1969)
17. • "Because we social scientists have less ability to achieve 'experimental isolation', because we have good reason to expect our treatment effects to interact significantly with a wide variety of social factors, many of which we have not yet mapped, we have much greater needs for replication experiments than do the physical sciences…" (Campbell, 1969)
18. • “Policy outcomes can be monitored with triangulated methods
(accumulation of evidence from a variety of sources to gain
insight, often combining quantitative and qualitative data)”*
*Brownson, Chriqui & Stamatakis (2009)
19. All those problems...where are
the answers?
• The answers do not lie simply in more epidemiology, or more
research
• But also in understanding the political and other cultures
within which evidence is produced, valued, used, misused, or
not used at all (in different sectors)
• And a greater focus on the decisions that are taken:
• "What type and strength of evidence (if any) is needed to support the decision that needs to be taken?"
20. • Q: Is, as SV said yesterday, "the best the enemy of the good" in the case of public health evidence? (A: Yes)
• We need robust RCTs where these are possible
• For addressing the most complex influences on health, we also need to rely on complex sets of epidemiological evidence, including modelling studies and knowledge about causes and mechanisms
• We need to be wary of over-emphasising the problems with public health evidence – doing so ignores the contribution of different types of evidence to decision-making
• A cautionary tale about "methodological purism":
21. The 7 CEOs of Big Tobacco
• Testified in turn to Senator Waxman's hearings (1980-1994): "I believe that
tobacco is not addictive"
• The tobacco industry developed a range of sophisticated epidemiological
and methodological arguments to undermine the public health evidence
base on the harms of tobacco
• Their “multifactorial causes” argument was developed to argue that
epidemiological studies are hopelessly confounded; nothing can be
“proved”:
22. “Stressed-out” smokers
• “While some scientists have associated cigarette smoking with heart
disease, it is certainly clear that a number of other factors including
life-style, blood pressure, biochemistry, genetics and in particular,
stress, may also be involved”
• ‘‘These diseases are also statistically associated with many other
variables, such as diet, lifestyle, heredity and stress. . . . But the
existence of a statistical association does not mean that smoking
causes these diseases.’’ (BAT statement to the Irish Joint Committee
on Health and Children in 1998)*
*Am J PH Paper on tobacco industry funding of stress:
http://researchonline.lshtm.ac.uk/3743/
25. • First, published studies were repeatedly misquoted, distorting the
main messages.
• Second, 'mimicked scientific critique' was used to undermine evidence; this form of critique insisted on methodological perfection, rejected methodological pluralism, adopted a litigation (not scientific) model*, and was not rigorous.
• Third, TTCs engaged in ‘evidential landscaping’, promoting a parallel
evidence base to deflect attention from SP and excluding company-held
evidence relevant to SP.
*Examining and discounting studies piece by piece
26. Example of evidential
landscaping
• ‘… the real drivers of smoking initiation include factors such as
parental influences, risk preferences, peer influences, socioeconomic
factors, access and price’
27. • Our natural scientific concern with rigour and internal validity
needs to be balanced with the need to integrate a wide range
of evidence to feed into policy and other decisions
• We need to be aware of how others are interpreting and using
the concept of evidence
• Developing better public health evidence is an incremental process
28. No, that last slide was too
pessimistic…
• A more positive message:
• We have public health methods that are as robust as those used in the physical sciences – the exact same methods in many cases
• RISP methods are reliable, widely accepted, tried and tested
(centuries old!) and appropriate for investigating and estimating the
effects of policies on health
• We need to take every opportunity to reinforce this message