What have we learned from an empirical approach to moral psychology - especially in relation to the role of rationality in everyday morality?
What are some lessons that the EA movement can take from moral psychology?
Various moral theorists over the years have placed different emphases on the roles that the head and the heart play in moral judgement. Early conceptions held that the head drives moral judgement. A Kantian might say that reasoning, the head, drives moral judgement: when presented with a dilemma of some kind, the human engages 'System 2'-like processes in a controlled, rational manner. An advocate of a Humean model may favor the idea that emotion, or the heart ('System 1' thinking), plays the dominant role in moral judgement. Modern psychologists often take a hybrid view in which both System 1 and System 2 styles of thinking contribute to the way we judge right from wrong.
http://www.scifuture.org/rationality-moral-judgement-simon-laham/
6. The head is less important than you may think
Moral judgment and decision making (MJDM) is driven by a variety of factors:
– Emotions (e.g., Valdesolo & DeSteno, 2006)
– Values (e.g., Crone & Laham, 2015)
– Relational and group membership concerns (e.g., Cikara et al., 2010)
Across a wide range of studies, a majority of people do not consistently apply abstract moral principles
– Moral judgments are not decontextualized, depersonalized and asocial (i.e., not System 2)
7. Another concern…
Not only do people inconsistently apply rationality in moral judgments, many reject the idea that consequentialist rationality should have any place in the moral domain
Appeals to consequentialist logic may backfire (Kreps & Monin, 2014)
– People who give consequentialist justifications for their moral positions are viewed as less committed and less authentic
8. Another route to an effective EA
Is trying to change people’s minds the best way to expand the EA movement?
Moral judgment is subject to a variety of contextual effects
Knowledge of such effects can be used to ‘nudge’ people towards utilitarianism (see Thaler & Sunstein, 2008)
9. Trolleys
Other contextual factors:
– Temporarily accessible rules (Broeders et al., 2011)
– Wording (Petrinovich & O’Neill, 1996)
– Order effects (e.g., Schwitzgebel & Cushman, 2012)
– …
10. Beyond trolleys
Identifiable victim effect (Small & Loewenstein, 2003)
Single vs. joint evaluation and preference reversals (Kogut & Ritov, 2005)
11. Decision framing and the moral circle
Moral circle as psychological category
Malleable? Consequences?
Decision framing and set reduction
– Inclusion vs. exclusion mindsets
Moral circle demarcation as set reduction
Laham (2009). Journal of Experimental Social Psychology
12. Mindset, circle size and consequences
Moral circle size by mindset:
                     Inclusion   Exclusion
Study 1a (N = 30)       65          82      t(28) = 3.08, p < 0.01, d = 1.13
Study 1b (N = 65)       55          81      t(63) = 4.33, p < 0.01, d = 1.07
Study 2  (N = 49)       68          82      t(47) = 3.56, p < 0.01, d = 1.02
Mediation model (Condition: 0 = Inclusion, 1 = Exclusion):
Condition → Set-size: 0.46**; Set-size → Obligation to Outgroups: 0.32*
Direct effect of Condition on Obligation: 0.40** (0.25+ with Set-size controlled)
Laham (2009). Journal of Experimental Social Psychology
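The effect sizes above can be sanity-checked from the reported t statistics: for two independent groups, Cohen's d ≈ t·√(1/n₁ + 1/n₂). A minimal sketch, assuming the total N in each study splits into two roughly equal groups (the exact cell sizes are not given on the slide):

```python
import math

def cohens_d_from_t(t: float, n1: int, n2: int) -> float:
    """Convert an independent-samples t statistic to Cohen's d."""
    return t * math.sqrt(1 / n1 + 1 / n2)

# Reported t and assumed near-equal group splits of each study's total N.
studies = {
    "Study 1a": (3.08, 15, 15),  # N = 30
    "Study 1b": (4.33, 32, 33),  # N = 65
    "Study 2":  (3.56, 24, 25),  # N = 49
}
for name, (t, n1, n2) in studies.items():
    print(f"{name}: d ≈ {cohens_d_from_t(t, n1, n2):.2f}")
```

Under these assumed splits the conversion recovers d values close to the reported 1.13, 1.07 and 1.02, so the table's statistics are internally consistent.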
13. Ease of retrieval and the moral circle
Availability heuristic (Tversky & Kahneman, 1973)
“ease with which instances or associations come to mind”
Declarative vs. experiential
Ease vs. difficulty of retrieval (Schwarz et al., 1991)
Moral circle and subjective ease
Laham (2013). Social Psychology
14. ‘Practical’ take-home
Things besides rationality matter in morality
People believe that things besides rationality should matter
So:
– (a) present EA in a manner that does not trade utilitarian options off against deeply held values, identities, or emotions
– (b) use decision framing techniques to ‘nudge’ people towards utilitarian choices
Editor's notes
Good morning
What we’ve learned about MJDM, especially with respect to the role of rationality, and the implications of this for EA.
Not an EA, so forgive me if I’m working from an impoverished view of what constitutes the philosophy and practice of EA.
EA as head and heart; esp. head (rationality) as locus of cost-benefit and cost effectiveness calculations, informed by the logic of consequentialism or utilitarianism
This head-heart dialogue can also be used to frame the history of theorizing about moral judgment.
Although head vs. heart is an intuitive way to frame theorizing about MJDM, we use a slightly different terminology.
Outline
Reframe EA as a product of system 1 and system 2 processes.
In theory, System 2 is responsible for the kind of impartial, decontextualized, analytic thinking central to utilitarianism.
In moral psychology, much research on the role of System 1 and System 2 processes in the moral domain, and on the relation of these processes to utilitarian decision making, has been done using…
The strict utilitarian (at least act utilitarian) should say act in both cases. Most people do not do this. Why?
Direct transfer of muscular force to the victim
Some evidence that variables that increase the likelihood of System 2 processes increase utilitarian responses in trolley dilemmas:
Deliberation/intuition; time pressure, cognitive load.
…but this is not without its critics.
I don’t want to give a detailed account of Greene’s theory or its problems: Greene’s EA talk in San Francisco.
Rather…
1. Although a majority of people are willing utilitarians under some conditions, factors like emotions can get in the way.
2. A majority of people do not systematically apply abstract moral principles, in a context-free, impartial fashion to these kinds of moral dilemmas.
Variety of contemporary theories: none claim rational calculation is the sole driver of moral judgment
The likelihood of utilitarian responses varies according to a variety of extra-rational concerns:
Cikara et al. - Extreme outgroup members, who seem neither warm nor competent (e.g. homeless), were the worst off; it was most morally acceptable to sacrifice them and least acceptable to save them.
When the utilitarian option is pitted against emotions, values, or group membership concerns, people are less likely to choose the utilitarian option.
This may seem obvious to some of you, but it might not be obvious to all of you – Peter Singer gives good accounts of these kinds of factors in his books The Life You Can Save and The Most Good You Can Do.
However, this fact is not apparent to everyone…
In preparing for this talk, I visited some EA blogs and saw various blog posts from people trying to start an EA group in the community or on campus and one of the common themes I detected was the incredulity felt at people’s reluctance to apply rational calculation in the moral domain.
Another concern…
If people (a) don’t consistently use rational, utilitarian principles in moral judgments and (b) don’t really think that such principles belong in the moral domain, then exploit the fact that moral judgments are contextualized, personalized and social to construct decision contexts that increase the likelihood of utilitarian responses.
Order – participants were more likely to rate the Push and Switch scenarios equivalently when Push was presented before Switch (70% versus 54%, Z = 8.1, p < .001).
I don’t want to rely exclusively on trolley dilemmas in making these points about the framing of moral judgments – as I mentioned, there are some shortcomings with an overreliance on these stimuli in formulating theories of moral psychology.
Contextual effects abound in other domains as well.
In sum, the results support our hypothesis that when considered in isolation, helping a single individual is valued more highly than helping a group, but preference is reversed when one has to choose between those two goals.