We present an integration of rational moral reasoning with emotional intelligence. The moral reasoning system alone could not simulate the different human reactions to the Trolley dilemma and the Footbridge dilemma. However, the combined system can simulate these human moral decision making processes. The introduction of affect in rational ethics is important when robots communicate with humans in a practical context that includes moral relations and decisions. Moreover, the combination of ratio and affect may be useful for applications in which human moral decision making behavior is simulated, for example when agent systems or robots provide healthcare support.
2. Outline of this presentation
• Background
• Domain
• Moral reasoning system
• Silicon Coppelia
• Moral reasoning + Silicon Coppelia = Moral Coppelia
• Results
• Discussion
• Future Work
Cartagena, 15-11-2012 IBERAMIA 2012 2
3. Background
• Machines interact more with people
• Machines are becoming more autonomous
• Rosalind Picard (1997): ‘‘The greater the freedom of
a machine, the more it will need moral standards.’’
• We must ensure that machines do not harm us or
threaten our autonomy
4. Domain: Medical Ethics
• Within SELEMCA, we develop caredroids
• Patients are in a vulnerable position. The moral
behavior of the robot is therefore extremely important.
We focus on Medical Ethics
• Conflicts between:
1. Beneficence
2. Non-maleficence
3. Autonomy
4. Justice
5. Previous Work:
Moral reasoning system
We developed a rational moral reasoning system that
is capable of balancing conflicting moral goals.
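As an illustrative sketch only (not the published system), the balancing of conflicting moral goals can be modeled as a weighted score over the four medical-ethics principles listed earlier; all action names, weights, and values below are assumptions for the example:

```python
# Hypothetical sketch of balancing conflicting moral goals:
# each candidate action is scored against the four medical-ethics
# principles (beneficence, non-maleficence, autonomy, justice),
# and the action with the best balanced score is chosen.
# Weights and action values here are illustrative assumptions.

PRINCIPLES = ["beneficence", "non_maleficence", "autonomy", "justice"]

def moral_score(action, weights):
    """Weighted sum of how well an action satisfies each principle."""
    return sum(weights[p] * action[p] for p in PRINCIPLES)

def choose_action(actions, weights):
    """Pick the action with the highest balanced moral score."""
    return max(actions, key=lambda a: moral_score(a, weights))

# Example: a care robot weighing forced medication against respecting
# a patient's refusal (all numbers are made up for illustration).
weights = {"beneficence": 1.0, "non_maleficence": 1.2,
           "autonomy": 1.0, "justice": 0.8}

actions = [
    {"name": "give_medication",
     "beneficence": 0.9, "non_maleficence": 0.6,
     "autonomy": 0.3, "justice": 0.5},
    {"name": "respect_refusal",
     "beneficence": 0.2, "non_maleficence": 0.8,
     "autonomy": 1.0, "justice": 0.5},
]

best = choose_action(actions, weights)
```

With these assumed weights, respecting the refusal scores higher because autonomy and non-maleficence outweigh the beneficence gained by medicating.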
6. Limitations of moral reasoning
• Moral reasoning alone results in very cold decision
making, purely in terms of rights and duties
• Wallach, Franklin & Allen (2010): “even agents who
adhere to a deontological ethic or are utilitarians may
require emotional intelligence as well as other ‘‘supra-
rational’’ faculties, such as a sense of self and a
theory of mind”
• Tronto (1993): “Care is only thought of as good care
when it is personalized”
7. Problem: Not Able to Simulate
Trolley Dilemma vs Footbridge Dilemma
• Greene et al. (2001) find that moral dilemmas vary
systematically in the extent to which they engage
emotional processing and that these variations in
emotional engagement influence moral judgment.
• Their study was inspired by the difference between
two variants of an ethical dilemma:
Trolley dilemma (little emotional processing)
Footbridge dilemma (much emotional processing)
8. Solution: Add Emotional Processing
• Previously, we developed Silicon Coppelia, a
model of emotional intelligence.
• Can also be projected onto others, providing a
Theory of Mind
• Learns from experience, enabling Personalization
Connect Moral Reasoning to Silicon Coppelia
• More human-like moral reasoning
• Personalize moral decisions and
communication about moral reasoning
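A minimal sketch of this idea (with assumed numbers, not the actual Moral Coppelia model): the rational utility of "kill 1 to save 5" is identical in both dilemmas, so a purely rational system answers both the same way; an added affective term, larger for the footbridge's up-close personal force, flips only that decision:

```python
# Illustrative sketch (not the published Moral Coppelia model) of how
# adding an affective term separates the two dilemmas. The rational
# value of acting (5 saved - 1 lost) is the same in both; only the
# emotional aversion differs. All numbers are illustrative assumptions.

def decide(lives_saved, lives_lost, emotional_aversion, affect_weight=1.0):
    """Return the chosen action given rational and affective inputs."""
    rational_value = lives_saved - lives_lost           # same in both dilemmas
    combined = rational_value - affect_weight * emotional_aversion
    return "kill 1 to save 5" if combined > 0 else "do nothing"

# Pure moral reasoning (affect_weight = 0): both dilemmas get the
# same answer, which fails to match human behavior.
trolley_rational    = decide(5, 1, emotional_aversion=6.0, affect_weight=0.0)
footbridge_rational = decide(5, 1, emotional_aversion=6.0, affect_weight=0.0)

# With affect: little emotional processing in the trolley case,
# much in the footbridge case, reproducing the human pattern.
trolley_affect    = decide(5, 1, emotional_aversion=1.0)
footbridge_affect = decide(5, 1, emotional_aversion=6.0)
```

Here the affective term only changes the outcome when it is strong enough to override the rational gain, which is exactly the asymmetry Greene et al. (2001) report between the two dilemmas.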
11. Results
                    Kill 1 to Save 5    Do Nothing
Moral system
  Trolley                  X
  Footbridge               X
Moral Coppelia
  Trolley                  X
  Footbridge                                 X
12. Discussion
• The introduction of affect in rational ethics is
important when robots communicate with humans
in a practical context that includes moral relations
and decisions.
• Moreover, the combination of ratio and affect may be
useful for applications in which human moral
decision making behavior is simulated, for example
when agent systems or robots provide healthcare
support, or in entertainment settings
13. Future Work
• More detailed model of Autonomy
• In applications, choose actions that:
• Improve the patient's autonomy
• Improve the patient's well-being
• Do not harm the patient
• Distribute resources equally among patients
• Persuasive Technology. Moral dilemmas about:
• Helping vs Manipulating
Care-droids = care-agents and care-robots that assist care-deliverers and patients
Silicon Coppelia: emotional intelligence, theory of mind, personalization (through adaptation / learning from interaction)
The Moral Reasoning system alone could not simulate the difference between the Trolley dilemma and the Footbridge dilemma. Combined with Silicon Coppelia, it could simulate these human moral decision making processes.
1: A robot tries to convince an elder to exercise. 2: Entertainment: "bad" can also be interesting.
Mental integrity, physical integrity, privacy, capability to make autonomous decisions: cognitive functioning, adequate information, reflection