The document discusses combining rational and emotional elements in ethical reasoning for artificial agents. It presents Moral Coppélia, a model that integrates a rational moral reasoning system with Silicon Coppélia, an AI model of emotional intelligence. Moral Coppélia can simulate how moral dilemmas differ in the extent to which they engage emotional processing, and it improves on purely rational systems by personalizing decisions based on the agent's emotional state and its perspective on others. When its parameters are matched to individual traits and emotional states, the model successfully predicts human outcomes in trolley-style dilemmas and criminal decisions. Potential applications of Moral Coppélia include healthcare robots, virtual coaching, games, and more human-like artificial agents.
Moral Coppélia - Combining Ratio with Affect in Ethical Reasoning - Good AIfternoon, Radboud University Nijmegen
1. Moral Coppélia: Combining Ratio with Affect in Ethical Reasoning
Matthijs Pontier
matthijspon@gmail.com
2. Overview of this presentation
• SELEMCA
• Moral Reasoning
• Silicon Coppelia: Model of Emotional Intelligence
• Moral Reasoning + Silicon Coppelia = Moral Coppelia
• Predicting Crime with Moral Coppelia
• Other Applications
• Discussion
• Future Work
Nijmegen, 18-11-2013
Good AIfternoon
3. SELEMCA
• Develop ‘Caredroids’: Robots or Computer Agents
that assist Patients and Care-deliverers
• Focus on patients who stay in long-term care facilities
4. Possible functionalities
• Care-broker: Find care that matches the patient's needs
• Companion: Become friends with the patient to
prevent loneliness and activate the patient
• Coach: Assist the patient in making healthy choices:
Exercising, Eating healthy, Taking medicine, etc.
7. How people perceive caredroids
• People perceive caredroids in terms of:
• Affordances
• Ethics
• Aesthetics
• Realism
8. Aesthetics / Realism of Caredroids
• Uncanny valley: too human-like makes it eerie
• We associate the almost-human with death: zombies, etc.
Design robots whose realism in appearance matches the realism of their behavior
9. Affordances Caredroids
• Make sure the caredroid is a useful tool for the patients and the care deliverers
• Make sure the caredroid is an expert in its task
• Make sure the caredroid personalizes its behavior to the user
11. Ethics Caredroids
• Make the robot behave in an ethically good way, so patients perceive the robot as ethically good
• Patients are in a vulnerable position.
Moral behavior of the robot is extremely important.
We focus on Medical Ethics
• Conflicts between:
1. Autonomy
2. Beneficence
3. Non-maleficence
4. Justice
12. Background Machine Ethics
• Machines are becoming more autonomous
Rosalind Picard (1997): ‘‘The greater the freedom of
a machine, the more it will need moral standards.’’
• Machines interact more with people
We should ensure that machines do not harm us or threaten our autonomy
• Machine ethics is important for establishing users' trust
13. Domain: Medical Ethics
• Within SELEMCA, we develop caredroids
• Patients are in a vulnerable position. Moral behavior of the robot is extremely important.
We focus on Medical Ethics
• Conflicts between:
1. Beneficence
2. Non-maleficence
3. Autonomy
4. Justice
14. Moral reasoning system
We developed a rational moral reasoning system that is capable of balancing conflicting moral goals.
15. Limitations rational moral reasoning
• Moral reasoning alone results in very cold decision-making, only in terms of rights and duties
• Wallach, Franklin & Allen (2010): "Ethical agents require emotional intelligence as well as other 'supra-rational' faculties, such as a sense of self and a 'Theory of Mind'"
• Tronto (1993): "Care is only thought of as good care when it is personalized"
16. Problem: Not Able to Simulate
Trolley Dilemma vs Footbridge Dilemma
• Greene et al. (2001) find that moral dilemmas vary
systematically in the extent to which they engage
emotional processing and that these variations in
emotional engagement influence moral judgment.
• Their study was inspired by the difference between
two variants of an ethical dilemma:
Trolley dilemma (moral-impersonal)
Footbridge dilemma (moral-personal)
17. Solution: Add Emotional Processing
• Previously, we developed Silicon Coppelia, a model of emotional intelligence
• It can be projected onto others to form a Theory of Mind
• It learns from experience, which enables personalization
Connect Moral Reasoning to Silicon Coppelia:
• More human-like moral reasoning
• Personalized moral decisions and communication about moral reasoning
19. Silicon Coppelia
• We developed Silicon Coppelia with the goal of creating emotionally human-like robots
• Simulation experiments: the system behaves consistently with theory and intuition
• We compared the model's performance with that of a real human in a speed-dating experiment
20. Turing Test
• The Turing Test was originally text-based
• We enriched the test with affect-laden communication:
• Facial expressions showing emotions
• Capable of vocal speech
• Afterwards, a questionnaire: How do you think Tom perceived you?
The measure was made continuous and more elaborate than a simple yes/no
• Analysis: Bayesian structural equation modeling
22. Results
• Participants did not detect differences on single variables
• Participants did not recognize significant differences in cognitive-affective structure
• A model in which the conditions (1: human, 2: robot) were assumed equal explained the data better than a model in which the conditions were assumed different
23. Conclusions Speed-Date
• We created a simulation of affect so natural that young women could not discern dating a robot from dating a man
• This is important for:
• Understanding human affective communication
• Developing communication technologies
• Developing emotionally human-like robots
24. Silicon Coppelia + Moral Reasoning:
Decisions based on:
1. Rational influences
• Does action help me to reach my goals?
2. Affective influences
• Does action lead to desired emotions?
• Does action reflect Involvement I feel towards user?
• Does action reflect Distance I feel towards user?
3. Moral reasoning
• Is this action morally good?
25. Results Trolley & Footbridge

                              Kill 1 to Save 5    Do Nothing
Moral system   - Trolley             X
Moral system   - Footbridge          X
Moral Coppelia - Trolley             X
Moral Coppelia - Footbridge                           X
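To make the table concrete, here is a toy sketch of how an emotional term can flip the footbridge decision while leaving the trolley choice utilitarian. All weights and scores are illustrative assumptions, not the model's actual parameters; the key assumption is that the personal act of pushing someone (footbridge) carries a far worse expected emotional state than throwing a switch (trolley).

```python
# Toy scores (illustrative) showing how an emotional term reproduces the table.
def satisfaction(morality, utility, eesa, w_mor=0.2, w_rat=0.3, w_emo=0.5):
    """Weighted blend of moral, rational and emotional influences."""
    return w_mor * morality + w_rat * utility + w_emo * eesa

# "Kill 1 to save 5": equal utility in both dilemmas, but the personal act
# of pushing someone (footbridge) gets a much worse emotional score (eesa).
trolley_kill    = satisfaction(morality=0.5, utility=0.9, eesa=0.5)
footbridge_kill = satisfaction(morality=0.5, utility=0.9, eesa=0.0)
do_nothing      = satisfaction(morality=0.5, utility=0.1, eesa=0.9)

# Trolley: kill-1 scores highest; Footbridge: do-nothing scores highest.
```

With these numbers, a purely rational system (large `w_rat`, zero `w_emo`) would pick "kill 1" in both dilemmas; only the emotional term differentiates the two cases.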
26. Background: Criminology Study
• Substantial evidence that emotions are fundamental in criminal decision making
• But emotions rarely figure in criminal choice models
Study the relation between Ratio, Emotions and Morals:
Apply Moral Coppelia to criminology data
Predict participants' criminal decisions
27. Adding Expected Emotional State Affect to Moral Coppelia
• ExpectedEmotion(action, emotion) =
(1 − b) * AEB(action, emotion) + b * current_emotion
• EESA(action) =
1 − Σ_i | Desired(emotion(i)) − EE(action, i) |
• ExpectedSatisfaction(action) =
w_mor * Morality(action) + w_rat * ExpectedUtility(action) + w_emo * EESA(action)
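The three formulas above can be sketched in Python as follows. The emotion set, the absolute-difference form of the EESA mismatch, and all numeric values are assumptions for illustration:

```python
# Sketch of the slide's formulas (names and values are illustrative).

def expected_emotion(aeb, current_emotion, b):
    """Blend the action-emotion belief AEB with the current emotional state."""
    return (1 - b) * aeb + b * current_emotion

def eesa(desired, expected):
    """Expected Emotional State Affect: 1 minus the total (assumed absolute)
    mismatch between desired and expected emotion intensities."""
    return 1 - sum(abs(desired[e] - expected[e]) for e in desired)

def expected_satisfaction(morality, utility, eesa_value, w_mor, w_rat, w_emo):
    """Weighted blend of moral, rational and emotional influences."""
    return w_mor * morality + w_rat * utility + w_emo * eesa_value

# Hypothetical example with two emotions and blend factor b = 0.5:
desired  = {"joy": 0.8, "guilt": 0.0}
expected = {"joy": expected_emotion(0.6, 0.9, 0.5),
            "guilt": expected_emotion(0.2, 0.0, 0.5)}
score = expected_satisfaction(0.7, 0.5, eesa(desired, expected),
                              w_mor=0.4, w_rat=0.3, w_emo=0.3)
```

The weights `w_mor`, `w_rat` and `w_emo` are the handles later matched to participant traits (slide 28).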
28. Matching data to model
Match:
• Honesty/Humility → Weight_morality
• Perceived Risk → Expected Utility
• Negative State Affect → EESA
Parameter Tuning:
1. Find optimal fits for the initial sample
2. Predict decisions for the holdout sample
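The two tuning steps can be sketched as a simple grid search over the three weights: fit on the initial sample, then evaluate predictions on the holdout sample. The decision rule, threshold, and data below are hypothetical stand-ins for the study's actual procedure:

```python
import itertools

def predict(weights, features):
    """Decide 'offend' if the weighted satisfaction score exceeds 0.5.
    features: (morality, expected_utility, eesa) for one participant."""
    w_mor, w_rat, w_emo = weights
    m, u, e = features
    return (w_mor * m + w_rat * u + w_emo * e) > 0.5

def accuracy(weights, sample):
    """Fraction of (features, decision) pairs predicted correctly."""
    return sum(predict(weights, f) == y for f, y in sample) / len(sample)

def tune(initial_sample, step=0.1):
    """Step 1: grid-search the weight triple (summing to 1) that best
    fits the initial sample."""
    grid = [round(i * step, 2) for i in range(int(1 / step) + 1)]
    candidates = [(a, b, round(1 - a - b, 2))
                  for a, b in itertools.product(grid, grid) if a + b <= 1]
    return max(candidates, key=lambda w: accuracy(w, initial_sample))

# Hypothetical data: ((morality, utility, eesa), offended?)
initial = [((0.2, 0.9, 0.8), True), ((0.9, 0.3, 0.2), False)]
holdout = [((0.1, 0.8, 0.9), True), ((0.8, 0.2, 0.1), False)]
best = tune(initial)                   # Step 1: optimal fit on initial sample
holdout_acc = accuracy(best, holdout)  # Step 2: predict holdout decisions
```

The holdout evaluation is what makes the criminology result a prediction rather than a fit.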
30. Conclusions Criminology Study
• Validation of Moral Coppélia
• Adds to criminological theory
• Useful in applications
38. Conclusions
• We created an affective moral reasoning system
• The system matches the decisions of medical ethical experts
• The system can simulate the trolley and footbridge dilemmas
• The system can predict human criminal choices
39. Discussion
• The introduction of affect into rational ethics is important when robots communicate with humans
• The combination of Ratio + Affect + Morals is useful for applications that simulate human decision making, for example when agent systems or robots provide healthcare support, or in entertainment settings
40. Future Work
• More detailed model of Autonomy
• In applications, choose actions that:
• Improve the patient's autonomy
• Improve the patient's well-being
• Do not harm the patient
• Distribute resources equally among patients
• Persuasive Technology:
Moral dilemmas about Helping vs. Manipulating
• Integrate the current system with Health Care Intervention models