A presentation for IEEE's Ethics Symposium, held in Vancouver in May 2016. Featuring presentations from John C. Havens, Mike Van der Loos, John P. Sullins, and Alan Mackworth.
3. Agenda
• Introductions
• John C. Havens
• Mike Van der Loos
• Alan Mackworth
• John Sullins
• Moderated Panel Discussion
• Audience Q&A
• End
#AIEthics
6. • Launched April 5, 2016
• Executive Committee of twelve global thought leaders in AI, autonomous technology, and ethics
• Eleven Committees featuring over eighty additional thought leaders from over twelve countries
• IEEE Staff/Society Involvement: Representatives from SA, TA, RAS, SSIT, Computer Society, IEEE P2040*
• AI Association Involvement: AAAI, EurAI, IJCAI
• Policy orgs represented: WEF, UN, FCC, Future of Privacy Forum*
• Companies represented include: IBM, EMC, Cisco, NXP, LucidAI, Google DeepMind*
• Academic Institutions represented include: University of Texas, TU Delft, University of British Columbia,
Arizona State University, University of Washington, University of Cambridge, Duke University, Harvard
University, MIT, Georgia Institute of Technology*
*Partial listing
8. Committees:
• Executive Committee
• AI Ecosystem Mapping Committee
• General Principles and Guidance
• Legal Issues
• Affective Computing
• Safety and Beneficence of AGI and ASI
• Individual/Personal Data Control
• Economics of Machine Automation/Humanitarian Issues
• Methodologies to Guide Ethical Research, Design and Manufacturing
• How to Imbue Ethics/Values into AI
• Reframing Lethal Autonomous Weapons Systems (LAWS)
9. • Global Initiative invited to have satellite meeting as part of Europe’s largest AI Conference
• Initiative Committees gather for first face-to-face meeting
• Initiative Committees bring Charter Language (Crowdsourced Code of Conduct) to event
• Committees Bring Standards Projects to Workshops (to submit to SA)
• Attendees at Workshops help iterate Language
• Attendees to Workshops provide feedback and vote on Projects
10. • Second face-to-face meeting at UT in March 2017, before the SXSW Conference
• Attendees evolve Charter 2.0 to Charter 3.0
• Charter available via Creative Commons License for good of technology community at large
• By March 2017, multiple Standards Projects will be recommended to SA as PARs
• At UT, Global Initiative announces its formation as an Alliance, global University partnerships
• Alliance iterates Charter annually via meetings around the world, creates Certifications/Workshops to
implement Charter in multiple verticals, serves as an ongoing, global R&D Standards Pipeline for SA
12. WHAT SHOULD A ROBOT DO? – A quest to develop interactive robots with ethics in mind
H.F. MACHIEL VAN DER LOOS
ELIZABETH A. CROFT
AJUNG MOON
THE UNIVERSITY OF BRITISH COLUMBIA
COLLABORATIVE ADVANCED ROBOTICS AND INTELLIGENT SYSTEMS LAB
13. CARIS LAB: Collaborative Advanced Robotics and Intelligent Systems Lab
HFM VAN DER LOOS, MAY 13, 2016
Elizabeth A. Croft · Mike Van der Loos
14. ROBOTS ARE COMING: HUMAN-ROBOT COLLABORATION
CARIS lab, UBC (2010)
www.plasticsnews.com
Baxter, Rethink Robotics (2012)
15. ROBOETHICS: ETHICS APPLIED TO ROBOTICS
Roboethics:
• Human ethics
• Applied ethics adopted by designers / manufacturers / users
• Code of conduct implemented in the artificial intelligence of robots
• Artificial ethics for robots to exhibit ethically acceptable behaviour
Robot's Ethics:
• Morality of a hypothetical robot that is equipped with a conscience and freedom to choose its own actions
Fiorella Operto, Ethics in Advanced Robotics, 18 IEEE ROBOT. AUTOM. MAG. 72–78 (2011)
16. PROBLEM
What is right / wrong? Fair / unfair?
What should / ought a robot do?
Who knows the answers?
• Design decisions
• Policy decisions
• Technical implementations
• Culture
• Religion
• Context
• Philosophical stance
• …
19. AUTONOMOUS CARS: STUDYING WHAT PEOPLE THINK
A total of 10 polls and 766 responses on autonomous cars since April 25, 2014
20. AUTONOMOUS CARS: STUDYING WHAT PEOPLE THINK
Image by: Craig Berry

IF YOU FIND YOURSELF AS THE PASSENGER IN THE TUNNEL PROBLEM, HOW SHOULD THE CAR REACT? (N=113, analyzed June 22, 2014)
• Continue straight and kill the child: 64%
• Swerve and kill the passenger (you): 36%

HOW HARD WAS IT FOR YOU TO ANSWER THE TUNNEL PROBLEM QUESTION? (N=116, analyzed June 22, 2014)
• Easy: 48%
• Moderately difficult: 28%
• Difficult: 24%

WHO SHOULD DETERMINE HOW THE CAR RESPONDS TO THE TUNNEL PROBLEM? (N=113, analyzed June 22, 2014)
• Passenger: 44%
• Lawmakers: 33%
• Manufacturer / designer: 12%
• Other: 11%
22. CONCLUSION: TAKE HOME MESSAGES
PROBLEM: What should a robot do?
• Public acceptance & design decisions
• Democratic approach to moral decisions
• Delegating decision making to atomic interactions
• Human-Robot Interaction (HRI)
• Roboethics
23. ACKNOWLEDGMENTS
CARIS Lab
ICICS
UBC Dept. of Mechanical Engineering
CFI
NSERC
Vanier Canada Graduate Scholarships

CONTACT INFORMATION:
Mike Van der Loos, Ph.D., P.Eng.
Assoc. Prof., Dept. of Mechanical Engineering, UBC
6250 Applied Science Lane
Vancouver, BC V6T 1Z4 CANADA
phone: +1-604-827-4479
email: vdl@mech.ubc.ca
web: http://mech.ubc.ca/machiel-van-der-loos/
research: http://caris.mech.ubc.ca; http://rreach.mech.ubc.ca
ORi: http://www.openroboethics.org
25. Trusted Artificial Autonomous Agents
Alan Mackworth
• New ontological category: Artificial Autonomous Agents (AAAs)
• Q: Can we trust them?
• A: No!
• Q: Why not?
• A: E.g. ‘Deep Learning’: opaque, with massive, inaccessible training sets
• Ethical agents have to be trustworthy
• Need new methods to build trusted, ethical agents
• Ensure AAAs' values are aligned with users' and society's values
26. Five Approaches to Building Trusted Agents
1. Formal methods for specification and verification
2. Hierarchical constraint-based modular architectures
3. Inferring human values: e.g. inverse reinforcement learning
4. Semi-autonomy, human in the loop
5. Participatory Action Design: user-centered with Wizard of Oz techniques
27. What We Need
Any ethical discussion presupposes we (and agents) can:
• Model agent structure and functionality
• Predict consequences of agent commands and actions
• Impose constraints on agent actions such as goal reachability, safety and liveness (absence of deadlock and livelock)
• Determine if an agent satisfies those constraints (almost always)
28. Formal Methods to Build Trustworthy AAAs
To show that an implementation satisfies its specification, we need a tripartite theory:
1. Language to express agent structure and dynamics
2. Language for constraint-based specifications
3. Method to determine if an agent will (be likely to) satisfy its specifications, connecting 1 to 2
30. Formal Methods for Agent Verification
The CBA framework consists of:
1. Constraint Net (CN) → system modelling
2. Timed ∀-automata → behavior specification
3. Model-checking and Liapunov methods → behavior verification
(Zhang & Mackworth, 1993, …)
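The verification step above can be illustrated in miniature. This is a hypothetical sketch, not the CN/timed-automata machinery itself: it model-checks one safety constraint ("an unsafe state is never reachable") on a toy finite transition system whose state names are illustrative.

```python
# Minimal sketch of safety verification by reachability analysis on a
# finite transition system. Real CN verification handles continuous
# dynamics and liveness; this only checks a safety constraint.
from collections import deque

def satisfies_safety(initial, transitions, unsafe):
    """Return True iff no state in `unsafe` is reachable from `initial`.
    `transitions` maps each state to a list of successor states."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if state in unsafe:
            return False
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True

# Toy agent: states of a hypothetical robot controller.
transitions = {
    "idle": ["moving"],
    "moving": ["idle", "near_obstacle"],
    "near_obstacle": ["stopped"],  # controller forces a stop
    "stopped": ["idle"],
}
print(satisfies_safety("idle", transitions, unsafe={"collision"}))  # True
```

Exhaustive search like this is what makes the guarantee stronger than testing: every reachable state is examined, not just sampled trajectories.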
31. Hierarchical Modular CBA in CN
CBA structure: hierarchical modules in a Constraint Net
Control synthesis with prioritized constraints:
Constraint1 > Constraint2 > Constraint3 > …
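The prioritized-constraint ordering can be sketched as follows. This is a minimal illustrative example, not the CN control-synthesis method: candidate actions are filtered by each constraint in priority order, and a lower-priority constraint may only narrow the choices, never override a higher one. The constraint names are assumptions for the example.

```python
# Sketch of action selection under prioritized constraints
# (Constraint1 > Constraint2 > Constraint3). A lower-priority
# constraint that would rule out every remaining action is ignored.

def prioritized_select(actions, constraints):
    """Pick an action respecting constraints in priority order."""
    candidates = list(actions)
    for satisfies in constraints:  # ordered highest priority first
        narrowed = [a for a in candidates if satisfies(a)]
        if narrowed:               # only narrow, never empty, the set
            candidates = narrowed
    return candidates[0]

# Toy example: candidate speed commands (m/s) for a mobile robot.
actions = [0.0, 0.5, 1.0, 1.5]
avoid_collision = lambda v: v <= 1.0   # Constraint1: safety cap
keep_moving     = lambda v: v >= 0.4   # Constraint2: avoid stalling
make_progress   = lambda v: v >= 1.2   # Constraint3: reach goal fast

cmd = prioritized_select(actions, [avoid_collision, keep_moving, make_progress])
print(cmd)  # 0.5 — Constraint3 conflicts with the safety cap and yields
```

Here the lowest-priority "make progress" constraint is unsatisfiable once safety has capped the speed, so it is dropped rather than allowed to violate Constraint1.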
32. Artificial Semi-autonomous Agents (ASAs)
• Keep human(s) in the loop
• Shared autonomy at the higher control levels
• Provide 'sliders' for users to adjust autonomy levels
• Not one size fits all
• Case study: smart wheelchairs for cognitively and physically impaired older adults
33. Docking and Back-in Parking Assistance: Driving Scenario at a Long Term Care Facility
34. Shared Autonomy Wheelchair Control Modes
Level 1: Basic safety by limiting speed
Level 2: Level 1 + non-intrusive steering guidance
Level 3: Level 1 + intrusively turning away from obstacles
Level 4: Completely autonomous
The Wizard [Baum, 1900]
Systems developed using a user-centered Participatory Action Design methodology and Wizard of Oz techniques
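The four control modes above can be sketched as a single blending function. This is a hypothetical illustration, not the actual wheelchair controller: the command interface, speed cap, and avoidance gain are all assumptions.

```python
# Sketch of the four shared-autonomy control modes. All interfaces
# (Command, obstacle_dir, gains) are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Command:
    speed: float   # m/s, from the user's joystick
    steer: float   # rad, positive = left

def shared_control(level, user_cmd, obstacle_dir, autonomous_cmd,
                   speed_cap=0.8, avoid_gain=0.5):
    """Blend user and autonomous control according to autonomy level.
    Level 1: basic safety by limiting speed.
    Level 2: Level 1 + non-intrusive guidance (advisory only, e.g. a
             haptic cue, so the issued command matches Level 1 here).
    Level 3: Level 1 + intrusively turning away from obstacles.
    Level 4: completely autonomous."""
    if level == 4:
        return autonomous_cmd
    speed = min(user_cmd.speed, speed_cap)   # Level 1 safety cap
    steer = user_cmd.steer
    if level == 3 and obstacle_dir is not None:
        steer -= avoid_gain * obstacle_dir   # steer away from obstacle
    return Command(speed, steer)

# At Level 3, an over-speed command near an obstacle on the left:
cmd = shared_control(3, Command(1.2, 0.0), obstacle_dir=1.0,
                     autonomous_cmd=Command(0.5, 0.2))
print(cmd.speed, cmd.steer)  # 0.8 -0.5
```

The point of the structure is the 'slider': a single integer level moves authority between user and machine without changing the controller's interface, matching the "not one size fits all" bullet above.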
35. Closing Thoughts
• More R&D on building trusted AAAs and ASAs required
• Formal specification and verification of AAAs needed
• Governments lack technical expertise to develop standards
• Lack of effective global standards bodies with enforcement
• Regulatory capture: power of corporations to fend off regulation
• Poor education of AI scientists & roboticists in morals and ethics
• AI singularity & superintelligence hype overshadows real concerns
• See the One Hundred Year Study on AI: https://ai100.stanford.edu
Thanks to: Y. Zhang, P. Viswanathan, A. Mihailidis, B. Adhikari, I. Mitchell, J. Little, ….
Contact: mack@cs.ubc.ca @AlanMackworth URL: http://www.cs.ubc.ca/~mack
38. Embedded Ethics Design for AI and Robotics
• Building workable solutions requires many disciplines to work together
• When it is working well, philosophy is a big-picture discipline, and it has much to offer in our quest to build beneficial AI and robotics applications
• Especially in the area of ethics and the design of artificial moral agents
39. Bryant Walker Smith
• Lawyers and Engineers Should Speak the Same Robot Language, Bryant Walker Smith, 2015
• Each application has many uses:
  • Actual
  • Legal
  • Reasonable
  • Use intended by the designer
• "An open question is the extent to which product design should attempt to confine actual uses to those that are legal, reasonable, or intended."
40. Ethical Design
I recommend we add ethical use to the list of potential uses as well:
A - Actual Use
B - Reasonable Use
C - Intended Use
D - Legal Use
E - Ethical Use
41. Ethics Applied to AI and Robotics
Image from: Are Deontological Moral Judgements Rationalizations?
Some problems:
• Classical ethics is only concerned with human agency
• What is the best ethical system to apply?
• No science is ever truly finished, so the science of ethics will not result in one unified theory either
42. A Helpful Alternative
• The following discussions can be distracting:
  • Egoism vs Altruism
  • Self-interest vs Benevolence
  • Free Will vs Determinism
  • Responsibility
• Morality has roots in evolution
• Ethics is a tool or instrument that we use to design new forms of beneficial behavior
John Dewey, American pragmatist philosopher, 1859–1952
44. Embedded Ethics Design
• The "...engineer, carries on the great part of his work without consciously asking himself whether his work is going to benefit himself or someone else. He is interested in the work itself; such objective interest is a condition of mental and moral health.... Nevertheless, there are occasions when conscious reference to the welfare of others is imperative." Dewey, Ethics, 1935.
• We need embedded ethics professionals at the level of the design team:
  • To meet the needs of engineers who must focus on their work
  • And for the organization that employs them to pay appropriate concern to the ethical impacts of its work
• This can take the form of consultants, but it would be best to have some of the designers trained in value-sensitive design
• Their job is to find the areas of ethical concern in a design and suggest constructive means for mitigating problems at the design stage
• This prevents the approach we often see: release, disaster, beg forgiveness
• Since embedded ethicists might be susceptible to something like Stockholm syndrome, we must also have ethics review boards
45. AI and Robotics Ethics Boards
Short-term ethical concerns are met by creating a dialog that follows these steps:
1. Identify the ethical concerns raised by the new technology.
   a. Anticipate consequences. Create proactive ethics rather than merely reactive ones.
   b. Enhance the standard-model IRB and replace it with one that fosters embedded ethicists in the design groups, works closely with them, and helps foster a community of practice around ethical deliberation.
2. Vet the overall design strategy of the organization.
   a. Define the ethical goals: what does the organization want to craft as its legacy?
3. Help operationalize the ethical code of the organization as it is applied to AI and robotics projects, and update this code as new challenges are resolved.
4. Keep a repository of these deliberations to facilitate future discussions.
46. Artificial Ethical/Moral Agents (AEA, AMA)
• Artificial practical wisdom
• Virtues for robots:
  • Security
  • Integrity
  • Accessibility
  • Ethical trust
• Functional moral sensibility:
  • Accurate choice of ethical actions and goals
  • Context sensitive
  • Accurate ranking of exemplar cases and reasoning
47. For More Information
• Applied Professional Ethics for the Reluctant Roboticist. Open Robotics, 2015
• Ethics Boards for Research in Robotics and Artificial Intelligence: Is it Too Soon to Act? Chapter 5 in Social Robots: Boundaries, Potential, Challenges, edited by Marco Nørskov, Ashgate