Joanna J. Bryson
University of Bath, United Kingdom
@j2bryson
Intelligence by Design
Software engineering for
transparency and accountability
Who is responsible for AI?
How do we govern the
commercial use of AI?
What is AI?
Intelligence is the capacity to do the
right thing at the right time – to
translate perception into action.
Artificial Intelligence is a trait of
artefacts, deliberately built to
facilitate our intentions.
Nothing about intelligence changes
responsibility for that deliberate act.
Intelligence is computation–a transformation of information. Not math.
Computation is a physical process, taking time, energy, & space.
Finding the right thing to do at the right time requires search.
Cost of search = (# of options)^(# of acts) (serial computing).
Examples:
• Any sequence of 2 of 100 possible actions = 100² = 10,000 possible plans.
• # of 35-move games of chess > # of atoms in the universe.
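The serial search-cost arithmetic above can be checked directly. A minimal Python sketch (the names are illustrative, not from the talk):

```python
from itertools import product

def num_plans(n_options: int, n_acts: int) -> int:
    """Size of the exhaustive search space: every possible sequence of acts."""
    return n_options ** n_acts

# Any sequence of 2 acts chosen from 100 options:
assert num_plans(100, 2) == 10_000

# Enumerating even a tiny space shows the exponential blow-up directly:
plans = list(product(range(4), repeat=3))   # 4 options, 3 acts
assert len(plans) == num_plans(4, 3) == 64
```

Concurrency lets you explore branches in parallel, but the total work (energy) is unchanged.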
Concurrency can save real time, but not energy, and requires more space.
Quantum saves on space (sometimes) but not energy(?)
Omniscience (“AGI”) is not a real threat. No one algorithm
can solve all of AI.
Not math.
Viv Kendon, Durham
Humanity’s winning (ecological)
strategy exploits concurrency –
we share what we know, mining
others’ prior search.
Now we do this with machine
learning.
AI is already “super-human” at
chess, go, speech transcription,
lip reading, deception detection
from posture, forging voices,
handwriting, & video, general
knowledge and memory.
This spectacular recent growth
derives from using ML to
exploit the discoveries
(previous computation) both
biological and cultural.
Becoming more intelligent does
not in itself imply becoming
more humanlike.
Intelligence is a form of computation; AI
extends & reuses ours; ML uploads ours
2015 US labor statistics
ρ = 0.90
Our implicit
behaviour is
not our ideal.
Ideals are for
explicit
planning and
cooperation.
Caliskan, Bryson & Narayanan, 2017
Why do we care about
accountability and transparency?
Accountability
Transparency
Information that lets you know
what your system is doing.
Not trust – knowledge
(requires cybersecurity.)
Responsibility assigned by a society.
Manufacturers of commercial
products are accountable for what
their products do, unless they can
prove another actor is (e.g. a user.)
Transparency allows corporations to
demonstrate due diligence – and so
limit their accountability.
• Law and Justice are more about dissuasion than recompense.
• Safe, secure, accountable software systems are modular –
suffering* in such is incoherent. *e.g. systemic dysphoria of
isolation, loss of status or wealth.
• No penalty of law enacted directly against an artefact (including a
shell company) can have efficacy.
Only Humans Can Be Accountable
Bryson, Diamantis & Grant
(AI & Law, September 2017)
1. Robots are multi-use tools. Robots should not be designed solely or
primarily to kill or harm humans, except in the interests of national
security.
2. Humans, not robots, are responsible agents. Robots should be designed
& operated as far as is practicable to comply with existing laws &
fundamental rights & freedoms, including privacy.
3. Robots are products. They should be designed using processes which
assure their safety and security. [devops]
4. Robots are manufactured artefacts. They should not be designed in a
deceptive way to exploit vulnerable users; instead their machine nature
should be transparent.
5. The person with legal responsibility for a robot should be attributed.
[like automobile titles] Boden et al 2011; cf. Bryson AISB 2000; Bryson;
Prescott, Connection Science, 2017; Floridi 2018.
UK Principles of Robotics (2011)
Asimov’s Laws revised for
Manufacturer Responsibility
Owner /
Operator
Respon-
sibility
OECD Principles of AI 2019 (endorsed by 42 governments)
1. AI should benefit people and the planet by driving inclusive growth, sustainable
development and well-being.
2. AI systems should be designed in a way that respects the rule of law, human rights,
democratic values and diversity, and they should include appropriate safeguards – 
for example, enabling human intervention where necessary – to ensure a fair and
just society.
3. There should be transparency and responsible disclosure around AI systems to
ensure that people understand when they are engaging with them [the AI systems]
and can challenge outcomes.
4. AI systems must function in a robust, secure and safe way throughout their
lifetimes, and potential risks should be continually assessed and managed.
5. Organisations and individuals developing, deploying or operating AI systems should
be held accountable for their proper functioning in line with the above principles.
The Design of Intelligence
No intelligent system springs into
existence from a concept.
Google uses only its own fiberoptic network (laid globally), chips
designed and built in-house (unlike the EU), because of cybersecurity –
even other fiberoptic cables in a bundle might spy on traffic.
AI is much more than algorithms or data.
You cannot separate the social concerns of AI from cybersecurity.
Google converts old paper mills, decommissioned coal plants into data centres.
Design and Accountability
• AI facilitates mandating transparently-honest accounts of system
engineering, both development and performance, including human
participants’.
• Log and monitor who does what, when, in terms of:
• Adding or changing lines of code
• What data & software libraries are used, their provenance & pedigree.
• Training and testing procedures, both during development and in
active use.
• The active system’s performance.
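A minimal sketch of what such logging might look like. The function and field names here are hypothetical, not a real DevOps tool; the point is that each of the items above (code changes, data provenance, training runs, live performance) becomes an attributable, tamper-evident record:

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical append-only audit log: who did what, when.
AUDIT_LOG = []

def log_event(actor: str, kind: str, detail: dict) -> dict:
    """Record an event and hash-chain it to the previous entry,
    so any later tampering with the record is evident."""
    prev = AUDIT_LOG[-1]["hash"] if AUDIT_LOG else ""
    entry = {
        "when": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "kind": kind,        # e.g. "code_change", "data_import", "training_run"
        "detail": detail,
        "prev": prev,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    AUDIT_LOG.append(entry)
    return entry

log_event("alice", "data_import", {"dataset": "corpus-v3"})
log_event("bob", "training_run", {"model": "ranker", "epochs": 10})
```

Each entry names a human actor: accountability attaches to people, not to the artefact.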
Feasibility of AI (∋ ML ∋
DNN) Transparency
• Worst case: AI is as inscrutable as humans.
• We audit accounts, not accountants’ synapses.
• Systems developers can set up (AI & human)
processes to monitor limits on performance.
• For decades we’ve trained simpler models to
inspect complex models (see recently
Ghahramani); transparent models can be better,
and easier to improve (see Rudin).
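The "simpler model inspects complex model" idea can be sketched in a few lines. The black-box scorer and the single-threshold surrogate below are both invented for illustration; the point is that the surrogate's agreement rate measures how faithful the transparent account is:

```python
import math

# A hypothetical opaque model: we can query it, but not read its internals.
def black_box(x: float) -> int:
    return int(1 / (1 + math.exp(-3 * (x - 1.2))) > 0.5)

# Probe the black box on a grid of inputs.
xs = [i / 10 for i in range(-50, 51)]
labels = [black_box(x) for x in xs]

def fit_threshold(xs, labels):
    """Fit the simplest transparent surrogate – a single threshold on x –
    to agree with the black box as often as possible."""
    best = (0.0, None)
    for t in xs:
        acc = sum(int(x > t) == y for x, y in zip(xs, labels)) / len(xs)
        if acc > best[0]:
            best = (acc, t)
    return best

fidelity, threshold = fit_threshold(xs, labels)
# Here the surrogate is perfectly faithful: the black box just tests x > 1.2.
```

When fidelity is high, the simple model is an adequate account of the complex one; when it is low, you have located exactly where the complex model resists explanation.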
facebook – Rapid
Release at Massive Scale
Chuck Rossi
https://code.facebook.com/posts/270314900139291/rapid-release-at-massive-scale
Monitors are simpler,
better-understood AI
programs that run
continuously to ensure
a system is meeting
performance metrics,
and to detect early
evidence of failures or
interference / intrusion.
Think also about every
driverless-car fatality.
Automobiles are
regulated because we
know they’re
dangerous.
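A monitor in this sense can be very small. This hypothetical watchdog tracks a rolling mean of an error metric and flags the first sign of drift past a hard limit:

```python
from collections import deque

class PerformanceMonitor:
    """A deliberately simple watchdog (illustrative, not a real product):
    track a rolling window of a metric and record an alert whenever the
    rolling mean drifts past a hard limit."""
    def __init__(self, limit: float, window: int = 50):
        self.limit = limit
        self.values = deque(maxlen=window)
        self.alerts = []

    def observe(self, step: int, value: float) -> None:
        self.values.append(value)
        rolling = sum(self.values) / len(self.values)
        if rolling > self.limit:
            self.alerts.append((step, rolling))   # early evidence of failure

# Example: monitoring a model's error rate in production.
mon = PerformanceMonitor(limit=0.10, window=20)
for step in range(100):
    error_rate = 0.05 if step < 60 else 0.20   # simulated degradation at step 60
    mon.observe(step, error_rate)
```

The monitor is simpler and better understood than the system it watches, so its own behaviour can be verified, and it flags degradation within a few steps of the drift.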
• Based on modular systems-engineering methodologies: Behavior-
Based AI (Brooks 1991), Object-Oriented Design (Parnas &al.
1985), and Agile Development (Beck 2000).
• Decompose modular systems around what needs to be known in
order to perform the systems’ tasks.
• Optimise and constrain both perception and learning to subtask.
• Provide API to exploit knowledge, derive action from knowledge.
• Set priorities of system – must both maintain currency of its
awareness / knowledge, and pursue its goals (POSH “plans” =
Behaviour Trees.)
Behaviour Oriented Design
(Bryson 2001)
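The priority-ordered reactive plans mentioned above (POSH plans / behaviour trees) can be sketched minimally. This is an illustration of the idea, not the actual POSH implementation; the triggers and action names are invented:

```python
# A priority-ordered reactive plan: each element pairs a sensory trigger
# with an action, and the highest-priority triggered element fires.
# Maintenance of awareness (battery, obstacles) outranks goal pursuit.
def make_plan():
    return [
        (lambda s: s["battery"] < 0.2, "recharge"),
        (lambda s: s["obstacle"],      "avoid"),
        (lambda s: True,               "pursue_goal"),   # default drive
    ]

def select_action(plan, state):
    """Scan the plan top-down; the first (highest-priority) match wins."""
    for trigger, action in plan:
        if trigger(state):
            return action

plan = make_plan()
assert select_action(plan, {"battery": 0.9, "obstacle": False}) == "pursue_goal"
assert select_action(plan, {"battery": 0.9, "obstacle": True})  == "avoid"
assert select_action(plan, {"battery": 0.1, "obstacle": True})  == "recharge"
```

Because the plan is an explicit, ordered data structure, a tool like ABOD3 can visualise which element is firing in real time – that is the transparency payoff.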
Wortham, Theodorou &
Bryson, RO-MAN 2017
video at 5x
Transparency for developers via real time visualised priorities
ABOD3: A BOD Environment, III (Theodorous)
video:
live:
Wortham, Theodorou & Bryson 2017
(exp 1 video)
ABOD3 also helps naïve users
Wortham PhD
(2018)
Anthropomorphising may
reduce transparency.
New research project
(funded by 2017 AXA award)
What Makes Us Responsible?
“Isn’t this a double standard?”
• The claim that purely synthetic intelligent systems shouldn’t
themselves require moral consideration is often seen as unfair.
• First Answer: Not a double standard, a recommendation. Pick criteria
for personhood, don’t build to it, because owning and selling persons
is immoral (Bryson 2010, 2018).
• Second Answer: Human ethics, even aesthetics, coevolved with our
organism. Extending it even to other biological species is difficult, even
though they share our qualia. Building AI that we can extend it to is
probably impossible and almost certainly immoral (e.g. would require
human cloning, Bryson &al. 2017).
• Third Answer: A discontinuity occurred when we {evolved | invented}
words for the concept of responsibility (Bryson 2008, 2009).
Implies Ethics Are Not Universal
• Proposition: the core of ethics is who is responsible for what.
• Moral agents are considered responsible for their actions by a society.
• Moral patients are considered the responsibility of a society’s agents.
• Proposition: ethics is the set of behaviours that creates and sustains a
society, including by defining its identity.
• We want to say e.g.“We are more ethical.” Instead, we have to name
an ethics metric, e.g.“our society is more ethical in terms of the
proportion of the population sharing economic benefits.”
What about robots’
phenomenological
experiences?
We are obliged to build AI
we are not obliged to.
We will never build something
from chips and wires that shares
our visceral experience as much
as cows (or rats) do.
What about cows’?
Bryson, 2010
Should we regulate AI?
• Yes – we already do. All commerce is regulated.
• We just need to do it better – regulatory bodies needed
that understand software / DevOps.
• Expect those who build and use AI to be accountable, to
be able to prove due diligence.
• Work with and innovate governments to ensure
adequate redistribution (investment in infrastructure).
Aylin Caliskan
@aylin_cim
Thanks to my collaborators, and to you for
your attention.
Andreas Theodorou
@recklessCoding
Tom Dale Grant
Mihailis E.
Diamantis
... and the rest of Amoni
Holly Wilson
@wilsh010
Nolan McCarty
@nolan_mc
Alex
Stewart
@al_cibiades
Arvind Narayanan
@random_walker
Regulating AI
• Do not reward corporations by capping liabilities when they
fully automate business processes – Legal lacuna of synthetic
persons (Bryson, Diamantis & Grant 2017.)
• Do not motivate obfuscation of systems by reducing liabilities
for badly-tested or poorly-monitored learning, or special status
for systems with ill-defined properties, such as ‘consciousness’.
• Clear code is safer and can be more easily maintained, but
messy code is cheaper to produce (in the short run.)
• Regulation should motivate clarity (transparency, safety) by
requiring proof of due diligence.
Summary and Conclusions
• AI is by definition an artefact – something deliberately
built. That deliberation has ethical consequences.
• Augmenting our intelligence through technology defines
our societies, and has led to our ecological domination.
• We are obliged to each other to design AI accountably.
• We are obliged not to build AI we are obliged to.
• Assuming we deem as ethical an equitable society,
sufficiently stable to build businesses and families.
Social Disruption from
AI / ICT
• Empowerment of individuals.
• Rapid formation of new social identities.
• Dissipation of distance leading to:
• communication of wealth and power across national borders.
• concentration of wealth / business ⟹ inequality
Inequality
Matters
Empirically,
Gini ≈ .27 is about ideal.
0 is too low (we need to reward
excellence);
.3–.4 brings social disruption;
> .4, economies decline.
$$G = \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} |x_i - x_j|}{2n\sum_{i=1}^{n} x_i}$$
The Gini coefficient is half of
the relative mean absolute
difference in wealth.
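That definition translates directly into code; a minimal sketch:

```python
def gini(x):
    """Half the relative mean absolute difference in wealth."""
    n = len(x)
    mad = sum(abs(xi - xj) for xi in x for xj in x)   # Σ_i Σ_j |x_i − x_j|
    return mad / (2 * n * sum(x))

assert gini([1, 1, 1, 1]) == 0.0   # perfect equality
```

Note that for finite populations the maximum is (n − 1)/n rather than 1: with two people and one holding everything, gini([0, 1]) is 0.5.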
We can fix this.
Polarization and the Top 1%
r = .67
Polarization lagged 12 years r = .91
[Figure: income share of the top 1% (percentage share) and the House polarization index, 1913–2009.]
Figure 1.2: Top One Percent Income Share and House Polarization
Voorheis, McCarty & Shor State Income Inequality and Political Polarization
We’ve Been Here Before
Scheidel, 2017
Polarization over 140 Years
r = .89
[Figure: House and Senate polarization indices, 1877–2013.]
Voorheis, McCarty & Shor State Income Inequality and Political Polarization
• Late 19C inequality perhaps driven by then-new distance-reducing
technologies: news, oil, rail, telegraph; now bootstrapped by ICT?
• Great coupling – period of low inequality where wages track
productivity – probably due to policy.
What about jobs?
AI, Employment,
and Wages
• We have more AI than ever, and
more jobs than ever (Autor, 2015,
“Why are there still so many
jobs?”).
• AI may be increasing inequality by
making skills easier to acquire.
This erodes skill-based wage
differentiation – a factor believed
to have driven redistribution.
• Example 1: There are more human
bank tellers since ATMs, because
each branch has fewer, so branches
are cheaper, so more branches.
• Tellers are now better paid, but
fewer branch managers, who used
to be really well paid.
• Example 2: There aren’t enough
truck drivers, because it’s no longer
a well-paid job.
• GPS + power steering = anyone
can do it.
Jakub Bartoszek (Samsung Electronics) - Hardware Security in Connected World
 
Jair Ribeiro - Defining a Successful Artificial Intelligence Strategy for you...
Jair Ribeiro - Defining a Successful Artificial Intelligence Strategy for you...Jair Ribeiro - Defining a Successful Artificial Intelligence Strategy for you...
Jair Ribeiro - Defining a Successful Artificial Intelligence Strategy for you...
 
Cindy Spelt (Zoom In Zoom Out) - How to beat the face recognition challenges?
Cindy Spelt (Zoom In Zoom Out) - How to beat the face recognition challenges?Cindy Spelt (Zoom In Zoom Out) - How to beat the face recognition challenges?
Cindy Spelt (Zoom In Zoom Out) - How to beat the face recognition challenges?
 
Alexey Borisenko (Cisco) - Creating IoT solution using LoRaWAN Network Server
Alexey Borisenko (Cisco) - Creating IoT solution using LoRaWAN Network ServerAlexey Borisenko (Cisco) - Creating IoT solution using LoRaWAN Network Server
Alexey Borisenko (Cisco) - Creating IoT solution using LoRaWAN Network Server
 

Dernier

Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Safe Software
 
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Victor Rentea
 

Dernier (20)

Cyberprint. Dark Pink Apt Group [EN].pdf
Cyberprint. Dark Pink Apt Group [EN].pdfCyberprint. Dark Pink Apt Group [EN].pdf
Cyberprint. Dark Pink Apt Group [EN].pdf
 
ICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesICT role in 21st century education and its challenges
ICT role in 21st century education and its challenges
 
Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...
Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...
Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...
 
Ransomware_Q4_2023. The report. [EN].pdf
Ransomware_Q4_2023. The report. [EN].pdfRansomware_Q4_2023. The report. [EN].pdf
Ransomware_Q4_2023. The report. [EN].pdf
 
MS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectorsMS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectors
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
 
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
 
DBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor Presentation
 
CNIC Information System with Pakdata Cf In Pakistan
CNIC Information System with Pakdata Cf In PakistanCNIC Information System with Pakdata Cf In Pakistan
CNIC Information System with Pakdata Cf In Pakistan
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 
Manulife - Insurer Transformation Award 2024
Manulife - Insurer Transformation Award 2024Manulife - Insurer Transformation Award 2024
Manulife - Insurer Transformation Award 2024
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
 
Apidays New York 2024 - Passkeys: Developing APIs to enable passwordless auth...
Apidays New York 2024 - Passkeys: Developing APIs to enable passwordless auth...Apidays New York 2024 - Passkeys: Developing APIs to enable passwordless auth...
Apidays New York 2024 - Passkeys: Developing APIs to enable passwordless auth...
 
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ..."I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
 
Spring Boot vs Quarkus the ultimate battle - DevoxxUK
Spring Boot vs Quarkus the ultimate battle - DevoxxUKSpring Boot vs Quarkus the ultimate battle - DevoxxUK
Spring Boot vs Quarkus the ultimate battle - DevoxxUK
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024
 

Joanna Bryson (University of Bath) - Intelligence by Design_ Systems engineering for the coming era of AI regulation

  • 9. Accountability and Transparency.
Transparency: information that lets you know what your system is doing – not trust, but knowledge (and knowledge requires cybersecurity).
Accountability: responsibility assigned by a society. Manufacturers of commercial products are accountable for what their products do, unless they can prove another actor is (e.g. a user).
Transparency allows corporations to demonstrate due diligence, limiting their accountability.
  • 10. Only Humans Can Be Accountable (Bryson, Diamantis & Grant, AI & Law, September 2017)
• Law and justice are more about dissuasion than recompense.
• Safe, secure, accountable software systems are modular – suffering* in such systems is incoherent. (*e.g. the systemic dysphoria of isolation, or of loss of status or wealth.)
• No penalty of law enacted directly against an artefact (including a shell company) can have efficacy.
  • 11. UK Principles of Robotics (2011) – Asimov’s Laws revised for responsibility.
1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
2. Humans, not robots, are responsible agents. Robots should be designed and operated as far as is practicable to comply with existing laws and fundamental rights and freedoms, including privacy.
3. Robots are products. They should be designed using processes which assure their safety and security. [devops]
4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
5. The person with legal responsibility for a robot should be attributed. [like automobile titles]
(Principles 1–4 concern manufacturer responsibility; principle 5 concerns owner / operator responsibility.)
Boden et al. 2011; cf. Bryson AISB 2000; Bryson and Prescott, Connection Science, 2017; Floridi 2018.
  • 12. OECD Principles of AI 2019 (endorsed by 42 governments)
1. AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
2. AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
3. There should be transparency and responsible disclosure around AI systems to ensure that people understand when they are engaging with them [the AI systems] and can challenge outcomes.
4. AI systems must function in a robust, secure and safe way throughout their lifetimes, and potential risks should be continually assessed and managed.
5. Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
  • 14. The Design of Intelligence
  • 15. No intelligent system springs into existence from a concept.
  • 16. AI is much more than algorithms or data: you cannot separate the social concerns of AI from cybersecurity. Google uses only its own fiberoptic network (laid globally) and chips designed and built in-house (unlike the EU), because of cybersecurity – even other fiberoptic cables in the same bundle might be used to spy on traffic. Google converts old paper mills and decommissioned coal plants into data centres.
  • 17. Design and Accountability
• AI facilitates mandating transparently honest accounts of system engineering, both development and performance, including human participants’.
• Log and monitor who does what, when, in terms of:
  • adding or changing lines of code;
  • what data and software libraries are used, their provenance and pedigree;
  • training and testing procedures, both during development and in active use;
  • the active system’s performance.
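The kind of logging the slide calls for can be sketched as a tamper-evident, append-only audit trail. This is an illustrative sketch, not a production system; the function names (`log_event`, `verify_chain`) and record fields are assumptions of the example. Each record hash-chains to the previous one, so a later edit to any record breaks verification.

```python
import hashlib
import json
import time
from pathlib import Path

def log_event(log_path, actor, action, artefact):
    """Append one tamper-evident audit record: who did what, when, to which artefact."""
    log_path = Path(log_path)
    prev_hash = "0" * 64  # sentinel for the first record
    if log_path.exists():
        last_line = log_path.read_text().strip().splitlines()[-1]
        prev_hash = hashlib.sha256(last_line.encode()).hexdigest()
    record = {
        "time": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "actor": actor,        # who
        "action": action,      # did what
        "artefact": artefact,  # to which code / data / model
        "prev": prev_hash,     # hash chain for tamper evidence
    }
    with log_path.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record

def verify_chain(log_path):
    """Return True iff no record has been altered since it was written."""
    prev_hash = "0" * 64
    for line in Path(log_path).read_text().strip().splitlines():
        if json.loads(line)["prev"] != prev_hash:
            return False
        prev_hash = hashlib.sha256(line.encode()).hexdigest()
    return True
```

A real deployment would also want access control and off-site replication of the log, but even this sketch shows that accountable development records are cheap to produce.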
  • 18. Feasibility of AI (∋ ML ∋ DNN) Transparency
• Worst case: AI is as inscrutable as humans.
• We audit accounts, not accountants’ synapses.
• Systems developers can set up (AI and human) processes to monitor limits on performance.
• For decades we’ve trained simpler models to inspect complex models (see recently Ghahramani); transparent models can be better, and easier to improve (see Rudin).
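The last bullet – training a simpler model to inspect a complex one – can be sketched as a surrogate model: a small decision tree fitted to a black-box model's predictions, so its rules approximate (and expose) the black box's behaviour. The dataset and hyperparameters below are arbitrary illustrations, assuming scikit-learn is available.

```python
# Transparency by surrogate: fit an inspectable model to an opaque one.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# The "black box" we want to inspect.
complex_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The surrogate trains on the complex model's *predictions*, not the
# original labels, so it approximates the model itself.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, complex_model.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == complex_model.predict(X)).mean()
print(export_text(surrogate))  # human-readable decision rules
print(f"fidelity: {fidelity:.2f}")
```

The fidelity score tells an auditor how far the readable rules can be trusted as an account of the opaque system; low fidelity is itself a useful red flag.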
  • 19. facebook – Rapid Release at Massive Scale (Chuck Rossi, https://code.facebook.com/posts/270314900139291/rapid-release-at-massive-scale)
Monitors are simpler, better-understood AI programs that run continuously to ensure a system is meeting performance metrics, and to detect early evidence of failures or interference / intrusion.
Think also about every driverless-car fatality: automobiles are regulated because we know they’re dangerous.
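A monitor in this sense can be very simple – a deliberately transparent program that checks a deployed system's rolling metrics against agreed limits. A minimal sketch (the class name, metrics, and thresholds are illustrative assumptions, not any particular company's system):

```python
from collections import deque

class Monitor:
    """Continuously check a deployed system against agreed performance limits."""

    def __init__(self, window=100, min_accuracy=0.9, max_latency_ms=250):
        self.outcomes = deque(maxlen=window)   # rolling record of correct/incorrect
        self.latencies = deque(maxlen=window)  # rolling record of response times
        self.min_accuracy = min_accuracy
        self.max_latency_ms = max_latency_ms

    def observe(self, correct, latency_ms):
        """Record one observation of the monitored system."""
        self.outcomes.append(bool(correct))
        self.latencies.append(latency_ms)

    def alarms(self):
        """Return a list describing every limit currently being violated."""
        problems = []
        if self.outcomes:
            acc = sum(self.outcomes) / len(self.outcomes)
            if acc < self.min_accuracy:
                problems.append(f"accuracy {acc:.2f} below {self.min_accuracy}")
        if self.latencies and max(self.latencies) > self.max_latency_ms:
            problems.append("latency limit exceeded")
        return problems
```

Because the monitor is far simpler than the system it watches, its own behaviour is easy to audit – the point the slide makes about transparency.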
  • 20. Behaviour Oriented Design (Bryson 2001)
• Based on modular systems-engineering methodologies: Behavior-Based AI (Brooks 1991), Object-Oriented Design (Parnas et al. 1985), and Agile Development (Beck 2000).
• Decompose modular systems around what needs to be known in order to perform the systems’ tasks.
• Optimise and constrain both perception and learning to the subtask.
• Provide an API to exploit knowledge, and derive action from knowledge.
• Set the priorities of the system – it must both maintain the currency of its awareness / knowledge and pursue its goals (POSH “plans” = Behaviour Trees).
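The flavour of a POSH-style reactive plan can be conveyed in a few lines: an ordered list of (trigger, action) pairs checked every tick, so maintenance goals pre-empt task goals. This is not Bryson's actual POSH implementation – the state keys and action names here are invented for illustration.

```python
# A toy priority-ordered reactive plan for a cleaning robot.
def make_plan():
    return [
        (lambda s: s["battery"] < 20, "recharge"),  # maintenance goal, highest priority
        (lambda s: s["dirt_seen"],    "clean"),     # task goal
        (lambda s: True,              "patrol"),    # default behaviour
    ]

def tick(plan, state):
    """Fire the highest-priority element whose trigger holds in this state."""
    for trigger, action in plan:
        if trigger(state):
            return action
```

Because the plan is an explicit, ordered data structure rather than learned weights, the system's priorities can be read, audited, and visualised directly – the property the ABOD3 slides that follow build on.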
  • 21. ABOD3: A BOD Environment, III (Theodorou) – transparency for developers via real-time visualised priorities. (Wortham, Theodorou & Bryson, RO-MAN 2017; video at 5x.)
  • 22. ABOD3 also helps naïve users. (Wortham, Theodorou & Bryson 2017, experiment 1 video.)
  • 23. Anthropomorphising may reduce transparency (Wortham PhD, 2018). New research project (funded by a 2017 AXA award).
  • 24. What Makes Us Responsible? “Isn’t this a double standard?”
  • 25. • The claim that purely synthetic intelligent systems shouldn’t themselves require moral consideration is often seen as unfair.
• First answer: not a double standard, but a recommendation. Pick criteria for personhood, then don’t build to them, because owning and selling persons is immoral (Bryson 2010, 2018).
• Second answer: human ethics, even aesthetics, coevolved with our organism. Extending it even to other biological species is difficult, even though they share our qualia. Building AI that we can extend it to is probably impossible and almost certainly immoral (e.g. it would require human cloning; Bryson et al. 2017).
• Third answer: a discontinuity occurred when we {evolved | invented} words for the concept of responsibility (Bryson 2008, 2009).
  • 27. • Third answer: a discontinuity occurred when we {evolved | invented} words for the concept of responsibility (Bryson 2008, 2009). This implies ethics are not universal.
• Proposition: the core of ethics is who is responsible for what. Moral agents are considered responsible for their actions by a society; moral patients are considered the responsibility of a society’s agents.
• Proposition: ethics is the set of behaviours that creates and sustains a society, including by defining its identity.
• We want to say e.g. “We are more ethical.” Instead, we have to name an ethics metric, e.g. “our society is more ethical in terms of the proportion of the population sharing economic benefits.”
  • 28. What about robots’ phenomenological experiences? We will never build something from chips and wires that shares our visceral experience as much as cows (or rats) do – and what about cows’ experiences? We are obliged to build AI we are not obliged to (Bryson, 2010).
  • 29. Should we regulate AI?
• Yes – we already do. All commerce is regulated.
• We just need to do it better – we need regulatory bodies that understand software / DevOps.
• Expect those who build and use AI to be accountable, and to be able to prove due diligence.
• Work with, and innovate, governments to ensure adequate redistribution (investment in infrastructure).
  • 30. Aylin Caliskan @aylin_cim Thanks to my collaborators, and to you for your attention. Andreas Theodorou @recklessCoding Tom Dale Grant Mihailis E. Diamantis ... and the rest of Amoni Holly Wilson @wilsh010 Nolan McCarty @nolan_mc Alex Stewart @al_cibiades Arvind Narayanan @random_walker
  • 31. Regulating AI
• Do not reward corporations by capping liabilities when they fully automate business processes – the legal lacuna of synthetic persons (Bryson, Diamantis & Grant 2017).
• Do not motivate obfuscation of systems by reducing liabilities for badly-tested or poorly-monitored learning, or by granting special status to systems with ill-defined properties such as ‘consciousness’.
• Clear code is safer and can be more easily maintained, but messy code is cheaper to produce (in the short run).
• Regulation should motivate clarity (transparency, safety) by requiring proof of due diligence.
  • 32. Summary and Conclusions
• AI is by definition an artefact – something deliberately built. That deliberation has ethical consequences.
• Augmenting our intelligence through technology defines our societies, and has led to our ecological domination.
• We are obliged to each other to design AI accountably.
• We are obliged not to build AI we would be obliged to.
• All this assumes we deem ethical an equitable society, sufficiently stable to build businesses and families.
  • 33. Social Disruption from AI / ICT
• Empowerment of individuals.
• Rapid formation of new social identities.
• Dissipation of distance, leading to:
  • communication of wealth and power across national borders;
  • concentration of wealth / business ⟹ inequality.
  • 34. Inequality Matters
The Gini coefficient is half of the relative mean absolute difference in wealth:
G = \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} |x_i - x_j|}{2n \sum_{i=1}^{n} x_i}
Empirically, Gini ≈ .27 is about ideal: 0 is too low (we need to reward excellence); .3–.4 brings social disruption; above .4, economies decline.
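The slide's formula transcribes directly into code; a minimal sketch:

```python
def gini(x):
    """Gini coefficient: half the relative mean absolute difference in wealth.

    Implements G = (sum_i sum_j |x_i - x_j|) / (2 n sum_i x_i)
    for a list of non-negative wealth values x.
    """
    n = len(x)
    mean_abs_diff = sum(abs(xi - xj) for xi in x for xj in x)
    return mean_abs_diff / (2 * n * sum(x))
```

Perfect equality (everyone holds the same wealth) gives 0; concentrating all wealth in one of n hands drives the coefficient toward 1 as n grows.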
  • 35. We’ve Been Here Before (Scheidel, 2017) – we can fix this.
[Figure 1.2: Top One Percent Income Share and House Polarization – income share of the top 1% vs. the polarization index, 1913–2009: r = .67, or r = .91 with polarization lagged 12 years; House and Senate polarization over 140 years (1877–2013): r = .89. Voorheis, McCarty & Shor, State Income Inequality and Political Polarization.]
• Late-19th-century inequality was perhaps driven by then-new distance-reducing technologies – news, oil, rail, telegraph – now bootstrapped by ICT?
• The great coupling – a period of low inequality where wages track productivity – was probably due to policy.
  • 37. AI, Employment, and Wages
• We have more AI than ever, and more jobs than ever (Autor, 2015, “Why are there still so many jobs?”).
• AI may nonetheless be increasing inequality: by making it easier to acquire skills, it reduces an aspect of wage differentiation – a factor believed to benefit redistribution.
• Example 1: there are more human bank tellers since ATMs, because each branch needs fewer tellers, so branches are cheaper, so there are more branches. Tellers are now better paid, but there are fewer branch managers, who used to be really well paid.
• Example 2: there aren’t enough truck drivers, because driving is no longer a well-paid job – GPS + power steering means anyone can do it.