Joanna Bryson (University of Bath) – Intelligence by Design: Systems engineering for the coming era of AI regulation
1. Joanna J. Bryson
University of Bath, United Kingdom
@j2bryson
Intelligence by Design
Software engineering for
transparency and accountability
2. Who is responsible for AI?
How do we govern the
commercial use of AI?
What is AI?
3. Intelligence is the capacity to do the
right thing at the right time – to
translate perception into action.
Artificial Intelligence is a trait of
artefacts, deliberately built to
facilitate our intentions.
Nothing about intelligence changes
responsibility for that deliberate act.
4. Intelligence is computation – a transformation of information. Not math.
Computation is a physical process, taking time, energy, & space.
Finding the right thing to do at the right time requires search.
Cost of search = (# of options)^(# of acts) (serial computing).
Examples:
• Any sequence of 2 of 100 possible actions = 100² = 10,000 possible plans.
• # of 35-move games of chess > # of atoms in the universe.
Concurrency can save real time, but not energy, and requires more space.
Quantum saves on space (sometimes) but not energy(?)
Omniscience (“AGI”) is not a real threat. No one algorithm can solve all of AI.
Viv Kendon, Durham
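The combinatorics behind the cost-of-search claim can be checked directly. A minimal sketch in Python; the branching factor of roughly 30 legal chess moves per ply is a standard estimate, not a figure from the slide:

```python
import math

# Cost of serial search = (# of options) ** (# of acts).
n_options, n_acts = 100, 2
print(n_options ** n_acts)  # 10000 possible 2-step plans

# Chess: assuming ~30 legal moves per ply, a 35-move (70-ply) game
# tree has about 30**70 paths -- more than the ~10**80 atoms
# estimated in the observable universe.
print(math.log10(30 ** 70) > 80)  # True
```

The exponential form is the point: adding one more act multiplies, not adds, to the search cost.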
6. AI is already “super-human” at
chess, go, speech transcription,
lip reading, deception detection
from posture, forging voices,
handwriting, & video, general
knowledge and memory.
This spectacular recent growth
derives from using ML to
exploit prior discoveries
(previous computation), both
biological and cultural.
Becoming more intelligent does
not in itself imply becoming
more humanlike.
7. Intelligence is a form of computation; AI extends & reuses ours; ML uploads ours.
[Figure: machine-learned associations vs. 2015 US labor statistics, ρ = 0.90.]
Our implicit behaviour is not our ideal. Ideals are for explicit planning and cooperation.
Caliskan, Bryson & Narayanan, 2017
8. Why do we care about
accountability and transparency?
9. Accountability & Transparency
Transparency: information that lets you know what your system is doing.
Not trust – knowledge (requires cybersecurity).
Accountability: responsibility assigned by a society.
Manufacturers of commercial products are accountable for what
their products do, unless they can prove another actor is (e.g. a user).
Transparency allows corporations to demonstrate due diligence –
and thereby limit their accountability.
10. • Law and Justice are more about dissuasion than recompense.
• Safe, secure, accountable software systems are modular –
suffering* in such is incoherent. *e.g. systemic dysphoria of
isolation, loss of status or wealth.
• No penalty of law enacted directly against an artefact (including a
shell company) can have efficacy.
Only Humans Can Be Accountable
Bryson, Diamantis & Grant
(AI & Law, September 2017)
11. 1. Robots are multi-use tools. Robots should not be designed solely or
primarily to kill or harm humans, except in the interests of national
security.
2. Humans, not robots, are responsible agents. Robots should be designed
& operated as far as is practicable to comply with existing laws &
fundamental rights & freedoms, including privacy.
3. Robots are products. They should be designed using processes which
assure their safety and security. [devops]
4. Robots are manufactured artefacts. They should not be designed in a
deceptive way to exploit vulnerable users; instead their machine nature
should be transparent.
5. The person with legal responsibility for a robot should be attributed.
[like automobile titles]
Boden et al. 2011; cf. Bryson, AISB 2000; Bryson, Connection Science, 2017;
Prescott, Connection Science, 2017; Floridi 2018.
UK Principles of Robotics (2011)
Asimov’s Laws revised for Manufacturer Responsibility
and Owner / Operator Responsibility
12. OECD Principles of AI 2019 (endorsed by 42 governments)
1. AI should benefit people and the planet by driving inclusive growth, sustainable
development and well-being.
2. AI systems should be designed in a way that respects the rule of law, human rights,
democratic values and diversity, and they should include appropriate safeguards –
for example, enabling human intervention where necessary – to ensure a fair and
just society.
3. There should be transparency and responsible disclosure around AI systems to
ensure that people understand when they are engaging with them [the AI systems]
and can challenge outcomes.
4. AI systems must function in a robust, secure and safe way throughout their
lifetimes, and potential risks should be continually assessed and managed.
5. Organisations and individuals developing, deploying or operating AI systems should
be held accountable for their proper functioning in line with the above principles.
16. Google uses only its own fibre-optic network (laid globally) and chips
designed and built in-house (unlike the EU), because of cybersecurity –
even other fibre-optic cables in the same bundle might spy on traffic.
AI is much more than algorithms or data.
You cannot separate the social concerns of AI from cybersecurity.
Google converts old paper mills and decommissioned coal plants into data centres.
17. Design and Accountability
• AI facilitates mandating transparently-honest accounts of system
engineering, both development and performance, including human
participants’.
• Log and monitor who does what, when, in terms of:
• Adding or changing lines of code
• What data & software libraries are used, their provenance & pedigree.
• Training and testing procedures, both during development and in
active use.
• The active system’s performance.
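The logging the slide calls for amounts to append-only, structured records of who did what, when, to which artefact. A minimal sketch; all identifiers and the provenance fields are hypothetical, invented for illustration:

```python
import json
import time

def audit_record(actor, action, artefact, provenance=None):
    """Append-only structured log entry for accountability audits."""
    return {
        "timestamp": time.time(),
        "actor": actor,            # human or system identity
        "action": action,          # e.g. "code_change", "train", "deploy"
        "artefact": artefact,      # file, model, or dataset affected
        "provenance": provenance,  # e.g. upstream library & version
    }

# Hypothetical entries covering code changes and training runs.
log = [
    audit_record("a.developer", "code_change", "planner.py"),
    audit_record("ci-bot", "train", "model-v3",
                 provenance={"dataset": "corpus-2019", "lib": "sklearn 0.21"}),
]
print(json.dumps(log, indent=2, default=str))
```

The same record shape covers all four bullets above: code changes, data and library provenance, training/testing runs, and live performance, differing only in the `action` and `provenance` fields.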
18. Feasibility of AI (⊇ ML ⊇ DNN) Transparency
• Worst case: AI is as inscrutable as humans.
• We audit accounts, not accountants’ synapses.
• Systems developers can set up (AI & human)
processes to monitor limits on performance.
• For decades we’ve trained simpler models to
inspect complex models (see recently
Ghahramani); transparent models can be better,
and easier to improve (see Rudin).
19. facebook – Rapid
Release at Massive Scale
Chuck Rossi
https://code.facebook.com/posts/270314900139291/rapid-release-at-massive-scale
Monitors are simpler,
better-understood AI
programs that run
continuously to ensure
a system is meeting
performance metrics,
and to detect early
evidence of failures or
interference / intrusion.
Think also about every
driverless car fatality.
Automobiles are
regulated because we
know they’re
dangerous.
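A monitor in the slide's sense can be very simple: a well-understood program checking a complex system's metrics against allowed ranges. A toy illustration, with thresholds and metric names invented for the example:

```python
def monitor(metrics, limits):
    """Return the list of metrics that fall outside their allowed range."""
    alerts = []
    for name, value in metrics.items():
        lo, hi = limits[name]
        if not (lo <= value <= hi):
            alerts.append(name)
    return alerts

# Hypothetical performance limits for a deployed model.
limits = {"accuracy": (0.90, 1.0), "latency_ms": (0.0, 50.0)}
print(monitor({"accuracy": 0.95, "latency_ms": 80.0}, limits))  # ['latency_ms']
print(monitor({"accuracy": 0.95, "latency_ms": 10.0}, limits))  # []
```

Because the monitor is simpler than the system it watches, its own behaviour stays auditable even when the monitored model is not.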
20. • Based on modular systems-engineering methodologies: Behavior-
Based AI (Brooks 1991), Object-Oriented Design (Parnas et al.
1985), and Agile Development (Beck 2000).
• Decompose modular systems around what needs to be known in
order to perform the systems’ tasks.
• Optimise and constrain both perception and learning to subtask.
• Provide API to exploit knowledge, derive action from knowledge.
• Set priorities of system – must both maintain currency of its
awareness / knowledge, and pursue its goals (POSH “plans” =
Behaviour Trees.)
Behaviour Oriented Design
(Bryson 2001)
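The priority structure of a POSH plan can be sketched in a few lines: the highest-priority element whose trigger fires gets to act, which is how maintenance of awareness can pre-empt goal pursuit. A minimal illustration, with the triggers and action names invented for the example:

```python
def posh_step(plan, state):
    """Run one decision cycle: fire the first triggered element."""
    for trigger, action in plan:          # plan is ordered by priority
        if trigger(state):
            return action
    return "idle"

# Priorities: maintain currency of awareness before pursuing goals.
plan = [
    (lambda s: s["battery"] < 0.2, "recharge"),
    (lambda s: s["task_pending"],  "do_task"),
]
print(posh_step(plan, {"battery": 0.1, "task_pending": True}))   # recharge
print(posh_step(plan, {"battery": 0.9, "task_pending": True}))   # do_task
```

Running the cycle continuously gives reactive behaviour; the ordered list itself is the transparent record of the system's priorities.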
Wortham, Theodorou &
Bryson, RO-MAN 2017
video at 5x
Transparency for developers via real-time visualised priorities
ABOD3: A BOD Environment, III (Theodorou)
24. What Makes Us Responsible?
“Isn’t this a double standard?”
25. • The claim that purely synthetic intelligent systems shouldn’t
themselves require moral consideration is often seen as unfair.
• First Answer: Not a double standard, a recommendation. Pick criteria
for personhood, don’t build to it, because owning and selling persons
is immoral (Bryson 2010, 2018).
• Second Answer: Human ethics, even aesthetics, coevolved with our
organism. Extending it even to other biological species is difficult, even
though they share our qualia. Building AI that we can extend it to is
probably impossible and almost certainly immoral (e.g. would require
human cloning, Bryson &al. 2017).
• Third Answer: A discontinuity occurred when we {evolved | invented}
words for the concept of responsibility (Bryson 2008, 2009).
26. • Third Answer: A discontinuity occurred when we {evolved | invented}
words for the concept of responsibility (Bryson 2008, 2009).
27. • Third Answer: A discontinuity occurred when we {evolved | invented}
words for the concept of responsibility (Bryson 2008, 2009).
Implies Ethics Are Not Universal
• Proposition: the core of ethics is who is responsible for what.
• Moral agents are considered responsible for their actions by a society.
• Moral patients are considered the responsibility of a society’s agents.
• Proposition: ethics is the set of behaviours that creates and sustains a
society, including by defining its identity.
• We want to say e.g.“We are more ethical.” Instead, we have to name
an ethics metric, e.g.“our society is more ethical in terms of the
proportion of the population sharing economic benefits.”
28. What about robots’ phenomenological experiences? What about cows’?
We are obliged to build AI
we are not obliged to.
We will never build something
from chips and wires that shares
our visceral experience as much
as cows (or rats) do.
Bryson, 2010
29. Should we regulate AI?
• Yes – we already do. All commerce is regulated.
• We just need to do it better – regulatory bodies needed
that understand software / DevOps.
• Expect those who build and use AI to be accountable, to
be able to prove due diligence.
• Work with governments, and innovate in governance, to ensure
adequate redistribution (investment in infrastructure).
30. Thanks to my collaborators, and to you for your attention.
Aylin Caliskan @aylin_cim
Andreas Theodorou @recklessCoding
Tom Dale Grant
Mihailis E. Diamantis
Holly Wilson @wilsh010
Nolan McCarty @nolan_mc
Alex Stewart @al_cibiades
Arvind Narayanan @random_walker
... and the rest of Amoni
31. Regulating AI
• Do not reward corporations by capping liabilities when they
fully automate business processes – the legal lacuna of synthetic
persons (Bryson, Diamantis & Grant 2017).
• Do not motivate obfuscation of systems by reducing liabilities
for badly-tested or poorly-monitored learning, or special status
for systems with ill-defined properties, such as ‘consciousness’.
• Clear code is safer and can be more easily maintained, but
messy code is cheaper to produce (in the short run).
• Regulation should motivate clarity (transparency, safety) by
requiring proof of due diligence.
32. Summary and Conclusions
• AI is by definition an artefact – something deliberately
built. That deliberation has ethical consequences.
• Augmenting our intelligence through technology defines
our societies, and has led to our ecological domination.
• We are obliged to each other to design AI accountably.
• We are obliged not to build AI we are obliged to.
• Assuming we deem as ethical an equitable society,
sufficiently stable to build businesses and families.
33. Social Disruption from
AI / ICT
• Empowerment of individuals.
• Rapid formation of new social identities.
• Dissipation of distance leading to:
• communication of wealth and power across national borders.
• concentration of wealth / business ⟹ inequality
34. Inequality Matters
Empirically, Gini ≈ .27 is ideal.
0 is too low (need to reward excellence);
.3–.4 brings social disruption;
> .4 and economies decline.
The Gini coefficient is half of the relative mean absolute difference in wealth:

G = \frac{\sum_{i=1}^{n} \sum_{j=1}^{n} |x_i - x_j|}{2n \sum_{i=1}^{n} x_i}
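The Gini coefficient is easy to compute directly from its definition as half the relative mean absolute difference; the example wealth vectors below are invented for illustration:

```python
def gini(xs):
    """Gini coefficient: half the relative mean absolute difference."""
    n = len(xs)
    mad = sum(abs(xi - xj) for xi in xs for xj in xs)  # sum_i sum_j |x_i - x_j|
    return mad / (2 * n * sum(xs))

print(gini([1, 1, 1, 1]))            # 0.0  -- perfect equality
print(round(gini([0, 0, 0, 4]), 2))  # 0.75 -- one person holds everything
```

With one holder of all wealth the coefficient is (n − 1)/n, approaching 1 as the population grows, which is why 0 and 1 bound the scale.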
35. We can fix this.
Polarization and the Top 1%
[Figure 1.2: Top One Percent Income Share and House Polarization – income
share of top 1% vs. House polarization index, 1913–2009; r = .67, or r = .91
with polarization lagged 12 years. Voorheis, McCarty & Shor, State Income
Inequality and Political Polarization.]
We’ve Been Here Before (Scheidel, 2017)
Polarization over 140 Years
[Figure: House and Senate polarization, 1877–2013; r = .89. Voorheis,
McCarty & Shor, State Income Inequality and Political Polarization.]
• Late 19C inequality perhaps driven by then-new distance-reducing
technologies: news, oil, rail, telegraph; now bootstrapped by ICT?
• Great coupling – period of low inequality where wages track
productivity – probably due to policy.
37. AI, Employment, and Wages
• We have more AI than ever, & more jobs than ever (Autor, 2015,
“Why Are There Still So Many Jobs?”).
• AI may be increasing inequality by making it easier to acquire
skills, reducing an aspect of wage differentiation – a factor
believed to benefit redistribution.
• Example 1: There are more human bank tellers since ATMs: each
branch needs fewer tellers, so branches are cheaper, so there are
more branches.
• Tellers are now better paid, but there are fewer branch managers,
who used to be really well paid.
• Example 2: There aren’t enough truck drivers, because it’s no
longer a well-paid job.
• GPS + power steering = anyone can do it.