1.
Lethal Autonomy.
Should there be a law against it?
Sander Rabin MD JD
The Center for Transhuman Jurisprudence, Inc.
The Future of our Minds, Bodies and Genomes
legal #heet
at
Bots and Brains NYC
15 December 2014
2. Ophthalmic Surgeon
Litigator
Nuclear Engineer
The Center for
Transhuman
Jurisprudence
journey to #heet
Patent Attorney
8.
An Artificial Intelligence Insight Structure
Seeing through the Hype
9. The mind is a set of mental faculties that
enables cognition, i.e.
Thinking
Memorizing
Imagining
Intending
What is the Mind?
10. Although intuitive, consciousness eludes definition; it may be
the sum of:
Sentience: Ability to Sense, Feel, or Experience
Perception: Transduction, Interpretation
Awareness: Ability to Experience the World
Self-Awareness: Ability to Experience the Self
What is Consciousness?
11. ▪︎ The Self is an idea:
an integrated system for representing a human sustained
over changing patterns of neurosynaptic connections and
activity
What is the Self?
12. Qualia & NCC: How are phenomena experienced?
Mental states are experienced subjectively in different ways
by different people, e.g., seeing red or feeling pain
13.
The Explanatory Gap
Experience arises from a physical basis, but there is no explanation
of why and how the physical becomes mental
14. The hard problem of
consciousness
Is consciousness Turing computable?
How and why do we have qualia?
The Hard Problem of Consciousness
How does a lump of fatty tissue and some electricity give rise to
the experience of perceiving, meaning, or thinking?
17.
Explanation is Description that
fits the New into the Prevailing Paradigm
18. ▪︎ Turing Test and The Chinese Room?
Turing Test: Machine intelligent if responses to questions
indistinguishable from human – a successful masquerade
Turing Test Intelligence: Symbolic knowledge (representation)
plus logical manipulation
Problem: Information = Representation + Interpretation
(i.e., meaning)
Chinese Room: thought experiment against conclusion that
computer passing Turing Test is intelligent
How does Information acquire Meaning?
19. ▪︎ No consensus on definition
▪︎ Intelligence needed to:
▫︎ perceive to acquire knowledge ▫︎ think to create & represent
knowledge ▫︎ reason to use knowledge
▫︎ memorize ▫︎ learn ▫︎ plan ▫︎ solve ▫︎ judge ▫︎ adapt
▫︎ communicate in natural language
may be defined as ‘efficient cross domain optimization’
What is Intelligence?
20. AI AGI WBE
AI: system that senses environment & takes action maximizing
chances of its success
AGI: system that performs any intellectual task that a human being
can perform
WBE: mind uploading approach to AGI - scan & map bio brain & copy
data to computer that runs indistinguishable simulation model
Artificial Intelligence
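The slide's definition of AI, a system that senses its environment and takes the action maximizing its chance of success, can be sketched as a one-step agent. This is my own toy illustration; the states, actions, and success probabilities are all invented:

```python
# Toy sketch (not from the talk): sense the environment, then pick the
# action with the highest estimated probability of success.

def sense(environment):
    """Return the agent's observation of the environment."""
    return environment["state"]

def choose_action(observation, success_model):
    """Pick the action whose estimated success probability is highest."""
    return max(success_model[observation], key=success_model[observation].get)

# Hypothetical model: estimated success probability of each action per state.
success_model = {
    "obstacle_ahead": {"turn_left": 0.7, "turn_right": 0.6, "go_straight": 0.1},
    "clear_path":     {"turn_left": 0.2, "turn_right": 0.2, "go_straight": 0.9},
}

environment = {"state": "obstacle_ahead"}
action = choose_action(sense(environment), success_model)
print(action)  # the action with the best estimated odds of success
```

An AGI, by the slide's definition, would be this loop generalized to any intellectual task; the sketch above is deliberately narrow.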
22.
What is Artificial Emotion?
Simulation of motor-driven behavior (laughing, crying) is programmable.
But can AI have feelings without basis in organic neural sensation –
the evolutionary origin of emotion?
Is Artificial Emotion Supportive or Subversive?
23.
What is Conscience?
Moral compass faculty: Distinguishes personal right from wrong,
with either remorse or satisfaction following action
24. MIND
Threat Assessment
Authentic Non-biological Intelligence
CONSCIOUSNESS | COMPUTABILITY | LAWS OF PHYSICS
SELF & SELF AWARENESS
FROM SENSORS TO SENSATION | QUALIA & SUBJECTIVE EXPERIENCE
WAY OUT OF THE CHINESE ROOM: NONBIOLOGICAL ATTRIBUTION OF MEANING
EMOTIONAL INPUT INTO DECISION-MAKING & JUDGMENT
WHOSE CONSCIENCE?
MUST THE HARD PROBLEM BE SOLVED?
26. Opponents point to:
anti-Americanism ▫︎ losing battle for people's hearts and minds
▪︎ amplifying extremist power & political destabilization
▪︎ increasing propensity to wage war
▪︎ unconscionably changing quintessential meaning of war
What are Cons of LAWs?
27. Like UAVs, LAWs are effective weapons that
comply with IHL with no risk to US soldiers
Satisfy Military Necessity:
▫︎ kill leaders ▫︎ disrupt terrorist networks ▫︎ instill sense of insecurity
Are Non-Indiscriminate in Targeting
▫︎ Distinguish Civilians from Combatants
Are Proportional
▫︎ Do not inherently cause unnecessary suffering
What are Pros of LAWs?
28. International Humanitarian Law seeks to limit
war (Rules of War):
by protecting persons who are not combatants:
▫︎ Distinction
by restricting combatants’ means & methods of warfare:
▫︎ Discrimination ▫︎ No Needless Suffering ▫︎ Proportionality
LAWs not inherently unlawful under IHL
Is Lethal Autonomy Legal?
29. Machine Morality v. Human Morality
Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot
Architecture
People outsource a wide range of moral questions to friends, peers, experts, writers, and
public figures. Can a machine do any worse?
Goal is to create ethical-decision programs for LAWs that perform better ethically than
human soldiers in combat.
LAWs can be used in self-sacrificing manner
LAWs can be designed without fear, hysteria, rage, frustration, shell-shock, etc. that
clouds human judgment; may perform better than humans in fog of war
LAWs can avoid human problem of ‘scenario fulfillment’ - distortion or neglect of
information that does not fit pre-existing beliefs
Ethical decisions governing LAWs can be programmed consistently with the Laws of War &
Rules of Engagement
LAWs may be capable of independent, objective monitoring of combat behavior by all
parties and reporting ethical infractions
30. Arguments over the legitimacy of weapons are ancient.
Arms race makes LAWs inevitable and ban unworkable
Ban won’t stop black market sales
Also assigning potentially lethal tasks to nonmilitary
machines, e.g., driverless cars
People accepting lethal autonomy in nonmilitary
machines will expect same tech in war.
Best way to adapt IHL to LAWs is global dialogue for
common standards and best practices.
Human Morality as Measure:
Unworkable Bans and Unenforceable Treaties
32. ▪︎ Intellect smarter than best human brains
Bio constrains processing speed & size of human brain
GE, SynBio, Neurotech may create bio superAI
If bio brain is physical system, simplest superAI may be WBE
operating faster than bio brain
Bostrom: SuperAI simply dominant at goal-oriented behavior.
Avoids questions of intentionality (Chinese Room) or
consciousness (Hard Problem)
What is Superintelligence?
35. “What is the answer?”
“That depends on why you’re asking the question.”
1. Should there be a law against Lethal Autonomy?
NO
2. Is it intelligent to grant lethal autonomy to Artificial Intelligence?
PROBABLY NOT BUT LIKELY NECESSARY
3. Will Lethal Autonomy become Anthropomorphically Lethal?
DOUBTFUL
4. What are we teaching our child?
COHERENT EXTRAPOLATED VOLITION
ANSWERS
37. Lesson Plan
Program the AI to:
Do what we would have told you to do if we knew everything you
know
Do what we would have told you to do if we thought as fast as
you do and could consider many more possible lines of moral
argument
Do what we would tell you to do if we had your ability to reflect on
and modify ourselves
Initially design AI to learn human values by looking at humans, asking
questions, and scanning human brains, rather than programming it with a
fixed set of imperatives
38.
A First Word Summit on Human Enhancement
Enabling Technology
economic #heet
Coming in 2015
http://www.tranhumanjuris.com
@transhuman juris
NOTES
http://en.wikipedia.org/wiki/Philosophy_of_mind
Cognition: refers to the conscious or unconscious mental processing of information and includes:
Thinking: the symbolic or semiotic (meaning-making) processing of ideas or data to form concepts, reason, calculate, solve problems, make decisions, create knowledge (useful information) and produce language. Thinking is associated with the capacity to: make and use tools; understand cause and effect; recognize patterns of significance; respond to the world in a meaningful way;
NB: Lakoff & Johnson, Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought (October 8, 1999)
Memory: the ability to preserve, retain, and subsequently recall, knowledge, information or experience;
Imagination: the activity of generating or evoking novel situations, images, ideas, or other states of mind.
Intentionality: the capacity of mental states to be directed towards or be in relation with something in the external world
NOTES
Awareness
the ability to experience the world and the self
Sentience
the ability to sense, feel, or experience;
necessary for the ability to suffer, which is held to confer a status, e.g. personhood, that is entitled to certain rights or modes of treatment.
Perception
the process by which humans convert or interpret sensory data about the world into information; essential to creating knowledge
Consciousness, the sum of all of the above; and, although intuitive, eludes definition.
Consciousness refers to the relationship between the mind and the world with which it interacts. It has been defined as: subjectivity, awareness, the ability to experience or to feel, wakefulness, having a sense of selfhood, and the executive control system of the mind. There are several states of consciousness that a human experiences. Despite the difficulty in definition, many philosophers believe that there is a broadly shared underlying intuition about what consciousness is. Anything that we are aware of at a given moment forms part of our consciousness, making conscious experience a familiar but mysterious aspect of our lives.
The problem of consciousness is the central issue in current theorizing about the mind. The mind requires a complex dynamic system in the background, like a brain, to operate within the reach of a physical environment. Mind is the stream of consciousness. Despite the lack of any agreed upon theory of consciousness, there is a widespread consensus that an adequate account of mind requires a clear understanding of consciousness and its place in nature and reality.
Questions about the nature of conscious awareness have been asked for as long as there have been humans. By the beginning of the seventeenth century, consciousness had become central in thinking about the mind. Philosophers like John Locke (1688) regarded consciousness as essential to thought as well as to personal identity. For most of the next two centuries the domains of thought and consciousness were regarded as more or less the same.
Understanding consciousness involves a multiplicity not only of explanations but also of questions that they pose and the sorts of answers they require. The relevant questions can be gathered under three crude rubrics as the What, How, and Why questions.
The Descriptive Question: What is consciousness? What are its principal features? And by what means can they best be discovered, described and modeled?
The Explanatory Question: How does consciousness of the relevant sort come to exist? Is it a primitive aspect of reality, and if not how does (or could) consciousness arise from or be caused by non-conscious entities or processes?
The Functional Question: Why does consciousness exist? Does it have a function, and if so what it is it? Does it act causally and if so with what sorts of effects? Does it make a difference to the operation of systems in which it is present, and if so why and how?
Gevel, A. J. W. van de and Noussair, Charles N., The Nexus between Artificial Intelligence and Economics (November 4, 2012). CentER Discussion Paper Series No. 2012-087.
Available at SSRN: http://ssrn.com/abstract=2169860 or http://dx.doi.org/10.2139/ssrn.2169860
NOTES
Qualia
The existence of neural correlates of conscious experiences does not explain why mental states generated by the same stimulus show up as different subjective experiences
Light of 590 nm produces the sensation of yellow; exactly the same sensation is produced by mixing 760 nm red light with 535 nm green light
There is no explainable connection between the physical, measurable characteristics of light and the sensations it produces
NOTES
We lack an explanation of the mental in terms of the physical.
The problem of explaining introspective first-person aspects of mental states and consciousness in terms of objective third-person quantitative neuroscience is called the explanatory gap or the hard problem of consciousness.
NOTES
The Hard Problem
The hard problem is the problem of explaining the relationship between physical phenomena, such as brain
processes, and experience. It is the problem of explaining how and why people have qualitative
phenomenal experiences. Why are physical processes ever accompanied by experience? Why is there
a subjective component to experience? Why does awareness of sensory information exist? These are
formulations of the hard problem. Providing an answer to these questions could lie in understanding
the roles that physical processes play in creating consciousness, and the extent to which these
processes create subjective qualities of experience.
The really hard problem of consciousness is the problem of experience. For example,
when we see there is the experience of visual sensations: the felt quality of redness, the experience of
dark and light, the quality of depth in a visual field. Other experiences go along with, for example the
sound of a clarinet, or the smell of mothballs. Then there are bodily sensations, from pains to orgasms,
mental images that are conjured up internally, the felt quality of emotion, and the experience of a
stream of conscious thought. All of these states are united in that there is something it is like to be in
them. All of them are states of experience.
What makes the hard problem hard and almost unique is that it goes beyond the performance of
functions. Once the performance of all the cognitive and behavioral functions relating to experience
has been explained, there may still remain the further unanswered question: Why is the performance
of these functions accompanied by experience?
A widely-held opinion is that experiences cannot be fully explained in purely physical terms. This is
sometimes expressed as the claim that there is an explanatory gap (Levine, 1983) between the
physical and the phenomenal world of experiences.
There is no consensus about the status of the explanatory gap. Some deny that the gap exists and hold
that consciousness is an entirely physical phenomenon.
NOTES
Knowledge = Understanding
Assembling data and information into facts that inform skills
Acquired through perception, cognition, reasoning, discovery, experience, communication, education
NOTES
Understanding = Explanation
http://en.wikipedia.org/wiki/Understanding
Understanding is an explanation of an object of knowledge sufficient to support intelligent behavior with respect to it
Understanding works as data compression (shorthand) via simple rules (e.g., math) or a simple model.
e.g., we understand the number 0.33333… by thinking of it as 1/3
e.g., we understand why day & night exist because of a model - rotation of the earth - that explains a tremendous amount of data - sun, solar system, gravity, color, temperature, etc.
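The 1/3 example can be made concrete: the compact rule regenerates as many digits of the "understood" number as we like, which is exactly what compression means here. A minimal sketch (my own illustration):

```python
from fractions import Fraction

# "0.3333..." understood as the rule 1/3: the Fraction stores the short rule,
# not the infinite digit string.
one_third = Fraction(1, 3)

def digits_of(frac, n):
    """Regenerate n decimal digits from the compact rule, by long division."""
    num, den, out = frac.numerator, frac.denominator, []
    for _ in range(n):
        num *= 10
        out.append(str(num // den))
        num %= den
    return "0." + "".join(out)

print(digits_of(one_third, 10))  # '0.3333333333'
```

The same two-integer rule reproduces a hundred or a million digits on demand, which is the sense in which 1/3 "explains" 0.3333…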
NOTES
Explanation
http://en.wikipedia.org/wiki/Explanation
An explanation is
▪︎ description that makes new facts fit into the existing Agreement Reality – our paradigm for how the world works – that “makes sense”
▪︎ a set of statements constructed to describe a set of facts which clarifies the causes, context and consequences of those facts.
Searle’s Chinese Room
Person ignorant of Chinese sits in room with boxes of Chinese symbols (data base) & book for manipulating symbols (program).
People outside room send in other Chinese symbols as questions in Chinese.
Using book & Chinese symbols, person in room returns correct answers to questions (output).
From outside, appears that room contains intelligent person who speaks Chinese.
Person in room passes Turing Test for understanding Chinese, but does not understand a word of Chinese.
A system for manipulating physical symbols cannot, by itself, have intelligence or understanding.
Computers are merely icon-transformation systems that lack understanding of the meaning of the icons.
Only human interpretation transforms icons into useful information.
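The room can be caricatured in code. This sketch is my own illustration, not Searle's text: the "book for manipulating symbols" becomes a lookup table, and the program produces fluent-looking answers while understanding nothing:

```python
# Toy Chinese Room: input symbols are matched to output symbols by rule.
# Nothing here "understands" Chinese; the mapping itself is invented.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice."
}

def chinese_room(symbols: str) -> str:
    """Return whatever answer the rule book dictates; interpret nothing."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # fallback: "Please repeat."

print(chinese_room("你好吗？"))  # looks fluent from outside the room
```

From outside, the function passes a (very small) Turing Test for Chinese; inside, there is only symbol shuffling, which is Searle's point.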
NOTES
Intelligence
Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience. It is not merely book-learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings: "catching on," "making sense" of things, or "figuring out" what to do.
NOTES
http://arxiv.org/pdf/0706.3639.pdf
http://lesswrong.com/lw/vb/efficient_crossdomain_optimization/
Gevel, A. J. W. van de and Noussair, Charles N., The Nexus between Artificial Intelligence and Economics (November 4, 2012). CentER Discussion Paper Series No. 2012-087. Available at SSRN: http://ssrn.com/abstract=2169860 or http://dx.doi.org/10.2139/ssrn.2169860
Cross-domain optimization is the ability to optimally adapt behavior to fit new circumstances or to optimally engage the circumstances to achieve goals.
Efficiency is a measure of the amount of resources used to optimize.
Hence intelligence = cross-domain optimization power / resources used, i.e., efficient cross-domain optimization.
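As a toy rendering of this ratio (my sketch, not the cited paper's formalism), one can average an agent's goal achievement across domains and divide by the resources it spent; the domain names and numbers below are invented:

```python
# Toy metric: "efficient cross-domain optimization" as average goal
# achievement per unit of resources spent.

def intelligence_score(results, resources_used):
    """results: goal achievement in [0, 1] per domain; fewer resources -> higher score."""
    optimization_power = sum(results.values()) / len(results)  # cross-domain average
    return optimization_power / resources_used

# Two hypothetical agents with identical achievement but different budgets:
frugal   = intelligence_score({"chess": 0.9, "driving": 0.8, "dialogue": 0.7}, resources_used=1.0)
wasteful = intelligence_score({"chess": 0.9, "driving": 0.8, "dialogue": 0.7}, resources_used=4.0)
print(frugal > wasteful)  # True: the measure rewards efficiency, not raw power
```

The point of dividing by resources is that a brute-force searcher that burns vast compute for the same results scores lower than a frugal one.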
Although there is no consensus definition of intelligence, there is wide agreement among AI
researchers that intelligence is required to do the following things:
reason,
use strategy,
solve puzzles,
make judgments under uncertainty;
represent knowledge,
including commonsense knowledge;
plan;
learn;
communicate in natural language; and
integrate the use of all of these skills towards common goals.
Other important capabilities to be included in the concept of AI are the ability to
sense
and the ability to
act
(for example to move and manipulate objects) in the outside world. This includes an
ability to detect and respond to hazards.
Some sources consider "salience", the capacity to recognize importance and to evaluate novelty, an important feature.
Some interdisciplinary approaches to intelligence also emphasize the need to consider imagination (taken as
the ability to form mental images and concepts
that were not programmed in) and autonomy.
Hubert Dreyfus argued that human intelligence and expertise depended primarily on unconscious instincts rather than conscious symbolic manipulation, and argued that these unconscious skills would never be captured in formal rules.
NOTES
Artificial Consciousness
Emergent hypothesis: Consciousness arises from a sufficiently intelligent engineered artifact.
Dependence hypothesis: Intelligence is a consequence/function of consciousness.
If artificial consciousness is achieved in an engineered artifact, what legal rights, if any, should inure to it?
Artificial consciousness may require legal definition for laws regarding its rights, personhood, legal capacity
NOTES
Artificial Emotion
▪︎ Simulation of motor-driven behavior (e.g., laughing, crying) is programmable, but AI can’t have subjective feelings regarding these behaviors because the simulation has no basis in organic neural sensation - the evolutionary origin of emotion
▪︎ AE may be more subversive than AI’s extinction threat because “personal” or “caring” AIs bypass innate caution, exploiting millennia of children attributing emotions to dolls
▪︎ Once AIs gain emotional foothold as pals, caregivers, sexbots, etc., it will be harder to control their infiltration of the human realm
NOTES
Conscience
is faculty or intuition that assists moral judgment to distinguish right from wrong;
leads to feelings of remorse after acting contrary to personal moral values and feelings of integrity or righteousness after acting consistently with personal moral values
NOTES
Anti-LAWs
Orna Ben-Naftali and Zvi Triger, The Human Conditioning: International Law and Science Fiction, available at: http://ssrn.com/abstract=2343601
Kenneth Anderson and Matthew Waxman, Law and Ethics for Autonomous Weapon Systems Why a Ban Won’t Work and How the Laws of War Can,
available at: http://ssrn.com/abstract=2250126
Five Arguments Against LARS
1. Machine programming will never reach the point of satisfying the fundamental ethical and legal principles required to field a lawful autonomous lethal weapon.
2. No machine system can, through its programming, replace the key elements of human emotion and affect that make human beings irreplaceable in making lethal decisions on the battlefield—compassion, empathy, and sympathy for other human beings.
3. It is simply wrong to take the human moral agent entirely out of the firing loop.
Whatever merit this argument has today, in the near future we will be turning over more and more functions with life or death implications to machines such as driverless cars and surgical robots because they prove to be safer.
A world that accepts self-driving autonomous cars is likely to be one in which people expect similar technologies to be applied to warfare, because it regards them as better (and regards a failure to use them morally objectionable). Maybe a machine-made lethal decision is not necessarily mala in se; and if that is ever accepted as a general moral principle, it will raise difficulties far beyond weapons.
4. LARS are unacceptable because they undermine the possibility of holding anyone accountable for what, if done by a human soldier, might be a war crime.
If the decision to fire is taken by a machine, who should be held responsible—criminally or otherwise—for mistakes?
The soldier who allowed the weapon system to be used where it made a bad decision?
The commander who chose to employ it on the battlefield?
The engineer or designer who programmed it in the first place?
Narrow focus on post-hoc judicial accountability for individuals in war is a mistake in any case.
It is just one of many mechanisms for promoting and enforcing compliance with the laws of war.
Excessive devotion to individual criminal liability as the presumptive mechanism of accountability risks blocking development of machine systems that might, if successful, reduce actual harms to soldiers as well as to civilians on or near the battlefield.
It would be unfortunate to sacrifice real-world gains consisting of reduced battlefield harm through machine systems to satisfy a principle that there always be a human to hold accountable.
It would be better to adapt mechanisms of collective responsibility borne by a “side” in war, through its operational planning and law, including legal reviews of weapon systems and justification of their use in particular operational conditions.
5. By removing human soldiers from risk and reducing harm to civilians through greater precision, the disincentive to resort to armed force is diminished. The result might be a greater propensity to wage war or to resort to military force.
This concern is not special to LARS.
It can be made with respect to any technological development that either reduces risk to one’s own forces or reduces risk to civilians, or both. As a moral matter (even where the law does not require it), sides should strive to use the most sparing methods and means of war; there is no good reason why this obvious moral notion should suddenly be turned on its head.
The argument rests on further questionable assumptions about the “optimal” level of force and whether it is even a meaningful idea in a struggle between two sides with incompatible aims. Force might conceivably be used “too” often, but sometimes
it is necessary to combat aggression, atrocities, or threats of the same. Technologies that reduce risks to human soldiers (or civilians) may also facilitate desirable—even morally imperative—military action.
24/7 war in distant locations, fought by LAWs with only the targeted humans feeling fear and pain, is experienced by programmers, commanders, and fellow citizens as a virtual game conflated with entertainment, requiring no sacrifice, courage, or chivalry.
NOTES
Rules of War
Indiscrimination: A weapon is indiscriminate if it cannot be aimed at a specific target or is as likely to hit civilians as combatants.
Unnecessary Suffering: Prohibits needless suffering or injury to combatants by weapons, e.g., warheads filled with glass. LARs may abide.
Distinction: Requires distinguishing combatants from civilians. LARs may abide.
Proportionality: Even if a weapon distinguishes, the user must weigh military gain against civilian harm.
NOTES
Legality of lethal autonomous weapon systems under international law
The need to control human/robot interaction arises from:
1. unpredictability of technology and errors in programming;
2. lack of transparency & secret deviation from regulation in the name of security;
3. political & financial returns from investments;
4. responding to problems of technology with more technology;
5. inherent ambiguity and malleability (manipulation) of laws that end up proving their uselessness as reliable constraints on behavior.
Orna Ben-Naftali and Zvi Triger, The Human Conditioning: International Law and Science Fiction, available at: http://ssrn.com/abstract=2343601
International humanitarian law comprises the Geneva & Hague Conventions and subsequent treaties, as well as case law and customs.
A war crime is a serious violation of international humanitarian law.
Orna Ben-Naftali and Zvi Triger, The Human Conditioning: International Law and Science Fiction
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2343601
Human and Nonhuman Moral Outsourcing
Ronald C. Arkin, Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture, available at
http://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf
Would you hand over a moral decision to a machine? Why not? Moral outsourcing and Artificial Intelligence
Joshua Myers, Robot-Morality: Can philosophers program ethical codes into robots? 1 July 2014 available at
http://thehumanist.com/magazine/july-august-2014/up-front/robo-morality
Kenneth Anderson and Matthew Waxman, Law and Ethics for Autonomous Weapon Systems Why a Ban Won’t Work and How the Laws of War Can,
available at: http://ssrn.com/abstract=2250126
In the setting of a LAWS arms race, the U.S.:
has a strategic interest in developing a shared normative framework for how LAWS must perform to be lawful;
must resist the impulse to secrecy where feasible;
must act before international views about LAWs harden around either of two extremes: bans or no constraints at all;
should assert that IHL be applied to LAWs, with special scrutiny of autonomy with respect to target selection & terms of engagement.
Debates over LAWS sound similar to those that arose with respect to technologies that emerged with the industrial era, such as the arguments over submarines and military aviation.
A core objection, then as now, was that they disrupted the prevailing norms of warfare by radically and illegitimately reducing combat risk to the party using them - an objection to “remoteness,” joined to objections that these weapons were unfair, dishonorable, or cowardly, whether with aircraft, submarines, or, today, a cruise missile, drone, or LAWs
Weapons superiority is a military necessity & perfectly lawful.
If a new weapon greatly advantages a side, tendency is for adoption by others that perceive like benefit
Typically, legal prohibitions on weapons erode, as happened with military submarines and aircraft.
What survives is a set of legal rules for the use of the new weapon.
In other cases, legal prohibitions hold, although this is the exception rather than the rule.
The ban on poison gas, for example, has effectively survived over the 20th century.
The Ottawa Convention banning antipersonnel landmines is another example.
International Treaties
The call for an international ban was raised to far greater prominence when, in November 2012, Human Rights Watch issued a report calling for a sweeping multilateral treaty that would ban outright the development, production, sale, deployment, or use of "fully autonomous weapons" programmed to select and engage targets without human intervention.
A multilateral treaty regulating or prohibiting LARS is misguided.
Although likely to find superficial acceptance, limitations on LARS will have little traction among those most likely to develop
and use them.
As LARS become smarter and faster, and the real-time human role in controlling them gradually recedes, agreeing on what constitutes a prohibited autonomous weapon will be unattainable.
There are challenges of compliance that afflict all such treaty regimes, especially when dealing with dual-use technologies.
There are humanitarian risks to prohibition, given the possibility that LARS could be more discriminating and ethically preferable to alternatives.
Principles, Policies, and Processes for Regulating Autonomous Weapon Systems
The risks and dangers of advancing LARS are very real.
A better approach than treaties for addressing these systems is the gradual development of internal state norms and best practices that, once worked out, debated, and applied to the United States' own weapons-development process, can be carried outward to discussions with other states. National-level processes should be combined with international dialogue aimed at developing common standards and legal interpretations. This requires a long-term, sustained effort combining internal ethical and legal scrutiny with external diplomacy and collaboration.
As a possible model, an international group of legal experts commissioned by the NATO Cooperative Cyber Defence Centre of Excellence has been working for the past few years in another technologically transformative area of conflict: cyber warfare. That process is meant to develop and propose interpretive guidance (including the Tallinn Manual on the International Law Applicable to Cyber Warfare) for consideration by states and other actors. Although the cyber context is different, insofar as there may be greater disagreement about the appropriate legal framework, similar international processes, whether involving state representatives, independent experts, or both, can help foster broad consensus or surface disagreements that require resolution with respect to autonomous weapon systems.
The United States should take the lead in emphasizing publicly the legal principles it applies and the policies and processes it establishes to ensure compliance, encouraging others to do likewise.
Superintelligence
We don’t actually know what superintelligent agents will look like or act like.
Does a submarine swim? Yes, but it doesn’t swim like a fish.
Does an airplane fly? Yes, but it doesn’t fly like a bird.
Nick Bilton, Artificial Intelligence as a Threat, NYT 11/6/14, available at
http://www.nytimes.com/2014/11/06/fashion/artificial-intelligence-as-a-threat.html?_r=0
Can a machine think? Maybe, but it won’t think like a human.
Once we build systems that are as intelligent as humans, these systems will be able to build smarter systems: superintelligent agents, whose rate of growth and expansion might increase exponentially. We can't build safeguards into something that we haven't built ourselves.
First Mover Thesis
The first superintelligence, by virtue of being first, could obtain a decisive strategic advantage over all other intelligences.
It could form a “singleton” and be in a position to shape the future of all Earth-originating intelligent life.
John Danaher, Bostrom on Superintelligence (1): The Orthogonality Thesis
November 4, 2014, available at http://hplusmagazine.com/2014/11/04/bostrom-superintelligence-1-orthogonality-thesis/
Orthogonality Thesis:
Intelligence and final goals are orthogonal [at right angles to one another; pointing in different directions]: more or less any level of intelligence could, in principle, be combined with more or less any final goal.
The thesis asserts that intelligence and final goals are orthogonal to one another: pretty much any level of intelligence is consistent with pretty much any final goal. This gives rise to the possibility of superintelligent machines with final goals that are deeply antithetical to our own.
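The orthogonality thesis can be illustrated with a toy sketch (hypothetical names and setup, not from the source): an agent's search power ("intelligence") and its utility function ("final goal") are independent parameters, so the same planner can serve opposite goals equally well.

```python
from itertools import product

def plan(utility, depth, actions=(-1, 0, 1), start=0):
    """Brute-force planner: search every action sequence of the given
    length and return the end state that maximizes `utility`.
    `depth` stands in for intelligence; `utility` for the final goal.
    The two parameters never interact -- they are orthogonal."""
    best = max(product(actions, repeat=depth),
               key=lambda seq: utility(start + sum(seq)))
    return start + sum(best)

# Identical search power, opposite final goals:
maximize = lambda s: s    # goal: make the state as large as possible
minimize = lambda s: -s   # goal: make the state as small as possible

print(plan(maximize, depth=3))   # 3
print(plan(minimize, depth=3))   # -3
```

The point of the sketch is only that nothing about raising `depth` pushes the agent toward any particular `utility`; competence and goals vary independently.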
John Danaher, Bostrom on Superintelligence (2): The Instrumental Convergence Thesis November 10, 2014
http://philosophicaldisquisitions.blogspot.com/2014/07/bostrom-on-superintelligence-2.html
Instrumental Convergence Thesis
The orthogonality thesis concerns final goals.
The instrumental convergence thesis concerns sub-goals. It asserts that although a superintelligent agent could, in theory, pursue any final goal, there are sub-goals it is likely to pursue because they are instrumental in achieving its final goals. Different superintelligent agents are likely to converge upon those instrumental sub-goals. This makes the future behavior of superintelligent agents slightly more predictable.
Instrumental sub-goals are convergent in that their attainment increases the chances of realizing a superintelligent agent's ultimate goals, implying that these sub-goals are likely to be pursued by other superintelligent agents across a wide range of ultimate goals and situations.
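Convergence can be shown with a minimal toy model (all names hypothetical): if resources act as a multiplier on whatever an agent's final goal pays out, then agents with very different final goals all rank "acquire resources" as the best sub-goal.

```python
# Toy model: an agent picks one sub-goal before pursuing its final
# goal. Resources multiply the final-goal payoff, so the sub-goal
# choice is the same no matter what the final goal is.

SUB_GOALS = ["acquire_resources", "do_nothing", "give_resources_away"]

def payoff(sub_goal, goal_value, resources=1.0):
    if sub_goal == "acquire_resources":
        resources *= 2.0        # more resources...
    elif sub_goal == "give_resources_away":
        resources *= 0.5
    return goal_value * resources   # ...more of the final goal achieved

final_goals = {"make_paperclips": 10, "prove_theorems": 3, "compose_music": 7}

for name, value in final_goals.items():
    best = max(SUB_GOALS, key=lambda s: payoff(s, value))
    print(name, "->", best)     # every agent converges on the same sub-goal
```

Three agents with unrelated final goals all select `acquire_resources`, which is the (deliberately trivial) content of the convergence claim.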
Bostrom on Superintelligence (3): Doom and the Treacherous Turn November 20, 2014, available at
John Danaher,
http://hplusmagazine.com/2014/11/20/bostrom-superintelligence-3-doom-treacherous-turn/
1. The Three-Pronged Argument for Doom
Bostrom is famous for coming up with the concept of an “existential risk”.
He defines this as a risk “that threatens to cause the extinction of Earth-originating intelligent life or to otherwise permanently and drastically destroy its potential for future desirable development” (Bostrom 2014, p. 115).
One of the goals of the institute he runs — the Future of Humanity Institute — is to identify, investigate and propose possible solutions to such existential risks. One of the main reasons for his interest in superintelligence is the possibility that such intelligence could pose an existential risk.
(1) The first mover thesis:
The first superintelligence, by virtue of being first, could obtain a decisive strategic advantage over all other intelligences.
It could form a “singleton” and be in a position to shape the future of all Earth-originating intelligent life.
(2) The orthogonality thesis:
Pretty much any level of intelligence is consistent with pretty much any final goal.
Thus, we cannot assume that a superintelligent artificial agent will have any of the benevolent values or goals that we tend to associate with wise and intelligent human beings.
(3) The instrumental convergence thesis:
A superintelligent AI is likely to converge on certain instrumentally useful sub-goals, that is: sub-goals that make it more likely to achieve a wide range of final goals across a wide-range of environments. These convergent sub-goals include the goal of open-ended resource acquisition (i.e. the acquisition of resources that help it to pursue and secure its final goals).
The conjunction of these three theses supports the following interim conclusion:
The first superintelligence may have:
the power to shape the future of Earth-originating life;
non-anthropomorphic final goals; and
instrumental reasons to pursue open-ended resource acquisition.
Combine this conclusion with the premises that:
human beings may be regarded as a useful resource for their atomic or molecular physical content; and
human beings depend for their survival on many other useful resources.
Then the first superintelligence could, by using human beings as resources or by appropriating resources that human beings rely on, extinguish the human species; i.e., the first superintelligence could pose a significant existential risk.
Eliezer Yudkowsky, et al., Reducing Long-Term Catastrophic Risks from Artificial Intelligence. (2010), available at http://intelligence.org/files/ReducingRisks.pdf
http://en.wikipedia.org/wiki/Reflective_equilibrium
http://en.wikipedia.org/wiki/Eliezer_Yudkowsky
http://en.wikipedia.org/wiki/Friendly_artificial_intelligence#Coherent_Extrapolated_Volition
Human terminal values are extremely complicated.
This complexity is not introspectively visible at a glance.
The solution to this problem may involve designing an AI to learn human values by looking at humans, asking questions, scanning human brains, etc., rather than an AI preprogrammed with a fixed set of imperatives that sounded like good ideas at the time.
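A minimal sketch of that value-learning idea (assumed setup, not from the source): rather than being preprogrammed with fixed imperatives, the agent estimates human values from observed human choices, here by simply counting which option people pick when two are available.

```python
from collections import Counter

# Each observation records which two options were available and which
# one the human actually chose.
observations = [          # (option A, option B, what the human chose)
    ("help", "harm", "help"),
    ("help", "ignore", "help"),
    ("ignore", "harm", "ignore"),
    ("help", "harm", "help"),
]

# "Learn" values by counting observed human choices.
learned = Counter(choice for _, _, choice in observations)

def preferred(a, b):
    """Pick whichever option humans were observed choosing more often."""
    return a if learned[a] >= learned[b] else b

print(preferred("harm", "help"))    # "help"
```

Real proposals (brain scanning, active questioning) are far richer; the sketch only shows the structural difference between values that are inferred from behavior and values that are hard-coded.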
The explicit moral values of human civilization have changed over time, and we regard this change as progress.
We also expect that progress may continue in the future.
An AI programmed with the explicit values of 1800 might now be fighting to reestablish slavery.
Static moral values are clearly undesirable, but most random changes to values will be even less desirable.
Every improvement is a change, but not every change is an improvement.
Perhaps we could program the AI to
“do what we would have told you to do if we knew everything you know”
and
“do what we would have told you to do if we thought as fast as you do and could consider many more possible lines of moral argument”
and
“do what we would tell you to do if we had your ability to reflect on and modify ourselves.”
In moral philosophy, this approach to moral progress is known as reflective equilibrium [a state of balance or coherence among a set of beliefs arrived at by a process of deliberative mutual adjustment among general principles and particular judgments. ]
(Rawls 1971).
According to the Coherent Extrapolated Volition (CEV) model, our coherent extrapolated volition is our choices and the actions we would collectively take if "we knew more, thought faster, were more the people we wished we were, and had grown up closer together."
Rather than being designed directly by human programmers, an AI is designed by a seed AI programmed to first study human nature and then produce the AI which humanity would want, given sufficient time and insight to arrive at a satisfactory answer.
The appeal to an objective though contingent human nature (perhaps expressed, for mathematical purposes, in the form of a utility function or other decision-theoretic formalism) as providing the ultimate criterion of "Friendliness" is an answer to the meta-ethical problem of defining an objective morality; extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity.
Making the CEV concept precise enough to serve as a formal program specification is part of the research agenda of the Machine Intelligence Research Institute.
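The structure of CEV can be caricatured in a toy sketch (illustrative only; CEV itself has no agreed formal specification, and every name below is hypothetical): each person's raw preference ranking is passed through an "extrapolation" step standing in for "if we knew more, thought faster, were more the people we wished we were," and the coherent volition is whatever the extrapolated rankings agree on most.

```python
from collections import Counter

raw_preferences = {
    "alice": ["war", "peace", "status_quo"],
    "bob":   ["peace", "status_quo", "war"],
    "carol": ["status_quo", "peace", "war"],
}

def extrapolate(ranking):
    """Stand-in idealization: assume that with more knowledge and
    reflection, destructive options fall to the bottom of a ranking."""
    return [o for o in ranking if o != "war"] + ["war"]

# Tally each person's extrapolated top choice; the coherent volition
# is the option the extrapolated rankings converge on.
votes = Counter(extrapolate(r)[0] for r in raw_preferences.values())
coherent_volition, _ = votes.most_common(1)[0]
print(coherent_volition)   # "peace"
```

Note that alice's raw top choice is overridden by her own extrapolated ranking, which is exactly the move that makes CEV hard to specify: the extrapolation operator, trivial here, carries all the philosophical weight.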