Joe Hanson
Senior Project 2012
Dr. Call

     Taking Man Out of the Loop: The Dangers of Exponential Reliance On Artificial Intelligence

            A 2012 Time Magazine article dubbed “The Drone” the 2011 Weapon of the Year.1 With

over 7,000 drones in the air, military use of unmanned vehicles is exponentially rising. Why is

drone technology progressing at such a fast rate? Artificial Intelligence (AI) is at the forefront of

drone technology development. Exponential technological developments in the last century have

changed society in numerous ways. Mankind is beginning to rely increasingly on technology in

everyday life, with many of these technologies bringing beneficial progress to all aspects of

society. Exponential growth in computer, robotic, and electronic technology has led to the

integration of this technology into social, economic, and military systems.

            Artificial intelligence is the branch of computer science concerned with the intelligence and action of a machine, in both hardware and software form. Using AI, a machine can act autonomously, functioning in an environment by using rapid data processing, pattern recognition, and environmental perception sensors to make decisions and carry out goals and tasks. AI seeks to emulate human intelligence, using these sensors to understand its surroundings and to solve and adapt to problems in real time.

            There is debate over whether AI is even plausible, whether it is even possible to create a machine that can emulate human thought. Both humans and computers are able to process information, but humans have the ability to understand that information. Humans are able to make sense out of what they see and hear, a capacity that requires intelligence.2 Some

1 Feifei Sun, TIME Magazine 178, no. 25 (2011): 26.
2   Henry Mishkoff, Understanding Artificial Intelligence (Texas: Texas Instruments, 1985), 5.
characteristics of intelligence include the ability to: “respond to situations flexibly, make sense

out of ambiguous or contradictory messages, recognize importance of different elements of a

situation, and draw distinctions.”3 When discussing the possibilities of AI and the creation of a

thinking machine, the main issue is whether or not a computer is able to possess intelligence.

Supporters of AI development argue that because of exponential progress in computer and robotic technology, AI is developing beyond simple data processing toward autonomous AI that can emulate and even surpass human intelligence. According to

University of Michigan Professor Paul Edwards, scientists are beginning to "simulate some of the functional aspects of biological neurons and their synaptic connections," so that "neural networks could recognize patterns and solve certain kinds of problems without explicitly encoded knowledge or procedures," meaning that AI research is beginning to draw on human biology to make machines think.4 On the other side of the debate, AI skeptics and deniers argue that AI will never have

the ability to surpass human intelligence. They argue that the human brain is far too advanced,

that though a machine can calculate data faster, it will never match the complexity of a human

brain.
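To make Edwards's earlier point concrete, the following is a minimal sketch in Python (the training data and learning rate are illustrative choices of mine, not drawn from Edwards): a single artificial neuron that learns the logical AND function from examples alone, with no explicitly encoded rule stating when to output 1.

# A minimal perceptron: it learns the logical AND function purely from
# examples -- no rule such as "output 1 only when both inputs are 1"
# is ever written into the program.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # "synaptic" weights, adjusted during learning
b = 0.0          # bias term
rate = 0.1       # learning rate

def predict(x):
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1 if s > 0 else 0

# Repeatedly nudge the weights toward the correct answers.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        w[0] += rate * error * x[0]
        w[1] += rate * error * x[1]
        b += rate * error

print([predict(x) for x, _ in examples])  # -> [0, 0, 0, 1]

After a few passes over the examples, the learned weights alone encode the pattern, which is the sense in which such networks work "without explicitly encoded knowledge or procedures."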

            In order to emulate human thought, computer systems rely on programmed "expert systems," a kind of AI that "acts as an intelligent assistant" to its human user.5 An expert system is not just a computer program that can search and retrieve knowledge. Instead, an expert system possesses expertise, pooling information and drawing its own conclusions, "emulating human reason."6 An expert system has three components that make it more technologically advanced


3 Mishkoff, 5.
4 Paul Edwards, The Closed World (Cambridge: The MIT Press, 1997), 356.
5 Edwards, 356.
6 Mishkoff, 5.
than a simple information retrieval system. The first of these components is the "knowledge base," a collection of declarative knowledge (facts) and procedural knowledge (courses of action) that acts as the expert system's memory bank. An expert system can integrate the two types of knowledge when making a conclusion.7 Another component is the "user interface," the hardware through which a human user communicates with the system, forming a two-way communication channel. The last component is the inference engine, which is the most advanced part of the expert system. This program knows when and how to apply knowledge, and also directs the implementation of that knowledge. These three components allow the expert system to exceed the capabilities of a simple information retrieval system.
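A minimal sketch in Python may make the three components concrete (the facts and rules below are hypothetical illustrations, not examples from Mishkoff): a knowledge base holds declarative facts and procedural if-then rules, an inference engine decides when and how to apply that knowledge, and a simple prompt stands in for the user interface.

# Knowledge base: declarative knowledge (facts) and procedural
# knowledge (if-then rules, i.e., courses of action).
facts = {"engine_temp_high", "oil_pressure_normal"}
rules = [
    ({"engine_temp_high", "coolant_low"}, "add_coolant"),
    ({"engine_temp_high", "oil_pressure_normal"}, "check_radiator_fan"),
    ({"oil_pressure_low"}, "stop_engine_immediately"),
]

def inference_engine(facts, rules):
    """Decide when and how to apply knowledge: forward-chain through
    the rules, firing any rule whose conditions are all satisfied."""
    conclusions = set()
    changed = True
    while changed:
        changed = False
        for conditions, action in rules:
            if conditions <= facts and action not in conclusions:
                conclusions.add(action)
                facts = facts | {action}  # a conclusion may trigger further rules
                changed = True
    return conclusions

# User interface: a two-way communication channel with the human user.
if __name__ == "__main__":
    answer = input("Is the coolant level low? (y/n) ").strip().lower()
    if answer == "y":
        facts.add("coolant_low")
    for advice in inference_engine(facts, rules):
        print("Recommendation:", advice)

Unlike simple retrieval, the engine combines the two kinds of knowledge to reach conclusions that were never stored directly.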

            The capabilities of expert systems have opened up doors for military application. These functions can be applied to a number of military situations, from battlefield management, to surveillance, to data processing. Integrating expert systems into military AI technology gives those systems the ability to interpret, monitor, plan, predict, and control system behavior.8 A system is able to monitor its behavior, comparing and interpreting observations collected through sensory data. The ability to monitor and interpret is important for AI specializing in surveillance and image analysis, a vital capability for unmanned aerial vehicles. Expert systems also function as battlefield aids, helping to plan by designing actions, while also predicting, inferring consequences based on large amounts of data.9 The military application of expert systems gives the military an advantage on and off the battlefield, aiding decision making and streamlining battlefield management and surveillance.



7 Mishkoff, 55.
8 Mishkoff, 59.
9 Mishkoff, 59.
            AI benefits society in a number of ways, including socially, economically, and technologically. AI's rapid data processing and accuracy can help in many different sectors of society. Although these benefits are progressive and necessary in connection with other emerging technologies, specifically computer technology, society must be wary of over-reliance on AI technology and integration. Over-integration of AI into society has begun the trend of taking the human out of the loop, relying more on AI to carry out tasks ranging from small to large. And as AI technology continues to develop, autonomous AI systems will be relied on further to carry out tasks in all aspects of society, especially in military systems and weapons; as human control diminishes, humans must be cautious of putting all their eggs in one basket. The dangers of using AI in the military often outweigh the benefits; these dangers include malfunction, unethical use, lack of testing, and the unpredictable nature and actions of AI systems.

       The possibility of a loss of control over an AI system, of humans giving a thinking

machine too much responsibility, increases the chances of that reliance backfiring on its human

creators. The backfire isn't just inconvenient; it can also be dangerous, especially if it takes place in a military system. Missiles, unmanned drones, and other advanced forms of weaponry rely on AI to function, and as AI technology becomes faster and smarter, humans are relying on that technology more and more. These systems have the

ability to cause catastrophic damage, and taking humans out of the loop is especially dangerous.

       There has been extensive research and debate over AI in numerous regards. From the

birth of AI as a field at the 1956 Dartmouth Conference, there has been support and opposition,

optimists, skeptics, and deniers from all fields including physics, philosophy, computer science,

and engineering. I will recognize all these different viewpoints, but my argument is that of the
skeptics, recognizing the benefits and progress that AI can bring, but still being wary of over-reliance on AI, specifically its integration into military systems. The idea of putting technology, such as advanced weaponry and missiles, under the responsibility of an AI system, whether it be AI software or hardware, is especially dangerous. AI machines may lack the ability to think morally or ethically, or to understand morality at all, so giving them the ability to kill while relying heavily on them is a danger. Optimists such as AI founders Marvin Minsky and John McCarthy fully support, embrace, and trust the integration of AI into society. On the other side of the spectrum are the deniers, the most famous being Hubert Dreyfus, who believe that a machine will never have the capabilities to emulate human intelligence, denying the existence of AI altogether. This section of my paper reviews the existing literature on AI and the diverse views of its critics and supporters.

            The supporters of AI come from diverse fields of study, but all embrace the technology and regard it with optimism and trust. Alan Turing, an English computer scientist, was one of the first scholars to write about AI, even before it was declared a field. Turing's paper, "Computing Machinery and Intelligence," is mainly concerned with the question "Can Machines Think?"10 Turing's work was some of the first looking into computer and AI theory.

Turing introduces the “Turing Test,” which tests a machine, both software and hardware, to see if

it can exhibit intelligent behavior. Turing doesn't just introduce the Turing Test, but also shows

his optimism for AI by refuting the "Nine Objections," which were nine possible objections to a machine's ability to think. Some of these objections include a theological objection, the inability of computers to think independently, mathematical limitations, and complete denial of the


10 Alan Turing, "Computing Machinery and Intelligence," Mind 59 (1950): 433-460.
existence of thinking machines. Turing refutes these objections through both philosophical and

scientific arguments supporting the possibility of a thinking machine. Turing argues that one reason people deny the possibility of thinking machines is not that they think it impossible, but rather that they fear it: "we like to believe that Man is in some subtle way superior to the rest of creation."11 Turing argues that computers will have the ability to

think independently and have conscious experiences.

            Another notable early AI developer was Norbert Wiener, an American mathematician,

who was the originator of cybernetics theory. In The Human Use of Human Beings: Cybernetics and Society, Wiener argues that the automation of society is beneficial. Wiener shows that there shouldn't be a fear of integrating technology into society, but instead people should embrace the integration. Wiener says that cybernetics and the continuation of technological progress rely on a human trust in autonomous machines. Though Wiener recognizes the benefits and progress that automation brings, he still warns against relying too heavily on it.

            After the establishment of AI as a field at the Dartmouth Conference, the organizer of the

conference, John McCarthy, wrote Defending AI Research. In this book, McCarthy collected

numerous essays that support the development of AI and its benefits to society. McCarthy

reviews the existing literature of notable early AI developers and either refutes or supports their

claims. In the book, McCarthy reviews the article “Artificial Intelligence: A General Survey.”12

The article was written by James Lighthill, a British mathematician. In the article, Lighthill is

critical of the existence of AI as a field. McCarthy refutes Lighthill's claims and defends AI's existence and development. McCarthy also defends AI research from those who regard "AI as an

11 Turing, 444.
12 John McCarthy, Defending AI Research (California: CSLI Publications, 1996), 27-34.
incoherent concept philosophically,” specifically refuting the arguments of Dreyfus. McCarthy

argues that philosophers often “say that no matter what it [AI] does, it wouldn't count as

intelligent.”13 Lastly, McCarthy refutes the arguments of those who claim that AI research is

immoral and antihuman, saying that these skeptics and opponents are against pure science and

research motivated solely by curiosity.14 McCarthy argues that research in computer science is necessary for opening up options for mankind.15

Hubert Dreyfus has been a prominent denier of the existence of AI for decades. A

professor of philosophy at UC Berkeley, Dreyfus has written numerous books in opposition to

and critiquing the foundations of AI as a field. Dreyfus's main critique of AI is the idea that a

machine can never have the capability to fully emulate human intelligence. Dreyfus argues that the power of a biological brain cannot be matched, even if a machine has superior data processing capabilities. A biological brain not only reacts to what it perceives in the environment, but relies on background knowledge and experience to think.16 Humans also incorporate ethics and morals into their decisions, while a machine can only use what it is programmed to think. What Dreyfus is arguing is that the human brain is superior to AI, and that a machine can't emulate human intelligence. His position is summed up in his observation that "scientists are only beginning to understand the workings of the human brain, with its billions of interconnected neurons working together to produce thought. How can a machine be built based on something of which scientists have so little understanding?"17

            Concerning the relationship between the military and computer and AI technology,

13 McCarthy, vii.
14 McCarthy, 2.
15 McCarthy, 20.
16 Hubert Dreyfus, Mind Over Machine, 31.
17 David Masci, "Artificial Intelligence," CQ Researcher (1997): 7.
there has been much debate over how much integration is safe. As the military integrates autonomous systems into its communication, information, and weapon systems, the danger of over-reliance rises. One of the first people to recognize this danger was the previously mentioned Norbert Wiener. Even though Wiener was supportive of AI and its integration into society, he had a very different viewpoint concerning its use in military and weapon technology. Wiener wrote a letter in 1947 called "A Scientist Rebels," which argues against government and military influence on AI and computer research. Wiener warns of the "gravest consequences" of the government's influence on the development of AI.18 Wiener looks at the development of the atomic bomb as an example, and how the scientist's work falls into the hands of those he "is least inclined to trust," in this case the government and military. The idea that civilian scientific research can be appropriated by the military and used in weaponry is a critique of the military's influence on AI development. Scientific research may seem innocent, but as it is manipulated through military influence, purely scientific research is integrated into war technology.

            Paul Edwards's The Closed World gives a history of the military's impact on AI research and development, and vice versa. Edwards looks at why the military put so much time and effort into computers, examining the effects that computer technology and the integration of AI data processing systems had on the history of the Cold War. Edwards's broad historical look at computer and AI development gives insight into a military connection to the progressing technology that still exists today. Computer development began in the early 1940s, and from that time to the early 1960s, the U.S. military played an important role


18 Norbert Wiener, "From the Archives," 38.
in the progressing computer technologies. After WWII, the military's role in computer research grew exponentially. The U.S. Army and Air Force began to fund research projects, contracting large commercial technology corporations such as Northrop and Bell Laboratories.19 This growth in military funding and purchases enabled American computer research to progress at an extremely fast pace; however, due to secrecy, the military was able to keep control over the spread of research.20 Because of this secrecy, military-sponsored computer projects were tightly controlled and censored. Through heavy investment and its relationship with the government-controlled Advanced Research Projects Agency (ARPA), the military also had a role in the "nurturance" of AI. AI research received over 80% of its funding from ARPA, keeping the military in tune with AI research and development.21 The idea that "the computerization of society has essentially been a side effect of the computerization of war" sums up the effect of the military on computer and AI development.

         Paul Lehner's Artificial Intelligence and National Defense looks at how AI can benefit the

military, specifically through software applications. Written in 1989, Lehner's view represents that of the later years of the Cold War, when the technology had not fully developed but was progressing exponentially. Lehner discusses the integration of "expert systems,"

software that can be used to aid and replace human decision makers. Lehner recognizes AI's data

processing speed and accuracy and the benefits that the “expert system” could bring when

applied to the military. Armin Krishnan's Killer Robots looks at the other way that AI is being

integrated into the military, through hardware and weapons, also evaluating the moral and ethical



19 Edwards, 60.
20 Edwards, 63.
21 Edwards, 64.
issues surrounding the use of AI weaponry. Krishnan's book was written in 2009, and looks at AI in the military today, specifically examining the ethical and legal problems associated with

drone warfare and other robotic soldier systems. Some of the ethical concerns Krishnan brings

up are: diffusion of responsibility for mistakes or civilian deaths, moral disengagement of

soldiers, unnecessary war, and automated killing.

       Recently there has been much debate over the legal concerns regarding the use of AI in

military systems and weaponry. One of the leading experts on the legality of AI integration is

Peter W. Singer, the Director of the 21st Century Defense Initiative at Brookings. In his article

"Robots At War: The New Battlefield" (2009), Singer raises numerous legal concerns. The laws of war were outlined in the Geneva Conventions in the middle of the 20th century. However, due to the progressing and changing war technologies, these 20th-century laws of war are having trouble keeping up with 21st-century war technology. Singer argues that the laws of war need to be updated to account for new AI systems and their integration. Due to high numbers of civilian deaths from AI systems, specifically drones, Singer also argues that these deaths can be seen as war crimes. Lastly, Singer raises the question of who is legally responsible for an autonomous machine: the commander, the programmer, the designer, the pilot, or the drone itself? Singer's analysis of the legal concerns over changing war technology is also reflected in his participation in U.S. Congressional hearings on unmanned military systems.

            Many scholars have also looked at what the future holds for AI. In 1993, Vernor Vinge coined the term "singularity" to describe the idea that AI technology will one day surpass human intelligence, moving humankind into a post-human state. This is the point where AI "wakes up," gaining the ability to think for itself. This idea of "singularity" is expanded on in Katherine Hayles's How We Became Posthuman. Hayles looks at this as a time period in the near future when information is separated from the body, becoming disembodied so that it can be moved through different bodies. Hayles's view shows that AI isn't just advancing mechanically, but also mentally and psychologically. On the singularity view, humans are heading in a direction where computers and humans will have to integrate with each other. As technology continues to progress and AI systems become more advanced, it is important to recognize that the future may be deeply integrated with AI technology.

                                   I. The History of AI: The Early 1900s to 1956

            Beginning in the early 1900s, computer scientists, mathematicians, and engineers began

to experiment with creating a thinking machine. During World War II, the military began using

computers to break codes, ushering in the development of calculating computers. ENIAC, the

Electronic Numerical Integrator And Computer, was the first electronic computer to successfully

function.22 Early on, the majority of computer and AI projects were military-funded, giving the

military major influence over allocation and integration of the technology. As computer

technology began to progress, so did AI as a branch of computer science.

            The first person to consider the possibilities of creating AI in the form of a thinking

machine was Alan Turing. In his article “Computing Machinery and Intelligence,” Turing

recognized the possibility that a machine could plausibly emulate human thought. Turing's paper was very important to the development of AI as a field, being the first to argue the plausibility of AI's existence, while also establishing a base for the field. Turing's refutation of the nine objections answers the skeptics and deniers, engaging a diverse range of arguments against AI.

22 Arthur Burks, "The ENIAC," Annals of the History of Computing 3, no. 4 (1981): 389.

            Another major figure in the development of computers and artificial intelligence was Hungarian mathematician John von Neumann. Von Neumann made many important contributions in a variety of fields, but had a very large impact on computer science. Today's computers are based on "von Neumann architecture," building a computer to "use a sequential 'program' held in the machine's 'memory' to dictate the nature and the order of the basic computational steps carried out by the machine's central processor."23 He also compared this architecture to a human brain, arguing that their functions are very similar. Von Neumann's The Computer and the Brain, published posthumously in 1958, was an important work concerning artificial intelligence, strengthening Turing's claim that computers could emulate human thought.24 In the book, von Neumann compares the human brain to a computer, pointing out similarities in their architecture and function. In some cases, the brain acts digitally, because its neurons themselves operate digitally. Similar to a computer, the neurons fire in response to an order to activate them.25 The result of von Neumann's work strengthened the plausibility of creating a thinking machine.
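The quoted idea can be sketched in a few lines of Python (the three-instruction set below is hypothetical, chosen only for illustration): a sequential program held in memory dictates the nature and order of the steps carried out by a central processor.

# Von Neumann architecture in miniature: the program is data held in
# memory, and a processor fetches and executes it one step at a time.
memory = [
    ("LOAD", 7),     # put the number 7 in the accumulator
    ("ADD", 5),      # add 5 to it
    ("PRINT", None), # output the result
]

def run(memory):
    accumulator = 0
    pc = 0                    # program counter: which step comes next
    while pc < len(memory):
        op, arg = memory[pc]  # fetch the instruction from memory
        if op == "LOAD":      # decode and execute it
            accumulator = arg
        elif op == "ADD":
            accumulator += arg
        elif op == "PRINT":
            print(accumulator)
        pc += 1               # advance to the next sequential step

run(memory)  # prints 12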

            Ultimately, the work of Turing, Wiener, and von Neumann shows the optimism that the early computer developers had. All three shared a faith in computer science and AI and supported the fields' progress. Turing finished his paper with, "We can only see a short distance ahead, but we can see plenty there that needs to be done."26 Even though these early computer


23 von Neumann, The Computer and the Brain, xii.
24 von Neumann
25 von Neumann, 29.
26 Turing, 460.
developers shared this optimism, they were also wary of the dangers of the progressing computer

technology. Specifically Wiener, who had earlier written his letter “A Scientist Rebels,” had a

skeptical view of the future of computer technology. In Cybernetics, Wiener states,

           What many of us fail to realize is that the last four hundred years are a highly special period in the history of the world. The pace at which changes during these years have taken place is unexampled in earlier history, as is the very nature of these changes. This is partly the results of increased communication, but also of an increased mastery over nature, which on a limited planet like the earth, may prove in the long run to be an increased slavery to nature. For the more we get out of the world the less we leave, and in the long run we shall have to pay our debts at a time that may be very inconvenient for our own survival.27

This quote reflects Wiener's skepticism. He understood the benefits that AI and computer science could bring to society, but was wary of over-reliance on the technology.

Wiener’s quote is a warning of how fragile the world is, and that we need to be careful of the

rapid development of AI technology. As humans “master nature” through technology, they

become more and more vulnerable to their own creations.

                  II. The History of AI: 1956, The Cold War, and an Optimistic Outlook

           Following the work of Turing, Von Neumann, and Wiener, computer scientists John

McCarthy and Marvin Minsky organized the Dartmouth conference in the summer of 1956. This

conference would lead to the birth of AI as a field, a branch of computer science. The


27 Wiener, Cybernetics, 46.
conference was based on the idea that machines could be made to "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."28 Using this idea, the goal of the conference was to establish AI as a field and show that it was plausible. As a result, AI began to gain momentum.

         The military had a major influence over the research and development of AI and

computer science beginning in the 1940s. Shortly after World War II, as the Cold War era began,

AI research and development began to grow exponentially. Military agencies had the financial

backing to provide the majority of the funding, as the U.S. Army, Navy, and Air Force began to fund research projects and contract civilian science and research labs for computer science development. Between 1951 and 1961, military funding for research and development rose from $2 billion to over $8 billion. By 1961, research and development companies Raytheon and Sperry Rand were receiving over 90% of their funding from military sources. The large budget for research and development enabled AI research to take off, as AI research received 80% of its funding from ARPA.29 Because of the massive amount of funding from military sources, American computer research was able to surpass the competition and progress at an exponential rate. The U.S. military was able to beat out Britain, its only plausible rival, making the U.S. the leader in computer technology.

            There were numerous consequences of the military's hand in the research and development of computer science early in the Cold War. As a result of its overwhelming funding, the military was able to keep tight control over research and development, steering it in the direction it desired. This direction was primarily concerned with developing technology that could benefit the military itself, whether for communication, weaponry, or national defense. Wanting to keep its influence as strong as possible, the military maintained tight control through secrecy of the research.30 The military wanted to make sure that the researchers it had on contract were always aware of the interests of national security, censoring the communication between researchers and scientists in different organizations. A problem that arose from this censorship was that researchers could no longer openly share ideas, impeding and slowing down development. This showed that the military was willing to wait longer to ensure that national security measures were followed.

28 John McCarthy and Marvin Minsky, "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence" (proposal, Dartmouth College, August 31, 1955).
29 Edwards, 64.

         As a result of the heavy funding from the military, AI turned from being just theory to

having commercial interests. Parallel to the rapidly progressing computer technology, military

research agencies began to also progress in AI development, studying cognitive processes and

computer simulation.31 The main military research agency to look into AI was the Advanced

Research Projects Agency (ARPA, renamed DARPA in 1972). Joseph Licklider, head of ARPA's

Information Processing Techniques Office, was a crucial figure in increasing development of AI

technology, establishing his office as the primary supporter of “the closed world military goals of

decision support and computerized command and control,” which found “a unique relationship

to the cyborg discourses of cognitive psychology and AI."32 This unique relationship is the basis of AI: mastering cognitive psychology and then emulating that psychology in a machine. This branch of ARPA not only shows the military's interest and impact on research and



30 Edwards, 62.
31 Edwards, 259.
32 Edwards, 260.
development of AI, but also the optimism that the military had for its development. ARPA was

able to mix basic computer research with military ventures, specifically for national defense,

allowing the military to control the research and development of AI technology.

           The military influence over DARPA continued into the 1970s, as DARPA became the

most important research agency for military projects. The military began to rely on AI at an exponential rate. DARPA began to integrate AI technology into a number of military systems, including aids for both pilots and ground soldiers and battlefield management systems that relied on expert systems.33

           All these aspects of AI's integration into warfare are known as the “robotic battlefield” or

the “electronic battlefield.” AI research opened the doors for this new warfare technology,

integrating AI and computer technology to create electronic, robotic warfare and automated

command and sensor networks for battlefield management. During the Vietnam War, military

leaders shared an optimism for new AI technology. General William Westmoreland, head of

military operations for the U.S. in Vietnam from 1964 to 1968, predicted that "on the battlefield

of the future, enemy forces will be located, tracked, and targeted almost instantaneously through

the use of data-links, computer assisted intelligence evaluation and automated fire control.”34

Westmoreland also saw that as the military began to rely increasingly on AI technology, the need for human soldiers would decrease. Westmoreland's prediction not only shows the optimism that military leaders had for AI technology, but also the over-reliance that the military would have on

those weapons.

           From the 1950s to the 1980s, DARPA continued to be the military’s main research and

33 Edwards, 297.
34 Armin Krishnan, Killer Robots: Legality and Ethicality of Autonomous Weapons, 19.
development agency. DARPA received heavy funding from the federal government, as military

leaders continued to support the need for the integration of new AI technology. The military leaders' optimism about AI technology is reflected in the ambitious goals that DARPA had. In

1981, DARPA aimed to create a “fifth generation system,” one that would “have knowledge

information processing systems of a very high level. In these systems, intelligence will be

greatly improved to approach that of a human being.”35 Three years later in 1984, DARPA’s

“Strategic Computing” stressed the need for the new technology stating, “Using this new

technology [of artificial intelligence], machines will perform complex tasks with little human

intervention, or even with complete autonomy.”36 It was in 1984 that the U.S. military began not

just researching and developing AI, but actually integrating it into military applications for use

on the battlefield. DARPA announced the creation of three different projects: an all-purpose

autonomous land vehicle, a “pilot’s associate” to assist pilots during missions, and a battlefield

management system for aircraft carriers. The military was beginning to rely on this AI

technology, using it to assist human military leaders and soldiers. Fearing it would lose ground to Britain, China, and Japan, DARPA spent over $1 billion to maintain its lead.37

         President Ronald Reagan continued the trend of the federal government using DARPA for

advanced weapon development and showed the military’s commitment to developing AI military

weapons and systems. Reagan’s Strategic Defense Initiative (SDI), later nicknamed “Star Wars,”

was a proposed network of hundreds of orbiting satellites with advanced weaponry and battle



35 Paul Lehner, Artificial Intelligence and National Defense: Opportunity and Challenge, 164.
36 David Bellin, Computers in Battle: Will They Work?, 171.
37 Lehner, 166.
management capabilities. These satellites would be equipped with layers of computers, “where

each layer of defense handles its own battle management and weapon allocation decisions.”38

Reagan’s SDI is a perfect example of the government and military’s overly ambitious integration

of AI technology. Reagan was willing to put both highly advanced weaponry and nuclear weapons under the partial control of AI technology. Overall, Reagan's SDI was a reckless proposition by the

military, taking man out of the loop while putting weapons of mass destruction under the control

of computer systems.

            As a result of the military's commitment to the research and development of AI, AI technology has developed rapidly, as has its integration into both society and military applications. Before looking at the future of AI, it is important to first look at the different levels of autonomy and where the technology stands today. In a nutshell, autonomy is the ability of a machine to function on its own with little to no human control or supervision. There are three types of machine autonomy: pre-programmed autonomy, limited autonomy, and complete autonomy. Pre-programmed autonomy is when a machine follows instructions and has no capacity to think for itself.39 An example of pre-programmed autonomy is a factory machine programmed for one job, such as welding or painting. Limited autonomy is the technology level that exists today, one where the machine is capable of carrying out most functions on its own, but still relies on a human operator for more complex behaviors and decisions. Current U.S. UAVs possess limited autonomy, using sensors and data processing to come up with solutions, but still relying on human decision making. Complete autonomy is the most advanced level, in which machines operate with no human input or control.40 Although complete autonomy is still being developed, AI technology continues to progress at a rapid pace, opening the doors for complete autonomy, with DARPA estimating that complete autonomy will be achieved before 2030.41

38 Lehner, 159.
39 Krishnan, 44.
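The three levels can be sketched as control loops in Python (the sensor events, actions, and mission logic below are hypothetical placeholders): what changes from one level to the next is only the degree of human involvement.

def preprogrammed(step):
    # Pre-programmed autonomy: a fixed instruction cycle with no
    # capacity to decide anything (e.g., a factory welding machine).
    instructions = ["move_to_seam", "weld", "retract"]
    return instructions[step % len(instructions)]

def limited(sensor_reading, human_operator):
    # Limited autonomy (today's UAVs): the machine proposes a course
    # of action from sensor data, but a human makes complex decisions.
    if sensor_reading == "unknown_object":
        return human_operator("investigate unknown object?")
    return "continue_patrol"

def complete(sensor_reading):
    # Complete autonomy (still in development): the machine decides
    # and acts entirely on its own, with no human input or control.
    return "investigate" if sensor_reading == "unknown_object" else "continue_patrol"

print(preprogrammed(0))                                      # move_to_seam
print(limited("unknown_object", lambda q: "hold_position"))  # hold_position
print(complete("unknown_object"))                            # investigate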

            In a 2007 interview, Tony Tether, the Director of DARPA, showed his agency's optimism about and commitment to the future development of AI technology. Tether refers to DARPA's cognitive program, the program focusing on research and development of thinking machines, as "game changing," where the computer is able to "learn" its user.42 DARPA is confident that it will be able to create fully cognitive machines, making AI smarter and more closely emulating human intelligence. Tether discusses the Command Post of the Future (CPOF), a distributed, computer-run command and control system that functions 24/7, taking human operators out of the loop. The CPOF, though beneficial for its accurate and rapid data processing, is a dangerous example of over-reliance on AI. Tether says that "those people who are now doing that 24-by-7 won't be needed," but it is important, not just for safety but to retain full control, to keep a human operator over military weapons and systems.43 This still shows the military's influence over research and development, directing DARPA's research towards an over-reliance on AI machines.

         But what happens when humans rely on AI so much that there is no turning back?

Vinge’s Singularity Theory is the theory that AI will one day surpass human intelligence, and

humans will eventually integrate with AI technology. Vinge’s Singularity points out the ultimate


40 Krishnan, 45.
41 Krishnan, 44.
42 Shachtman, "Darpa Chief Speaks."
43 Shachtman.
outcome of over-reliance and over-optimism in AI technology: the loss of control of AI and the

end of the human era. Vinge warns that between 2005 and 2030, computer networks might

“wake up,” ushering in an era of the synthesis of AI and human intelligence. In her book How

We Became Posthuman, Hayles continues Vinge’s Singularity Theory and looks at the separation

of humans from human intelligence, an era where the human mind has advanced psychologically

and mentally when integrated with AI technology.44 Hayles argues that "the age of the human is drawing to a close."45 Hayles looks at all the ways that humans are already beginning this integration with intelligent machines, such as computer-assisted surgery in medicine and the replacement of human workers with robotic arms in labor, showing that AI machines have the

ability to integrate with or replace humans in a diverse number of aspects of society.46

                            III. Skepticism: The Dangers of Over-Reliance on AI

         Although over-reliance on AI for military purposes is dangerous, AI does bring many

benefits to society. Because of these benefits, humans are drawn to AI technology, becoming

overly optimistic and committed to the technology. These numerous benefits are what give the military its optimism. In this section, I will discuss AI's benefits to civilian society, followed by the limitations and dangers of AI for both civilian society and the military.

            AI has the ability to amplify human capabilities, surpassing human accuracy, expertise, and speed at a task. Hearing, seeing, and motion are amplified by AI systems through speech recognition, computer vision, and robotics. Extremely rapid, efficient, and accurate data processing gives AI technology an advantage over humans. In order



44 Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, 2.
45 Hayles, 283.
46 Hayles, 284.
to look at these benefits, I will use examples of how AI can be applied to diverse sections of

society. Speech recognition understands and creates speech, increasing speed, ease of access,

and manual freedom when interacting with the machine.47 In business, office automation is

relying on new AI speech recognition capabilities to streamline business operations. Data entry,

automatic dictation and transcription, and information retrieval all benefit from AI speech

recognition. Human users benefit from this technology through easier, streamlined

communication.48 AI robotics is another beneficial emerging technology for a number of reasons

including: increased productivity, reduced costs, replacing skilled labor, and increased product

quality.49 AI robotics gives AI systems the ability to perform manual tasks, making them

useful for integration into industrial and manufacturing sectors of society, such as automobile

and computer chip factories. In medicine, surgeons and doctors are now integrating AI

technology to assist in challenging surgical operations and to identify and treat diseases.50 AI has

even found its way into everyday life, assisting the elderly in senior facilities, assisting pilots on

commercial airlines, and being integrated into homes, creating "smart houses."51 I recognize that this integration of AI is beneficial and not, in itself, dangerous. AI is helping progress

health, economic, and industrial technology, making it safer, more advanced, and more efficient.

Although there are numerous benefits, it is also important to understand both the limitations and

dangers of AI technology, specifically with its integration into military systems.

         Hubert Dreyfus leads the charge against the integration of AI, arguing both the limitations



47 Mishkoff, 108.
48 Mishkoff, 108.
49 Mishkoff, 120.
50 Von Drehle, "Meet Dr. Robot," 44; Velichenko, "Using Artificial Intelligence and Computer Technologies for Developing Treatment Programs for Complex Immune Diseases," 635.
51 Anderson, “Robot Be Good,” 72.
and dangers of AI machines. Dreyfus claims in What Computers Can't Do that early AI developers were "blinded by their early success and hypnotized by the assumption that thinking is a continuum," meaning that Dreyfus believes this progress cannot continue.52 Dreyfus is

specifically wary of the integration of AI into systems when they have not been tested. The over-optimism and reliance of AI supporters give AI machines the ability to function autonomously before they have been fully tested. In Mind Over Machine, Dreyfus expands his

skepticism, warning of the dangers of A.I. decision making because to him, decisions must be

pre-programmed into a computer, which leads to the A.I.'s "ability to use intuition [to be] forfeited

and replaced by merely competent decision making. In a crisis competence is not good

enough.”53 Dreyfus takes a skeptical approach by recognizing the benefits of AI on society,

specifically information processing, but strongly opposes the forcing of undeveloped AI on

society. He says that "AI workers feel that some concrete results are better than none," and that AI

developers continue to integrate untested AI into systems without working out all the

consequences of doing so.54 Dreyfus is correct in saying that humans must not integrate untested, underdeveloped AI into society, but rather must always be cautious. This skeptical approach

is important for the safe integration of AI, specifically when removing a human operator and

replacing him with an autonomous machine.

         Since the 1940s, there has been skepticism of AI in military applications from a diverse

group of opponents. The military's commitment to and reliance on the use of autonomous machines for military functions comes with many dangers, removing human operators and



52 Hubert Dreyfus, What Computers Can’t Do, 302.
53 Hubert Dreyfus, Mind Over Machine, 31.
54 Hubert Dreyfus, What Computers Can’t Do, 304.
putting more decisions into the hands of the AI machine. Dreyfus argues that there is danger in implementing "questionable A.I.-based technologies" that have not been tested. To Dreyfus,

allowing these automated defense systems to be implemented, “without the widespread and

informed involvement of the people to be affected” is not only dangerous, but also

inappropriate.55 It is inappropriate to integrate untested AI into daily life, where that AI may

malfunction or make a mistake that could negatively impact human life. Dreyfus is wary of

military decision-makers being tempted to “install questionable AI-based technologies in a

variety of critical contexts,” especially those applications that involve weapons and human life.56

Whether it is to justify the billions of dollars spent on research and development or to indulge the temptation of the AI machines' advanced capabilities, military leaders must be cautious of over-reliance on AI technology for military applications.

         Dreyfus was not the first skeptic of technology and its integration into military

applications. Wiener’s letter “A Scientist Rebels” showed both early scientists’ resistance and

skepticism of research and development’s relationship with the military. The point that Wiener

wants to make is that even if scientific information seems innocent, it can still have catastrophic

consequences. Wiener’s letter was written shortly after the bombings of Hiroshima and

Nagasaki, where the atomic bomb developers' work fell into the hands of the military. To

Wiener, it was even worse that the bomb was used “to kill foreign civilians indiscriminately.”57

The broad message of Wiener’s letter is that scientists should be skeptical of the military

application of their research. Though their work may seem innocent and purely empirical, it can



55 Hubert Dreyfus, Mind Over Machine, 12.
56 Hubert Dreyfus, Mind Over Machine, 12.
57 Wiener, “From the Archives,” 37.
still have grave consequences by falling into the hands of the military. Though Wiener is not

explicitly talking about AI research, his skepticism is important. Wiener emphasizes the need for

researchers and developers to be wary of how their work may be used, and warns them of the dangers of

cooperating with the military.

           Wiener’s criticism of the military’s relationship with research and development has not

changed that relationship, and the military continues to develop and use more AI technology in

its weapons and systems. The military application of AI brings a number of dangers to friendlies, enemies, and civilians alike. Though AI has many benefits in the military, the dangers outweigh those benefits. Taking a human out of the loop is dangerous in itself, and when human life is on the line, can a thinking machine be trusted to function like a human? Functioning completely autonomously, how do we know that such a machine will emulate the thought, decision making, and ethics of a human? The following are some of the dangers of

integrating AI technology into military applications.

            As Wiener warned, government misuse of AI in the military could be a dangerous outcome of AI's integration. Governments like the United States have massive defense budgets, giving them the resources to build large armies of thinking machines. This increases the chances of unethical use of AI by countries, specifically the U.S., giving these countries the opportunity not just to use AI technology for traditional warfare, but to expand its use into any sort of security. The use of AI opens the doors for unethical infringement upon civil

liberties and privacy within the country.58

           Another major danger of the use of AI in the military is the possibility of malfunctioning


58 Krishnan, 147-148.
weapons and networks, when the weapon or system acts in an unanticipated way. Computer programming is built on a cycle of writing programs, finding errors through malfunction, and fixing those errors. However, when using AI technology that might not be perfected, the risk of malfunction is greater. Software errors and unpredictable failures leading to malfunction are both liabilities to the AI military system. These chances of malfunction make AI military systems untrustworthy, a huge danger when heavily relying on AI software integrated into military networks.59 It is very challenging to test for errors in military software. Software can often pass practical tests; however, there are so many situations and scenarios that perfecting the software is nearly impossible.60 The larger the networks, the greater the dangers of malfunction. Thus, when AI conventional weapons are networked and integrated into larger AI defense networks, "an error in one network component could 'infect' many other components."61 The malfunction of an AI weapon is not only dangerous to those who are physically affected, but also opens up ethical and legal concerns. The malfunction of an AI system could be catastrophic, especially if that system is in control of WMDs. AI-controlled military systems increase the chances of accidental war considerably.

            However, the danger of malfunction is not just theory; July 1988 provided an example of an AI system malfunction. The U.S.S. Vincennes, a U.S. cruiser nicknamed "Robo-cruiser" because of its Aegis system, an automated radar and battle management system, was patrolling the Persian Gulf. An Iranian civilian airliner carrying 290 people registered on the system as an Iranian F-14 fighter, and the computer system considered it an enemy. The system fired and took down the plane, killing all 290 people. This event showed that humans are always needed in the loop, especially as machine autonomy grows. Giving a machine full control over weapon systems is reckless and dangerous, and if the military continues to phase out human operators, these AI systems will become increasingly greater liabilities.62

59 Bellin, 209.
60 Bellin, 209.
61 Krishnan, 152.

            The weaknesses in the software and functioning capabilities of AI military systems also make them vulnerable to probing and hacking, exposing flaws or causing loss of control of the unmanned system.63 In 2011, Iran was able to capture a U.S. drone by hacking its GPS system and making it land in Iran instead of what it thought was Afghanistan. The Iranian engineer who worked on the team that hijacked the drone said that they "electronically ambushed" it: "By putting noise [jamming] on the communications, you force the bird into autopilot. This is where the bird loses its brain." The Iranians' successful hijacking of the drone shows the vulnerabilities of the software on even advanced AI systems integrated into drones.64
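The attack the engineer describes can be sketched conceptually in Python (all coordinates below are hypothetical; this is a toy model, not a description of the actual drone's systems): jamming the command link forces a fallback to autopilot, and the autopilot touches down wherever its believed GPS position matches the target, so a spoofed fix displaces the real landing point.

TARGET = (345, 692)  # intended landing coordinates on a hypothetical grid

def landing_point(gps_offset):
    # The autopilot descends when its *believed* position (true position
    # plus any spoofed offset) equals TARGET, so the drone actually
    # touches down at TARGET minus the offset.
    return (TARGET[0] - gps_offset[0], TARGET[1] - gps_offset[1])

def navigate(command_link_ok, gps_offset):
    if command_link_ok:
        return "remote operator flies the drone"
    # Link jammed ("noise on the communications"): autopilot takes
    # over, trusting GPS alone -- "where the bird loses its brain."
    return "autopilot lands at " + str(landing_point(gps_offset))

print(navigate(True, (0, 0)))      # operator in control
print(navigate(False, (0, 0)))     # jammed, honest GPS: lands on target
print(navigate(False, (-8, 220)))  # jammed and spoofed: lands elsewhere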

            War is generally not predictable, yet AI machines function off of programs written for what is predictable. This is a major flaw in AI military technology, as the programs that make AI function consist of rules and code. These rules and codes are precise, making it nearly impossible for AI technology to adapt to a situation and change its functions. Because war is unpredictable, computerized battle management technology lacks both experience and morality, both needed to make informed and moral decisions on the battlefield. The ability to adapt is necessary for battlefield management, and in some cases, computer programming prevents the technology from making those decisions.65
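This brittleness can be illustrated with a toy rule-based adviser in Python (the rules and inputs are hypothetical): the program answers precisely within the cases its programmers anticipated and has nothing to fall back on when a situation falls outside them.

# Rule-based brittleness: precise answers inside the rules, and no
# experience or judgment to draw on outside them.
rules = {
    ("aircraft", "squawking_military"): "treat as hostile",
    ("aircraft", "squawking_civilian"): "treat as civilian",
    ("vehicle", "on_approved_route"): "allow to pass",
}

def advise(kind, signal):
    try:
        return rules[(kind, signal)]
    except KeyError:
        # An unanticipated situation: the rules are silent, and the
        # machine cannot adapt or improvise the way a human could.
        return "NO RULE: behavior undefined"

print(advise("aircraft", "squawking_civilian"))     # treat as civilian
print(advise("aircraft", "ambiguous_transponder"))  # NO RULE: behavior undefined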


62 Peter Singer, “Robots At War: The New Battlefield,” 40.
63 Alan Brown, “The Drone Warriors,” 24.
64 Scott Peterson, “Iran Hijacked US Drone, says Iranian Engineer.”
65 Bellin, 233.
            The last danger, the "Terminator Scenario," is more of a stretch, but it is still a possibility. In the "Terminator Scenario," machines become self-aware, see that humans are their enemy, and take over the world, destroying humanity. As AI machines become increasingly intelligent, their ability to become self-aware and intellectually evolve will also develop. The idea of AI machines beginning to "learn" their human operators and environments is the start of creating machines that will become fully self-aware. If these self-aware machines have enough power, for example through their integration into military systems, they have the power to dispose of humanity.66 Though the full destruction of humanity is a stretch, the danger of AI turning on its human creators is still a possibility and should be recognized as a possible consequence of integrating AI into military systems.

                IV. A Continuing Trend: The Military’s Exponential Use of Autonomous AI

            Though these dangers are apparent, and in some cases have led to the loss of human life, the U.S. military continues to rely exponentially on AI technology in its military systems, integrated

into both its weapon systems and battle network systems. The military is using AI technology,

such as autonomous drones, AI battlefield management systems, and AI communication and

decision making networks for national security and on the battlefield, ushering in a new era of

war technology. The idea of taking man out of the loop on the battlefield is dangerous and

reckless. Removing human operators is not only a threat to human life, but also opens the debate

over ethical, legal, and moral problems regarding the use of AI technology in battle.

           AI has progressively been integrated into military applications, the most common being

weapons (guided missiles and drones) and expert systems for national defense and battlefield


66 Krishnan, 154.
management. This increased integration has led to both over-reliance on and over-optimism about

the technology. The rise of drone warfare through the use of UAVs (Unmanned Aerial Vehicles)

and UCAVs (Unmanned Combat Aerial Vehicles), has brought numerous benefits to military

combat, but also many concerns. As UCAVs become exponentially more autonomous, their

responsibilities have grown, utilizing new technology and advanced capabilities to replace

human operators and take humans out of the loop.67

            The U.S. military's current level of autonomy on UCAVs is supervised autonomy, where a machine can carry out most functions without having to use pre-programmed behaviors. With supervised autonomy, an AI machine can make many decisions on its own, requiring little human supervision. In this case, the machine still relies on a human operator for final complex decisions such as weapon release and targeting, but is able to function mostly on its own.68 Supervised autonomy is where the military should stop its exponential integration. It puts complex legal and ethical decisions in the hands of a human operator while still drawing on the benefits that AI offers. When the final decision involves human life or destruction, it is important to have a human operator making that decision, rather than allowing the computer to decide. Supervised autonomy still allows a human operator to monitor the functions of the UCAV, while keeping it ethically and legally under control. It is especially dangerous that the U.S. military is working towards the creation of completely autonomous machines, ones that can operate on their own with no human supervision or control. Complete autonomy gives the machine the ability to learn, think, and adjust its behavior in specific situations.69 Giving these completely autonomous machines the ability to make their own decisions is dangerous, as their decisions would be unpredictable and uncontrollable. The U.S. military's path to creating and utilizing completely autonomous machines is reckless, and supervised autonomy is the farthest the military should go with AI technology and warfare.

67 Hugh McDaid, Robot Warriors: The Top Secret History of the Pilotless Plane, 162.
68 Krishnan, 44.
69 Krishnan, 44.
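A minimal sketch of this supervised-autonomy control flow in Python (the event names and routine behaviors are hypothetical): the machine handles routine functions autonomously, but weapon release is gated on a human operator's decision.

def human_operator(proposal):
    # Stand-in for the remote pilot; here the human always declines.
    print("Operator reviewing:", proposal)
    return False

def ucav_step(event):
    routine = {"waypoint_reached": "fly to next waypoint",
               "low_fuel": "return to base"}
    if event in routine:
        return routine[event]              # handled autonomously
    if event == "target_identified":
        # A complex, life-or-death decision: the human stays in the loop.
        if human_operator("release weapon on identified target?"):
            return "weapon released under human authorization"
        return "holding fire"
    return "request operator guidance"     # anything unanticipated

print(ucav_step("waypoint_reached"))   # fly to next waypoint
print(ucav_step("target_identified"))  # holding fire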

            In the last decade, the use of military robotics has grown for a number of reasons, including the numerous benefits that AI robotics brings to the battlefield. Originally used purely for reconnaissance, UAVs are now being utilized by the military as weapons. The use of UAVs and other AI weapons is heavily supported by low-ranking military personnel, the ones who directly interact with the drones. Higher-ranking military officials and political leaders are split, with some fully supporting their use while others recognize the dangers and concerns of that use. For now, the benefits that UAVs possess continue to drive their integration into the U.S. military.

            One of the benefits of AI weaponry is that it reduces manpower requirements. In first-world countries, especially the U.S., the pool of prospective soldiers is shrinking. Both the physical requirements and the declining attractiveness of military service are keeping Americans from enlisting in the military. As the military budget decreases, UCAVs are able to replace human soldiers, cutting personnel costs.70 Another benefit of replacing human soldiers with AI robotics is that it takes humans out of the line of fire, while also eliminating human fallibility. The reduction in casualties of war is very appealing not only to the fighting soldiers, but also to their family, friends, and fellow citizens. Being able to take soldiers out of the line of fire and replace them with robotics saves soldiers' lives. These robotics are also able to reduce mistakes
70 Krishnan, 35.
These robots are also able to reduce mistakes and increase performance compared to their human counterparts; their amplified capabilities allow them to outperform human soldiers.71 The ability to function 24/7, low response times, advanced communication networks, rapid data and information processing, and targeting speed and accuracy are some of the many benefits of AI robotics on the battlefield.

         The benefits of AI military robotics matter most to lower-ranking military personnel. These soldiers interact with the robots on the battlefield, recognizing the benefits the technology brings to them personally while failing to recognize the ethical and legal concerns that come along with the drones. The following are quotes from enlisted, low-ranking U.S. soldiers:72

• “It's surveillance, target acquisition, and route reconnaissance all in one. We saved countless lives, caught
  hundreds of bad guys and disabled tons of IEDs in our support of troops on the ground.”
  -Spc. Eric Myles, UAV Operator
• “We call the Raven and Wasp our Airborne Flying Binoculars and Guardian Angels.”
  -GySgt. Butler
• “The simple fact is this technology saves lives.”
  -Sgt. David Norsworthy

It is understandable why low-ranking soldiers embrace the technology and support its use. UCAVs have proven highly effective on the battlefield, saving the lives of U.S. soldiers and effectively combating enemies with their advanced AI functions. But though UCAVs are effective on the battlefield and especially benefit the soldiers on the front line, the ethical and legal concerns remain important consequences of the overall use of AI technology.

         Higher-ranking military leaders and political leaders, however, are split in their support. Some fully support the technology, while others are skeptical of too much automation and the dangers of over-reliance.


71Krishnan, 40.
72U.S. House of Representatives. Subcommittee on National Security and Foreign Affairs. Rise of the Drones: Unmanned
Systems and the Future of War Hearing, Fagan, 63.
German Army General Wolfgang Schneiderhan, Chief of Staff of the Bundeswehr from 2002 to 2009, shows this skepticism in his article, “UV’s: An Indispensable Asset in Operations.” Schneiderhan looks not only at the dangers of taking a human out of the loop, but also at the importance of humanitarian law, specifically where human life is involved. He explicitly warns that “unmanned vehicles must retain a ‘man in the loop’ function in more complex scenarios or weapon employment,” and is especially wary of “cognitive computer failure combined with a fully automated and potentially deadly response.”73 Schneiderhan’s skepticism recognizes the main dangers of over-reliance on AI for military use while stressing the importance of keeping a human operator involved in decision making. He argues that a machine should not make decisions regarding human life; rather, those decisions should be made by a conscious human who has both experience and situational awareness and who understands humanitarian law.74 Schneiderhan’s skepticism contrasts with the over-optimism that many U.S. military leaders share about the use of AI in weaponry.

         Navy Vice-Admiral Arthur Cebrowski, chief of the DoD’s Office of Force Transformation, stressed the importance of AI technology for “the military transformation,” using its advanced capabilities and benefits to develop war technology. Cebrowski argues that it is “necessary” to move money and manpower toward new technologies, including AI research and development, instead of focusing on improving old ones.75 Navy Rear Admiral Barton Strong, DoD Head of Joint Projects, argues that AI technology and drones will “revolutionize warfare.”


73 Schneiderhan, “UV's, An Indispensable Asset in Operations,” 91.
74 Schneiderhan, 91.
75 U.S. Senate. Foreign Affairs, Defense, and Trade Division. Military Transformation: Intelligence, Surveillance and

Reconnaissance, 7.
Strong says that because “they are relatively inexpensive and can effectively accomplish missions without risking human life,” drones are necessary for transforming armies.76 General James Mattis, head of U.S. Joint Forces Command and NATO’s Allied Command Transformation, argues that AI robots will continue to play a larger role in future military operations. Mattis fully supports the use of AI weapons; since he commanded forces in Iraq, the UAV force has grown to over 5,300 drones. Mattis even acknowledges the relationship that can form between a soldier and a machine. He embraces the reduction of risk to soldiers, the efficient gathering of intelligence, and drones’ ability to strike stealthily. Mattis’s high rank and support of UAVs will lead to even greater use of them.77 From a soldier’s point of view, the benefits that drones bring far exceed legal and ethical concerns for which those soldiers are not responsible. Drones are proving effective on the battlefield, leading to support from both low- and high-ranking military leaders. However, civilian researchers and scientists continue to be skeptical of the use of AI in the military, especially where human life is involved.

         Looking more closely at the benefits of UCAVs, it is clear why both low-ranking soldiers and military leaders are optimistic about and supportive of their use. The clearest reason is the reduction of friendly military casualties, taking U.S. soldiers out of the line of fire.78 Since soldier casualties play a large part in the public perception of war, reducing the loss of human life makes war less devastating on the home front. The advanced capabilities of AI integrated into military robots and systems are another appealing benefit of AI.


76 McDaid, 6.
77 Brown, 23.
78 John Keller, “Air Force to Use Artificial Intelligence and Other Advanced Data Processing to Hit the Enemy Where It Hurts,”

6.
Rapid information processing, accurate decision making and calculation, 24/7 functionality, and battlefield assessment amplify the capabilities of a human soldier, making UCAVs extremely efficient and dangerous. By processing large amounts of data at rapid speed, UCAVs can “hit the enemy where it hurts” and exploit calculated vulnerabilities before the enemy can prepare a defense.79 In a chaotic battle situation, where a soldier has to process numerous environmental, physical, and mental factors, speed and accuracy of decision making are essential. AI systems can cope with the chaos of a battlefield, processing hundreds of variables and making decisions faster and more efficiently than human soldiers.80 While soldiers are hindered by fear and pain, AI machines lack these emotions and can focus solely on their mission. The advanced capabilities of UCAVs have proven extremely effective on the battlefield. But though UCAVs are efficient and deadly soldiers, they also open the door to numerous ethical, legal, and moral concerns.

                                               V. Ethical Concerns

         Military ethics is a very broad concept, so in order to understand the ethical concerns raised by the use of AI in the military, I will first discuss what military ethics is. In a broad sense, ethics examines what is right and wrong. Military ethics is often a confusing and contradictory concept because war involves violence and killing, acts generally considered immoral. Though some argue that military ethics cannot exist because of the killing of others, I will adopt a definition of military ethics under which killing can be ethical: war is ethical if it counters hostile aggression and is conducted lawfully.81



79 Keller, 10.
80 The Economist. “No Command, and Control,” 89.
81 Krishnan, 117.
For example, the U.S.’s planned raid on Osama bin Laden’s compound, which led to his killing, could be viewed as ethical. Bin Laden was operating an international terrorist organization that had killed thousands of civilians through its attacks. The use of WMDs, however, such as the U.S.’s bombing of Hiroshima and Nagasaki, is often viewed as unethical. In those bombings, thousands of civilians were killed, and it can be argued that the use of WMDs is unlawful because of their catastrophic damage to civilian populations. The bombings of Hiroshima and Nagasaki can be viewed as war crimes against a civilian population, breaking numerous laws of war established in the Rules of Aerial Warfare (The Hague, 1923), including Article XXII, which states: “Aerial bombardment for the purpose of terrorizing the civilian population, of destroying or damaging private property not of military character, or of injuring non-combatants is prohibited.”82

           As these examples show, civilian casualties are among the greatest ethical concerns of war in general. As previously stated, the 1988 tragedy in the Persian Gulf showed the consequences an AI system’s mistake can have for a large group of civilians. As the military progressively utilizes UCAVs for combat, civilian deaths from UCAVs have also risen. The U.S. military has relied heavily on UCAVs for counterterrorism operations in Pakistan. Because of the effectiveness of the strikes, the U.S. continues to utilize drones for airstrikes on terrorist leaders and training camps. However, as drone strikes increase, the death toll of civilians and non-militants has risen exponentially, even outnumbering the death toll of targeted militants.83 This is where the unethical nature of UCAV airstrikes begins to unfold. The effectiveness of the airstrikes is appealing to the military, so they continue to utilize them while ignoring the thousands of civilians who are also killed.

82   The Hague. 1923. Draft Rules of Aerial Warfare. Netherlands: The Hague.
83   Leila Hudson, “Drone Warfare: Blowback From The New American Way of War,” 122.
Marge Van Cleef, Co-Chair of the Women’s International League for Peace and Freedom, takes the ethical argument a step further, claiming that drone warfare is itself terrorism. Van Cleef says that “families in the targeted regions have been wiped out simply because a suspected individual happened to be near them or in their home. No proof is needed.”84 The use of UCAVs has proven unethical for this reason: civilians are continually killed in drone strikes. Whether through malfunction, lack of information, or other mistakes, UCAVs have shown that they cannot avoid killing civilians. However, civilians are not the only victims of UCAV use.

         Moral disengagement, the changing of the psychological impact of killing, is another major ethical concern of UCAV use. When a soldier is put in charge of a UCAV and gives that UCAV the order to kill, having the machine as a barrier neutralizes the soldier’s inhibition to kill. Because of this barrier, soldiers can kill the enemy from a great distance, disengaging them from the actual feeling of taking a human life. Using UCAVs separates a soldier from the emotional and moral consequences of killing.85 An example of this moral disengagement is a UCAV operator in Las Vegas who spends his day operating a UCAV, carrying out airstrikes and other missions thousands of miles away, and then joins his family for dinner that night. Living in these two situations daily not only leads to emotional detachment from killing, but also hides the horrors of war. On the virtual battlefield, “soldiers are less situationally aware and also less restrained because of emotional detachment.”86 Because of this emotional detachment, UCAV use is unethical in that it renders the psychological impact of killing nonexistent.

         One of the main deterrents of war is the loss of human life.


84 Marge Van Cleef, “Drone Warfare=Terrorism,” 20.
85 Krishnan, 128.
86 U.S. House of Representatives. Subcommittee on National Security and Foreign Affairs. Rise of the Drones: Unmanned

Systems and the Future of War Hearing, Barrett, 13.
But when humans are taken out of the line of fire and human casualties shrink as AI weapons increase, does it become easier to go to war? One unethical result of the rising use of robotic soldiers is the possibility of unnecessary war, as the perception of war changes with the lack of military casualties.87 Unmanned systems in war “further disconnect the military from society. People are more likely to support the use of force as long as they view it as costless.”88 When the people at home see only the absence of casualties, the horrors of war are hidden, and they may believe the impact of going to war is less than it really is. This false impression that “war can be waged with fewer costs and risks” creates the illusion that war is easy and cheap.89 It can lead nations into wars that might not be necessary, giving them the perception, “gee, warfare is easy.”90

         These three ethical concerns all fall under the idea of automated killing, which is an ethical concern in itself. Giving a machine full control over the decision to end a life is unethical for a number of reasons: machines lack empathy and morals, and have no concept of the finality of life or of human experience. AI machines are programmed far differently from how humans think, so the decision to end a human life should never be left to a machine. Regarding a machine’s morals, it may be able to comprehend environments and situations, but it will not have the ability to feel remorse or fear punishment.91 In the event that an AI machine wrongly kills a human, will it feel remorse for that killing? It is unethical and dangerous to use AI weaponry because humans have the ability to think morally, while a machine may just “blindly pull the trigger because some algorithm says so.”92



87 Singer, 44.
88 Singer, 44.
89 Cortright, “The Prospect of Global Drone Warfare.”
90 Singer, 44.
91 Krishnan, 132.
92 Krishnan, 132.
AI machines also lack empathy, the ability to identify with human beings. If an AI machine cannot understand human suffering and has never experienced it itself, it will continue to carry out unethical acts without being emotionally affected. Along with empathy and morals, AI machines lack the concept of the finality of life and the idea of being mortal. Having neither known nor experienced death, an AI machine cannot take the finality of life into consideration when making an ethical decision. With no sense of its own mortality, an AI machine lacks empathy for death, removing it from moral decision making.93 Automated killing opens the door to all of these ethical concerns.

                                                 VI. Legal Concerns

         However, ethical concerns are not the only problem with the use of AI machines in the military. There are also a number of legal concerns regarding the use of AI weaponry, particularly with the rise of drones. Modern warfare is still governed by the Geneva Conventions, a series of treaties establishing the laws of war, armed conflict, and humanitarian treatment. However, the Geneva Conventions were drafted in the 1940s, a time when warfare was radically different. The laws of war are thus outdated; 20th-century military law cannot keep up with 21st-century war technology.94 The laws of armed conflict need to be updated before the use of UCAVs continues, in order to establish the legality of using them in the first place. For example, an article of the Geneva Conventions’ protocol states: “effective advance warning shall be given of attacks which may affect the civilian population, unless circumstances do not permit.”95


93 Krishnan, 133.
94 U.S. House of Representatives. Subcommittee on National Security and Foreign Affairs. Rise of the Drones: Unmanned
Systems and the Future of War Hearing, Singer, 7.
95 Michael Newton, “Flying Into the Future: Drone Warfare and the Changing Face of Humanitarian Law.”
However, the killing of civilians by UCAVs without prior warning violates the humanitarian protections established by the Geneva Conventions; such attacks resulting in civilian deaths are carried out illegally. Only combatants can be lawfully targeted in armed conflict, and any killing of non-combatants violates the law of armed conflict.96 Armed conflict is changing at such a fast pace that it is hard to establish humanitarian laws of war that can adapt to changing technologies.

            As of now, the actions of UCAVs could be deemed war crimes, violations of the laws of armed conflict. One legal concern with the use of UCAVs is the debate over whether they constitute “state sanctioned lethal force.” If they are state-sanctioned, like a soldier in the U.S. Army, they are legal and must follow the laws of armed conflict. However, numerous drones are operated by the CIA and are therefore not state-sanctioned. Because these drones are not state-sanctioned, their use violates international law of armed conflict, as state sanction is what gives the U.S. military the right to use lethal force. The killing of civilians in general, and by non-state-sanctioned weapons in particular, can be seen as a war crime.97

            Another legal problem of drone warfare concerns liability: who is to blame for an AI malfunction or mistake? So many people are involved in the development, construction, and operation of a drone that it is hard to decide who is responsible for an error. Is it the computer scientist who programmed the drone, the engineer who built it, the operator who flew it, or the military leader who authorized the attack? It can even be argued that the drone is solely responsible for its own actions and should be tried and punished as though it were a human soldier.




96   Ryan Vogel, “Drone Warfare and the Law of Armed Conflict,” 105.
97   Van Cleef, 20.
Article 1 of the Hague Convention requires combatants to be “commanded by a person responsible for his subordinates.”98 This makes sense for human soldiers, but it makes legal control very difficult for an autonomous machine, one that cannot take responsibility for its own actions when acting autonomously. Because UCAV use is rising, legal accountability laws need to be established for the event of a robotic malfunction or mistake leading to human or environmental damage.99

            The field of AI continues to develop at an extremely rapid pace, opening the door for increased optimism about, and reliance on, the new technologies. However, this exponential growth comes with numerous ethical, legal, and moral concerns, especially in its relationship with the military. The military has influenced the research and development of AI since the field was established in the 1950s, and it continues to have a hand in AI’s growth through heavy funding and involvement. Though AI brings great benefits to society politically, socially, economically, and technologically, we should be wary of over-reliance on the technology. It is important to always keep a human in the loop, whether for civilian or military purposes. AI technology has the power to shape the society we live in, but each increase in autonomy should be treated with caution.




98   Krishnan, 103.
99   Krishnan, 103.
                                         Bibliography

Adler, Paul S. and Terry Winograd. Usability: Turning Technologies Into Tools. New York:

       Oxford University Press, 1992.

Anderson, Alan Ross. Minds and Machines. New Jersey: Prentice-Hall Inc., 1964.

Anderson, Michael, and Susan Leigh Anderson. “Robot Be Good.” Scientific American 303, no.

       4 (2010): 72-77.

Bellin, David and Gary Chapman. Computers in Battle: Will They Work?. New York: Harcourt

       Brace Jovanovich Publishers, 1987.

Brown, Alan S. “The Drone Warriors.” Mechanical Engineering 132, no. 1 (January 2010):

       22-27.

Burks, Arthur W. “The ENIAC: The First General-Purpose Electronic Computer,” Annals of the

       History of Computing 3, no. 4 (1981): 310–389.

Cortright, David. “The Prospect of Global Drone Warfare.” CNN Wire (Oct 19, 2011).

Dhume, Sadanand. “The Morality of Drone Warfare: The Reports About Civilian Casualties are

       Unreliable.” Wall Street Journal Online, (Aug 17, 2011).

Dreyfus, Hubert L. Mind Over Machine. New York: The Free Press, 1986.

Dreyfus, Hubert L. What Computers Can't Do: The Limits of Artificial Intelligence. New York:

       Harper Colophon Books, 1979.

Edwards, Paul N. The Closed World: Computers and the Politics of Discourse in Cold War

       America. Massachusetts: MIT Press, 1996.

Ford, Nigel. How Machines Think. Chichester, England: John Wiley and Sons, 1987.
Hayles, Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and

       Informatics. Chicago: The University of Chicago Press, 1999.

Heims, Steve J. John Von Neumann and Norbert Wiener: From Mathematics to the Technologies

       of Life and Death. Massachusetts: MIT Press, 1980.

Hogan, James P. Mind Matters. New York: Ballantine Publishing Group, 1997.

Hudson, Leila, Colin Owens, and Matt Flannes. “Drone Warfare: Blowback From The New

       American Way of War.” Middle East Policy 18, no. 3 (Fall 2011): 122-132.

Keller, John. “Air Force to Use Artificial Intelligence and Other Advanced Data Processing to

       Hit the Enemy Where It Hurts,” Military & Aerospace Electronics 21, no. 3 (2010): 6-10.

Krishnan, Armin. Killer Robots: Legality and Ethicality of Autonomous Weapons. Vermont:

       Ashgate, 2009.

Lehner, Paul. Artificial Intelligence and National Defense: Opportunity and Challenge.

       Pennsylvania: Tab Books Inc., 1989.

Le Page, Michael. “What Happens When We Become Obsolete?” New Scientist 211, no. 2822

       (July 2011): 40-41.

Lyons, Daniel. “I, ROBOT.” Newsweek 153, no. 21 (May 25, 2009): 66-73.

Masci, David. “Artificial Intelligence.” CQ Researcher 7, no. 42 (1997): 985-1008.

McCarthy, John, and Marvin Minsky. “A Proposal for the Dartmouth Summer Research Project on

       Artificial Intelligence.” AI Magazine 27, no. 4 (Winter 2006): 12-14.

McCarthy, John. Defending A.I. Research. California: CSLI Publications, 1996.
McDaid, Hugh and David Oliver. Robot Warriors: The Top Secret History of the Pilotless Plane.

       London: Orion Books Ltd., 1997.

McGinnis, John. “Accelerating AI.” Northwestern University Law Review 104, no. 3 (2010):

       1253-1269.

Michie, Donald. Machine Intelligence and Related Topics. New York: Gordon and Breach, 1982.

Minsky, Marvin and Seymour Papert. “Artificial Intelligence.” Lecture to the Oregon State

       System's 1974 Condon Lecture, Eugene, OR, 1974.

Mishkoff, Henry C. Understanding Artificial Intelligence. Dallas, Texas: Texas Instruments,

       1985.

Newton, Michael A. “Flying Into the Future: Drone Warfare and the Changing Face of

       Humanitarian Law.” Keynote Address to University of Denver's 2010 Sutton

       Colloquium, Denver, CO, November 6, 2010.

Perlmutter, David D. Visions of War: Picturing War From The Stone Age to the Cyber Age. New

       York: St. Martin's Press, 1999.

Pelton, Joseph N. “Science Fiction vs. Reality.” Futurist 42, no. 5 (Sept/Oct 2008): 30-37.

Peterson, Scott. “Iran Hijacked US Drone, says Iranian Engineer.” Christian Science Monitor,

       (December 15, 2011).

Schneiderhan, Wolfgang. “UV's, An Indispensable Asset in Operations.” NATO's Nations and

       Partners for Peace 52, no. 1 (2007): 88-92.

Shachtman, Noah. “Darpa Chief Speaks.” Wired, (February 20, 2007).

Shapiro, Kevin. “How the Mind Works.” Commentary 123, no. 5 (May 2007): 55-60.
Singer, P.W. “Robots At War: The New Battlefield.” Wilson Quarterly 33, no. 1 (Winter 2009):

       30-48.

The Economist. “Drones and the man: The Ethics of Warfare.” The Economist 400, no. 8744

       (July 2010): 10.

The Economist. “No Command, and Control.” The Economist 397, no. 8710 (Nov 2010): 89.

The Hague. 1923. Draft Rules of Aerial Warfare. Netherlands: The Hague.

Triclot, Mathieu. “Norbert Wiener's Politics and the History of Cybernetics.” Lecture to

       ICESHS's 2006 The Global and the Local: The History of Science and the Cultural

       Integration of Europe, Cracow, Poland, September 6-9, 2006.

Tucker, Patrick. “Thank You Very Much, Mr. Roboto.” Futurist 45, no. 5 (2011): 24-28.

Turing, Alan. “Computing Machinery and Intelligence.” Mind 59 (1950): 433-460.

U.S. House of Representatives. Subcommittee on National Security and Foreign Affairs. Rise of

       the Drones: Unmanned Systems and the Future of War Hearing, 23 March 2010.

       Washington: Government Printing Office, 2010.

U.S. Senate. Foreign Affairs, Defense, and Trade Division. Military Transformation:

       Intelligence, Surveillance and Reconnaissance. (S. Rpt RL31425). Washington: The

       Library of Congress, 17 January 2003.

U.S. Senate. Foreign Affairs, Defense, and Trade Division. Unmanned Aerial Vehicles:

       Background and Issues for Congress Report. (S. Rpt RL31872). Washington: The Library

       of Congress, 25 April 2003.
U.S. Senate. Foreign Affairs, Defense, and Trade Division. Unmanned Aerial Vehicles:

       Background and Issues for Congress Report. (S. Rpt RL31872). Washington: The

       Library of Congress, 21 November 2005.

Van Cleef, Marge. “Drone Warfare=Terrorism.” Peace and Freedom 70, no. 1 (Spring 2010): 20.

Velichenko, V., and D. Pritykin. “Using Artificial Intelligence and Computer Technologies for

       Developing Treatment Programs for Complex Immune Diseases.” Journal of

       Mathematical Sciences 172, no. 5 (2011): 635-649.

Vinge, Vernor. “Singularity.” Lecture at the VISION-21 Symposium, Cleveland, OH, March

       30-31, 1993.

Vogel, Ryan J. “Drone Warfare and the Law of Armed Conflict.” Denver Journal of International

       Law and Policy 39, no. 1 (Winter 2010): 101-138.

Von Drehle, David. “Meet Dr. Robot.” Time 176, no. 24 (2010): 44-50.

von Neumann, John. The Computer and the Brain. New Haven: Yale University Press, 1958.

Wiener, Norbert. “From the Archives.” Science, Technology, & Human Values 8, no. 3 (Summer

       1983): 36-38.

Wiener, Norbert. The Human Use of Human Beings: Cybernetics and Society. New York: Avon

       Press, 1950.

Contenu connexe

Tendances

Artificial Intelligence
Artificial Intelligence Artificial Intelligence
Artificial Intelligence XashAxel
 
Artificial intelligence .pptx
Artificial intelligence .pptxArtificial intelligence .pptx
Artificial intelligence .pptxGautamMishra79
 
ARTIFICIAL INTELLIGENCE
ARTIFICIAL INTELLIGENCEARTIFICIAL INTELLIGENCE
ARTIFICIAL INTELLIGENCEMidhuti
 
Future & Technology - What's Next?
Future & Technology - What's Next? Future & Technology - What's Next?
Future & Technology - What's Next? Massive Media
 
Artificial Intelligence (A.I.) || Introduction of A.I. || HELPFUL FOR STUDENT...
Artificial Intelligence (A.I.) || Introduction of A.I. || HELPFUL FOR STUDENT...Artificial Intelligence (A.I.) || Introduction of A.I. || HELPFUL FOR STUDENT...
Artificial Intelligence (A.I.) || Introduction of A.I. || HELPFUL FOR STUDENT...Shivangi Singh
 
Artificial intelligence to a better future.
Artificial intelligence to a better future.Artificial intelligence to a better future.
Artificial intelligence to a better future.Sriganesh sankar
 
Artificial intelligence
Artificial intelligenceArtificial intelligence
Artificial intelligenceravijain90
 
What Is The Next Level Of AI Technology?
What Is The Next Level Of AI Technology?What Is The Next Level Of AI Technology?
What Is The Next Level Of AI Technology?Bernard Marr
 
ARTIFICIAL INTELLIGENCE SLIDESHARE.pptx
ARTIFICIAL INTELLIGENCE SLIDESHARE.pptxARTIFICIAL INTELLIGENCE SLIDESHARE.pptx
ARTIFICIAL INTELLIGENCE SLIDESHARE.pptxmatsiemokgalabong
 
Artificial intelligence tapan
Artificial intelligence tapanArtificial intelligence tapan
Artificial intelligence tapanTapan Khilar
 
Artificial inteligence
Artificial inteligenceArtificial inteligence
Artificial inteligenceSharath Raj
 
Artificial Intelligence and Machine Learning
Artificial Intelligence and Machine LearningArtificial Intelligence and Machine Learning
Artificial Intelligence and Machine LearningMykola Dobrochynskyy
 
Ethical issues facing Artificial Intelligence
Ethical issues facing Artificial IntelligenceEthical issues facing Artificial Intelligence
Ethical issues facing Artificial IntelligenceRah Abdelhak
 
Presentation on Artificial Intelligence
Presentation on Artificial IntelligencePresentation on Artificial Intelligence
Presentation on Artificial IntelligenceIshwar Bulbule
 
Risks in artificial intelligence
Risks in artificial intelligenceRisks in artificial intelligence
Risks in artificial intelligenceJean-Luc Scherer
 
Artificial Intelligence in Future India
Artificial Intelligence in Future IndiaArtificial Intelligence in Future India
Artificial Intelligence in Future IndiaSnehenduDatta1
 
Artificial Intelligence - Forwarded by Jeff Campau
Artificial Intelligence - Forwarded by Jeff CampauArtificial Intelligence - Forwarded by Jeff Campau
Artificial Intelligence - Forwarded by Jeff CampauJeff Campau
 

Tendances (20)

Artificial Intelligence
Artificial Intelligence Artificial Intelligence
Artificial Intelligence
 
Artificial intelligence .pptx
Artificial intelligence .pptxArtificial intelligence .pptx
Artificial intelligence .pptx
 
ARTIFICIAL INTELLIGENCE
ARTIFICIAL INTELLIGENCEARTIFICIAL INTELLIGENCE
ARTIFICIAL INTELLIGENCE
 
Future & Technology - What's Next?
Future & Technology - What's Next? Future & Technology - What's Next?
Future & Technology - What's Next?
 
Artificial intelligence
Artificial intelligenceArtificial intelligence
Artificial intelligence
 
Hackers
HackersHackers
Hackers
 
Artificial Intelligence (A.I.) || Introduction of A.I. || HELPFUL FOR STUDENT...
Artificial Intelligence (A.I.) || Introduction of A.I. || HELPFUL FOR STUDENT...Artificial Intelligence (A.I.) || Introduction of A.I. || HELPFUL FOR STUDENT...
Artificial Intelligence (A.I.) || Introduction of A.I. || HELPFUL FOR STUDENT...
 
Artificial intelligence to a better future.
Artificial intelligence to a better future.Artificial intelligence to a better future.
Artificial intelligence to a better future.
 
Artificial intelligence
Artificial intelligenceArtificial intelligence
Artificial intelligence
 
What Is The Next Level Of AI Technology?
What Is The Next Level Of AI Technology?What Is The Next Level Of AI Technology?
What Is The Next Level Of AI Technology?
 
ARTIFICIAL INTELLIGENCE SLIDESHARE.pptx
ARTIFICIAL INTELLIGENCE SLIDESHARE.pptxARTIFICIAL INTELLIGENCE SLIDESHARE.pptx
ARTIFICIAL INTELLIGENCE SLIDESHARE.pptx
 
Artificial intelligence tapan
Artificial intelligence tapanArtificial intelligence tapan
Artificial intelligence tapan
 
Artificial inteligence
Artificial inteligenceArtificial inteligence
Artificial inteligence
 
SARANRAJ(AI).pptx
SARANRAJ(AI).pptxSARANRAJ(AI).pptx
SARANRAJ(AI).pptx
 
Artificial Intelligence and Machine Learning
Artificial Intelligence and Machine LearningArtificial Intelligence and Machine Learning
Artificial Intelligence and Machine Learning
 
Ethical issues facing Artificial Intelligence
Ethical issues facing Artificial IntelligenceEthical issues facing Artificial Intelligence
Ethical issues facing Artificial Intelligence
 
Presentation on Artificial Intelligence
Presentation on Artificial IntelligencePresentation on Artificial Intelligence
Presentation on Artificial Intelligence
 
Risks in artificial intelligence
Risks in artificial intelligenceRisks in artificial intelligence
Risks in artificial intelligence
 
Artificial Intelligence in Future India
Artificial Intelligence in Future IndiaArtificial Intelligence in Future India
Artificial Intelligence in Future India
 
Artificial Intelligence - Forwarded by Jeff Campau
Artificial Intelligence - Forwarded by Jeff CampauArtificial Intelligence - Forwarded by Jeff Campau
Artificial Intelligence - Forwarded by Jeff Campau
 

Similaire à Dangers of Over-Reliance on AI in the Military

Developments in Artificial Intelligence - Opportunities and Challenges for Mi...
Developments in Artificial Intelligence - Opportunities and Challenges for Mi...Developments in Artificial Intelligence - Opportunities and Challenges for Mi...
Developments in Artificial Intelligence - Opportunities and Challenges for Mi...Andy Fawkes
 
WHAT IS ARTIFICIAL INTELLIGENCE IN SIMPLE WORDS.pdf
WHAT IS ARTIFICIAL INTELLIGENCE IN SIMPLE WORDS.pdfWHAT IS ARTIFICIAL INTELLIGENCE IN SIMPLE WORDS.pdf
WHAT IS ARTIFICIAL INTELLIGENCE IN SIMPLE WORDS.pdfSyedZakirHussian
 
The Disadvantages Of Artificial Intelligence
The Disadvantages Of Artificial IntelligenceThe Disadvantages Of Artificial Intelligence
The Disadvantages Of Artificial IntelligenceAngela Hays
 
Artificial Intelligence and Human Computer Interaction
Artificial Intelligence and Human Computer InteractionArtificial Intelligence and Human Computer Interaction
Artificial Intelligence and Human Computer Interactionijtsrd
 
HUMAN RIGHTS IN THE AGE OF ARTIFICIAL INTELLIGENCE
HUMAN RIGHTS IN THE AGE OF ARTIFICIAL INTELLIGENCEHUMAN RIGHTS IN THE AGE OF ARTIFICIAL INTELLIGENCE
HUMAN RIGHTS IN THE AGE OF ARTIFICIAL INTELLIGENCEeraser Juan José Calderón
 
seminar Report-BE-EEE-8th sem-Artificial intelligence in security managenent
seminar Report-BE-EEE-8th sem-Artificial intelligence in security managenentseminar Report-BE-EEE-8th sem-Artificial intelligence in security managenent
seminar Report-BE-EEE-8th sem-Artificial intelligence in security managenentMOHAMMED SAQIB
 
9694 thinking skills ai rev qr
9694 thinking skills ai rev qr9694 thinking skills ai rev qr
9694 thinking skills ai rev qrmayorgam
 
Artificial intelligence
Artificial intelligenceArtificial intelligence
Artificial intelligenceArpitChechani
 
ArtificialIntelligencein Automobiles.pdf
ArtificialIntelligencein Automobiles.pdfArtificialIntelligencein Automobiles.pdf
ArtificialIntelligencein Automobiles.pdfabhi49694969
 
Regulating Artificial Intelligence Systems, Risks, Challenges, Competencies, ...
Regulating Artificial Intelligence Systems, Risks, Challenges, Competencies, ...Regulating Artificial Intelligence Systems, Risks, Challenges, Competencies, ...
Regulating Artificial Intelligence Systems, Risks, Challenges, Competencies, ...Luis Taveras EMBA, MS
 
Artificial intelligence (AI) 2022
Artificial intelligence (AI) 2022Artificial intelligence (AI) 2022
Artificial intelligence (AI) 2022findeverything
 
artificial intelligence
artificial intelligenceartificial intelligence
artificial intelligencevallibhargavi
 
artificial intelligence
artificial intelligenceartificial intelligence
artificial intelligencevallibhargavi
 
Tinay Artificial Intelligence
Tinay Artificial IntelligenceTinay Artificial Intelligence
Tinay Artificial IntelligenceCristina Faalam
 
Artificial intelligence-full -report.doc
Artificial intelligence-full -report.docArtificial intelligence-full -report.doc
Artificial intelligence-full -report.docdaksh Talsaniya
 

Similaire à Dangers of Over-Reliance on AI in the Military (20)

Developments in Artificial Intelligence - Opportunities and Challenges for Mi...
Developments in Artificial Intelligence - Opportunities and Challenges for Mi...Developments in Artificial Intelligence - Opportunities and Challenges for Mi...
Developments in Artificial Intelligence - Opportunities and Challenges for Mi...
 
Artificial intelligence
Artificial intelligenceArtificial intelligence
Artificial intelligence
 
WHAT IS ARTIFICIAL INTELLIGENCE IN SIMPLE WORDS.pdf
WHAT IS ARTIFICIAL INTELLIGENCE IN SIMPLE WORDS.pdfWHAT IS ARTIFICIAL INTELLIGENCE IN SIMPLE WORDS.pdf
WHAT IS ARTIFICIAL INTELLIGENCE IN SIMPLE WORDS.pdf
 
The Disadvantages Of Artificial Intelligence
The Disadvantages Of Artificial IntelligenceThe Disadvantages Of Artificial Intelligence
The Disadvantages Of Artificial Intelligence
 
Artificial Intelligence and Human Computer Interaction
Artificial Intelligence and Human Computer InteractionArtificial Intelligence and Human Computer Interaction
Artificial Intelligence and Human Computer Interaction
 
HUMAN RIGHTS IN THE AGE OF ARTIFICIAL INTELLIGENCE
HUMAN RIGHTS IN THE AGE OF ARTIFICIAL INTELLIGENCEHUMAN RIGHTS IN THE AGE OF ARTIFICIAL INTELLIGENCE
HUMAN RIGHTS IN THE AGE OF ARTIFICIAL INTELLIGENCE
 
seminar Report-BE-EEE-8th sem-Artificial intelligence in security managenent
seminar Report-BE-EEE-8th sem-Artificial intelligence in security managenentseminar Report-BE-EEE-8th sem-Artificial intelligence in security managenent
seminar Report-BE-EEE-8th sem-Artificial intelligence in security managenent
 
2016 promise-of-ai
2016 promise-of-ai2016 promise-of-ai
2016 promise-of-ai
 
AI and disinfo (1).pdf
AI and disinfo (1).pdfAI and disinfo (1).pdf
AI and disinfo (1).pdf
 
9694 thinking skills ai rev qr
9694 thinking skills ai rev qr9694 thinking skills ai rev qr
9694 thinking skills ai rev qr
 
Artificial intelligence
Artificial intelligenceArtificial intelligence
Artificial intelligence
 
ArtificialIntelligencein Automobiles.pdf
ArtificialIntelligencein Automobiles.pdfArtificialIntelligencein Automobiles.pdf
ArtificialIntelligencein Automobiles.pdf
 
Regulating Artificial Intelligence Systems, Risks, Challenges, Competencies, ...
Regulating Artificial Intelligence Systems, Risks, Challenges, Competencies, ...Regulating Artificial Intelligence Systems, Risks, Challenges, Competencies, ...
Regulating Artificial Intelligence Systems, Risks, Challenges, Competencies, ...
 
artificial intelligence
artificial intelligenceartificial intelligence
artificial intelligence
 
Artificial intelligence (AI) 2022
Artificial intelligence (AI) 2022Artificial intelligence (AI) 2022
Artificial intelligence (AI) 2022
 
artificial intelligence
artificial intelligenceartificial intelligence
artificial intelligence
 
artificial intelligence
artificial intelligenceartificial intelligence
artificial intelligence
 
AI, people, and society
AI, people, and societyAI, people, and society
AI, people, and society
 
Tinay Artificial Intelligence
Tinay Artificial IntelligenceTinay Artificial Intelligence
Tinay Artificial Intelligence
 
Artificial intelligence-full -report.doc
Artificial intelligence-full -report.docArtificial intelligence-full -report.doc
Artificial intelligence-full -report.doc
 

Dangers of Over-Reliance on AI in the Military

  • 1. Joe Hanson Senior Project 2012 Dr. Call Taking Man Out of the Loop: The Dangers of Exponential Reliance On Artificial Intelligence A 2012 Time Magazine article dubbed “The Drone” the 2011 Weapon of the Year.1 With over 7,000 drones in the air, military use of unmanned vehicles is exponentially rising. Why is drone technology progressing at such a fast rate? Artificial Intelligence (AI) is at the forefront of drone technology development. Exponential technological developments in the last century have changed society in numerous ways. Mankind is beginning to rely increasingly on technology in everyday life, with many of these technologies bringing beneficial progress to all aspects of society. Exponential growth in computer, robotic, and electronic technology has led to the integration of this technology into social, economic, and military systems. Artificial intelligence is a part of computer science that is the intelligence and cause of action of a machine, both in hardware and software form. Using AI this machine can act autonomously and function in an environment using rapid data processing, pattern recognition, and environmental perception sensors to make decisions and carry out goals and tasks. AI seeks to emulate human intelligence, using these sensors to understand and process to solve and adapt to problems in real time. There is debate over whether AI is even plausible; if it is even possible to create a machine that can emulate human thought. Both humans and computers are able the process information, but humans have the ability to understand that information. Humans are able to make sense out of what they see and hear, involving the use of intelligence.2 Some 1 Feifel Sun, TIME Magazine 178, no. 25 (2011): 26. 2 Henry Mishkoff, Understanding Artificial Intelligence (Texas: Texas Instruments, 1985), 5.
  • 2. 2 characteristics of intelligence include the ability to: “respond to situations flexibly, make sense out of ambiguous or contradictory messages, recognize importance of different elements of a situation, and draw distinctions.”3 When discussing the possibilities of AI and the creation of a thinking machine, the main issue is whether or not a computer is able to possess intelligence. Supporters of AI development argue that because of exponential progress in computer and robotic technology, AI is developing further than just simple data processing, to the creation of autonomous AI that can emulate and surpass the intelligence of a human. According to University of Michigan Professor Paul Edwards, scientists are beginning to “simulate some of the functional aspects of biological neurons and their synaptic connections, neural networks could recognize patterns and solve certain kinds of problems without explicitly encoded knowledge or procedures,” meaning that AI is beginning to incorporate human biology to make it think.4 On the other side of the debate, AI skeptics and deniers argue that AI will never have the ability to surpass human intelligence. They argue that the human brain is far too advanced, that though a machine can calculate data faster, it will never match the complexity of a human brain. In order to emulate human thought, computer systems rely on programmed “expert systems,” an kind of AI that, “acts as an intelligent assistant,” to the AI's human user.5 An expert system is not just a computer program that can search and retrieve knowledge. Instead, an expert system possesses expertise, pools information and creates its own conclusion, “emulating human reason.”6 An expert system has three components that makes it more technologically advanced 3 Mishkoff, 5. 4 PaulEdwards, The Closed World (Cambridge: The MIT Press, 1997), 356. 5 Edwards, 356. 6 Mishkoff, 5.
  • 3. 3 than a simple informational retrieval system. One of these components is “knowledge base,” a collection of declarative knowledge (facts) and procedural knowledge (courses of action), acting as the expert system's memory bank. An expert system can integrate the two types of knowledge when making a conclusion.7 Another component is an “user interface,” hardware that a human user can communicate with the system, forming a two-way communication channel. The last component is the interface engine, which is the most advanced part of the expert system. This program knows when and how to apply knowledge, and also directs the implementation of that knowledge. These three components allow the expert system to exceed the capabilities of a simple information retrieval system. The capabilities of expert systems have opened up doors for military application. These functions can be applied to a number of military situations, from battlefield management, to surveillance, to data processing. Integrating expert systems into military AI technology gives those systems the ability to interpret, monitor, plan, predict, and control system behavior.8 A system is able to monitor its behavior, comparing and interpreting observations collected through sensory data. The ability to monitor and interpret is important for AI specializing in surveillance and image analysis, a vital capability for unmanned aerial vehicles. Expert systems also function as battlefield aids, helping to plan by designing actions, while also prediction, inferring consequences based on large amounts of data.9 Military application of expert systems in their AI systems give them an advantage on and off the battlefield, aiding in decision making and streamlining battlefield management and surveillance. 6 Mishkoff, 55. 8 Mishkoff, 59. 9 Mishkoff, 59.
  • 4. 4 AI benefits society in a number of ways, including socially, economically, and technologically. AI's rapid data processing and accuracy can help in many different sectors of society. Although these benefits are progressive and necessary in connection with other emerging technologies, specifically computer technology, society must be wary of over-reliance on AI technology and integration. Over integration of AI into society has begun the trend of taking a human out of loop, relying more on AI to carry out tasks, ranging small to large. And as AI technology continues to develop, autonomous AI systems will further be relied on to carry out tasks in all aspects of society, especially in military systems and weapons, and as there is less human control, humans must be cautious of putting all the eggs in one basket. The dangers of using AI in the military often outweigh the benefits, dangers including malfunction, unethical use, lack of testing, and the unpredictable nature and actions of AI systems. The possibility of a loss of control over an AI system, of humans giving a thinking machine too much responsibility, increases the chances of that reliance backfiring on its human creators. The backfire isn't just inconvenient, it can also be dangerous, especially if the backfire takes place in a military system. Missiles, unmanned drones, and other advanced forms of weaponry are relying on AI to aid them in functioning, and as AI technology becomes faster and smarter, humans are relying on the AI technology more and more. These systems have the ability to cause catastrophic damage, and taking humans out of the loop is especially dangerous. There has been extensive research and debate over AI in numerous regards. From the birth of AI as a field at the 1956 Dartmouth Conference, there has been support and opposition, optimists, skeptics, and deniers from all fields including physics, philosophy, computer science, and engineering. I will recognize all these different viewpoints, but my argument is that of the
  • 5. 5 skeptics, recognizing the benefits and progress that AI can bring, but still being wary of over- reliance on AI, specifically its integration into military systems. The idea of putting technology, such as advanced weaponry and missiles, under the responsibility of an AI system, whether it be AI software or hardware is especially dangerous. AI machines may lack the ability to think morally, ethically, or understand morality at all, so giving it the ability to kill while overly relying on it is a danger. Optimists such as founders of AI Marvin Minsky and John McCarthy fully support, embrace, and trust the integration of AI into society. On the other side of the spectrum are the deniers, the most famous being Hubert Dreyfus, who believe that a machine will never have the capabilities to emulate human intelligence, denying the existence of AI all together. This section of my paper reviews the existing literature on AI and the diverse views of its critics and supporters. The supporters of AI come from diverse fields of study, but all embrace the technology and have an optimism and trust for it. Alan Turing, an English computer scientist, was one of the first scholars to write about AI, even before it was declared as a field. Turing's paper, “Computing Machinery and Intelligence,” is mainly concerned with the question of “Can Machines Think?”.10 Turing's work was some of the first looking into computer and AI theory. Turing introduces the “Turing Test,” which tests a machine, both software and hardware, to see if it can exhibit intelligent behavior. Turing doesn't just introduce the Turing Test, but also shows his optimism for AI by refuting the “Nine Objections,” which were nine possible objections of a machine's ability to think. Some of these objections include a theological objection, the inability for computers to think independently, mathematical limitations, and complete denial of the 10 Alan Turing, “Computing Machinery and Intelligence,” Mind 59 (1950): 433-460.
  • 6. 6 existence of thinking machines. Turing refutes these objections through both philosophical and scientific arguments supporting the possibility of a thinking machine. Turing argues that a reason that people deny the possibility of thinking machines is not because they think it is impossible, but rather because they fear it and that, “we like to believe that Man is in some subtle way superior to the rest of creation.”11 Turing argues that computers will have the ability to think independently and have conscious experiences. Another notable early AI developer was Norbert Wiener, an American mathematician, who was the originator of cybernetics theory. In The Use of Human Beings: Cybernetics and Society, Weiner argues that the automation of society is beneficial. Wiener shows that there shouldn't be a fear of integrating technology into society, but instead people should embrace the integration. Wiener says that cybernetics and the continuation of technological progress rely on a human trust in autonomous machines. Though Wiener recognizes the benefits and progress that automation brings, he still does warn of relying too heavily on it. After the establishment of AI as a field at the Dartmouth Conference, the organizer of the conference, John McCarthy, wrote Defending AI Research. In this book, McCarthy collected numerous essays that support the development of AI and its benefits to society. McCarthy reviews the existing literature of notable early AI developers and either refutes or supports their claims. In the book, McCarthy reviews the article “Artificial Intelligence: A General Survey.”12 The article was written by James Lighthill, a British mathematician. In the article, Lighthill is critical of the existence of AI as a field. McCarthy refutes Lighthill's claims and defends AI existence and development. McCarthy also defends AI research from those who claim “AI as an 11 Turing, 444 12 John McCarthy, Defending AI Research (California: CSLI Publications, 1996), 27-34.
  • 7. 7 incoherent concept philosophically,” specifically refuting the arguments of Dreyfus. McCarthy argues that philosophers often “say that no matter what it [AI] does, it wouldn't count as intelligent.”13 Lastly, McCarthy refutes the arguments of those who claim that AI research is immoral and antihuman, saying that these skeptics and opponents are against pure science and research motivated solely by curiosity.14 McCarthy argues that research in computer science is necessary for opening up options for mankind. 15 Hubert Dreyfus is been a prominent denier of the existence of AI for decades. A professor of philosophy at UC Berkeley, Dreyfus has written numerous books in opposition to and critiquing the foundations of AI as a field. Dreyfus's main critique of AI is the idea that a machine can never have the capability to fully emulate human intelligence. Dreyfus argues that the power of a biological brain can not be matched, even if a machine has superior data processing capabilities. A biological brain not only reacts to what it perceives in the environment, but relies on background knowledge and experience to think. 16 Humans also incorporate ethics and morals into their decisions, and a machine can only use what it is programmed to think. What Dreyfus is arguing is that the human brain is superior to AI, and that a machine can't emulate human intelligence. Dreyfus's view that, “scientists are only beginning to understand the workings of the human brain, with its billions of interconnected neurons working together to produce thought. How can a machine be built based on something of which scientists have so little understanding?”17 shows his view on AI. When looking at the relationship between the military and computer technology and AI, 13 McCarthy, vii. 14 McCarthy, 2. 15 McCarthy, 20. 16 Hubert Dreyfus, Mind Over Machine, 31. 17 David Masci. 1997. “Artificial Intelligence.” CQ Researcher, 7.
  • 8. 8 there has been much debate over how much integration is safe. As the military integrates autonomous systems in their communication, information, and weapon systems, the danger of over reliances rises. One of the first people to recognize this danger was the previously mentioned Norbert Wiener. Even though Wiener was supportive of AI and its integration into society, he had a very different viewpoint concerning its use in military and weapon technology. Wiener wrote a letter in 1947 called “A Scientist Rebels,” which argues and resists government and military influence on AI and computer research. Wiener warns of the “gravest consequences” of the government's influence on development of AI.18 Wiener looks at the development of the atomic bomb as an example, and how the scientists' work falls into the hands of “he is least inclined to trust,” in this case the government and military. The idea that civilian scientific research can be integrated by the military and used in weaponry is a critique of the military's influence on AI development. Scientific research may seem innocent, but as it is manipulated through military influence, purely scientific research is integrated into war technology. Paul Edwards's The Closed World gives a history of the relationship and the impact that the military had on AI research and development and vise versa. Edwards looks at why the military put so much time and effort into computers. Edwards looks at the effects that computer technology and the integration of AI data processing systems had on the history of the Cold War. Edwards's broad historic look at computer and AI development gives insight to a military connection to the progressing technology that still exists today. Computer development began in the early 1940s, and from that time to the early 1960s, the U.S. military played an important role 18 Norbert Wiener. “From the Archives.” 38.
  • 9. 9 in the progressing computer technologies. After WWII, the military's role in computer research grew exponentially. The U.S. Army and Air Force began to fund research projects, contracting large commercial technology corporations such as Northrop and Bell Laboratories. 19 This growth in military funding and purchases enabled American computer research to progress at an extremely fast pace, however, due to secrecy, the military was able to keep control over the spread of research.20 Because of this secrecy, military sponsored computer projects were tightly controlled and censored. Due to heavy investment, the military did have a role in the “nurturance” of AI due to their relationship with the government controlled Advanced Research Projects Agency (ARPA). AI research received over 80% of its funding from ARPA, keeping the military in tune with AI research and development. 21 The idea that, “the computerization of society has essentially been a side effect of the computerization of war,” sums up the effect of the military on computer and AI development. Paul Lehner's Artificial Intelligence and National Defense looks at how AI can benefit the military, specifically through software applications. Written in 1989, Lehner’s view represents that of the later years of the Cold War, one where the technology had not fully developed, but the technology was exponentially progressing. Lehner discusses the integration of “expert systems,” software that can be used to aid and replace human decision makers. Lehner recognizes AI's data processing speed and accuracy and the benefits that the “expert system” could bring when applied to the military. Armin Krishnan's Killer Robots looks at the other way that AI is being integrated into the military, through hardware and weapons, also evaluating the moral and ethical 19 Edwards, 60. 20 Edwards, 63. 21 Edwards, 64.
  • 10. 10 issues surrounding the use of AI weaponry. Krisnan’s book was written in 2009, and looks at AI in the military currently, specifically looking at the ethical and legal problems associated with drone warfare and other robotic soldier systems. Some of the ethical concerns Krishnan brings up are: diffusion of responsibility for mistakes or civilian deaths, moral disengagement of soldiers, unnecessary war, and automated killing. Recently there has been much debate over the legal concerns regarding the use of AI in military systems and weaponry. One of the leading experts on the legality of AI integration is Peter W. Singer, the Director of 21st Century Defense Initiative at Brookings. In his article “Robots At War: The New Battlefield,”(2009) Singer raises the numerous legal concerns. The laws of war were outlined in the Geneva Convention laws in the middle of the 20th century. However, due to the progressing and changing war technologies, these 20th century laws of war are having trouble keeping up with 21st century war technology. Singer argues that the laws of war need to be updated to include new, AI systems and their integration. Due to high numbers of civilian deaths from AI systems, specifically drones, Singer also argues that these can be seen war crimes. Lastly, Singer brings up the question of who is responsible lawfully for an autonomous machine: the commander, the programmer, the designer, the pilot, or the drone itself? Singer's interesting look at the legal concerns over changing war technology is also stated in his participation on the U.S. Congressional hearings on unmanned military systems. Many scholars have also looked at what the future holds for AI. In 1993, Vernor Vinge coined the term “singularity” to describe the idea that one day, AI technology will surpass human intelligence. This is when computers will become more advanced than human intelligence, moving human kind into a post-human state. This is the point where AI “wakes up,” gaining the
ability to think for itself. This idea of "singularity" is expanded on in Katherine Hayles's How We Became Posthuman. Hayles looks at this as a period in the near future when information is separated from the body, when information becomes materialized and can be moved through different bodies. Hayles's view shows that AI isn't just advancing mechanically, but mentally and psychologically as well. In the view of singularity, humans are heading in a direction where computers and humans will have to integrate with each other. As technology continues to progress and AI systems become more advanced, it is important to recognize that the future may well be one integrated with AI technology. I. The History of AI: The Early 1900s to 1956 Beginning in the early 1900s, computer scientists, mathematicians, and engineers began to experiment with creating a thinking machine. During World War II, the military began using computers to break codes, ushering in the development of calculating computers. ENIAC, the Electronic Numerical Integrator And Computer, was the first general-purpose electronic computer to function successfully.22 Early on, the majority of computer and AI projects were military funded, giving the military major influence over the allocation and integration of the technology. As computer technology began to progress, so did AI as a branch of computer science. The first person to consider the possibilities of creating AI in the form of a thinking machine was Alan Turing. In his article "Computing Machinery and Intelligence," Turing argued that a machine could plausibly emulate human thought. Turing's paper was very important to the development of AI as a field, being the first to argue the plausibility of AI's existence while also establishing a base for the field. Turing's refutation of 22 Arthur Burks, "The ENIAC," Annals of the History of Computing 3, no. 4 (1981): 389.
nine objections addressed a diverse range of arguments from AI's skeptics and deniers. Another major figure in the development of computers and artificial intelligence was the Hungarian mathematician John von Neumann. Von Neumann made important contributions in a variety of fields, but had an especially large impact on computer science. Today's computers are based on "von Neumann architecture," in which a sequential "program" held in the machine's "memory" dictates "the nature and the order of the basic computational steps carried out by the machine's central processor."23 He compared this architecture to a human brain, arguing that their functions are very similar. Von Neumann's 1958 The Computer and the Brain was an important work concerning artificial intelligence, strengthening Turing's claim that computers could emulate human thought.24 In his book, von Neumann compares the human brain to a computer, pointing out similarities in their architecture and function. In some cases, the brain acts digitally, because its neurons themselves operate digitally: like a computer's elements, neurons fire depending on an ordered set of signals that activate them.25 The result of von Neumann's work strengthened the plausibility of creating a thinking machine. 23 von Neumann, The Computer and the Brain, xii. 24 von Neumann 25 von Neumann, 29.
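The stored-program idea behind the von Neumann architecture can be illustrated with a toy sketch: a single memory holds both instructions and data, and the processor simply loops, fetching and executing whatever the program counter points to. The three-instruction set below is invented purely for illustration.

```python
# Toy illustration of the von Neumann stored-program idea: one memory
# holds both the program and its data; the processor loops, fetching
# and executing the instruction at the program counter. The tiny
# instruction set ("LOAD", "ADD", "HALT") is invented for illustration.

memory = [
    ("LOAD", 5),   # put the value at address 5 into the accumulator
    ("ADD", 6),    # add the value at address 6 to the accumulator
    ("HALT", 0),
    None, None,    # unused cells
    40,            # address 5: data
    2,             # address 6: data
]

def run(memory):
    pc, acc = 0, 0                    # program counter and accumulator
    while True:
        op, arg = memory[pc]          # fetch
        pc += 1
        if op == "LOAD":              # decode and execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "HALT":
            return acc

print(run(memory))  # prints 42
```

The sketch shows why the architecture invited comparison with the brain: a single general mechanism steps through stored instructions, and changing the contents of memory changes the machine's behavior without changing the machine.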
Ultimately, the work of Turing, Wiener, and von Neumann shows the optimism of the early computer developers. All three shared a faith in computer science and AI and supported its progress. Turing finished his paper with, "We can only see a short distance ahead, but we can see plenty there that needs to be done."26 Even though these early computer developers shared this optimism, they were also wary of the dangers of the progressing computer technology. Wiener in particular, who had earlier written his letter "A Scientist Rebels," had a skeptical view of the future of computer technology. In Cybernetics, Wiener states, What many of us fail to realize is that the last four hundred years are a highly special period in the history of the world. The pace at which changes during these years have taken place is unexampled in earlier history, as is the very nature of these changes. This is partly the results of increased communication, but also of an increased mastery over nature, which on a limited planet like the earth, may prove in the long run to be an increased slavery to nature. For the more we get out of the world the less we leave, and in the long run we shall have to pay our debts at a time that may be very inconvenient for our own survival.27 This passage reflects Wiener's skepticism. He understood the benefits that AI and computer science could bring to society, but was wary of over-reliance on the technology. His words are a warning of how fragile the world is, and of the care demanded by the rapid development of AI technology. As humans "master nature" through technology, they become more and more vulnerable to their own creations. II. The History of AI: 1956, The Cold War, and an Optimistic Outlook Following the work of Turing, von Neumann, and Wiener, computer scientists John McCarthy and Marvin Minsky organized the Dartmouth conference in the summer of 1956. This conference would lead to the birth of AI as a field, a branch of computer science. The 26 Turing, 460. 27 Wiener, Cybernetics, 46.
conference was based on the idea that "machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."28 Using this idea, the goal of the conference was to establish AI as a field and show that it was plausible. As a result, AI began to gain momentum as a field. The military had a major influence over the research and development of AI and computer science beginning in the 1940s. Shortly after World War II, as the Cold War era began, AI research and development began to grow exponentially. Military agencies had the financial backing to provide the majority of the funding, as the U.S. Army, Navy, and Air Force began to fund research projects and contract civilian science and research labs for computer science development. Between 1951 and 1961, military funding for research and development rose from $2 billion to over $8 billion. By 1961, research and development companies Raytheon and Sperry Rand were receiving over 90% of their funding from military sources. This large research and development budget enabled AI research to take off, with AI research drawing over 80% of its funding from ARPA.29 Because of the massive amount of funding from military sources, American computer research was able to surpass the competition and progress at an exponential rate. The U.S. military was able to beat out Britain, its only plausible rival, making the U.S. the leader in computer technology. There were numerous consequences of the military having its hand in the research and development of computer science early in the Cold War. As a result of its overwhelming funding, the military was able to keep tight control over the research and 28 John McCarthy, Marvin Minsky, "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence" (proposal, Dartmouth College, August 31, 1955). 29 Edwards, 64.
development, directing it in the direction it desired. This direction was primarily concerned with developing technology that could benefit the military itself, whether for communication, weaponry, or national defense. Wanting to keep its influence as strong as possible, the military maintained tight control through secrecy of the research.30 The military wanted to make sure that researchers under contract were always aware of the interests of national security, censoring the communication between researchers and scientists in different organizations. A problem that arose from this censorship was that researchers could no longer openly share ideas, impeding and slowing down development. This showed that the military was willing to wait longer to ensure that national security measures were followed. As a result of the heavy funding from the military, AI turned from pure theory to a subject of commercial interest. Parallel to the rapidly progressing computer technology, military research agencies began to progress in AI development as well, studying cognitive processes and computer simulation.31 The main military research agency to look into AI was the Advanced Research Projects Agency (ARPA, renamed DARPA in 1972). Joseph Licklider, head of ARPA's Information Processing Techniques Office, was a crucial figure in advancing the development of AI technology, establishing his office as the primary supporter of "the closed world military goals of decision support and computerized command and control," which found "a unique relationship to the cyborg discourses of cognitive psychology and AI."32 This unique relationship is the basis of AI: mastering cognitive psychology and then integrating and emulating that psychology in a machine. This branch of ARPA not only shows the military's interest and impact on the research and 30 Edwards, 62. 31 Edwards, 259. 32 Edwards, 260.
development of AI, but also the optimism that the military had for its development. ARPA was able to mix basic computer research with military ventures, specifically for national defense, allowing the military to control the research and development of AI technology. The military influence over DARPA continued into the 1970s, as DARPA became the most important research agency for military projects. The military began to rely on AI for military use at an exponential rate. DARPA began to integrate AI technology into a number of military systems, including soldier aids for both pilots and ground soldiers and battlefield management systems that relied on expert systems.33 All these aspects of AI's integration into warfare are known as the "robotic battlefield" or the "electronic battlefield." AI research opened the doors for this new warfare technology, integrating AI and computer technology to create electronic, robotic warfare and automated command and sensor networks for battlefield management. During the Vietnam War, military leaders shared an optimism for new AI technology. General William Westmoreland, head of U.S. military operations in Vietnam from 1964 to 1968, predicted that "on the battlefield of the future, enemy forces will be located, tracked, and targeted almost instantaneously through the use of data-links, computer assisted intelligence evaluation and automated fire control."34 Westmoreland also saw that as the military began to rely increasingly on AI technology, the need for human soldiers would decrease. Westmoreland's prediction not only shows the optimism that military leaders had about AI technology, but also foreshadows the over-reliance that the military would have on those weapons. From the 1950s to the 1980s, DARPA continued to be the military's main research and 33 Edwards, 297. 34 Armin Krishnan, Killer Robots: Legality and Ethicality of Autonomous Weapons, 19.
development agency. DARPA received heavy funding from the federal government, as military leaders continued to support the need for the integration of new AI technology. The military leaders' optimism about AI technology is reflected in the ambitious goals that DARPA had. In 1981, DARPA aimed to create a "fifth generation system," one that would "have knowledge information processing systems of a very high level. In these systems, intelligence will be greatly improved to approach that of a human being."35 Three years later, in 1984, DARPA's "Strategic Computing" stressed the need for the new technology, stating, "Using this new technology [of artificial intelligence], machines will perform complex tasks with little human intervention, or even with complete autonomy."36 It was in 1984 that the U.S. military began not just researching and developing AI, but actually integrating it into military applications for use on the battlefield. DARPA announced the creation of three different projects: an all-purpose autonomous land vehicle, a "pilot's associate" to assist pilots during missions, and a battlefield management system for aircraft carriers. The military was beginning to rely on this AI technology, using it to assist human military leaders and soldiers. Fearing it would lose ground to Britain, China, and Japan, DARPA spent over $1 billion to maintain its lead.37 President Ronald Reagan continued the trend of the federal government using DARPA for advanced weapon development and showed the military's commitment to developing AI military weapons and systems. Reagan's Strategic Defense Initiative (SDI), later nicknamed "Star Wars," was a proposed network of hundreds of orbiting satellites with advanced weaponry and battle 35 Paul Lehner, Artificial Intelligence and National Defense: Opportunity and Challenge, 164. 36 David Bellin, Computers in Battle: Will They Work?, 171. 37 Lehner, 166.
management capabilities. These satellites would be equipped with layers of computers, "where each layer of defense handles its own battle management and weapon allocation decisions."38 Reagan's SDI is a perfect example of the government and military's overly ambitious integration of AI technology. Reagan was willing to put both highly advanced and nuclear weapons under the partial control of AI technology. Overall, Reagan's SDI was a reckless proposition by the military, taking man out of the loop while putting weapons of mass destruction under the control of computer systems. As a result of the military's commitment to the research and development of AI, AI technology has developed rapidly and has been integrated into both society and military applications. Before looking at the future of AI, it is important to first look at the different levels of autonomy, and where the technology stands today. In a nutshell, autonomy is the ability of a machine to function on its own with little to no human control or supervision. There are three types of machine autonomy: pre-programmed autonomy, limited autonomy, and complete autonomy. Pre-programmed autonomy is when a machine follows instructions and has no capacity to think for itself.39 An example of pre-programmed autonomy is a factory machine programmed for one job, such as welding or painting. Limited autonomy is the technology level that exists today, one where the machine is capable of carrying out most functions on its own but still relies on a human operator for more complex behaviors and decisions. Current U.S. UAVs possess limited autonomy, using sensors and data processing to come up with solutions but still relying on human decision making. Complete autonomy is the most advanced level: the machine operates entirely on its own, with no human input or control.40 Although complete autonomy is still being developed, AI technology continues to progress at a rapid pace, opening the doors for it, with DARPA estimating that complete autonomy will be achieved before 2030.41 38 Lehner, 159. 39 Krishnan, 44.
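The three levels just described can be sketched in code; the enum and decision rule below are a hypothetical illustration of the taxonomy, not any military specification.

```python
# A hypothetical sketch of the three autonomy levels described above.
from enum import Enum

class Autonomy(Enum):
    PRE_PROGRAMMED = 1   # fixed instructions only, e.g. a welding robot
    LIMITED = 2          # handles routine tasks; defers hard decisions
    COMPLETE = 3         # no human input or control

def decide(level, decision_is_complex):
    """Who makes a given decision under each autonomy level? (illustrative)"""
    if level is Autonomy.PRE_PROGRAMMED:
        return "follow fixed program; anything unanticipated fails"
    if level is Autonomy.LIMITED and decision_is_complex:
        return "refer to human operator"
    return "machine decides on its own"

print(decide(Autonomy.LIMITED, decision_is_complex=True))
```

The sketch makes the key distinction visible: only at the limited level does a complex decision route back to a human; at complete autonomy, every branch ends with the machine deciding alone.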
In a 2007 interview, Tony Tether, the Director of DARPA, showed his agency's optimism about and commitment to the future development of AI technology. Tether refers to DARPA's cognitive program, the program focusing on research and development of thinking machines, as "game changing," where the computer is able to "learn" its user.42 DARPA is confident that it will be able to create fully cognitive machines, making AI smarter and a closer emulation of human intelligence. Tether discusses the Command Post of the Future (CPOF), a distributed, computer-run command and control system that functions 24/7, taking human operators out of the loop. The CPOF, though beneficial for its accurate and rapid data processing, is a dangerous example of over-reliance on AI. Tether says that "those people who are now doing that 24-by-7 won't be needed," but it is important, not just for safety but to retain full control, to keep a human operator over military weapons and systems.43 This again shows the military's influence over research and development, directing DARPA's research toward an over-reliance on AI machines. But what happens when humans rely on AI so much that there is no turning back? Vinge's singularity theory holds that AI will one day surpass human intelligence, and that humans will eventually integrate with AI technology. Vinge's singularity points out the ultimate 40 Krishnan, 45. 41 Krishnan, 44. 42 Shachtman, "Darpa Chief Speaks." 43 Shachtman.
outcome of over-reliance and over-optimism in AI technology: the loss of control of AI and the end of the human era. Vinge warns that between 2005 and 2030, computer networks might "wake up," ushering in an era of the synthesis of AI and human intelligence. In her book How We Became Posthuman, Hayles continues Vinge's singularity theory and looks at the separation of humans from human intelligence, an era where the human mind has advanced psychologically and mentally through integration with AI technology.44 Hayles argues that "the age of the human is drawing to a close."45 Hayles looks at all the ways that humans are already beginning this integration with intelligent machines, such as computer-assisted surgery in medicine and the replacement of human workers with robotic arms in labor, showing that AI machines have the ability to integrate with or replace humans in a diverse number of aspects of society.46 III. Skepticism: The Dangers of Over-Reliance on AI Although over-reliance on AI for military purposes is dangerous, AI does bring many benefits to society. These benefits are what draw humans to AI technology, making them overly optimistic about and committed to it, and they are what give the military its optimism. In this section, I will discuss AI's benefits to civilian society, followed by the limitations and dangers of AI for both civilian society and the military. AI has the ability to amplify human capabilities, surpassing the accuracy, expertise, and speed of a human at a given task. Hearing, seeing, and motion are amplified by AI systems through speech recognition, computer vision, and robotics. Extremely rapid, efficient, and accurate data processing gives AI technology an advantage over humans. In order 44 Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, 2. 45 Hayles, 283. 46 Hayles, 284.
to look at these benefits, I will use examples of how AI can be applied to diverse sections of society. Speech recognition understands and creates speech, increasing speed, ease of access, and manual freedom when interacting with the machine.47 In business, office automation is relying on new AI speech recognition capabilities to streamline business operations. Data entry, automatic dictation and transcription, and information retrieval all benefit from AI speech recognition. Human users benefit from this technology through easier, streamlined communication.48 AI robotics is another beneficial emerging technology for a number of reasons, including increased productivity, reduced costs, replacement of skilled labor, and increased product quality.49 AI robotics gives an AI system the ability to perform manual tasks, making it useful for integration into industrial and manufacturing sectors of society, such as automobile and computer chip factories. In medicine, surgeons and doctors are now integrating AI technology to assist in challenging surgical operations and to identify and treat diseases.50 AI has even found its way into everyday life, assisting the elderly in senior facilities, assisting pilots on commercial airlines, and being integrated into human homes, creating "smart houses."51 I recognize that this kind of integration of AI is beneficial and not, in itself, dangerous. AI is helping progress health, economic, and industrial technology, making it safer, more advanced, and more efficient. Although there are numerous benefits, it is also important to understand both the limitations and dangers of AI technology, specifically with its integration into military systems. Hubert Dreyfus leads the charge against the integration of AI, arguing both the limitations 47 Mishkoff, 108. 48 Mishkoff, 108. 49 Mishkoff, 120. 50 Von Drehle, "Meet Dr. Robot," 44.; Velichenko, "Using Artificial Intelligence and Computer Technologies for Developing Treatment Programs for Complex Immune Diseases," 635. 51 Anderson, "Robot Be Good," 72.
and dangers of AI machines. Dreyfus claims in What Computers Can't Do that early AI developers were "blinded by their early success and hypnotized by the assumption that thinking is a continuum," meaning he believes this progress cannot continue.52 Dreyfus is specifically wary of the integration of AI into systems that have not been tested. The over-optimism and over-reliance of AI supporters allow AI machines to function autonomously before they have been fully tested. In Mind Over Machine, Dreyfus expands his skepticism, warning of the dangers of AI decision making because, to him, decisions must be pre-programmed into a computer, which causes the AI's "ability to use intuition [to be] forfeited and replaced by merely competent decision making. In a crisis competence is not good enough."53 Dreyfus takes a skeptical approach, recognizing the benefits of AI to society, specifically information processing, but strongly opposing the forcing of undeveloped AI onto society. He says that "AI workers feel that some concrete results are better than none": AI developers continue to integrate untested AI into systems without working out all the consequences of doing so.54 Dreyfus is correct in saying that humans must not integrate untested, underdeveloped AI into society, but rather must always be cautious. This skeptical approach is important for the safe integration of AI, specifically when removing a human operator and replacing him with an autonomous machine. Since the 1940s, there has been skepticism of AI in military applications from a diverse group of opponents. The military's commitment to and reliance on autonomous machines for military functions comes with many dangers, removing human operators and 52 Hubert Dreyfus, What Computers Can't Do, 302. 53 Hubert Dreyfus, Mind Over Machine, 31. 54 Hubert Dreyfus, What Computers Can't Do, 304.
putting more decisions into the hands of the AI machine. Dreyfus argues against implementing "questionable A.I.-based technologies" that have not been tested. To Dreyfus, allowing these automated defense systems to be implemented "without the widespread and informed involvement of the people to be affected" is not only dangerous but also inappropriate.55 It is inappropriate to integrate untested AI into daily life, where that AI may malfunction or make a mistake that could negatively impact human life. Dreyfus is wary of military decision-makers being tempted to "install questionable AI-based technologies in a variety of critical contexts," especially in applications that involve weapons and human life.56 Whether it is to justify the billions of dollars spent on research and development or to succumb to the temptation of the machines' advanced capabilities, military leaders must be cautious of over-reliance on AI technology for military applications. Dreyfus was not the first skeptic of technology and its integration into military applications. Wiener's letter "A Scientist Rebels" showed early scientists' resistance to and skepticism of research and development's relationship with the military. The point Wiener wants to make is that even if scientific information seems innocent, it can still have catastrophic consequences. Wiener's letter was written shortly after the bombings of Hiroshima and Nagasaki, where the atomic bomb developers' work fell into the hands of the military. To Wiener, it was even worse that the bomb was used "to kill foreign civilians indiscriminately."57 The broad message of Wiener's letter is that scientists should be skeptical of the military application of their research. Though their work may seem innocent and purely empirical, it can 55 Hubert Dreyfus, Mind Over Machine, 12. 56 Hubert Dreyfus, Mind Over Machine, 12. 57 Wiener, "From the Archives," 37.
still have grave consequences by falling into the hands of the military. Though Wiener is not explicitly talking about AI research, his skepticism is important. Wiener emphasizes the need for researchers and developers to be wary of their work, and warns them of the dangers of cooperating with the military. Wiener's criticism of the military's relationship with research and development has not changed that relationship, and the military continues to develop and use more AI technology in its weapons and systems. The military application of AI brings a number of dangers to friendlies, enemies, and civilians alike. Though AI has many benefits in the military, the dangers outweigh those benefits. Taking a human out of the loop is not only dangerous; when human life is on the line, can a thinking machine be trusted to function like a human? Functioning completely autonomously, how do we know that the machine will emulate the thought, decision making, and ethics of a human? The following are some of the dangers of integrating AI technology into military applications. As previously warned by Wiener, government misuse of AI in the military could be a dangerous outcome of AI's integration. Governments like that of the United States have massive defense budgets, giving them the resources to build large armies of thinking machines. This increases the chances of unethical use of AI by countries, specifically the U.S., giving them the opportunity not just to use AI technology for traditional warfare but to expand its use to any sort of security. The use of AI opens the doors for unethical infringement upon civil liberties and privacy within the country.58 Another major danger of the use of AI in the military is the possibility of malfunctioning 58 Krishnan, 147-148.
weapons and networks, when the weapon or system acts in an unanticipated way. Computer programming is built on a cycle of writing code, finding errors through malfunction, and fixing those errors. However, when using AI technology that might not be perfected, the risk of malfunction is greater. Software errors and unpredictable failures leading to malfunction are both liabilities to the AI military system. These chances of malfunction make AI military systems untrustworthy, a huge danger when heavily relying on AI software integrated into military networks.59 It is very challenging to test for errors in military software. Software can often pass practical tests, but there are so many situations and scenarios that perfecting the software is nearly impossible.60 The larger the networks, the greater the dangers of malfunction. Thus, when AI conventional weapons are networked and integrated into larger AI defense networks, "an error in one network component could 'infect' many other components."61 The malfunction of an AI weapon is not only dangerous to those who are physically affected, but also opens up ethical and legal concerns. The malfunction of an AI system could be catastrophic, especially if that system is in control of WMDs. AI-controlled military systems increase the chances of accidental war considerably. However, the danger of malfunction is not just theory. July 1988 provided an example of an AI system malfunction. The U.S.S. Vincennes, a U.S. Navy cruiser nicknamed "Robo-cruiser" because of its Aegis system, an automated radar and battle management system, was patrolling the Persian Gulf. An Iranian civilian airliner carrying 290 people registered on the system as an Iranian F-14 fighter, and the computer system considered it an enemy. The system 59 Bellin, 209. 60 Bellin, 209. 61 Krishnan, 152.
fired and took down the plane, killing all 290 people. This event showed that humans are always needed in the loop, especially as machine autonomy grows. Giving a machine full control over weapon systems is reckless and dangerous, and if the military continues to phase out human operators, these AI systems will become increasingly greater liabilities.62 The weaknesses in the software and functioning capabilities of AI military systems also make them vulnerable to probing and hacking, exposing flaws or losing control of the unmanned system.63 Last year, Iran was able to capture a U.S. drone by hacking its GPS system and making it land in Iran instead of what it thought was Afghanistan. The Iranian engineer who worked on the team that hijacked the drone said that they "electronically ambushed" it: "By putting noise [jamming] on the communications, you force the bird into autopilot. This is where the bird loses its brain." The Iranians' successful hijacking of the drone shows the vulnerabilities of the software on even the advanced AI systems integrated into drones.64 War is generally not predictable, yet AI machines run on programs written for what is predictable. This is a major flaw in AI military technology, as the programs that make AI function consist of rules and code. These rules and codes are precise, making it nearly impossible for AI technology to adapt to a situation and change its functions. Because war is unpredictable, computerized battle management technology lacks both experience and morality, both needed to make informed and moral decisions on the battlefield. The ability to adapt is necessary for battlefield management, and in some cases, computer programming prevents the technology from making those decisions.65 62 Peter Singer, "Robots At War: The New Battlefield," 40. 63 Alan Brown, "The Drone Warriors," 24. 64 Scott Peterson, "Iran Hijacked US Drone, says Iranian Engineer." 65 Bellin, 233.
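The brittleness of precise rules can be illustrated with a deliberately crude sketch: a classifier whose rules were written for anticipated cases silently mislabels an unanticipated one. The thresholds and track data below are invented, and the logic is vastly simpler than any real system such as Aegis; the point is only the failure mode.

```python
# Deliberately crude sketch of why precise rules are brittle in an
# unpredictable environment: the rules below cover the cases their
# author anticipated, and silently mishandle everything else.
# Thresholds and track data are invented for illustration.

def classify(track):
    # Rule written for the anticipated case: fast contact with no
    # civilian transponder code = hostile fighter.
    if track["speed_kts"] > 300 and not track["civilian_squawk"]:
        return "HOSTILE"
    return "UNKNOWN"

# An unanticipated case: a civilian airliner whose transponder reply
# was missed. The rule has no concept of "airliner on a scheduled
# route"; it only sees speed and a missing squawk.
airliner = {"speed_kts": 350, "civilian_squawk": False}
print(classify(airliner))  # "HOSTILE" -- the rules cannot adapt
```

No amount of precision in such rules substitutes for the experience and judgment the surrounding text describes; the rules do exactly what they were written to do, even when the world presents a case their author never imagined.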
The last danger, the "Terminator Scenario," is more of a stretch, but is still a possibility. In the "Terminator Scenario," machines become self-aware, see humans as their enemy, and take over the world, destroying humanity. As AI machines become increasingly intelligent, their ability to become self-aware and intellectually evolve will also develop. The idea of AI machines beginning to "learn" their human operators and environments is the start of creating machines that will become fully self-aware. If these self-aware machines have enough power, through, for example, their integration into military systems, they have the power to dispose of humanity.66 Though the full destruction of humanity is a stretch, the danger of AI turning on its human creators is still a possibility and should be recognized as a potential consequence of integrating AI into military systems. IV. A Continuing Trend: The Military's Exponential Use of Autonomous AI Though these dangers are apparent, and in some cases have led to loss of human life, the U.S. military continues to rely exponentially more on AI technology, integrated into both its weapon systems and its battle network systems. The military is using AI technology, such as autonomous drones, AI battlefield management systems, and AI communication and decision-making networks, for national security and on the battlefield, ushering in a new era of war technology. The idea of taking man out of the loop on the battlefield is dangerous and reckless. Removing human operators is not only a threat to human life, but also opens the debate over ethical, legal, and moral problems regarding the use of AI technology in battle. AI has progressively been integrated into military applications, most commonly weapons (guided missiles and drones) and expert systems for national defense and battlefield 66 Krishnan, 154.
management. This increased integration has led to both over-reliance on and over-optimism about the technology. The rise of drone warfare through the use of UAVs (Unmanned Aerial Vehicles) and UCAVs (Unmanned Combat Aerial Vehicles) has brought numerous benefits to military combat, but also many concerns. As UCAVs become exponentially more autonomous, their responsibilities have grown, utilizing new technology and advanced capabilities to replace human operators and take humans out of the loop.67 The U.S. military's current level of autonomy on UCAVs is supervised autonomy, where a machine can carry out most functions without having to use pre-programmed behaviors. With supervised autonomy, an AI machine can make many decisions on its own, requiring little human supervision. In this case, the machine still relies on a human operator for final complex decisions such as weapon release and targeting, but is able to function mostly on its own.68 Supervised autonomy is where the military should stop its exponential integration. It puts complex legal and ethical decisions in the hands of a human operator while still capturing the benefits that AI offers. When the final decision involves human life or destruction, it is important to have a human operator making that decision, rather than allowing the computer to decide. Supervised autonomy still allows a human operator to monitor the functions of the UCAV, keeping it ethically and legally under control, as the sketch below illustrates. 67 Hugh McDaid, Robot Warriors: The Top Secret History of the Pilotless Plane, 162. 68 Krishnan, 44. 69 Krishnan, 44.
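The "man in the loop" boundary argued for here can be made concrete with a control-loop sketch: routine functions run autonomously, but weapon release always blocks on an explicit human decision. Every function name, threshold, and track identifier below is a hypothetical invention for illustration, not a description of any real UCAV control system.

```python
# Hypothetical sketch of supervised autonomy: the machine handles
# routine functions on its own, but weapon release always blocks
# on an explicit human decision. All names are invented.

def navigate(area):                 # routine function: fully autonomous
    print(f"navigating to {area}")

def acquire_targets(sensor_tracks): # routine function: fully autonomous
    return [t for t in sensor_tracks if t["confidence"] > 0.9]

def release_weapon(target, human_approves):
    """The one decision that is never automated in this sketch."""
    if not human_approves(target):
        print(f"operator denied release on {target['id']}")
        return
    print(f"release authorized by human operator on {target['id']}")

def operator_prompt(target):
    answer = input(f"Authorize strike on {target['id']}? [y/N] ")
    return answer.strip().lower() == "y"

navigate("patrol sector")
for target in acquire_targets([{"id": "track-01", "confidence": 0.95}]):
    release_weapon(target, human_approves=operator_prompt)
```

The design choice the sketch captures is the thesis of this section: autonomy is confined to routine functions, while the lethal decision is structurally incapable of executing without a human answer.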
It is especially dangerous that the U.S. military is working toward the creation of completely autonomous machines, ones that can operate on their own with no human supervision or control. Complete autonomy gives the machine the ability to learn, think, and adjust its behavior in specific situations.69 Giving these completely autonomous machines the ability to make their own decisions is dangerous, as their decisions would be unpredictable and uncontrollable. The U.S. military's path to creating and utilizing completely autonomous machines is reckless, and supervised autonomy is the farthest the military should go with AI technology and warfare. In the last decade, the use of military robotics has grown for a number of reasons, including the numerous benefits that AI robotics brings to the battlefield. Originally used purely for reconnaissance, the military is now utilizing UAVs as weapons. The use of UAVs and other AI weapons is heavily supported by low-ranking military personnel, the ones who directly interact with the drones. Higher-ranking military officials and political leaders are split, with some fully supporting their use while others recognize the dangers and concerns. For now, the benefits that UAVs possess continue to drive their integration into the U.S. military. One benefit of AI weaponry is that it reduces manpower requirements. In first-world countries, especially the U.S., the pool of prospective soldiers is shrinking; both physical requirements and the limited attractiveness of military service are keeping Americans from enlisting. As the military budget decreases, UCAVs are able to replace human soldiers, cutting personnel costs.70 Another benefit of replacing human soldiers with AI robotics is that it takes humans out of the line of fire while also eliminating human fallibility. The reduction in casualties of war is very appealing not only to the fighting soldiers, but also to their families, friends, and fellow citizens. Taking soldiers out of the line of fire and replacing them with robotics saves soldiers' lives. These robotics are also able to reduce mistakes 70 Krishnan, 35.
and increase performance compared to their human counterparts. The amplified capabilities of the machines give them the ability to outperform human soldiers.71 The ability to function 24/7, low response time, advanced communication networks, rapid data and information processing, and targeting speed and accuracy are some of the many benefits of AI robotics on the battlefield. The benefits of AI military robotics are very important to lower-ranking military personnel. These soldiers interact with the robotics on the battlefield, recognizing the benefits they bring to them personally while failing to recognize the ethical and legal concerns that also come along with the drones. The following are quotes from enlisted, low-ranking U.S. soldiers:72 • "It's surveillance, target acquisition, and route reconnaissance all in one. We saved countless lives, caught hundreds of bad guys and disabled tons of IEDs in our support of troops on the ground." -Spc. Eric Myles, UAV Operator • "We call the Raven and Wasp our Airborne Flying Binoculars and Guardian Angels." -GySgt. Butler • "The simple fact is this technology saves lives." -Sgt. David Norsworthy It is understandable why low-ranking soldiers embrace the technology and support its use. UCAVs have proven to be highly effective on the battlefield, saving the lives of U.S. soldiers and effectively combating enemies with their advanced AI functions. Though UCAVs are effective on the battlefield and especially benefit the soldiers on the front line, the ethical and legal concerns are very important consequences of the overall use of AI technology. However, higher-ranking military leaders and political leaders are split in their support. Some of these leaders fully support the technology, while others are skeptical of too much automation and the dangers of over-reliance. German Army General Wolfgang Schneiderhan, 71 Krishnan, 40. 72 U.S. House of Representatives, Subcommittee on National Security and Foreign Affairs, Rise of the Drones: Unmanned Systems and the Future of War Hearing, Fagan, 63.
who also served as Chief of Staff of the German Army from 2002 to 2009, shows this skepticism in his article "UV's: An Indispensable Asset in Operations." Schneiderhan not only looks at the dangers of taking a human out of the loop, but also at the importance of humanitarian law, specifically where human life is involved. Schneiderhan explicitly warns that "unmanned vehicles must retain a 'man in the loop' function in more complex scenarios or weapon employment," being especially wary of "cognitive computer failure combined with a fully automated and potentially deadly response."73 Schneiderhan's skepticism recognizes the main dangers of over-reliance on AI for military use while also stressing the importance of keeping a human operator involved in decision making. Schneiderhan argues that a machine should not be making decisions regarding human life; rather, decisions should be made by a conscious human who has both experience and situational awareness, and who understands humanitarian law.74 Schneiderhan's skepticism contrasts with the over-optimism that many U.S. military leaders share about the use of AI in weaponry. Navy Vice-Admiral Arthur Cebrowski, chief of the DoD's Office for Force Transformation, stressed the importance of AI technology for "the military transformation," using its advanced capabilities and benefits to develop war technology. Cebrowski argues that it is "necessary" to move money and manpower to support new technologies, including AI research and development, instead of focusing on improving old ones.75 Navy Rear Admiral Barton Strong, DoD Head of Joint Projects, argues that AI technology and drones will "revolutionize warfare." Strong says that because "they are relatively inexpensive and can 73 Schneiderhan, "UV's, An Indispensable Asset in Operations," 91. 74 Schneiderhan, 91. 75 U.S. Senate, Foreign Affairs, Defense, and Trade Division, Military Transformation: Intelligence, Surveillance and Reconnaissance, 7.
effectively accomplish missions without risking human life," drones are necessary for transforming armies.76 General James Mattis, head of U.S. Joint Forces Command and NATO Transformation, argues that AI robots will continue to play a larger role in future military operations. Mattis fully supports the use of AI weapons, and since he commanded forces in Iraq, the UAV force has increased to over 5,300 drones. Mattis even understands the relationship that can form between a soldier and a machine. Mattis embraces the reduction of risk to soldiers, the efficient gathering of intelligence, and the drones' ability to strike stealthily. Mattis's high rank and support of UAVs will lead to even greater use of UAVs.77 From a soldier's point of view, the benefits that drones bring far exceed the legal and ethical concerns, for which those soldiers are not responsible. Drones are proving effective on the battlefield, leading to support from both low-ranking soldiers and military leaders. However, civilian researchers and scientists continue to be skeptical of the use of AI in the military, especially when human life is involved. Looking more closely at the benefits of UCAVs, it is clear why both low-ranking personnel and military leaders are optimistic about and supportive of their use. The clearest reason is the reduction of friendly military casualties, taking U.S. human soldiers out of the line of fire.78 When soldier casualties play a large part in public perception of war, reducing the loss of human life makes war less devastating on the home front. The advanced capabilities of AI integrated into military robots and systems are another appealing benefit of AI. Rapid information processing, accurate decision making and calculations, 24/7 functionality, and battlefield 76 McDaid, 6. 77 Brown, 23. 78 John Keller, "Air Force to Use Artificial Intelligence and Other Advanced Data Processing to Hit the Enemy Where It Hurts," 6.
assessment amplify the capabilities of a human soldier, making UCAVs extremely efficient and dangerous. By processing large amounts of data at a rapid speed, UCAVs can "hit the enemy where it hurts" and take advantage of calculated vulnerabilities before the enemy can prepare a defense.79 In a chaotic battle situation, where a soldier has to process numerous environmental, physical, and mental factors, speed and accuracy of decision making are essential. AI has the ability to cope with the chaos of a battlefield, processing hundreds of variables and making decisions faster and more efficiently than human soldiers.80 While soldiers are hindered by fear and pain, AI machines lack such emotion and are able to focus solely on the battlefield. The advanced capabilities of UCAVs have proven to be extremely effective on the battlefield. Though UCAVs are efficient and deadly soldiers, they also open the doors to numerous ethical, legal, and moral concerns. V. Ethical Concerns Military ethics is a very broad concept, so in order to understand the ethical concerns caused by the use of AI in the military, I will first discuss what military ethics are. In a broad sense, ethics looks at what is right and wrong. Military ethics is often a confusing and contradictory concept because war involves violence and killing, often considered immoral in general. Though some argue that military ethics cannot exist because of the killing of others, I will use a definition of military ethics under which killing can be ethical: war is ethical if it counters hostile aggression and is conducted lawfully.81 For example, the U.S.'s planned raid on Osama Bin Laden's compound leading to his killing could 79 Keller, 10. 80 The Economist, "No Command, and Control," 89. 81 Krishnan, 117.
be viewed as ethical. Bin Laden was operating an international terrorist organization that had successfully killed thousands of civilians through its attacks. However, the use of WMDs, for example the U.S. bombings of Hiroshima and Nagasaki, is often viewed as unethical. In the case of those bombings, thousands of civilians were killed, and it can be argued that the use of WMDs is not lawful due to their catastrophic damage to a civilian population. The bombings of Hiroshima and Nagasaki can be viewed as war crimes against a civilian population, breaking numerous laws of war established in the Rules of Aerial Warfare (The Hague, 1923), including Article XXII, which states: "Aerial bombardment for the purpose of terrorizing the civilian population, of destroying or damaging private property not of military character, or of injuring non-combatants is prohibited."82 As these examples show, civilian casualties are one of the gravest ethical concerns of war in general. As previously stated, the tragedy in the Persian Gulf in 1988 showed the consequences of an AI system's mistake for a large group of civilians. As the military continues to utilize UCAVs for combat, civilian deaths from UCAVs have also risen. The U.S. military has relied heavily on UCAVs for counterterrorism operations in Pakistan. Because of the effectiveness of the strikes, the U.S. continues to utilize drones for airstrikes on terrorist leaders and terrorist training camps. However, with increasing drone strikes, the death toll of civilians and non-militants has risen exponentially, and has even outnumbered the death toll of targeted militants.83 This is where the unethical nature of UCAV airstrikes begins to unfold. The effectiveness of the airstrikes is appealing to the military, which continues to utilize them while ignoring the thousands of civilians who are also killed. Marge Van Cleef, Co-Chair of 82 The Hague, Draft Rules of Aerial Warfare (Netherlands: The Hague, 1923). 83 Leila Hudson, "Drone Warfare: Blowback From The New American Way of War," 122.
  • 35. 35 the Women’s International League for Peace and Freedom takes the ethical argument a step further, claiming that drone warfare is terrorism itself. Van Cleef says that, “families in the targeted regions have been wipe out simply because a suspected individual happened to be near them or in their home. No proof is needed.”84 The use of UCAVs has proven to be unethical for this reason, that civilians are continuously killed in drone strikes. Whether it be through malfunction, lack of information, or another mistake, UCAVs have shown that they are not able to avoid the killing of civilians. However, civilians are not the only victims of UCAV use. Moral disengagement, changing the psychological impact of killing, is another major ethical concern of UCAV use. When a soldier is put in charge of a UCAV and gives that UCAV the order to kill, having a machine as a barrier neutralizes a soldier’s inhibition to kill. Because of this barrier, soldiers can kill the enemy from a large distance, disengaging the soldier from the actual feeling of taking a human life. Using UCAVs separates a soldier from emotional and moral consequences to killing. 85 An example of this moral disengagement is of a UCAV operator in Las Vegas spending his day operating a UCAV, carrying out airstrikes and other missions thousands of miles away, then joining his family for dinner that night. Being in these two situations daily not only leads to emotional detachment from killing, but also hides the horrors of war. Often on the virtual battlefield, “soldiers are less situationally aware and also less restrained because of emotional detachment.”86 Because of this emotional detachment to kill, UCAVs are unethical in that they make the psychological impact of killing non-existent. One of the main deterrents of war is the loss of human life. But when humans are taken 84 Marge Van Cleef, “Drone Warfare=Terrorism,” 20. 85 Krishnan, 128. 86 U.S. House of Representatives. Subcommittee on National Security and Foreign Affairs. Rise of the Drones: Unmanned Systems and the Future of War Hearing, Barrett, 13.
out of the line of fire and human casualties shrink as AI weapons increase, is it easier to go to war? An unethical result of the rising use of robotic soldiers is the possibility of unnecessary war, when the perception of war is changed by the lack of military casualties.87 Unmanned systems in war "further disconnect the military from society. People are more likely to support the use of force as long as they view it as costless."88 When the people at home see only the lack of human casualties, the horrors of war are hidden and they may think that the impact of going to war is less than it really is. This false impression that "war can be waged with fewer costs and risks" creates an illusion that war is easy and cheap.89 This can lead nations into a war that might not be necessary, giving them the perception, "gee, warfare is easy."90 These three ethical concerns all fall under the idea of automated killing, which is an ethical concern in itself. Giving a machine full control over the decision to end a life is unethical for a number of reasons: machines lack empathy and morals, and have no concept of the finality of life or of human experience. AI machines are programmed far differently from humans, so the decision to end a human life should never be left up to a machine. Considering a machine's morals, it may have the ability to comprehend environments and situations, but it will not have the ability to feel remorse or fear punishment.91 In the event that an AI machine wrongly kills a human, will it feel remorse for that killing? It is unethical and dangerous to use AI weaponry because humans have the ability to think morally, while a machine may just "blindly pull the trigger because some algorithm says so."92 AI machines also lack empathy, the 87 Singer, 44. 88 Singer, 44. 89 Cortright, "The Prospect of Global Drone Warfare." 90 Singer, 44. 91 Krishnan, 132. 92 Krishnan, 132.
ability to empathize with human beings. If an AI machine cannot understand human suffering and has never experienced it itself, it will continue to carry out unethical acts without being emotionally affected. Alongside empathy and morals, AI machines lack the concept of the finality of life and the idea of being mortal. Neither knowing nor experiencing death, an AI machine does not have the ability to take the finality of life into consideration when making an ethical decision. With no sense of its own mortality, an AI machine lacks empathy for death, allowing it to bypass moral considerations.93 Automated killing opens the doors for all of these ethical concerns. VI. Legal Concerns Ethical concerns are not the only problem with the use of AI machines in the military. There are also a number of legal concerns regarding the use of AI weaponry, specifically with the rise of drones. Today, modern warfare is still governed by the laws of the Geneva Conventions, a series of laws establishing the rules of war, armed conflict, and humanitarian treatment. However, the Geneva Conventions were drafted in the 1940s, a time when warfare was radically different. The laws of war are outdated: 20th-century military laws are not able to keep up with 21st-century war technology.94 The laws of armed conflict need to be updated, before the use of UCAVs expands further, to establish the legality of using them in the first place. For example, an article of the Geneva Conventions' protocol states: "effective advance warning shall be given of attacks which may affect the civilian population, unless circumstances do not permit."95 However, the killing of civilians by UCAVs without prior 93 Krishnan, 133. 94 U.S. House of Representatives, Subcommittee on National Security and Foreign Affairs, Rise of the Drones: Unmanned Systems and the Future of War Hearing, Singer, 7. 95 Michael Newton, "Flying Into the Future: Drone Warfare and the Changing Face of Humanitarian Law."
warning violates the humanitarian protections established by the Geneva Conventions, illegally carrying out attacks resulting in civilian deaths. Only combatants can be lawfully targeted in armed conflict, and any killing of non-combatants violates the law of armed conflict.96 Armed conflict is changing at such a fast pace that it is hard to establish humanitarian laws of war that can adapt to changing technologies. As of now, the actions of UCAVs could be deemed war crimes: violations of the laws of armed conflict. One legal concern with the use of UCAVs is the debate over whether or not they are considered "state sanctioned lethal force." If they are state sanctioned, like a soldier in the U.S. Army, they are legal and must follow the laws of armed conflict. However, numerous drones are operated by the CIA, meaning they are not state sanctioned. Because these drones are not state sanctioned, they violate international armed conflict law, as being state sanctioned is what gives the U.S. military the right to use lethal force. The killing of civilians in general, but specifically by non-state-sanctioned weapons, can be seen as a war crime.97 Another legal problem of drone warfare concerns liability for the weapon: who is to blame for an AI malfunction or mistake? There are so many people involved in the development, building, and operation of a drone that it is hard to decide who is responsible for an error. Is it the computer scientist who programmed the drone, the engineer who built it, the operator, or the military leader who authorized the attack? It can even be argued that the drone is solely responsible for its own actions, and should be tried and punished as though it were a human soldier. Article 1 of the Hague Convention requires combatants to be "commanded by a 96 Ryan Vogel, "Drone Warfare and the Law of Armed Conflict," 105. 97 Van Cleef, 20.
person responsible for his subordinates."98 This makes sense for human soldiers, but it makes it very hard to legally control an autonomous machine, one that cannot take responsibility for its own actions when acting autonomously. Because UCAV use is rising, legal accountability rules need to be established for the event of a robotic malfunction or mistake leading to human or environmental damage.99 The field of AI continues to develop at an extremely rapid pace, opening the door for increased optimism about and reliance on the new technologies. However, this exponential growth comes with numerous ethical, legal, and moral concerns, especially in regard to AI's relationship with the military. The military has influenced the research and development of AI since the field was established in the 1950s, and continues to have a hand in its growth through heavy funding and involvement. Though AI brings great benefits to society politically, socially, economically, and technologically, we should be wary of over-reliance on the technology. It is important to always keep a human in the loop, whether for civilian or military purposes. AI technology has the power to shape the society we live in today, but each increase in autonomy should be treated with caution. 98 Krishnan, 103. 99 Krishnan, 103.
Bibliography Adler, Paul S. and Terry Winograd. Usability: Turning Technologies Into Tools. New York: Oxford University Press, 1992. Anderson, Alan Ross. Minds and Machines. New Jersey: Prentice-Hall, 1964. Anderson, Michael, and Susan Leigh Anderson. "Robot Be Good." Scientific American 303, no. 4 (2010): 72-77. Bellin, David and Gary Chapman. Computers in Battle: Will They Work?. New York: Harcourt Brace Jovanovich Publishers, 1987. Brown, Alan S. "The Drone Warriors." Mechanical Engineering 132, no. 1 (January 2010): 22-27. Burks, Arthur W. "The ENIAC: The First General-Purpose Electronic Computer." Annals of the History of Computing 3, no. 4 (1981): 310-389. Cortright, David. "The Prospect of Global Drone Warfare." CNN Wire (October 19, 2011). Dhume, Sadanand. "The Morality of Drone Warfare: The Reports About Civilian Casualties are Unreliable." Wall Street Journal Online (August 17, 2011). Dreyfus, Hubert L. Mind Over Machine. New York: The Free Press, 1986. Dreyfus, Hubert L. What Computers Can't Do: The Limits of Artificial Intelligence. New York: Harper Colophon Books, 1979. Edwards, Paul N. The Closed World: Computers and the Politics of Discourse in Cold War America. Massachusetts: MIT Press, 1996. Ford, Nigel. How Machines Think. Chichester, England: John Wiley and Sons, 1987.
Hayles, Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: The University of Chicago Press, 1999. Heims, Steve J. John Von Neumann and Norbert Wiener: From Mathematics to the Technologies of Life and Death. Massachusetts: MIT Press, 1980. Hogan, James P. Mind Matters. New York: Ballantine Publishing Group, 1997. Hudson, Leila, Colin Owens, and Matt Flannes. "Drone Warfare: Blowback From The New American Way of War." Middle East Policy 18, no. 3 (Fall 2011): 122-132. Keller, John. "Air Force to Use Artificial Intelligence and Other Advanced Data Processing to Hit the Enemy Where It Hurts." Military & Aerospace Electronics 21, no. 3 (2010): 6-10. Krishnan, Armin. Killer Robots: Legality and Ethicality of Autonomous Weapons. Vermont: Ashgate, 2009. Lehner, Paul. Artificial Intelligence and National Defense: Opportunity and Challenge. Pennsylvania: Tab Books Inc., 1989. Le Page, Michael. "What Happens When We Become Obsolete?" New Scientist 211, no. 2822 (July 2011): 40-41. Lyons, Daniel. "I, ROBOT." Newsweek 153, no. 21 (May 25, 2009): 66-73. Masci, David. "Artificial Intelligence." CQ Researcher 7, no. 42 (1997): 985-1008. McCarthy, John, and Marvin Minsky. "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence." AI Magazine 27, no. 4 (Winter 2006): 12-14. McCarthy, John. Defending A.I. Research. California: CSLI Publications, 1996.
McDaid, Hugh, and David Oliver. Robot Warriors: The Top Secret History of the Pilotless Plane. London: Orion Books Ltd., 1997.

McGinnis, John. “Accelerating AI.” Northwestern University Law Review 104, no. 3 (2010): 1253-1269.

Michie, Donald. Machine Intelligence and Related Topics. New York: Gordon and Breach, 1982.

Minsky, Marvin, and Seymour Papert. “Artificial Intelligence.” Lecture to the Oregon State System's 1974 Condon Lecture, Eugene, OR, 1974.

Mishkoff, Henry C. Understanding Artificial Intelligence. Dallas, Texas: Texas Instruments, 1985.

Newton, Michael A. “Flying Into the Future: Drone Warfare and the Changing Face of Humanitarian Law.” Keynote Address to the University of Denver's 2010 Sutton Colloquium, Denver, CO, November 6, 2010.

Pelton, Joseph N. “Science Fiction vs. Reality.” Futurist 42, no. 5 (September/October 2008): 30-37.

Perlmutter, David D. Visions of War: Picturing War From the Stone Age to the Cyber Age. New York: St. Martin's Press, 1999.

Peterson, Scott. “Iran Hijacked US Drone, Says Iranian Engineer.” Christian Science Monitor, December 15, 2011.

Schneiderhan, Wolfgang. “UV's, An Indispensable Asset in Operations.” NATO's Nations and Partners for Peace 52, no. 1 (2007): 88-92.

Shachtman, Noah. “Darpa Chief Speaks.” Wired, February 20, 2007.

Shapiro, Kevin. “How the Mind Works.” Commentary 123, no. 5 (May 2007): 55-60.
Singer, P. W. “Robots at War: The New Battlefield.” Wilson Quarterly 33, no. 1 (Winter 2009): 30-48.

The Economist. “Drones and the Man: The Ethics of Warfare.” The Economist 400, no. 8744 (July 2010): 10.

The Economist. “No Command, and Control.” The Economist 397, no. 8710 (November 2010): 89.

The Hague. Draft Rules of Aerial Warfare. Netherlands: The Hague, 1923.

Triclot, Mathieu. “Norbert Wiener's Politics and the History of Cybernetics.” Lecture to ICESHS's 2006 The Global and the Local: The History of Science and the Cultural Integration of Europe, Cracow, Poland, September 6-9, 2006.

Tucker, Patrick. “Thank You Very Much, Mr. Roboto.” Futurist 45, no. 5 (2011): 24-28.

Turing, Alan. “Computing Machinery and Intelligence.” Mind 59 (1950): 433-460.

U.S. House of Representatives. Subcommittee on National Security and Foreign Affairs. Rise of the Drones: Unmanned Systems and the Future of War Hearing, 23 March 2010. Washington: Government Printing Office, 2010.

U.S. Senate. Foreign Affairs, Defense, and Trade Division. Military Transformation: Intelligence, Surveillance and Reconnaissance (S. Rpt. RL31425). Washington: The Library of Congress, 17 January 2003.

U.S. Senate. Foreign Affairs, Defense, and Trade Division. Unmanned Aerial Vehicles: Background and Issues for Congress Report (S. Rpt. RL31872). Washington: The Library of Congress, 25 April 2003.
U.S. Senate. Foreign Affairs, Defense, and Trade Division. Unmanned Aerial Vehicles: Background and Issues for Congress Report (S. Rpt. RL31872). Washington: The Library of Congress, 21 November 2005.

Van Cleef, Marge. “Drone Warfare = Terrorism.” Peace and Freedom 70, no. 1 (Spring 2010): 20.

Velichenko, V., and D. Pritykin. “Using Artificial Intelligence and Computer Technologies for Developing Treatment Programs for Complex Immune Diseases.” Journal of Mathematical Sciences 172, no. 5 (2011): 635-649.

Vinge, Vernor. “Singularity.” Lecture at the VISION-21 Symposium, Cleveland, OH, March 30-31, 1993.

Vogel, Ryan J. “Drone Warfare and the Law of Armed Conflict.” Denver Journal of International Law and Policy 39, no. 1 (Winter 2010): 101-138.

Von Drehle, David. “Meet Dr. Robot.” Time 176, no. 24 (2010): 44-50.

von Neumann, John. The Computer and the Brain. New Haven: Yale University Press, 1958.

Wiener, Norbert. “From the Archives.” Science, Technology, & Human Values 8, no. 3 (Summer 1983): 36-38.

Wiener, Norbert. The Human Use of Human Beings: Cybernetics and Society. New York: Avon Press, 1950.