MASOOD AHMAD BHAT
S.ID: S1400D9700032
Batch Code: B140045
NIIT, Residency Road, Srinagar
ARTIFICIAL INTELLIGENCE
CONTENTS
1) Intelligence
a) Introduction
i) Knowledge
ii) Learning
iii) Understanding
2) Artificial Intelligence
a) Introduction
b) Major Branches of AI
i) Robotics
ii) Vision Systems
iii) Natural Language Processing
iv) Learning Systems
v) Neural Networks
vi) Expert Systems
3) History of AI
a) Great Achievements
i) RoboCup
ii) Deep Blue
iii) DARPA Grand Challenge
4) Today’s AI Applications
a) Driver-Less Trains
b) Burglary Alarm Systems
c) Automatic Grading System in Education
INTELLIGENCE
Introduction
Despite a long history of research and debate, there is still no standard
definition of intelligence. This has led some to believe that intelligence may be
approximately described but cannot be fully defined. Some definitions of
intelligence from different sources are given below. As many dictionaries
source their definitions from other dictionaries, we have endeavored to always
list the original source.
1. “The ability to use memory, knowledge, experience, understanding,
reasoning, imagination and judgment in order to solve problems and adapt to
new situations.” All Words Dictionary, 2006
2. “The capacity to acquire and apply knowledge.” The American Heritage
Dictionary, fourth edition, 2000
3. “Individuals differ from one another in their ability to understand complex
ideas, to adapt effectively to the environment, to learn from experience, to
engage in various forms of reasoning, to overcome obstacles by taking
thought.” American Psychological Association
4. “The ability to learn, understand and make judgments or have opinions that
are based on reason” Cambridge Advance Learner’s Dictionary, 2006
5. “Intelligence is a very general mental capability that, among other things,
involves the ability to reason, plan, solve problems, think abstractly,
comprehend complex ideas, learn quickly and learn from experience.”
Common statement with 52 expert signatories
6. “The ability to learn facts and skills and apply them, especially when this
ability is highly developed.” Encarta World English Dictionary, 2006
7. “Ability to adapt effectively to the environment, either by making a change
in oneself or by changing the environment or finding a new one. Intelligence
is not a single mental process, but rather a combination of many mental
processes directed toward effective adaptation to the environment.”
Encyclopedia Britannica, 2006
8. “the general mental ability involved in calculating, reasoning, perceiving
relationships and analogies, learning quickly, storing and retrieving
information, using language fluently, classifying, generalizing, and adjusting
to new situations.” Columbia Encyclopedia, sixth edition, 2006
9. “Capacity for learning, reasoning, understanding, and similar forms of
mental activity; aptitude in grasping truths, relationships, facts, meanings, etc.”
Random House Unabridged Dictionary, 2006
10. “The ability to learn, understand, and think about things.” Longman
Dictionary or Contemporary English, 2006
11. “(1) The ability to learn or understand or to deal with new or trying
situations; the skilled use of reason. (2) The ability to apply knowledge to
manipulate one’s environment or to think abstractly as measured by objective
criteria (as tests).” Merriam-Webster Online Dictionary, 2006
12. “The ability to acquire and apply knowledge and skills.” Compact Oxford
English Dictionary, 2006
13. “. . . the ability to adapt to the environment.” World Book Encyclopedia,
2006
14. “Intelligence is a property of mind that encompasses many related mental
abilities, such as the capacities to reason, plan, solve problems, think abstractly,
comprehend ideas and language, and learn.” Wikipedia, 4 October, 2006
15. “Capacity of mind, especially to understand principles, truths, facts or
meanings, acquire knowledge, and apply it to practice; the ability to learn and
comprehend.” Wiktionary, 4 October, 2006
16. “The ability to learn and understand or to deal with problems.” Word
Central Student Dictionary, 2006
17. “The ability to comprehend; to understand and profit from experience.”
WordNet 2.1, 2006
18. “The capacity to learn, reason, and understand.” Wordsmyth Dictionary,
2006
Intelligence has been defined in many different ways, including knowledge,
understanding, and learning.
1. Knowledge: Knowledge is a familiarity, awareness or understanding of
someone or something, such as facts, information, descriptions, or skills, which
is acquired through experience or education by perceiving, discovering,
or learning. Knowledge can refer to a theoretical or practical understanding
of a subject. It can be implicit (as with practical skill or expertise) or explicit
(as with the theoretical understanding of a subject); it can be more or less
formal or systematic. In philosophy, the study of knowledge is
called epistemology; the philosopher Plato famously defined knowledge as
"justified true belief". However, no single agreed upon definition of knowledge
exists, though there are numerous theories to explain it.
Knowledge acquisition involves complex cognitive processes: perception,
communication, association and reasoning; while knowledge is also said to be
related to the capacity of acknowledgment in human beings.
The definition of knowledge is a matter of
ongoing debate among philosophers in the field of epistemology. The classical
definition, described but not ultimately endorsed by Plato, specifies that
a statement must meet three criteria in order to be considered knowledge: it
must be justified, true, and believed. Some claim that these conditions are not
sufficient, as Gettier case examples allegedly demonstrate. There are a number
of alternatives proposed, including Robert Nozick's arguments for a
requirement that knowledge 'tracks the truth' and Simon
Blackburn's additional requirement that we do not want to say that those who
meet any of these conditions 'through a defect, flaw, or failure' have
knowledge. Richard Kirkham suggests that our definition of knowledge
requires that the evidence for the belief necessitates its truth.
2. Understanding: Understanding is a psychological process related to an
abstract or physical object, such as a person, situation, or message, whereby
one is able to think about it and use concepts to deal adequately with that
object. Understanding is a relation between the knower and an object of
understanding. Understanding implies abilities and dispositions with respect to
an object of knowledge sufficient to support intelligent behavior. An
understanding is the limit of a conceptualization. To understand something is
to have conceptualized it to a given measure.
Examples
1. One understands the weather if one is able to predict and to give
an explanation of some of its features, etc.
2. A psychiatrist understands another person's anxieties if he/she knows
that person's anxieties, their causes, and can give useful advice on how
to cope with the anxiety.
3. A person understands a command if he/she knows who gave it, what is
expected by the issuer, and whether the command is legitimate, and
whether one understands the speaker.
4. One understands a reasoning, an argument, or a language if one can
consciously reproduce the information content conveyed by the
message.
5. One understands a mathematical concept if one can solve problems
using it, especially problems that are not similar to what one has seen
before.
3. Learning: Learning is acquiring new, or modifying and reinforcing,
existing knowledge, behaviors, skills, values, or preferences, and may involve
synthesizing different types of information. The ability to learn is possessed by
humans, animals and some machines. Progress over time tends to
follow learning curves. Learning is not compulsory; it is contextual. It does not
happen all at once, but builds upon and is shaped by what we already know.
To that end, learning may be viewed as a process, rather than a collection of
factual and procedural knowledge. Learning produces changes in the
organism and the changes produced are relatively permanent.
Human learning may occur as part of education, personal development,
schooling, or training. It may be goal-oriented and may be aided
by motivation. The study of how learning occurs is part
of neuropsychology, educational psychology, learning theory, and pedagogy.
Learning may occur as a result of habituation or classical conditioning, seen in
many animal species, or as a result of more complex activities such as play,
seen only in relatively intelligent animals. Learning may occur consciously or
without conscious awareness. Learning that an aversive event can't be avoided
nor escaped is called learned helplessness. There is evidence for human
behavioral learning prenatally, in which habituation has been observed as
early as 32 weeks into gestation, indicating that the central nervous system is
sufficiently developed and primed for learning and memory to occur very
early on in development.
Play has been approached by several theorists as the first form of learning.
Children experiment with the world, learn its rules, and learn to interact
through play. Lev Vygotsky argued that play is pivotal for children's
development, since children make meaning of their environment through it. By
some estimates, 85 percent of brain development occurs during the first five
years of a child's life.
ARTIFICIAL INTELLIGENCE
Introduction
The term artificial intelligence was coined by John McCarthy in 1955. He is
known as the father of artificial intelligence. AI is both the intelligence of
machines and the branch
of computer science which aims to create it through "the study and design
of intelligent agents" or "rational agents", where an intelligent agent is a
system that perceives its environment and takes actions which maximize its
chances of success. Among the traits that researchers hope machines will
exhibit are reasoning, knowledge, planning, learning, communication and the
ability to move and manipulate objects. In the field of artificial intelligence
there is no consensus on how closely the brain should be simulated.
Artificial intelligence (AI) is the intelligence exhibited by machines or
software, and the branch of computer science that develops machines and
software with human-like intelligence. Major AI researchers and textbooks
define the field as "the study and design of intelligent agents”, where
an intelligent agent is a system that perceives its environment and takes
actions that maximize its chances of success. John McCarthy, who coined the
term in 1955, defines it as "the science and engineering of making intelligent
machines".
AI research is highly technical and specialized, and is deeply divided into
subfields that often fail to communicate with each other. Some of the division
is due to social and cultural factors: subfields have grown up around
particular institutions and the work of individual researchers. AI research is
also divided by several technical issues. Some subfields focus on the solution of
specific problems. Others focus on one of several possible approaches or on
the use of a particular tool or towards the accomplishment of
particular applications.
The central goals of AI research include reasoning, knowledge, planning,
learning, natural language processing (communication), perception and the
ability to move and manipulate objects. General intelligence (or "strong AI") is
still among the field's long term goals. Currently popular approaches
include statistical methods, computational intelligence and traditional
symbolic AI. There are an enormous number of tools used in AI, including
versions of search and mathematical optimization, logic, methods based on
probability and economics, and many others.
The field was founded on the claim that a central property of humans,
intelligence—the sapience of Homo sapiens—can be sufficiently well described
to the extent that it can be simulated by a machine. This raises philosophical
issues about the nature of the mind and the ethics of creating artificial beings
endowed with human-like intelligence, issues which have been addressed by
myth, fiction and philosophy since antiquity. Artificial intelligence has been
the subject of tremendous optimism but has also suffered
stunning setbacks. Today it has become an essential part of the technology
industry and defines many challenging problems at the forefront of research
in computer science.
Major Branches
1. Robotics: Robotics is the branch of technology that deals with the design,
construction, operation, structural disposition, manufacture and application
of robots, as well as computer systems for their control, sensory feedback, and
information processing. These technologies deal with automated machines that
can take the place of humans in dangerous environments or manufacturing
processes, or resemble humans in appearance, behavior, and/or cognition.
Many of today's robots are inspired by nature, contributing to the field of bio-
inspired robotics.
The concept of creating machines that can operate autonomously dates back
to classical times, but research into the functionality and potential uses of
robots did not grow substantially until the 20th century. Throughout history,
robots have often been seen as mimicking human behavior, often managing
tasks in a similar fashion. Today robotics is a rapidly growing field; as
technological advances continue, the research, design, and building of new
robots serve various practical purposes, whether domestically, commercially,
or militarily. Many robots do jobs that are hazardous to people, such as
defusing bombs and mines or exploring shipwrecks.
The word robotics was derived from the word robot, which was introduced to
the public by Czech writer Karel Čapek in his play R.U.R. (Rossum's Universal
Robots), which was published in 1920. The word robot comes from the Slavic
word robota, which means labour. The play begins in a factory that makes
artificial people called robots, creatures who can be mistaken for humans –
similar to the modern ideas of androids. Karel Čapek himself did not coin the
word. He wrote a short letter in reference to an etymology in the Oxford
English Dictionary in which he named his brother Josef Čapek as its actual
originator.
History of Robotics
In 1927 the Maschinenmensch ("machine-human"), a gynoid humanoid
robot (also called "Parody", "Futura", "Robotrix", or the "Maria impersonator"),
became the first robot ever depicted on film; it was played by German
actress Brigitte Helm in Fritz Lang's film Metropolis.
In 1942 the science fiction writer Isaac Asimov formulated his Three Laws of
Robotics.
In 1948 Norbert Wiener formulated the principles of cybernetics, the basis of
practical robotics.
Fully autonomous robots only appeared in the second half of the 20th century.
The first digitally operated and programmable robot, the Unimate, was
installed in 1961 to lift hot pieces of metal from a die casting machine and
stack them. Commercial and industrial robots are widespread today and used
to perform jobs more cheaply, or more accurately and reliably, than humans.
They are also employed in jobs which are too dirty, dangerous, or dull to be
suitable for humans. Robots are widely used in manufacturing, assembly,
packing and packaging, transport, earth and space exploration, surgery,
weaponry, laboratory research, safety, and the mass production of consumer
and industrial goods.
2. Vision Systems: This branch of artificial intelligence is concerned with
computer processing of images from the real world. Machine vision (MV) is the
technology and methods used to provide imaging-based automatic inspection
and analysis for applications such as process control and robot guidance in
industry. The scope of MV is broad. MV is related to, though distinct from,
computer vision. The primary uses for machine vision are automatic inspection
and industrial robot guidance. Common machine vision applications include
quality assurance, sorting, material handling, robot guidance, and optical
gauging.
Machine vision methods are defined as both the process of defining and
creating an MV solution, and as the technical process that occurs during the
operation of the solution. Here the latter is addressed. As of 2006, there was
little standardization in the interfacing and configurations used in MV. This
includes user interfaces, interfaces for the integration of multi-component
systems and automated data interchange. Nonetheless, the first step in the MV
sequence of operation is acquisition of an image, typically using cameras,
lenses, and lighting that has been designed to provide the differentiation
required by subsequent processing. MV software packages then employ
various digital image processing techniques to extract the required
information, and often make decisions (such as pass/fail) based on the
extracted information. A common output from machine vision systems is
pass/fail decisions. These decisions may in turn trigger mechanisms that reject
failed items or sound an alarm. Other common outputs include object position
and orientation information from robot guidance systems. Additionally, output
types include numerical measurement data, data read from codes and
characters, displays of the process or results, stored images, alarms from
automated space monitoring MV systems, and process control signals. As
recently as 2006, one industry consultant reported that MV represented a $1.5
billion market in North America. However, the editor-in-chief of an MV trade
magazine asserted that "machine vision is not an industry per se" but rather
"the integration of technologies and products that provide services or
applications that benefit true industries such as automotive or consumer goods
manufacturing, agriculture, and defense."
As of 2006, experts estimated that MV had been employed in less than 20% of
the applications for which it is potentially useful.
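The MV sequence of operations described above (acquire an image, process it, emit a pass/fail decision) can be sketched in a few lines. This is a toy illustration, not a real MV product: the "camera" is a hard-coded grayscale frame, and the "inspection" is just a brightness threshold standing in for real image-processing algorithms.

```python
# Hypothetical sketch of the MV sequence described above: acquire an
# image, extract a measurement, and emit a pass/fail decision.

def acquire_image():
    # Placeholder for camera capture: a 4x4 grayscale frame (0-255).
    return [
        [200, 210, 205, 198],
        [190, 215, 220, 202],
        [185, 205, 210, 195],
        [ 40,  35,  50,  45],   # dark row: standing in for a defect
    ]

def extract_feature(image):
    # Image-processing step: mean brightness per row, a crude stand-in
    # for real inspection algorithms.
    return [sum(row) / len(row) for row in image]

def inspect(image, threshold=100):
    # Decision step: fail if any region is darker than the threshold.
    return all(f >= threshold for f in extract_feature(image))

frame = acquire_image()
verdict = "pass" if inspect(frame) else "fail"
print(verdict)  # a "fail" could trigger a reject mechanism or alarm
```

As the text notes, the pass/fail output would typically be wired to a reject mechanism, alarm, or process-control signal rather than printed.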
3. Natural Language Processing: Natural language processing (NLP) is a field
of artificial intelligence concerned with the interactions
between computers and human (natural) languages. As such, NLP is related to
the area of human–computer interaction. Many challenges in NLP
involve natural language understanding, that is, enabling computers to derive
meaning from human or natural language input, and others involve natural
language generation. Modern NLP algorithms are based on machine learning,
especially statistical machine learning. The paradigm of machine learning is
different from that of most prior attempts at language processing. Prior
implementations of language-processing tasks typically involved the direct
hand coding of large sets of rules. The machine-learning paradigm calls
instead for using general learning algorithms — often, although not always,
grounded in statistical inference — to automatically learn such rules through
the analysis of large corpora of typical real-world examples. A corpus (plural
"corpora") is a set of documents (or sometimes, individual sentences) that have
been hand-annotated with the correct values to be learned.
Many different classes of machine learning algorithms have been applied to
NLP tasks. These algorithms take as input a large set of "features" that are
generated from the input data. Some of the earliest-used algorithms, such
as decision trees, produced systems of hard if-then rules similar to the systems
of hand-written rules that were then common. Increasingly, however,
research has focused on statistical models, which make
soft, probabilistic decisions based on attaching real-valued weights to each
input feature. Such models have the advantage that they can express the
relative certainty of many different possible answers rather than only one,
producing more reliable results when such a model is included as a
component of a larger system.
Systems based on machine-learning algorithms have many advantages over
hand-produced rules:
• The learning procedures used during machine learning automatically
focus on the most common cases, whereas when writing rules by hand it is
often not obvious at all where the effort should be directed.
• Automatic learning procedures can make use of statistical
inference algorithms to produce models that are robust to unfamiliar input
(e.g. containing words or structures that have not been seen before) and to
erroneous input (e.g. with misspelled words or words accidentally omitted).
Generally, handling such input gracefully with hand-written rules — or
more generally, creating systems of hand-written rules that make soft
decisions — is extremely difficult, error-prone and time-consuming.
• Systems based on automatically learning the rules can be made more
accurate simply by supplying more input data. However, systems based on
hand-written rules can only be made more accurate by increasing the
complexity of the rules, which is a much more difficult task. In particular,
there is a limit to the complexity of systems based on hand-crafted rules,
beyond which the systems become more and more unmanageable.
However, creating more data to input to machine-learning systems simply
requires a corresponding increase in the number of man-hours worked,
generally without significant increases in the complexity of the annotation
process.
4. Learning Systems: Machine learning, a branch of artificial intelligence,
concerns the construction and study of systems that can learn from data. For
example, a machine learning system could be trained on email messages to
learn to distinguish between spam and non-spam messages. After learning, it
can then be used to classify new email messages into spam and non-spam
folders.
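The spam example above can be illustrated with a minimal sketch. This is a toy, not a real spam filter: it simply counts how often each word appears in spam versus non-spam training messages and scores new messages by those counts (the training messages are invented).

```python
# Toy illustration of the train-then-classify workflow described above.
from collections import Counter

def train(messages):
    # messages: list of (text, label) pairs, label in {"spam", "ham"}.
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in messages:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    # Score a new message by which class saw its words more often.
    words = text.lower().split()
    spam_score = sum(counts["spam"][w] for w in words)
    ham_score = sum(counts["ham"][w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

training_data = [
    ("win a free prize now", "spam"),
    ("free money click now", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch on monday?", "ham"),
]
model = train(training_data)
print(classify(model, "claim your free prize"))   # -> spam
print(classify(model, "monday meeting notes"))    # -> ham
```

Real systems replace the raw word counts with probabilistic models (e.g. naive Bayes) and far richer features, but the shape of the workflow — train on labeled data, then classify unseen messages — is the same.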
The core of machine learning deals with representation and generalization.
Representation of data instances and functions evaluated on these instances
are part of all machine learning systems. Generalization is the property that
the system will perform well on unseen data instances; the conditions under
which this can be guaranteed are a key object of study in the subfield
of computational learning theory.
There are a wide variety of machine learning tasks and successful
applications. Optical character recognition, in which printed characters are
recognized automatically based on previous examples, is a classic example of
machine learning.
Machine learning and data mining are commonly confused, as they often
employ the same methods and overlap significantly. They can be roughly
distinguished as follows:
• Machine learning focuses on prediction, based on known properties
learned from the training data.
• Data mining focuses on the discovery of (previously) unknown properties
in the data. This is the analysis step of Knowledge Discovery in Databases.
The two areas overlap in many ways: data mining uses many machine
learning methods, but often with a slightly different goal in mind. On the other
hand, machine learning also employs data mining methods as "unsupervised
learning" or as a preprocessing step to improve learner accuracy. Much of the
confusion between these two research communities (which do often have
separate conferences and separate journals, ECML PKDD being a major
exception) comes from the basic assumptions they work with: in machine
learning, performance is usually evaluated with respect to the ability
to reproduce known knowledge, while in Knowledge Discovery and Data
Mining (KDD) the key task is the discovery of previously unknown knowledge.
Evaluated with respect to known knowledge, an uninformed (unsupervised)
method will easily be outperformed by supervised methods, while in a typical
KDD task, supervised methods cannot be used due to the unavailability of
training data. Some machine learning systems attempt to eliminate the need for
human intuition in data analysis, while others adopt a collaborative approach
between human and machine. Human intuition cannot, however, be entirely
eliminated, since the system's designer must specify how the data is to be
represented and what mechanisms will be used to search for a
characterization of the data.
5. Neural Networks: A neural network is, in essence, an attempt to simulate
the brain. Neural network theory revolves around the idea that certain key
properties of biological neurons can be extracted and applied to simulations,
thus creating a simulated (and very much simplified) brain. An artificial neural
network (ANN) learning algorithm, usually called a "neural network" (NN), is a
learning algorithm inspired by the structure and functional aspects
of biological neural networks. Computations are structured in terms of an
interconnected group of artificial neurons, processing information using
a connectionist approach to computation. Modern neural networks are
non-linear statistical data modeling tools. They are usually used to model complex
relationships between inputs and outputs, to find patterns in data, or to
capture the statistical structure in an unknown joint probability
distribution between observed variables.
In computer science and related fields, artificial neural networks are
computational models inspired by animals' central nervous systems (in
particular the brain) that are capable of machine learning and pattern
recognition. They are usually presented as systems of interconnected "neurons"
that can compute values from inputs by feeding information through the
network.
For example, in a neural network for handwriting recognition, a set of input
neurons may be activated by the pixels of an input image representing a letter
or digit. The activations of these neurons are then passed on, weighted and
transformed by some function determined by the network's designer, to other
neurons, etc., until finally an output neuron is activated that determines which
character was read.
Like other machine learning methods, neural networks have been used to solve
a wide variety of tasks that are hard to solve using ordinary rule-based
programming, including computer vision and speech recognition.
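The feed-forward computation described above — input activations are weighted, summed, and passed through a non-linear function to produce the next layer's activations — can be sketched directly. The weights and layer sizes below are invented for illustration, not trained on any data.

```python
# Minimal sketch of a feed-forward pass through a small neural network.
import math

def sigmoid(x):
    # A common non-linear activation function, squashing to (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # One fully connected layer: each output neuron takes a weighted
    # sum of all inputs plus a bias, then applies the sigmoid.
    return [
        sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
        for ws, b in zip(weights, biases)
    ]

# A 3-input, 2-hidden-neuron, 1-output network with arbitrary weights.
hidden = layer([0.5, 0.9, 0.1],
               weights=[[0.4, -0.2, 0.6], [0.1, 0.8, -0.3]],
               biases=[0.0, 0.1])
output = layer(hidden, weights=[[1.2, -0.7]], biases=[0.05])
print(output)  # a single activation between 0 and 1
```

In the handwriting example from the text, the three inputs would instead be thousands of pixel intensities, and the weights would be learned from labeled examples rather than chosen by hand.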
6. Expert Systems: In artificial intelligence, an expert system is a computer
system that emulates the decision-making ability of a human expert. Expert
systems are designed to solve complex problems by reasoning about
knowledge, represented primarily as IF-THEN rules rather than through
conventional procedural code. The first expert systems were created in the
1970s and then proliferated in the 1980s. Expert systems were among the first
truly successful forms of AI software.
An expert system is divided into two sub-systems: the inference engine and
the knowledge base. The knowledge base represents facts and rules. The
inference engine applies the rules to the known facts to deduce new facts.
Inference engines can also include explanation and debugging capabilities.
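The two sub-systems described above — a knowledge base of facts and IF-THEN rules, and an inference engine that applies rules to known facts to deduce new facts — can be sketched as simple forward chaining. The medical-sounding facts and rules here are invented placeholders, not taken from any real system such as Mycin.

```python
# Hedged sketch of an expert system: knowledge base + inference engine.

# Knowledge base: known facts and IF-THEN rules.
facts = {"fever", "infection"}
rules = [
    # (IF all these facts hold, THEN conclude this fact)
    ({"fever", "infection"}, "bacterial_suspected"),
    ({"bacterial_suspected"}, "recommend_antibiotics"),
]

def infer(facts, rules):
    # Inference engine (forward chaining): keep applying rules
    # until no new facts can be deduced.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer(facts, rules))
# deduces bacterial_suspected, then recommend_antibiotics from it
```

Note how the rules fire in a chain: one rule's conclusion becomes another rule's condition, which is what lets a knowledge base of many small rules cover complex diagnoses.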
Expert systems were introduced by the Stanford Heuristic Programming
Project led by Edward Feigenbaum, who is sometimes referred to as the "father
of expert systems". The Stanford researchers tried to identify domains where
expertise was highly valued and complex, such as diagnosing infectious
diseases (Mycin) and identifying unknown organic molecules
(Dendral). Dendral was a tool to study hypothesis formation in the
identification of organic molecules. The general problem it solved, designing
a solution given a set of constraints, was one of the most successful areas for
early expert systems applied to business domains, such as salespeople
configuring DEC VAX computers and mortgage loan application development.
SMH.PAL is an expert system for the assessment of students with multiple
disabilities.
Mistral is an expert system for the monitoring of dam safety, developed in the
1990s by ISMES (Italy). It gets data from an automatic monitoring system and
performs a diagnosis of the state of the dam.
HISTORY OF AI
1950: Turing Test: In 1950 Alan Turing published a landmark paper in which
he speculated about the possibility of creating machines with true
intelligence. He noted that "intelligence" is difficult to define and devised his
famous Turing Test. If a machine could carry on a conversation (over
a teleprinter) that was indistinguishable from a conversation with a human
being, then the machine could be called "intelligent." This simplified version of
the problem allowed Turing to argue convincingly that a "thinking machine"
was at least plausible and the paper answered all the most common objections
to the proposition. The Turing Test was the first serious proposal in
the philosophy of artificial intelligence.
1956-1959: Golden Years:
The Dartmouth Conference of 1956 was organized by Marvin Minsky, John
McCarthy and two senior scientists: Claude Shannon and Nathaniel
Rochester of IBM. The proposal for the conference included this assertion:
"every aspect of learning or any other feature of intelligence can be so
precisely described that a machine can be made to simulate it". The
participants included Ray Solomonoff, Oliver Selfridge, Trenchard
More, Arthur Samuel, Allen Newell and Herbert A. Simon, all of whom would
create important programs during the first decades of AI research. At the
conference Newell and Simon debuted the "Logic Theorist" and McCarthy
persuaded the attendees to accept "Artificial Intelligence" as the name of the
field.
The 1956 Dartmouth conference was the moment that AI gained its
name, its mission, its first success and its major players, and is widely
considered the birth of AI. In 1958, John McCarthy (Massachusetts Institute of
Technology, or MIT) invented the Lisp programming language. In 1959, John
McCarthy and Marvin Minsky founded the MIT AI Lab.
1965: ELIZA: ELIZA is a computer program and an early example of
primitive natural language processing. ELIZA operated by processing users'
responses to scripts, the most famous of which was DOCTOR, a simulation of
a Rogerian psychotherapist. Using almost no information about human
thought or emotion, DOCTOR sometimes provided a startlingly human-like
interaction. ELIZA was written at MIT by Joseph Weizenbaum between 1964
and 1966.
When the "patient" exceeded the very small knowledge base, DOCTOR might
provide a generic response, for example, responding to "My head hurts" with
"Why do you say your head hurts?" A possible response to "My mother hates
me" would be "Who else in your family hates you?" ELIZA was implemented
using simple pattern matching techniques, but was taken seriously by several
of its users, even after Weizenbaum explained to them how it worked. It was
one of the first chatterbots in existence.
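The "simple pattern matching techniques" mentioned above can be sketched in a few lines. The patterns below are invented and far cruder than the real DOCTOR script, but they reproduce the two exchanges quoted in the text.

```python
# Illustrative sketch of ELIZA-style pattern matching: substitution
# rules turn a patient's statement into a reflective question.
import re

RULES = [
    (re.compile(r"my (.+) hurts", re.I), r"Why do you say your \1 hurts?"),
    (re.compile(r"my (\w+) hates me", re.I), r"Who else in your family hates you?"),
    (re.compile(r"i feel (.+)", re.I), r"Why do you feel \1?"),
]

def respond(statement):
    # Try each pattern in order; the first match builds the reply.
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return match.expand(template)
    return "Please go on."  # generic fallback when nothing matches

print(respond("My head hurts"))       # -> Why do you say your head hurts?
print(respond("My mother hates me"))  # -> Who else in your family hates you?
```

The fallback line is what makes such programs seem robust: when the "patient" exceeds the tiny rule base, a content-free prompt keeps the conversation going.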
1972: PROLOG: Prolog is a general purpose logic programming language
associated with artificial intelligence and computational linguistics.
Prolog has its roots in first-order logic, a formal logic, and unlike many
other programming languages, Prolog is declarative: the program logic is
expressed in terms of relations, represented as facts and rules. A computation
is initiated by running a query over these relations.
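The idea that a program is a set of facts and rules, and that computation is a query over those relations, can be roughly imitated in Python. The family names below are invented; real Prolog states the rule far more directly, e.g. `grandparent(X, Z) :- parent(X, Y), parent(Y, Z).`

```python
# Rough Python analogue of the Prolog style described above:
# facts are a relation, a rule derives a new relation, and a
# computation is a query over them.

# Facts: the parent relation as (parent, child) pairs.
parent = {("tom", "bob"), ("bob", "ann"), ("bob", "pat")}

def grandparent(x, z):
    # Rule: X is a grandparent of Z if X is a parent of some Y
    # and Y is a parent of Z.
    everyone = {p for pair in parent for p in pair}
    return any((x, y) in parent and (y, z) in parent for y in everyone)

# Queries over the relations:
print(grandparent("tom", "ann"))  # -> True
print(grandparent("ann", "tom"))  # -> False
```

Where this sketch only answers yes/no, a real Prolog query such as `?- grandparent(tom, Z).` would also enumerate every binding of `Z` that satisfies the rule.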
The language was first conceived by a group around Alain
Colmerauer in Marseille, France, in the early 1970s and the first Prolog system
was developed in 1972 by Colmerauer with Philippe Roussel.
Prolog was one of the first logic programming languages, and remains the
most popular among such languages today, with many free and commercial
implementations available. While initially aimed at natural language
processing, the language has since then stretched far into other areas
like theorem proving, expert systems, games, automated answering
systems, ontologies and sophisticated control systems. Modern Prolog
environments support creating graphical user interfaces, as well as
administrative and networked applications.
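The declarative style described above — facts, rules, and queries over relations — can be imitated in a short Python sketch. The family facts and the grandparent rule are standard textbook examples, not part of any particular Prolog system:

```python
# Facts: parent(tom, bob). parent(bob, ann). parent(bob, pat).
parent = {("tom", "bob"), ("bob", "ann"), ("bob", "pat")}

def grandparent():
    # Rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    # Derived by joining the parent relation with itself.
    return {(x, z) for (x, y1) in parent for (y2, z) in parent if y1 == y2}

# Query: ?- grandparent(tom, Z).  (who are Tom's grandchildren?)
print(sorted(z for (x, z) in grandparent() if x == "tom"))  # ['ann', 'pat']
```

The point of the sketch is that the program states *what* relations hold, and a computation is just a query filtered against the derived relation — the essence of the declarative style that Prolog pioneered.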
1974: MYCIN. MYCIN was an early expert system that used artificial
intelligence to identify bacteria causing severe infections, such as bacteremia
and meningitis, and to recommend antibiotics, with the dosage adjusted for
the patient's body weight — the name derived from the antibiotics themselves, as
many antibiotics have the suffix "-mycin". The MYCIN system was also used for
the diagnosis of blood-clotting diseases.
MYCIN was developed over five or six years in the early 1970s at Stanford
University. It was written in Lisp as the doctoral dissertation of Edward
Shortliffe under the direction of Bruce Buchanan, Stanley N. Cohen and others.
It arose in the laboratory that had created the earlier Dendral expert system.
MYCIN was never actually used in practice but research indicated that it
proposed an acceptable therapy in about 69% of cases, which was better than
the performance of infectious disease experts who were judged using the same
criteria.
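MYCIN reasoned with if-then rules whose conclusions carried certainty factors (CFs), combined so that independent evidence for the same conclusion reinforces it. The toy rules and numbers below are invented for illustration (the real system had several hundred rules); only the CF combination formula reflects MYCIN's actual scheme:

```python
# MYCIN-style certainty factors: two positive CFs for the same
# conclusion combine as cf = cf1 + cf2 * (1 - cf1), so evidence
# accumulates but never exceeds 1.0.
def combine(cf1: float, cf2: float) -> float:
    return cf1 + cf2 * (1 - cf1)

# Hypothetical observations about a culture (illustrative only).
evidence = {"gram_negative": True, "rod_shaped": True, "aerobic": True}

# Hypothetical rules: (conditions, conclusion, rule CF).
rules = [
    (("gram_negative", "rod_shaped"), "e_coli", 0.4),
    (("gram_negative", "rod_shaped", "aerobic"), "e_coli", 0.3),
]

cf = 0.0
for conditions, conclusion, rule_cf in rules:
    if conclusion == "e_coli" and all(evidence.get(c) for c in conditions):
        cf = combine(cf, rule_cf)

print(round(cf, 2))  # 0.58
```

Each firing rule contributes its CF, and 0.4 combined with 0.3 yields 0.4 + 0.3 × 0.6 = 0.58 — more confident than either rule alone.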
1988-93: AI Winter. In the history of artificial intelligence, an AI winter is a
period of reduced funding and interest in artificial intelligence research. The
term was coined by analogy to the idea of a nuclear winter. The field has
experienced several cycles of hype, followed by disappointment and criticism,
followed by funding cuts, followed by renewed interest years or decades later.
There were two major winters in 1974–80 and 1987–93 and several smaller
episodes, including:
 1966: The failure of machine translation,
 1970: The abandonment of connectionism,
 1971–75: DARPA's frustration with the Speech Understanding
Research program at Carnegie Mellon University,
 1973: The large decrease in AI research in the United Kingdom in response
to the Lighthill report,
 1973–74: DARPA's cutbacks to academic AI research in general,
 1987: The collapse of the Lisp machine market,
 1988: The cancellation of new spending on AI by the Strategic Computing
Initiative,
 1993: Expert systems slowly falling out of use,
 1990s: The quiet disappearance of the fifth-generation computer project's
original goals,
The term first appeared in 1984 as the topic of a public debate at the annual
meeting of AAAI (then called the "American Association for Artificial
Intelligence"). It is a chain reaction that begins with pessimism in the AI
community, followed by pessimism in the press, followed by a severe cutback
in funding, followed by the end of serious research. At the meeting, Roger
Schank and Marvin Minsky—two leading AI researchers who had survived the
"winter" of the 1970s—warned the business community that enthusiasm for AI
had spiraled out of control in the '80s and that disappointment would
certainly follow. Three years later, the billion-dollar AI industry began to
collapse.
Hype cycles are common in many emerging technologies, such as the railway
mania or the dot-com bubble. An AI winter is primarily a collapse in
the perception of AI by government bureaucrats and venture capitalists.
Despite the rise and fall of AI's reputation, it has continued to develop new and
successful technologies. AI researcher Rodney Brooks would complain in 2002
that "there's this stupid myth out there that AI has failed, but AI is around you
every second of the day." Ray Kurzweil agrees: "Many observers still think that
the AI winter was the end of the story and that nothing since has come of the
AI field. Yet today many thousands of AI applications are deeply embedded in
the infrastructure of every industry." He adds: "the AI winter is long since
over."
Great Achievements
1. RoboCup: RoboCup is an international robotics competition founded in
1997. The aim is to promote robotics and AI research by offering a publicly
appealing but formidable challenge. The name RoboCup is a contraction of the
competition's full name, "Robot Soccer World Cup", but there are many other
stages of the competition, such as "RoboCupRescue", "RoboCup@Home" and
"RoboCup Junior". In the U.S., RoboCup is not very big, with the national
competition being held in New Jersey every year, but in other countries it is very
popular. In 2013 the world competition was in the Netherlands; in 2014 it is
in Brazil.
The official goal of the project:
"By the middle of the 21st century, a team of
fully autonomous humanoid robot soccer players shall win
a soccer game, complying with the official rules of FIFA, against the
winner of the most recent World Cup. "
2. Deep Blue: Deep Blue was a chess-playing computer developed by IBM. On
May 11, 1997, the machine, with human intervention between games, won
the second six-game match against world champion Garry Kasparov by two
wins to one with three draws. Kasparov accused IBM of cheating and
demanded a rematch. IBM refused and retired Deep Blue. Kasparov had beaten
a previous version of Deep Blue in 1996.
The project was started as ChipTest at Carnegie Mellon University by Feng-
hsiung Hsu, followed by its successor, Deep Thought. After their graduation
from Carnegie Mellon, Hsu, Thomas Anantharaman, and Murray
Campbell from the Deep Thought team were hired by IBM Research to
continue their quest to build a chess machine that could defeat the world
champion. Hsu and Campbell joined IBM in autumn 1989, with
Anantharaman following later. Anantharaman subsequently left IBM for Wall
Street and Arthur Joseph Hoane joined the team to perform programming
tasks. Jerry Brody, a long-time employee of IBM Research, was recruited for
the team in 1990. The team was managed first by Randy Moulic, followed by
Chung-Jen (C J) Tan.
After Deep Thought's 1989 match against Kasparov, IBM held a contest to
rename the chess machine and it became "Deep Blue", a play on IBM's
nickname, "Big Blue". After a scaled down version of Deep Blue, Deep Blue Jr.,
played Grandmaster Joel Benjamin, Hsu and Campbell decided that Benjamin
was the expert they were looking for to develop Deep Blue's opening book, and
Benjamin was signed by IBM Research to assist with the preparations for Deep
Blue's matches against Garry Kasparov.
In 1995 "Deep Blue prototype" (actually Deep Thought II, renamed for PR
reasons) played in the 8th World Computer Chess Championship. Deep Blue
prototype played the computer program Wchess to a draw while Wchess was
running on a personal computer. In round 5 Deep Blue prototype had
the white pieces and lost to the computer program Fritz 3 in 39 moves while
Fritz was running on an Intel Pentium 90 MHz personal computer. At the end
of the championship, Deep Blue prototype was tied for second place with the
computer program Junior while Junior was running on a personal computer.
3. DARPA Grand Challenge: The DARPA Grand Challenge is a prize competition
for American autonomous vehicles, funded by the Defense Advanced Research
Projects Agency, the most prominent research organization of the United
States Department of Defense. Congress has authorized DARPA to award cash
prizes to further DARPA's mission to sponsor revolutionary, high-payoff
research that bridges the gap between fundamental discoveries and military
use. The initial DARPA Grand Challenge was created to spur the development
of technologies needed to create the first fully autonomous ground
vehicles capable of completing a substantial off-road course within a limited
time. The third event, the DARPA Urban Challenge extended the initial
Challenge to autonomous operation in a mock urban environment. The most
recent Challenge, the 2012 DARPA Robotics Challenge, focused on
autonomous emergency-maintenance robots.
Fully autonomous vehicles have been an international pursuit for many years,
from endeavors in Japan (starting in 1977), Germany (Ernst
Dickmanns and VaMP), Italy (the ARGO Project), the European Union
(EUREKA Prometheus Project), the United States of America, and other
countries.
The Grand Challenge was the first long distance competition for driverless
cars in the world; other research efforts in the field of driverless cars take a
more traditional commercial or academic approach. The U.S. Congress
authorized DARPA to offer prize money ($1 million) for the first Grand
Challenge to facilitate robotic development, with the ultimate goal of making
one-third of ground military forces autonomous by 2015. Following the 2004
event, Dr. Tony Tether, the director of DARPA, announced that the prize
money had been increased to $2 million for the next event, which was claimed
on October 9, 2005. The first, second and third places in the 2007 Urban
Challenge received $2 million, $1 million, and $500,000, respectively.
The competition was open to teams and organizations from around the world,
as long as there was at least one U.S. citizen on the roster. Teams have
participated from high schools, universities, businesses and other
organizations. More than 100 teams registered in the first year, bringing a
wide variety of technological skills to the race. In the second year, 195 teams
from 36 U.S. states and 4 foreign countries entered the race.
TODAY’S AI APPLICATIONS
1. Driver-Less Trains and Metros: Driverless metro lines are currently
operational in various cities, such as London, Barcelona and Dubai.
Advantages of driverless metros:
 Lower expenditure on staff (staff accounts for a significant part of the costs
of running a transport system). However, service and security personnel
are common in automated systems.
 Trains can be shorter and instead run more frequently without
increasing expenditure for staff.
 Service frequency can easily be adjusted to meet sudden unexpected
demands.
 Despite common psychological concerns, driverless metros are safer
than traditional ones; none of them has ever had a serious accident.
 Intruder detection systems can be more effective than humans in
stopping trains if someone is on the tracks.
 Financial savings in both energy and wear-and-tear costs because trains
are driven to an optimum specification.
 Train turnover time at terminals can be extremely short (train goes into
the holding track and returns immediately), reducing the number of
train sets needed for operation.
2. Burglary Alarm System: A burglar alarm is a system designed to detect
intrusion – unauthorized entry – into a building or area. Security alarms are used
in residential, commercial, industrial, and military properties for protection
against burglary (theft) or property damage, as well as personal protection
against intruders. Car alarms likewise protect vehicles and their
contents. Prisons also use security systems for control of inmates.
Some alarm systems serve a single purpose of burglary protection;
combination systems provide both fire and intrusion protection. Intrusion
alarm systems may also be combined with closed-circuit
television surveillance systems to automatically record the activities of
intruders, and may interface to access control systems for electrically locked
doors. Systems range from small, self-contained noisemakers, to complicated,
multi-area systems with computer monitoring and control.
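At its core, even a sophisticated alarm system is a small state machine: disarmed, armed, and triggered. The sketch below is a minimal illustration of that control logic; the class and sensor names are hypothetical, not taken from any real alarm product:

```python
# A minimal alarm controller as a state machine:
# disarmed -> armed -> triggered.
class AlarmSystem:
    def __init__(self):
        self.state = "disarmed"

    def arm(self):
        if self.state == "disarmed":
            self.state = "armed"

    def disarm(self):
        # Disarming also silences a triggered alarm.
        self.state = "disarmed"

    def sensor_event(self, sensor: str):
        # Any sensor trip (door contact, motion detector, glass-break
        # sensor) while armed triggers the alarm; while disarmed it is
        # ignored.
        if self.state == "armed":
            self.state = "triggered"

alarm = AlarmSystem()
alarm.arm()
alarm.sensor_event("door_contact")
print(alarm.state)  # triggered
```

A multi-area system with CCTV and access-control integration is, in essence, many such state machines coordinated by a central monitor.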
3. Automatic Essay Scoring in Education: Automated essay scoring (AES) is
the use of specialized computer programs to assign grades to essays written in
an educational setting. It is a method of educational assessment and an
application of natural language processing. Its objective is to classify a large
set of textual entities into a small number of discrete categories, corresponding
to the possible grades—for example, the numbers 1 to 6. Therefore, it can be
considered a problem of statistical classification.
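A deliberately tiny sketch of AES as statistical classification: essays with known grades define word-count profiles, and a new essay receives the grade whose profile it most resembles. The two-essay "training set" and the bag-of-words similarity are invented for illustration; real systems use far richer features and much more data:

```python
from collections import Counter
import math

def bow(text: str) -> Counter:
    # Bag-of-words: a vector of word counts, ignoring order.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical graded examples, one per grade category.
training = {
    6: "a nuanced thesis supported by varied evidence and precise diction",
    1: "short vague sentences with little evidence or structure",
}

def grade(essay: str) -> int:
    # Classify: pick the grade whose example the essay is closest to.
    vec = bow(essay)
    return max(training, key=lambda g: cosine(vec, bow(training[g])))

print(grade("the thesis is supported by precise evidence"))  # 6
```

However crude, the sketch shows the structural claim of the paragraph above: grading becomes a classification problem, mapping a textual entity to one of a small number of discrete categories.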
Several factors have contributed to a growing interest in AES. Among them are
cost, accountability, standards, and technology. Rising education costs have led
to pressure to hold the educational system accountable for results by imposing
standards. The advance of information technology promises to measure
educational achievement at reduced cost.
The use of AES for high-stakes testing in education has generated significant
backlash, with opponents pointing to research that computers cannot yet
grade writing accurately and arguing that their use for such purposes
promotes teaching writing in reductive ways (i.e. teaching to the test).
Submitted to :- NIIT Residency Road Srinagar
Submitted by:-Masood Ahmad Bhat
Student ID:-S1400D9700032
Batch Code:-B140045
Sig. Of HOC NIIT Sig. Of Concerned Faculty.
----------- -------------
Contenu connexe

Tendances

The nine types of intelligence
The nine types of intelligenceThe nine types of intelligence
The nine types of intelligence
Yvon Camacho
 
Intelligence
IntelligenceIntelligence
Intelligence
drburwell
 

Tendances (19)

Intelligence in physchology
Intelligence in physchologyIntelligence in physchology
Intelligence in physchology
 
Sullivan theory
Sullivan theorySullivan theory
Sullivan theory
 
Guilford's structure of intellect model
Guilford's structure of intellect modelGuilford's structure of intellect model
Guilford's structure of intellect model
 
Berger Ls 7e Ch 21
Berger Ls 7e  Ch 21Berger Ls 7e  Ch 21
Berger Ls 7e Ch 21
 
Major theories of intelligence
Major theories of intelligenceMajor theories of intelligence
Major theories of intelligence
 
The nine types of intelligence
The nine types of intelligenceThe nine types of intelligence
The nine types of intelligence
 
Intelligence
IntelligenceIntelligence
Intelligence
 
M itheory
M itheoryM itheory
M itheory
 
KNOWLEDGE
KNOWLEDGEKNOWLEDGE
KNOWLEDGE
 
Intelligence By sameena latheef
Intelligence   By sameena latheefIntelligence   By sameena latheef
Intelligence By sameena latheef
 
Intelligence
IntelligenceIntelligence
Intelligence
 
Intellgence
IntellgenceIntellgence
Intellgence
 
The nine types of intelligence
The nine types of intelligenceThe nine types of intelligence
The nine types of intelligence
 
Foundations of Knowledge
Foundations of KnowledgeFoundations of Knowledge
Foundations of Knowledge
 
Unit 3 intelligence
Unit 3 intelligenceUnit 3 intelligence
Unit 3 intelligence
 
intelligece
intelligeceintelligece
intelligece
 
chapter13
chapter13chapter13
chapter13
 
Intelligence and Achievement
Intelligence and AchievementIntelligence and Achievement
Intelligence and Achievement
 
Howard Gardner- Multiple intelligence presentation
Howard Gardner- Multiple intelligence presentationHoward Gardner- Multiple intelligence presentation
Howard Gardner- Multiple intelligence presentation
 

Similaire à Artificial Intelligence

Unit-I Epistemological Basis of Knowledge and Education
Unit-I Epistemological Basis of Knowledge and EducationUnit-I Epistemological Basis of Knowledge and Education
Unit-I Epistemological Basis of Knowledge and Education
DrGavisiddappa Angadi
 
Theories of Learning
Theories of LearningTheories of Learning
Theories of Learning
Ledor Nalecne
 
Intelligence Testing and Discrimination Assignment 3
Intelligence Testing and Discrimination Assignment 3Intelligence Testing and Discrimination Assignment 3
Intelligence Testing and Discrimination Assignment 3
Julia Hanschell
 
The Multi Store Model Of Memory And Research Into...
The Multi Store Model Of Memory And Research Into...The Multi Store Model Of Memory And Research Into...
The Multi Store Model Of Memory And Research Into...
Lindsey Campbell
 
Information processing theory abd
Information processing theory abdInformation processing theory abd
Information processing theory abd
Abdullah Mubasher
 

Similaire à Artificial Intelligence (20)

aiou code 837
 aiou code 837  aiou code 837
aiou code 837
 
Unit-I Epistemological Basis of Knowledge and Education
Unit-I Epistemological Basis of Knowledge and EducationUnit-I Epistemological Basis of Knowledge and Education
Unit-I Epistemological Basis of Knowledge and Education
 
Theories of learning
Theories of learningTheories of learning
Theories of learning
 
Theories of Learning
Theories of LearningTheories of Learning
Theories of Learning
 
12 brainmind-principles-expanded
12 brainmind-principles-expanded12 brainmind-principles-expanded
12 brainmind-principles-expanded
 
hhh.pdf
hhh.pdfhhh.pdf
hhh.pdf
 
Theories of Learning
Theories of LearningTheories of Learning
Theories of Learning
 
WEEK 7 ULOa.docx
WEEK 7 ULOa.docxWEEK 7 ULOa.docx
WEEK 7 ULOa.docx
 
Intelligence
IntelligenceIntelligence
Intelligence
 
Intelligence Testing and Discrimination Assignment 3
Intelligence Testing and Discrimination Assignment 3Intelligence Testing and Discrimination Assignment 3
Intelligence Testing and Discrimination Assignment 3
 
lec 2 beh.pptx
lec 2 beh.pptxlec 2 beh.pptx
lec 2 beh.pptx
 
Cognitive theories
Cognitive theories Cognitive theories
Cognitive theories
 
Essay On Standardized Test
Essay On Standardized TestEssay On Standardized Test
Essay On Standardized Test
 
837-1.docx
837-1.docx837-1.docx
837-1.docx
 
837-12.docx
837-12.docx837-12.docx
837-12.docx
 
The Multi Store Model Of Memory And Research Into...
The Multi Store Model Of Memory And Research Into...The Multi Store Model Of Memory And Research Into...
The Multi Store Model Of Memory And Research Into...
 
Information processing theory abd
Information processing theory abdInformation processing theory abd
Information processing theory abd
 
8609 day 3.pptx
8609 day 3.pptx8609 day 3.pptx
8609 day 3.pptx
 
Theory of Multiple Intelligences
Theory of Multiple Intelligences Theory of Multiple Intelligences
Theory of Multiple Intelligences
 
Jerome s. bruner
Jerome s. brunerJerome s. bruner
Jerome s. bruner
 

Dernier

Call Girls in South Ex (delhi) call me [🔝9953056974🔝] escort service 24X7
Call Girls in South Ex (delhi) call me [🔝9953056974🔝] escort service 24X7Call Girls in South Ex (delhi) call me [🔝9953056974🔝] escort service 24X7
Call Girls in South Ex (delhi) call me [🔝9953056974🔝] escort service 24X7
9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwait
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills KuwaitKuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwait
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwait
jaanualu31
 
Hospital management system project report.pdf
Hospital management system project report.pdfHospital management system project report.pdf
Hospital management system project report.pdf
Kamal Acharya
 
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
ssuser89054b
 

Dernier (20)

Online electricity billing project report..pdf
Online electricity billing project report..pdfOnline electricity billing project report..pdf
Online electricity billing project report..pdf
 
Thermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VThermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - V
 
Unleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leapUnleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leap
 
Block diagram reduction techniques in control systems.ppt
Block diagram reduction techniques in control systems.pptBlock diagram reduction techniques in control systems.ppt
Block diagram reduction techniques in control systems.ppt
 
Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . ppt
 
Wadi Rum luxhotel lodge Analysis case study.pptx
Wadi Rum luxhotel lodge Analysis case study.pptxWadi Rum luxhotel lodge Analysis case study.pptx
Wadi Rum luxhotel lodge Analysis case study.pptx
 
Call Girls in South Ex (delhi) call me [🔝9953056974🔝] escort service 24X7
Call Girls in South Ex (delhi) call me [🔝9953056974🔝] escort service 24X7Call Girls in South Ex (delhi) call me [🔝9953056974🔝] escort service 24X7
Call Girls in South Ex (delhi) call me [🔝9953056974🔝] escort service 24X7
 
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwait
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills KuwaitKuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwait
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwait
 
Introduction to Serverless with AWS Lambda
Introduction to Serverless with AWS LambdaIntroduction to Serverless with AWS Lambda
Introduction to Serverless with AWS Lambda
 
DC MACHINE-Motoring and generation, Armature circuit equation
DC MACHINE-Motoring and generation, Armature circuit equationDC MACHINE-Motoring and generation, Armature circuit equation
DC MACHINE-Motoring and generation, Armature circuit equation
 
Moment Distribution Method For Btech Civil
Moment Distribution Method For Btech CivilMoment Distribution Method For Btech Civil
Moment Distribution Method For Btech Civil
 
Hospital management system project report.pdf
Hospital management system project report.pdfHospital management system project report.pdf
Hospital management system project report.pdf
 
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
 
Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...
Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...
Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...
 
Work-Permit-Receiver-in-Saudi-Aramco.pptx
Work-Permit-Receiver-in-Saudi-Aramco.pptxWork-Permit-Receiver-in-Saudi-Aramco.pptx
Work-Permit-Receiver-in-Saudi-Aramco.pptx
 
Employee leave management system project.
Employee leave management system project.Employee leave management system project.
Employee leave management system project.
 
A CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptx
A CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptxA CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptx
A CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptx
 
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
 
FEA Based Level 3 Assessment of Deformed Tanks with Fluid Induced Loads
FEA Based Level 3 Assessment of Deformed Tanks with Fluid Induced LoadsFEA Based Level 3 Assessment of Deformed Tanks with Fluid Induced Loads
FEA Based Level 3 Assessment of Deformed Tanks with Fluid Induced Loads
 
Double Revolving field theory-how the rotor develops torque
Double Revolving field theory-how the rotor develops torqueDouble Revolving field theory-how the rotor develops torque
Double Revolving field theory-how the rotor develops torque
 

Artificial Intelligence

  • 1. MASOOD AHMAD BHAT S.ID:-S1400D9700032 Batch Code:-B140045 NIITRESIDENCY ROAD SRINAGAR ARTIFICIAL INTELLIGENCE
  • 2. ARTIFICIAL INTELLIGENCE 1 CONTENTS 1) Intelligence a) Introduction i) Knowledge ii) Learning iii) Understanding 2) Artificial Intelligence a) Introduction b) Major Branches of AI i) Robotics ii) Vision Systems iii) Natural Language Processing iv) Learning Systems v) Neural Networks vi) Expert Systems 3) History of AI a) Great Achievements i) Robocop ii) Deep Blue iii) DARPA Grand Challenge 4) Today’s AI Applications a) Driver-Less Trains b) Burglary Alarm Systems c) Automatic Grading System in Education
  • 3. ARTIFICIAL INTELLIGENCE 2 INTELLIGENCE Introduction Despite a long history of research and debate, there is still no standard definitionof intelligence. This has led some to believe that intelligence may be approximatelydescribed, but cannot be fully defined. Some definitions of Intelligence from different sources are: As many dictionaries source their definitions from other dictionaries, we have endeavored to always list theoriginal source. 1.“The ability to use memory, knowledge, experience, understanding, reasoning, imagination and judgment in order to solve problems and adapt to new situations.” All Words Dictionary, 2006 2.“The capacity to acquire and apply knowledge.” The American Heritage Dictionary, fourth edition, 2000 3. “Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought.” American Psychological Association 4. “The ability to learn, understand and make judgments or have opinions that are based on reason” Cambridge Advance Learner’s Dictionary, 2006 5. “Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience.” Common statement with 52 expert signatories 6. “The ability to learn facts and skills and apply them, especially when this ability is highly developed.” Encarta World English Dictionary, 2006 7. “Ability to adapt effectively to the environment, either by making a change in oneself or by changing the environment or finding a new one intelligence
  • 4. ARTIFICIAL INTELLIGENCE 3 is not a single mental process, but rather a combination of many mental processes directed toward effective adaptation to the environment.” Encyclopedia Britannica, 2006 8. “the general mental ability involved in calculating, reasoning, perceiving relationships and analogies, learning quickly, storing and retrieving information, using language fluently, classifying, generalizing, and adjusting to new situations.” Columbia Encyclopedia, sixth edition, 2006 9. “Capacity for learning, reasoning, understanding, and similar forms of mental activity; aptitude in grasping truths, relationships, facts, meanings, etc.” Random House Unabridged Dictionary, 2006 10. “The ability to learn, understand, and think about things.” Longman Dictionary or Contemporary English, 2006 11. “ The ability to learn or understand or to deal with new or trying situations the skilled use of reason (2) : the ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria (as tests)” Merriam-Webster Online Dictionary, 2006 12. “The ability to acquire and apply knowledge and skills.” Compact Oxford English Dictionary, 2006 13. “. . . the ability to adapt to the environment.” World Book Encyclopedia, 2006 14. “Intelligence is a property of mind that encompasses many related mental abilities, such as the capacities to reason, plan, solve problems, think abstractly, comprehend ideas and language, and learn.” Wikipedia, 4 October, 2006 15. “Capacity of mind, especially to understand principles, truths, facts or meanings, acquire knowledge, and apply it to practice; the ability to learn and comprehend.” Wiktionary, 4 October, 2006 16. “The ability to learn and understand or to deal with problems.” Word Central Student Dictionary, 2006 17. “The ability to comprehend; to understand and profit from experience.” Word net2.1, 2006 18. “The capacity to learn, reason, and understand.” Words myth Dictionary, 2006
  • 5. ARTIFICIAL INTELLIGENCE 4 Intelligence has been defined in many different ways including Knowledge, Learning, and Understanding. 1.Knowledge:Knowledge is a familiarity, awareness or understanding of someone or something, such as facts, information, descriptions, or skills, which is acquired through experience or education byperceiving, discovering, or learning. Knowledge can refer to a theoretical or practical understanding of a subject. It can be implicit (as with practical skill or expertise) or explicit (as with the theoretical understanding of a subject); it can be more or less formal or systematic. In philosophy, the study of knowledge is called epistemology; the philosopher Plato famously defined knowledge as "justified true belief". However, no single agreed upon definition of knowledge exists, though there are numerous theories to explain it. Knowledge acquisition involves complex cognitive processes: perception, communication, association and reasoning; while knowledge is also said to be related to the capacity of acknowledgment in human beings. The definition of knowledge is a matter of ongoing debate among philosophers in the field of epistemology. The classical definition, described but not ultimately endorsed by Plato,specifies that a statement must meet three criteria in order to be considered knowledge: it must be justified, true, and believed. Some claim that these conditions are not sufficient, as Gettier case examples allegedly demonstrate. There are a number of alternatives proposed, includingRobert Nozick's arguments for a requirement that knowledge 'tracks the truth' and Simon Blackburn's additional requirement that we do not want to say that those who meet any of these conditions 'through a defect, flaw, or failure' have knowledge. Richard Kirkham suggests that our definition of knowledge requires that the evidence for the belief necessitates its truth. 
2.Understanding:Understanding is a psychological process related to an abstract or physical object, such as a person, situation, or message whereby one is able to think about it and use concepts to deal adequately with that
  • 6. ARTIFICIAL INTELLIGENCE 5 object.Understanding is a relation between the knower and an object of understanding. Understanding implies abilities and dispositions with respect to an object of knowledge sufficient to support intelligent behavior. An understanding is the limit of a conceptualization. To understand something is to have conceptualized it to a given measure. Examples 1. One understands the weather if one is able to predict and to give an explanation of some of its features, etc. 2. A psychiatrist understands another person's anxieties if he/she knows that person's anxieties, their causes, and can give useful advice on how to cope with the anxiety. 3. A person understands a command if he/she knows who gave it, what is expected by the issuer, and whether the command is legitimate, and whether one understands the speaker. 4. One understands a reasoning, an argument, or a language if one can consciously reproduce the information content conveyed by the message. 5. One understands a mathematical concept if one can solve problems using it, especially problems that are not similar to what one has seen before. 3.Learning:Learning is acquiring new, or modifying and reinforcing, existing knowledge, behaviors, skills, values, or preferencesand may involve synthesizing different types of information.The ability to learn is possessed by humans, animals and some machines. Progress over time tends to follow learning curves. Learning is not compulsory; it is contextual. It does not happen all at once, but builds upon and is shaped by what we already know. To that end, learning may be viewed as a process, rather than a collection of
  • 7. ARTIFICIAL INTELLIGENCE 6 factual and procedural knowledge. Learning produces changes in the organism and the changes produced are relatively permanent. Human learning may occur as part of education, personal development, schooling, or training. It may be goal-oriented and may be aided by motivation. The study of how learning occurs is part of neuropsychology, educational psychology, learning theory, and pedagogy. Learning may occur as a result of habituation or classical conditioning, seen in many animal species, or as a result of more complex activities such as play, seen only in relatively intelligent animals. Learning may occur consciously or without conscious awareness. Learning that an aversive event can't be avoided nor escaped is called learned helplessness. There is evidence for human behavioral learning prenatally, in which habituation has been observed as early as 32 weeks into gestation, indicating that the central nervous system is sufficiently developed and primed for learning and memory to occur very early on in development. Play has been approached by several theorists as the first form of learning. Children experiment with the world, learn the rules, and learn to interact through play. Lev Vygotsky agrees that play is pivotal for children's development, since they make meaning of their environment through play. 85 percent of brain development occurs during the first five years of a child's life. The context of conversation based on moral reasoning offers some proper observations on the responsibilities of parents. ARTIFICIAL INTELLIGENCE Introduction Artificial term is given by John McCarthy in 1950.He is known as the Father of Artificial intelligence.AI is both the intelligence of machines and the branch
of computer science which aims to create it through "the study and design of intelligent agents" or "rational agents", where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. Among the traits that researchers hope machines will exhibit are reasoning, knowledge, planning, learning, communication, and the ability to move and manipulate objects. In the field of artificial intelligence there is no consensus on how closely the brain should be simulated.

Artificial intelligence (AI) is the intelligence exhibited by machines or software, and the branch of computer science that develops machines and software with human-like intelligence. John McCarthy defines it as "the science and engineering of making intelligent machines".

AI research is highly technical and specialized, and is deeply divided into subfields that often fail to communicate with each other. Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. AI research is also divided by several technical issues. Some subfields focus on the solution of specific problems; others focus on one of several possible approaches, on the use of a particular tool, or on the accomplishment of particular applications. The central goals of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception, and the ability to move and manipulate objects. General intelligence (or "strong AI") is still among the field's long-term goals.
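The agent definition above can be sketched as a simple perceive-decide-act loop. The following is a minimal illustration in Python; the thermostat scenario, function names, and thresholds are invented for this example, not taken from any AI library:

```python
# A minimal reflex agent: it perceives its environment (a temperature
# reading) and chooses the action expected to move it toward its goal.

def perceive(environment):
    """Read the current state of the environment."""
    return environment["temperature"]

def choose_action(temperature, target=21.0):
    """Map a percept to an action that moves toward the target."""
    if temperature < target - 1.0:
        return "heat"
    if temperature > target + 1.0:
        return "cool"
    return "idle"

def act(environment, action):
    """Apply the chosen action back to the environment."""
    if action == "heat":
        environment["temperature"] += 0.5
    elif action == "cool":
        environment["temperature"] -= 0.5

# Run ten perceive-decide-act cycles.
env = {"temperature": 18.0}
for _ in range(10):
    act(env, choose_action(perceive(env)))

print(env["temperature"])  # 20.0 -- settled just inside the target band
```

Even in this toy form, the loop shows the essential shape of an agent: success is measured only through the environment the agent perceives and acts upon.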
Currently popular approaches include statistical methods, computational intelligence, and traditional symbolic AI. An enormous number of tools are used in AI, including
versions of search and mathematical optimization, logic, methods based on probability and economics, and many others.

The field was founded on the claim that a central property of humans, intelligence (the sapience of Homo sapiens), can be described precisely enough to be simulated by a machine. This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been addressed by myth, fiction, and philosophy since antiquity. Artificial intelligence has been the subject of tremendous optimism but has also suffered stunning setbacks. Today it has become an essential part of the technology industry and defines many challenging problems at the forefront of research in computer science.

Major Branches

1.Robotics: Robotics is the branch of technology that deals with the design, construction, operation, structural disposition, manufacture, and application of robots, as well as the computer systems for their control, sensory feedback, and information processing. These technologies deal with automated machines that can take the place of humans in dangerous environments or manufacturing processes, or that resemble humans in appearance, behavior, and/or cognition. Many of today's robots are inspired by nature, contributing to the field of bio-inspired robotics.

The concept of creating machines that can operate autonomously dates back to classical times, but research into the functionality and potential uses of robots did not grow substantially until the 20th century. Throughout history, robotics has often been seen as mimicking human behavior, and often managing tasks in a similar fashion. Today robotics is a rapidly growing field: as technological advances continue, research into designing and building new robots serves various practical purposes, whether domestically, commercially,
or militarily. Many robots do jobs that are hazardous to people, such as defusing bombs and mines or exploring shipwrecks.

The word robotics was derived from the word robot, which was introduced to the public by the Czech writer Karel Čapek in his play R.U.R. (Rossum's Universal Robots), published in 1920. The word robot comes from the Slavic word robota, which means labour. The play begins in a factory that makes artificial people called robots, creatures who can be mistaken for humans, similar to the modern idea of androids. Karel Čapek himself did not coin the word: in a short letter referring to an etymology in the Oxford English Dictionary, he named his brother Josef Čapek as its actual originator.

History of Robotics

In 1927 the Maschinenmensch ("machine-human"), a gynoid humanoid robot (also called "Parody", "Futura", "Robotrix", or the "Maria impersonator"), became the first robot ever depicted on film; it was played by the German actress Brigitte Helm in Fritz Lang's film Metropolis. In 1942 the science fiction writer Isaac Asimov formulated his Three Laws of Robotics, and in 1948 Norbert Wiener formulated the principles of cybernetics, the basis of practical robotics.

Fully autonomous robots appeared only in the second half of the 20th century. The first digitally operated and programmable robot, the Unimate, was installed in 1961 to lift hot pieces of metal from a die casting machine and stack them. Commercial and industrial robots are widespread today and are used to perform jobs more cheaply, or more accurately and reliably, than humans. They are also employed in jobs which are too dirty, dangerous, or dull to be suitable for humans. Robots are widely used in manufacturing, assembly, packing and packaging, transport, earth and space exploration, surgery,
weaponry, laboratory research, safety, and the mass production of consumer and industrial goods.

2.Vision Systems: This branch of artificial intelligence is concerned with computer processing of images from the real world. Machine vision (MV) is the technology and methods used to provide imaging-based automatic inspection and analysis for applications such as automatic inspection, process control, and robot guidance in industry. The scope of MV is broad. MV is related to, though distinct from, computer vision. The primary uses for machine vision are automatic inspection and industrial robot guidance. Common machine vision applications include quality assurance, sorting, material handling, robot guidance, and optical gauging.

Machine vision methods are defined both as the process of defining and creating an MV solution and as the technical process that occurs during the operation of the solution; here the latter is addressed. As of 2006 there was little standardization in the interfacing and configurations used in MV, including user interfaces, interfaces for the integration of multi-component systems, and automated data interchange. Nonetheless, the first step in the MV sequence of operation is acquisition of an image, typically using cameras, lenses, and lighting designed to provide the differentiation required by subsequent processing. MV software packages then employ various digital image processing techniques to extract the required information, and often make decisions (such as pass/fail) based on the extracted information.

A common output from machine vision systems is pass/fail decisions. These decisions may in turn trigger mechanisms that reject failed items or sound an alarm. Other common outputs include object position and orientation information from robot guidance systems.
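The extract-then-decide step described above can be illustrated with a deliberately simplified sketch. The "image" below is just a list of grey-level values, and the defect rule and thresholds are invented for this example; a real MV package would apply far more sophisticated image processing:

```python
# Toy machine-vision inspection: threshold a grey-scale "image" and
# decide pass/fail from the number of defect (dark) pixels found.

def inspect(image, threshold=50, max_defects=3):
    """Count pixels darker than `threshold`; fail the part if there
    are more than `max_defects` of them."""
    defects = sum(1 for row in image for pixel in row if pixel < threshold)
    return ("pass" if defects <= max_defects else "fail"), defects

# A 4x4 grey-scale image: mostly bright pixels, a dark scratch inside.
part = [
    [200, 198, 201, 199],
    [197, 30, 25, 202],
    [203, 28, 22, 200],
    [199, 201, 198, 204],
]

result, count = inspect(part)
print(result, count)  # fail 4 -- four pixels fell below the threshold
```

The pass/fail value returned here is exactly the kind of output that, in a real system, would trigger a reject mechanism or an alarm.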
Additionally, output types include numerical measurement data, data read from codes and characters, displays of the process or results, stored images, alarms from automated space monitoring MV systems, and process control signals.As recently as 2006, one industry consultant reported that MV represented a $1.5 billion market in North America. However, the editor-in-chief of an MV trade magazine asserted that "machine vision is not an industry per se" but rather
"the integration of technologies and products that provide services or applications that benefit true industries such as automotive or consumer goods manufacturing, agriculture, and defense." As of 2006, experts estimated that MV had been employed in less than 20% of the applications for which it is potentially useful.

3.Natural Language Processing: Natural language processing (NLP) is a field of artificial intelligence concerned with the interactions between computers and human (natural) languages. As such, NLP is related to the area of human–computer interaction. Many challenges in NLP involve natural language understanding, that is, enabling computers to derive meaning from human or natural language input; others involve natural language generation.

Modern NLP algorithms are based on machine learning, especially statistical machine learning. The machine-learning paradigm differs from that of most prior attempts at language processing: earlier implementations of language-processing tasks typically involved the direct hand-coding of large sets of rules, whereas machine learning calls for using general learning algorithms, often (though not always) grounded in statistical inference, to learn such rules automatically through the analysis of large corpora of typical real-world examples. A corpus (plural "corpora") is a set of documents (or sometimes individual sentences) that have been hand-annotated with the correct values to be learned.

Many different classes of machine learning algorithms have been applied to NLP tasks. These algorithms take as input a large set of "features" generated from the input data. Some of the earliest-used algorithms, such as decision trees, produced systems of hard if-then rules similar to the systems of hand-written rules that were then common.
Increasingly, however, research has focused on statistical models, which make soft, probabilistic decisions based on attaching real-valued weights to each input feature. Such models have the advantage that they can express the relative certainty of many different possible answers rather than only one,
producing more reliable results when such a model is included as a component of a larger system.

Systems based on machine-learning algorithms have many advantages over hand-produced rules:
• The learning procedures used during machine learning automatically focus on the most common cases, whereas when writing rules by hand it is often not at all obvious where the effort should be directed.
• Automatic learning procedures can make use of statistical inference algorithms to produce models that are robust to unfamiliar input (e.g. containing words or structures that have not been seen before) and to erroneous input (e.g. with misspelled words or words accidentally omitted). Handling such input gracefully with hand-written rules, or more generally creating systems of hand-written rules that make soft decisions, is extremely difficult, error-prone, and time-consuming.
• Systems based on automatically learned rules can be made more accurate simply by supplying more input data, whereas systems based on hand-written rules can only be made more accurate by increasing the complexity of the rules, which is a much more difficult task. In particular, there is a limit to the complexity of systems based on hand-crafted rules, beyond which they become more and more unmanageable. Creating more data for machine-learning systems, by contrast, simply requires a corresponding increase in the number of hours worked, generally without significant increases in the complexity of the annotation process.

4.Learning Systems: Machine learning, a branch of artificial intelligence, concerns the construction and study of systems that can learn from data. For example, a machine learning system could be trained on email messages to learn to distinguish between spam and non-spam messages. After learning, it can then be used to classify new email messages into spam and non-spam folders.
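The spam example above can be made concrete with a tiny sketch. This is a bare-bones word-frequency classifier written purely for illustration; the training messages are invented, and a real system would use a proper statistical model trained on far more data:

```python
# Minimal word-count spam classifier: score a new message by how often
# its words appeared in spam vs. non-spam (ham) training messages.
from collections import Counter

spam_training = ["win money now", "free money offer", "win a free prize"]
ham_training = ["meeting at noon", "lunch tomorrow", "project meeting notes"]

spam_counts = Counter(w for msg in spam_training for w in msg.split())
ham_counts = Counter(w for msg in ham_training for w in msg.split())

def classify(message):
    """Label the message with whichever class its words occur in more
    often; the +1 keeps unseen words from dominating either score."""
    words = message.split()
    spam_score = sum(spam_counts[w] + 1 for w in words)
    ham_score = sum(ham_counts[w] + 1 for w in words)
    return "spam" if spam_score > ham_score else "ham"

print(classify("free money"))             # spam
print(classify("notes for the meeting"))  # ham
```

The "learning" here is nothing more than counting words in labeled examples, but it already shows the key property described above: supplying more training messages improves the classifier without any change to its code.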
The core of machine learning deals with representation and generalization. Representation of data instances, and of functions evaluated on those instances, is part of all machine learning systems. Generalization is the property that the system will perform well on unseen data instances; the conditions under which this can be guaranteed are a key object of study in the subfield of computational learning theory. There is a wide variety of machine learning tasks and successful applications. Optical character recognition, in which printed characters are recognized automatically based on previous examples, is a classic example of machine learning.

Machine learning is closely related to data mining; the two terms are commonly confused, as they often employ the same methods and overlap significantly. They can be roughly distinguished as follows:
• Machine learning focuses on prediction, based on known properties learned from the training data.
• Data mining focuses on the discovery of (previously) unknown properties in the data; it is the analysis step of Knowledge Discovery in Databases (KDD).
The two areas overlap in many ways: data mining uses many machine learning methods, but often with a slightly different goal in mind, while machine learning also employs data mining methods as "unsupervised learning" or as a preprocessing step to improve learner accuracy. Much of the confusion between the two research communities (which often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in Knowledge Discovery and Data Mining (KDD) the key task is the discovery of previously unknown knowledge.
Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by supervised methods, while in a typical KDD task, supervised methods cannot be used due to the unavailability of training data.Some machine learning systems attempt to eliminate the need for human intuition in data analysis, while others adopt a collaborative approach
between human and machine. Human intuition cannot, however, be entirely eliminated, since the system's designer must specify how the data is to be represented and what mechanisms will be used to search for a characterization of the data.

5.Neural Networks: A neural network is, in essence, an attempt to simulate the brain. Neural network theory revolves around the idea that certain key properties of biological neurons can be extracted and applied to simulations, thus creating a simulated (and very much simplified) brain. An artificial neural network (ANN) learning algorithm, usually called a "neural network" (NN), is a learning algorithm inspired by the structure and functional aspects of biological neural networks. Computations are structured in terms of an interconnected group of artificial neurons, processing information using a connectionist approach to computation. Modern neural networks are non-linear statistical data modeling tools. They are usually used to model complex relationships between inputs and outputs, to find patterns in data, or to capture the statistical structure in an unknown joint probability distribution between observed variables.

In computer science and related fields, artificial neural networks are computational models, inspired by animals' central nervous systems (in particular the brain), that are capable of machine learning and pattern recognition. They are usually presented as systems of interconnected "neurons" that can compute values from inputs by feeding information through the network. For example, in a neural network for handwriting recognition, a set of input neurons may be activated by the pixels of an input image representing a letter or digit. The activations of these neurons are then passed on, weighted and transformed by some function determined by the network's designer, to other neurons, and so on, until finally an output neuron is activated that determines which character was read.
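The weighted, transformed activations described above amount to a forward pass through the network. A minimal sketch with a two-input, two-hidden-neuron, one-output network follows; the weights here are made up for illustration, whereas a real network learns its weights from training data:

```python
# Forward pass through a tiny feed-forward network: two inputs, one
# hidden layer of two neurons, one output neuron.
import math

def sigmoid(x):
    """Squashing activation function: maps any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs plus bias, passed through the activation."""
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

def forward(x1, x2):
    """Feed two input values through the hidden layer to the output."""
    h1 = neuron([x1, x2], [0.5, -0.6], 0.1)    # hidden neuron 1
    h2 = neuron([x1, x2], [-0.3, 0.8], 0.0)    # hidden neuron 2
    return neuron([h1, h2], [1.2, -0.7], 0.2)  # output neuron

print(round(forward(1.0, 0.0), 3))  # a single activation in (0, 1)
```

In a recognizer like the handwriting example, the inputs would be pixel values, there would be one output neuron per character, and training would adjust the weights so the correct output activates most strongly.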
Like other machine learning methods, neural networks have been used to solve a wide variety of tasks that are hard to solve using ordinary rule-based programming, including computer vision and speech recognition.

6.Expert Systems: In artificial intelligence, an expert system is a computer system that emulates the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning about knowledge, represented primarily as IF-THEN rules rather than through conventional procedural code. The first expert systems were created in the 1970s and then proliferated in the 1980s. Expert systems were among the first truly successful forms of AI software.

An expert system is divided into two sub-systems: the inference engine and the knowledge base. The knowledge base represents facts and rules; the inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging capabilities.

Expert systems were introduced by the Stanford Heuristic Programming Project led by Edward Feigenbaum, who is sometimes referred to as the "father of expert systems". The Stanford researchers tried to identify domains where expertise was highly valued and complex, such as diagnosing infectious diseases (Mycin) and identifying unknown organic molecules (Dendral). Dendral was a tool to study hypothesis formation in the identification of organic molecules. The general problem it solved, designing a solution given a set of constraints, was one of the most successful areas for early expert systems applied to business domains, such as salespeople configuring DEC VAX computers and mortgage loan application development. SMH.PAL is an expert system for the assessment of students with multiple disabilities. Mistral is an expert system for the monitoring of dam safety, developed in the 1990s by Ismes (Italy).
It gets data from an automatic monitoring system and performs a diagnosis of the state of the dam.
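The knowledge-base-plus-inference-engine split described above can be sketched in a few lines. This toy forward-chaining engine, with invented medical facts and rules, applies IF-THEN rules to the known facts until no new fact can be deduced:

```python
# Toy forward-chaining inference engine: a knowledge base of facts and
# IF-THEN rules, and an engine that fires rules until nothing new follows.

facts = {"fever", "cough"}

# Each rule: IF all conditions are known facts, THEN deduce the conclusion.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected"}, "recommend_rest"),
    ({"rash"}, "allergy_suspected"),
]

def infer(facts, rules):
    """Repeatedly apply rules whose conditions are satisfied, adding
    each conclusion to the fact set, until a full pass adds nothing."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = infer(facts, rules)
print(sorted(derived))
# ['cough', 'fever', 'flu_suspected', 'recommend_rest']
```

Note how the second rule fires only because the first one deduced a new fact: this chaining of rules over facts is the essential mechanism the knowledge base and inference engine provide, stripped of the explanation and debugging features real systems add.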
HISTORY OF AI

1950: Turing Test: In 1950 Alan Turing published a landmark paper in which he speculated about the possibility of creating machines with true intelligence. He noted that "intelligence" is difficult to define and devised his famous Turing Test: if a machine could carry on a conversation (over a teleprinter) that was indistinguishable from a conversation with a human being, then the machine could be called "intelligent". This simplified version of the problem allowed Turing to argue convincingly that a "thinking machine" was at least plausible, and the paper answered all the most common objections to the proposition. The Turing Test was the first serious proposal in the philosophy of artificial intelligence.

1956-1959: Golden Years: The Dartmouth Conference of 1956 was organized by Marvin Minsky, John McCarthy, and two senior scientists: Claude Shannon and Nathan Rochester of IBM. The proposal for the conference included this assertion: "every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it". The participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell, and Herbert A. Simon, all of whom would create important programs during the first decades of AI research. At the conference Newell and Simon debuted the "Logic Theorist", and McCarthy persuaded the attendees to accept "Artificial Intelligence" as the name of the field. The 1956 Dartmouth conference was the moment that AI gained its name, its mission, its first success, and its major players, and is widely considered the birth of AI. In 1958 John McCarthy, then at the Massachusetts Institute of Technology (MIT), invented the Lisp programming language, and in 1959 McCarthy and Marvin Minsky founded the MIT AI Lab.
1965: ELIZA: ELIZA is a computer program and an early example of primitive natural language processing. ELIZA operated by processing users' responses to scripts, the most famous of which was DOCTOR, a simulation of a Rogerian psychotherapist. Using almost no information about human thought or emotion, DOCTOR sometimes provided a startlingly human-like interaction. ELIZA was written at MIT by Joseph Weizenbaum between 1964 and 1966. When the "patient" exceeded its very small knowledge base, DOCTOR might provide a generic response, for example responding to "My head hurts" with "Why do you say your head hurts?". A possible response to "My mother hates me" would be "Who else in your family hates you?". ELIZA was implemented using simple pattern matching techniques, but was taken seriously by several of its users, even after Weizenbaum explained to them how it worked. It was one of the first chatterbots in existence.

1972: PROLOG: Prolog is a general-purpose logic programming language associated with artificial intelligence and computational linguistics. Prolog has its roots in first-order logic, a formal logic, and unlike many other programming languages, Prolog is declarative: the program logic is expressed in terms of relations, represented as facts and rules, and a computation is initiated by running a query over these relations. The language was first conceived by a group around Alain Colmerauer in Marseille, France, in the early 1970s, and the first Prolog system was developed in 1972 by Colmerauer with Philippe Roussel. Prolog was one of the first logic programming languages, and remains the most popular among such languages today, with many free and commercial implementations available. While initially aimed at natural language processing, the language has since stretched far into other areas such as theorem proving, expert systems, games, automated answering systems, ontologies, and sophisticated control systems. Modern Prolog
environments support creating graphical user interfaces, as well as administrative and networked applications.

1974: MYCIN: MYCIN was an early expert system that used artificial intelligence to identify bacteria causing severe infections, such as bacteremia and meningitis, and to recommend antibiotics, with the dosage adjusted for the patient's body weight; the name derived from the antibiotics themselves, as many antibiotics have the suffix "-mycin". The MYCIN system was also used for the diagnosis of blood clotting diseases. MYCIN was developed over five or six years in the early 1970s at Stanford University. It was written in Lisp as the doctoral dissertation of Edward Shortliffe under the direction of Bruce Buchanan, Stanley N. Cohen, and others. It arose in the laboratory that had created the earlier Dendral expert system. MYCIN was never actually used in practice, but research indicated that it proposed an acceptable therapy in about 69% of cases, which was better than the performance of infectious disease experts who were judged using the same criteria.

1987-93: AI Winter: In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research. The term was coined by analogy to the idea of a nuclear winter. The field has experienced several cycles of hype, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or decades later. There were two major winters, in 1974–80 and 1987–93, and several smaller episodes, including:
• 1966: the failure of machine translation,
• 1970: the abandonment of connectionism,
• 1971–75: DARPA's frustration with the Speech Understanding Research program at Carnegie Mellon University,
• 1973: the large decrease in AI research in the United Kingdom in response to the Lighthill report,
• 1973–74: DARPA's cutbacks to academic AI research in general,
• 1987: the collapse of the Lisp machine market,
• 1988: the cancellation of new spending on AI by the Strategic Computing Initiative,
• 1993: expert systems slowly reaching the bottom,
• 1990s: the quiet disappearance of the fifth-generation computer project's original goals.

The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association of Artificial Intelligence"). An AI winter is a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research. At the meeting, Roger Schank and Marvin Minsky, two leading AI researchers who had survived the "winter" of the 1970s, warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse.

Hype cycles are common in many emerging technologies, such as the railway mania or the dot-com bubble. An AI winter is primarily a collapse in the perception of AI by government bureaucrats and venture capitalists. Despite the rise and fall of AI's reputation, the field has continued to develop new and successful technologies. AI researcher Rodney Brooks complained in 2002 that "there's this stupid myth out there that AI has failed, but AI is around you every second of the day." Ray Kurzweil agrees: "Many observers still think that the AI winter was the end of the story and that nothing since has come of the AI field. Yet today many thousands of AI applications are deeply embedded in the infrastructure of every industry." He adds: "the AI winter is long since over."
Great Achievements

1.RoboCup: RoboCup is an international robotics competition founded in 1997. The aim is to promote robotics and AI research by offering a publicly appealing but formidable challenge. The name RoboCup is a contraction of the competition's full name, "Robot Soccer World Cup", but there are many other stages of the competition, such as "RoboCupRescue", "RoboCup@Home", and "RoboCup Junior". In the U.S., RoboCup is not very big, with the national competition held in New Jersey every year, but in other countries it is very popular. In 2013 the world competition was in the Netherlands; in 2014 it is in Brazil. The official goal of the project: "By the middle of the 21st century, a team of fully autonomous humanoid robot soccer players shall win a soccer game, complying with the official rules of FIFA, against the winner of the most recent World Cup."

2.Deep Blue: Deep Blue was a chess-playing computer developed by IBM. On May 11, 1997, the machine, with human intervention between games, won the second six-game match against world champion Garry Kasparov by two wins to one, with three draws. Kasparov accused IBM of cheating and demanded a rematch; IBM refused and retired Deep Blue. Kasparov had beaten a previous version of Deep Blue in 1996.

The project was started as ChipTest at Carnegie Mellon University by Feng-hsiung Hsu, followed by its successor, Deep Thought. After their graduation from Carnegie Mellon, Hsu, Thomas Anantharaman, and Murray Campbell from the Deep Thought team were hired by IBM Research to continue their quest to build a chess machine that could defeat the world champion. Hsu and Campbell joined IBM in autumn 1989, with Anantharaman following later. Anantharaman subsequently left IBM for Wall
Street, and Arthur Joseph Hoane joined the team to perform programming tasks. Jerry Brody, a long-time employee of IBM Research, was recruited for the team in 1990. The team was managed first by Randy Moulic, and then by Chung-Jen (C. J.) Tan. After Deep Thought's 1989 match against Kasparov, IBM held a contest to rename the chess machine, and it became "Deep Blue", a play on IBM's nickname, "Big Blue". After a scaled-down version of Deep Blue, Deep Blue Jr., played Grandmaster Joel Benjamin, Hsu and Campbell decided that Benjamin was the expert they were looking for to develop Deep Blue's opening book, and Benjamin was signed by IBM Research to assist with the preparations for Deep Blue's matches against Garry Kasparov.

In 1995 the "Deep Blue prototype" (actually Deep Thought II, renamed for PR reasons) played in the 8th World Computer Chess Championship. Deep Blue prototype played the computer program WChess to a draw while WChess was running on a personal computer. In round 5, Deep Blue prototype had the white pieces and lost to the computer program Fritz 3 in 39 moves, while Fritz was running on an Intel Pentium 90 MHz personal computer. At the end of the championship, Deep Blue prototype was tied for second place with the computer program Junior, which was also running on a personal computer.

3.DARPA Grand Challenge: The DARPA Grand Challenge is a prize competition for American autonomous vehicles, funded by the Defense Advanced Research Projects Agency, the most prominent research organization of the United States Department of Defense. Congress has authorized DARPA to award cash prizes to further DARPA's mission of sponsoring revolutionary, high-payoff research that bridges the gap between fundamental discoveries and military use.
The initial DARPA Grand Challenge was created to spur the development of technologies needed to create the first fully autonomous ground vehicles capable of completing a substantial off-road course within a limited time. The third event, the DARPA Urban Challenge, extended the initial
Challenge to autonomous operation in a mock urban environment. The most recent Challenge, the 2012 DARPA Robotics Challenge, focused on autonomous emergency-maintenance robots.

Fully autonomous vehicles have been an international pursuit for many years, from endeavors in Japan (starting in 1977), Germany (Ernst Dickmanns and VaMP), Italy (the ARGO Project), the European Union (EUREKA Prometheus Project), the United States of America, and other countries. The Grand Challenge was the first long-distance competition for driverless cars in the world; other research efforts in the field of driverless cars take a more traditional commercial or academic approach. The U.S. Congress authorized DARPA to offer prize money ($1 million) for the first Grand Challenge to facilitate robotic development, with the ultimate goal of making one third of ground military forces autonomous by 2015. Following the 2004 event, Dr. Tony Tether, the director of DARPA, announced that the prize money had been increased to $2 million for the next event, which was claimed on October 9, 2005. The first, second, and third places in the 2007 Urban Challenge received $2 million, $1 million, and $500,000, respectively.

The competition was open to teams and organizations from around the world, as long as there was at least one U.S. citizen on the roster. Teams have participated from high schools, universities, businesses, and other organizations. More than 100 teams registered in the first year, bringing a wide variety of technological skills to the race. In the second year, 195 teams from 36 U.S. states and 4 foreign countries entered the race.
TODAY'S AI APPLICATIONS

1.Driver-Less Trains and Metros: Driverless metro lines are currently operational in various cities, such as London, Barcelona, and Dubai. Advantages of driverless metros:
• Lower expenditure on staff (staff accounts for a significant part of the cost of running a transport system); however, service and security personnel are common even in automated systems.
• Trains can be shorter and instead run more frequently without increasing expenditure on staff.
• Service frequency can easily be adjusted to meet sudden, unexpected demand.
• Despite common psychological concerns, driverless metros are safer than traditional ones; none of them has ever had a serious accident.
• Intruder detection systems can be more effective than humans at stopping trains if someone is on the tracks.
• Financial savings in both energy and wear-and-tear costs, because trains are driven to an optimum specification.
• Train turnover time at terminals can be extremely short (the train goes into the holding track and returns immediately), reducing the number of train sets needed for operation.

2.Burglary Alarm System: A burglary alarm is a system designed to detect intrusion – unauthorized entry – into a building or area. Security alarms are used in residential, commercial, industrial, and military properties for protection against burglary (theft) or property damage, as well as personal protection against intruders. Car alarms likewise protect vehicles and their contents, and prisons also use security systems for control of inmates. Some alarm systems serve the single purpose of burglary protection; combination systems provide both fire and intrusion protection. Intrusion
alarm systems may also be combined with closed-circuit television surveillance systems to automatically record the activities of intruders, and may interface with access control systems for electrically locked doors. Systems range from small, self-contained noisemakers to complicated multi-area systems with computer monitoring and control.

3.Automatic Essay Scoring in Education: Automated essay scoring (AES) is the use of specialized computer programs to assign grades to essays written in an educational setting. It is a method of educational assessment and an application of natural language processing. Its objective is to classify a large set of textual entities into a small number of discrete categories corresponding to the possible grades, for example the numbers 1 to 6. It can therefore be considered a problem of statistical classification.

Several factors have contributed to a growing interest in AES, among them cost, accountability, standards, and technology. Rising education costs have led to pressure to hold the educational system accountable for results by imposing standards, and advances in information technology promise to measure educational achievement at reduced cost. The use of AES for high-stakes testing in education has generated significant backlash, with opponents pointing to research that computers cannot yet grade writing accurately and arguing that their use for such purposes promotes teaching writing in reductive ways (i.e. teaching to the test).
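Treating grading as classification, as described above, can be illustrated with a deliberately crude sketch. The surface features (word count and vocabulary size) and the score bands here are invented for this example; real AES systems extract far richer linguistic features and learn their scoring rules from human-graded essays:

```python
# Crude essay "scorer": map two surface features of an essay to one of
# a small number of discrete grade categories (1-6).

def features(essay):
    """Extract toy features: word count and number of distinct words."""
    words = essay.lower().split()
    return len(words), len(set(words))

def grade(essay):
    """Assign a 1-6 grade from hand-invented feature thresholds."""
    length, vocabulary = features(essay)
    score = 1
    for threshold in (5, 15, 30, 60, 100):  # length bands add one point each
        if length >= threshold:
            score += 1
    # Penalize very repetitive writing (low distinct-word ratio).
    if length > 0 and vocabulary / length < 0.5:
        score = max(1, score - 1)
    return min(score, 6)

print(grade("Good."))  # 1 -- a one-word essay lands in the lowest band
```

The hand-written thresholds stand in for what a real system would learn statistically, but the output has the same shape: every essay is mapped into one of a few discrete grade categories.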
Submitted to:- NIIT Residency Road Srinagar
Submitted by:- Masood Ahmad Bhat
Student ID:- S1400D9700032
Batch Code:- B140045

Sig. Of HOC NIIT          Sig. Of Concerned Faculty
-----------               -------------