2. Outline: Unit 1
o Foundations and History of Artificial Intelligence,
o Can machines think?,
o AI techniques,
o Components of AI,
o Applications of Artificial Intelligence,
o Intelligent Agents,
o Structure of Intelligent Agents,
o Computer Vision,
o Natural Language Processing
7. Today
o What is artificial intelligence?
o Where did it come from? What can AI do?
8. What is AI?
The science of making machines that:
Think like people
Act like people
Think rationally
Act rationally
9. What is AI?
Systems that think like humans:
cognitive science, neuroscience
Systems that think rationally
Aristotle --- this is how to think so
that you don’t make mistakes;
Systems that act like humans
Alan Turing --- Turing test
Systems that act rationally
maximally achieving pre-defined
goals
10. Turing Test
o (Human) judge communicates with a human and a
machine over text-only channel,
o Both human and machine try to act like a human,
o Judge tries to tell which is which.
image from http://en.wikipedia.org/wiki/Turing_test
12. Rational Decisions
We’ll use the term rational in a very specific, technical way:
Rational: maximally achieving pre-defined goals
Rationality only concerns what decisions are made
(not the thought process behind them)
Goals are expressed in terms of the utility of outcomes
Being rational means maximizing your expected utility
Computational Rationality
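As a toy illustration of "maximally achieving pre-defined goals" (the actions, probabilities, and utilities below are invented, not from the slides):

```python
# Toy sketch: a rational agent picks the action with the highest
# expected utility. All actions, probabilities, and utilities here
# are invented for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def rational_choice(actions):
    """actions: dict mapping an action name to its outcome distribution."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

actions = {
    "take_highway":   [(0.8, 10), (0.2, -30)],  # fast, but risk of a jam
    "take_backroads": [(1.0, 4)],               # slower but certain
}
# expected utility: highway = 0.8*10 + 0.2*(-30) = 2, backroads = 4
```

Here `rational_choice(actions)` picks the backroads: rationality is judged on the expected outcome, not the best case.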
15. The Foundations of AI
o Economics: maximize payoff; decision theory, game
theory, operations research
o Computer Engineering: efficient computers
16. A (Short) History of AI
o 1940-1950: Early days
o 1943: McCulloch & Pitts: Boolean circuit model of brain
o 1950: Turing's “Computing Machinery and Intelligence”
o 1950—70: Excitement: Look, Ma, no hands!
o 1950s: Early AI programs, including Samuel's checkers
program, Newell & Simon's Logic Theorist, Gelernter's
Geometry Engine
o 1956: Dartmouth meeting: “Artificial Intelligence” adopted
o 1965: Robinson's complete algorithm for logical reasoning
o 1970—90: Knowledge-based approaches
o 1969—79: Early development of knowledge-based systems
o 1980—88: Expert systems industry booms
o 1988—93: Expert systems industry busts: “AI Winter”
o 1990—: Statistical approaches
o Resurgence of probability, focus on uncertainty
o General increase in technical depth
o Agents and learning systems… “AI Spring”?
o 2000—: Where are we now?
17. What Can AI Do?
Quiz: Which of the following can be done at present?
o Play a decent game of Jeopardy?
o Win against any human at chess?
o Play a decent game of tennis?
o Grab a particular cup and put it on a shelf?
o Unload any dishwasher in any home?
o Drive safely along the highway?
o Buy a week's worth of groceries on the web?
o Discover and prove a new mathematical theorem?
o Perform a surgical operation?
o Unload a known dishwasher in collaboration with a person?
o Translate spoken Chinese into spoken English in real time?
o Write an intentionally funny story?
19. Game Agents
o Classic Moment: May, '97: Deep Blue vs. Kasparov
o First match won against world champion
o “Intelligent creative” play
o 200 million board positions per second
o Humans understood 99.9% of Deep Blue's moves
o Can do about the same now with a PC cluster
o 1996: Kasparov Beats Deep Blue
“I could feel --- I could smell --- a new kind of intelligence across the table.”
o 1997: Deep Blue Beats Kasparov
“Deep Blue hasn't proven anything.”
Text from Bart Selman, image from IBM’s Deep Blue pages
22. Robotics
o Robotics
o Part mech. eng.
o Part AI
o Reality much
harder than
simulations!
o Technologies
o Vehicles
o Rescue
o Help in the home
o Lots of automation…
o In this class:
o We ignore mechanical aspects
o Methods for planning
o Methods for control
Images from UC Berkeley, Boston Dynamics, RoboCup, Google
27. Natural Language
o Speech technologies (e.g. Siri)
o Automatic speech recognition (ASR)
o Text-to-speech synthesis (TTS)
o Dialog systems
o Language processing technologies
o Question answering
o Machine translation
o Web search
o Text classification, spam filtering, etc…
29. What About the Brain?
Brains (human minds) are very
good at making rational decisions,
but not perfect
Brains aren’t as modular as
software, so hard to reverse
engineer!
“Brains are to intelligence as
wings are to flight”
Lessons learned from the brain:
memory and simulation are key to
decision making
30. Designing Rational Agents
o An agent is an entity that perceives and acts.
o A rational agent selects actions that maximize
its (expected) utility.
o Characteristics of the percepts, environment,
and action space dictate techniques for
selecting rational actions
[Diagram: the agent receives percepts from the environment through sensors and acts on it through actuators]
31. Agents and environments
o An agent perceives its environment through sensors and
acts upon it through actuators (or effectors, depending on
whom you ask)
o The agent function maps percept sequences to actions
o It is generated by an agent program running on a machine
32. The Nature of Environment
o The task environment - PEAS
(Performance, Environment, Actuators, Sensors )
34. The task environment - PEAS
o Performance measure
o -1 per step; + 10 food; +500 win; -500 die;
+200 hit scared ghost
o Environment
o Pacman dynamics (including ghost behavior)
o Actuators
o Left Right Up Down or NSEW
o Sensors
o Entire state is visible
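The Pacman performance measure above can be written directly as a scoring function (the event names are my own labels for the slide's cases):

```python
# Pacman performance measure from the slide: -1 per step, +10 per food
# pellet, +500 for winning, -500 for dying, +200 per scared ghost hit.
REWARDS = {"step": -1, "food": 10, "win": 500, "die": -500,
           "scared_ghost": 200}

def performance(events):
    """Sum the rewards for a list of event labels from one episode."""
    return sum(REWARDS[e] for e in events)
```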
35. PEAS: Automated taxi
o Performance measure
o Income, happy customers, vehicle costs,
fines, insurance premiums
o Environment
o Roads, streets, other drivers, customers,
weather, police…
o Actuators
o Steering, brake, gas, display/speaker, horn
o Sensors
o Camera, radar, accelerometer, engine
sensors, microphone, GPS
Image: http://nypost.com/2014/06/21/how-google-
might-put-taxi-drivers-out-of-business/
36. PEAS: Medical diagnosis system
o Performance measure
o Patient health, cost, reputation
o Environment
o Patients, medical staff, hospitals
o Actuators
o Screen display, email, diagnoses,
treatment referrals
o Sensors
o Keyboard/mouse for entry of patient
records
37. Environment types
o Fully vs. partially observable: do the sensors give access to the
complete state of the environment?
o Single-agent vs. multiagent: crossword vs. chess;
competitive (chess) vs. cooperative (taxi driving)
o Deterministic vs. stochastic: deterministic if the next state is
completely determined by the current state and action;
vacuum cleaner vs. traffic
o Static vs. dynamic: crossword vs. taxi driving
o Discrete vs. continuous: with respect to time; chess vs. taxi driving
o Known vs. unknown: the agent's state of knowledge about the
environment. Solitaire: known environment but partially
observable; a new video game: unknown environment but fully
observable
o Episodic vs. sequential: defective part detection vs. chess
39. Structure of Agents
The agent program implements the agent function: the mapping of
percept sequences to actions.
Agent = architecture + program
40. Structure of Agent
o Agent program
function TABLE-DRIVEN-AGENT(percept) returns an action
persistent: percepts, a sequence, initially empty
table, a table of actions, indexed by percept
sequences, initially fully specified
append percept to the end of percepts
action ← LOOKUP(percepts, table)
return action
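A minimal Python rendering of the pseudocode above; the percepts and table entries are invented (a tiny vacuum-world fragment):

```python
class TableDrivenAgent:
    """Implements TABLE-DRIVEN-AGENT: look up the entire percept
    sequence seen so far in a pre-specified table of actions."""

    def __init__(self, table):
        self.table = table    # maps percept sequences (tuples) to actions
        self.percepts = []    # percept sequence, initially empty

    def __call__(self, percept):
        self.percepts.append(percept)            # append percept to percepts
        return self.table[tuple(self.percepts)]  # LOOKUP(percepts, table)

# Invented two-step table fragment:
table = {
    ("dirty",): "suck",
    ("dirty", "clean"): "move_right",
}
agent = TableDrivenAgent(table)
```

Note that the table must specify an action for every possible percept sequence, which is why this design is hopeless in practice: the table grows exponentially with the sequence length.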
41. Agent program
o Simple reflex agents
o Model-based reflex agents
o Goal-based agents
o Utility-based agents
43. SIMPLE-REFLEX-AGENT
function SIMPLE-REFLEX-AGENT(percept) returns an action
persistent: rules, a set of condition–action rules
state ← INTERPRET-INPUT(percept)
rule ← RULE-MATCH(state, rules)
action ← rule.ACTION
return action
A simple reflex agent. It acts according to a rule whose
condition matches the current state, as defined by the
percept.
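The same pseudocode in runnable Python, using the two-square vacuum world as an invented example domain:

```python
# SIMPLE-REFLEX-AGENT in Python. The condition-action rules below are
# invented: a two-square vacuum world with locations A and B.
RULES = {
    ("A", "dirty"): "suck",
    ("A", "clean"): "move_right",
    ("B", "dirty"): "suck",
    ("B", "clean"): "move_left",
}

def interpret_input(percept):
    # Here the percept (location, status) is already a usable state.
    return percept

def rule_match(state, rules):
    return rules[state]

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    action = rule_match(state, RULES)
    return action
```

The decision depends only on the current percept; the agent has no memory of earlier percepts.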
45. Reflex agents with state (Model Based)
[Diagram: a model-based reflex agent combines sensor input with internal state ("how the world evolves", "what my actions do") to infer what the world is like now, then applies condition-action rules to choose what action to do now]
46. function MODEL-BASED-REFLEX-AGENT(percept) returns an action
persistent: state, the agent’s current conception of the world state
transition_model, a description of how the next state depends on
the current state and action
sensor_model, a description of how the current world state is
reflected in the agent’s percepts
rules, a set of condition–action rules
action, the most recent action, initially none
state ← UPDATE-STATE(state, action, percept, transition_model, sensor_model)
rule ← RULE-MATCH(state, rules)
action ← rule.ACTION
return action
A model-based reflex agent. It keeps track of the current state of the world,
using an internal model. It then chooses an action in the same way as the reflex agent.
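A sketch of the same idea in Python: a vacuum agent that cannot see the other square keeps an internal model of which squares it has already cleaned (the world and rules are invented):

```python
class ModelBasedReflexAgent:
    """Tracks unobserved parts of the world in an internal state and
    chooses actions from that modelled state, not the raw percept."""

    def __init__(self):
        # Internal model: status of each square, initially unknown.
        self.state = {"A": "unknown", "B": "unknown"}

    def __call__(self, percept):
        loc, status = percept
        self.state[loc] = status          # UPDATE-STATE from the percept
        if status == "dirty":
            self.state[loc] = "clean"     # transition model: sucking cleans
            return "suck"
        if self.state["A"] == "clean" and self.state["B"] == "clean":
            return "stop"                 # model says nothing is left to do
        return "move_right" if loc == "A" else "move_left"

agent = ModelBasedReflexAgent()
```

Unlike the simple reflex agent, this one can decide to stop: its model remembers that the square it cannot currently see is already clean.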
47. Goal-based agents
[Diagram: a goal-based agent uses its model to predict what the world is like now and what it will be like if it does action A, then compares those predictions against its goals to choose an action]
A model-based, goal-based agent. It keeps track of the world state as well as
a set of goals it is trying to achieve, and chooses an action that will (eventually)
lead to the achievement of its goals.
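A one-step-lookahead sketch of a goal-based agent; the transition model, actions, and goal are all invented, echoing the “Have milk” -> “buy milk” example from the editor's notes:

```python
# A goal-based agent predicts "what it will be like if I do action A"
# using a transition model, and picks an action whose predicted state
# satisfies the goal. Everything below is an invented toy world.

MOVES = {
    ("home", "go_shop"): "shop",
    ("shop", "buy_milk"): "shop_with_milk",
    ("shop_with_milk", "go_home"): "home_with_milk",
}

def result(state, action):
    """Deterministic transition model; unknown moves leave the state unchanged."""
    return MOVES.get((state, action), state)

def goal_based_choice(state, actions, goal_test):
    for a in actions:
        if goal_test(result(state, a)):   # look ahead one step
            return a
    return None   # no single action reaches the goal; real planning needed
```

Returning `None` when no single action works is where search and planning take over: the agent must then consider sequences of actions.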
48. Utility based Agents
A model-based, utility-based agent. It uses a model of the world, along with a
utility function that measures its preferences among states of the world. Then it
chooses the action that leads to the best expected utility, where expected utility is
computed by averaging over all possible outcome states, weighted by the
probability of the outcome.
49. General Learning Agent
A general learning agent. The “performance element” box represents what we
have previously considered to be the whole agent program. Now, the “learning
element” box gets to modify that program to improve its performance.
50. Spectrum of representations
Three ways to represent states and the transitions between them. (a) Atomic
representation: a state (such as B or C) is a black box with no internal structure; (b)
Factored representation: a state consists of a vector of attribute values; values can
be Boolean, real valued, or one of a fixed set of symbols. (c) Structured
representation: a state includes objects, each of which may have attributes of its own
as well as relationships to other objects.
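The three representations, sketched for one traffic-world state (all names and values invented):

```python
# (a) Atomic: the state is an opaque label with no internal structure.
atomic_state = "B"

# (b) Factored: the state is a vector of attribute values
# (Boolean, real-valued, or symbolic).
factored_state = {
    "fuel": 0.7,
    "gps": (52.37, 4.89),
    "oil_warning": False,
}

# (c) Structured: the state contains objects, each with attributes of
# its own as well as relationships to other objects.
structured_state = {
    "objects": {
        "truck1": {"type": "truck", "speed": 0},
        "cow1": {"type": "cow"},
    },
    "relations": [("blocks", "cow1", "truck1")],
}
```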
Editor's notes
Who are these? C3PO, what does he do? Essentially Google Translate (but with anxiety!)
Little guy? R2D2 – what does he do, yeah, not so sure
Things got darker: machines come back from the future – to kill us!
90’s : software is scary
Basic fear about what technology might do ?
What if we can’t even tell technology apart from ourselves?
OR maybe it’ll look really different and snarky
Some exceptions like wall-E, positive view of technology (but maybe not of us humans!)
But mostly a worry
[not very worried myself, at least at present]
Robots look like this. We have autonomous cars that figure out how to take us to our destination
Robots help nurses in hospitals deliver stuff to different rooms
Or drones that record cool videos of us as we do outdoor activities
Or in my case,
Top left: natural intelligence -- Think like people --- cognitive science, neuroscience
Bottom left: who cares about how they think? Act like people --- actually a very early definition, dating back to Alan Turing --- the Turing test; the problem is that to do really well you start focusing on things like don’t answer too quickly what the square root of 1412 is, don’t spell too well, and make sure you have a favorite movie etc. So it wasn’t really leading us to build intelligence
Think rationally – correct thought process -- long tradition dating back to Aristotle --- this is how to think so that you don’t make mistakes; but not a winner, because difficult to encode how to think / especially in the face of unknowns/uncertainty, and in the end it’s not about how you think, it’s about how you end up acting
Distill course to maximize expected utility
Computation with circuits, brain was like a bunch of circuits
Less computation than your watch; could barely do anything, but we thought we were right there
Write stuff down, they start contradicting; winter
Statistics, uncertainty
Computation with artificial neurons, brain was like a bunch of real neurons
Yeah, Kasparov comment probably says more about humans than about computers
Stopped making wings that flap
Act to achieve a goal
Pro: use the goal to index into actions that might achieve it, e.g. “Have milk” -> “buy milk”
Con: cannot handle tradeoffs among goals, failure probability etc.