2. Goals/Objectives of AI
The goal of research in Artificial Intelligence is to develop a technology
whereby computers behave intelligently, for example:
Intelligent behavior by simulated persons or other actors in a computer game
Mobile robots in hostile or dangerous environments that behave intelligently
Intelligent responses to user requests in information search systems
such as Google or Wikipedia
Process control systems that supervise complex processes and that
are able to respond intelligently to a broad variety of malfunctions
and other unusual situations.
Intelligent execution of user requests in the operating system and
the basic services (document handling, electronic mail) of standard
computers.
Generally, the goal is to create intelligent machines.
But what does it mean to be intelligent?
emnetzewdu2121@gmail.com
Introduction to AI 2
3. What is AI?
                 HUMAN                             RATIONAL
THOUGHT          Systems that think like humans    Systems that think rationally
BEHAVIOUR        Systems that act like humans      Systems that act rationally
emnetzewdu2121@gmail.com
Introduction to AI 3
4. Systems that act like
humans:
Turing Test approach
“The art of creating machines that perform functions that require intelligence
when performed by people.” (Kurzweil, 1990)
“The study of how to make computers do things at which, at the moment, people
are better.” (Rich and Knight, 1991)
5. Turing test approach:
You enter a room which has a computer terminal. You have a fixed
period of time to type what you want into the terminal, and study
the replies. At the other end of the line is either a human being or a
computer system.
If it is a computer system, and at the end of the period you cannot
reliably determine whether it is a system or a human, then the
system is deemed to be intelligent.
6. Cont.…
To pass the Turing test the machine needs:
Natural language processing: for communication with a human
Knowledge representation: to store information effectively and efficiently
Automated reasoning: to retrieve the stored information and use it to answer questions
Machine learning: to adapt to new circumstances
OR
To pass the total Turing test, the computer will also need:
Computer vision to perceive objects (seeing)
Robotics to manipulate objects and move about (acting)
7. Systems that think like humans(thinking
humanly): The Cognitive Modelling
approach
“The exciting new effort to make computers think . . . machines with minds, in the
full and literal sense.” (Haugeland, 1985)
“[The automation of] activities that we associate with human thinking, activities
such as decision-making, problem solving, learning . . .” (Bellman, 1978).
Humans as observed from the ‘inside’: how do we know how humans think?
Introspection vs. psychological experiments
Cognitive Science
An interdisciplinary field that brings together computer models from AI and
experimental techniques from psychology to construct precise and testable theories of
the human mind.
8. Systems that think ‘rationally’
"laws of thought“ approach
“The study of mental faculties through the use of
computational models” (Charniak and McDermott)
“The study of the computations that make it possible
to perceive, reason, and act” (Winston)
Humans are not always ‘rational’.
Rational - defined in terms of logic?
Logic can’t express everything (e.g. uncertainty)
The logical approach is often not feasible in terms of
computation time (it needs ‘guidance’).
9. Systems that act rationally: rational
agent approach
“Computational Intelligence is the study of the design of
intelligent agents.” (Poole et al., 1998)
“AI . . . is concerned with intelligent behavior in artifacts.”
(Nilsson, 1998)
Rational behavior: doing the right thing
The right thing: that which is expected to maximize goal achievement,
given the available information
Giving answers to questions is ‘acting’.
We don’t care whether a system:
replicates human thought processes
makes the same decisions as humans
uses purely logical reasoning
10. Cont.
Studying AI as rational-agent design has two advantages:
It is more general than using logic alone,
because it combines logic with domain knowledge.
It allows the approach to be extended with more
scientific methodologies.
11. Rational agents
An agent is an entity that perceives and acts.
Abstractly, an agent is a function from percept histories to actions:
f: P* → A
For any given class of environments and tasks, we seek the agent (or class of
agents) with the best performance.
Limitation: computational limitations make perfect rationality unachievable,
so we design the best program for the given machine resources.
Artificial
Produced by human art or effort, rather than
originating naturally.
• Intelligence
"the ability to acquire knowledge and use it"
[Pigford and Baur]
12. Cont.
So AI can be defined as:
AI is the study of ideas that enable computers to be intelligent.
AI is the part of computer science concerned with the design of computer
systems that exhibit human intelligence. (From the Concise Oxford
Dictionary)
From the above two definitions, we can see that AI has two major roles:
Study the intelligent part concerned with humans.
Represent those actions using computers.
13. Group Assignment
Advances in the Turing test (arguments about the Turing test)
The foundations of AI
The history of AI
The state of the art (current status of AI)
14. Intelligent Agents
Agents and Environments
What is an agent?
An agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through actuators/
effectors to maximize progress towards its goal
What is an intelligent agent?
"Intelligent agents are software entities that carry out some set of operations on
behalf of a user or another program, with some degree of independence or
autonomy, and in so doing, employ some knowledge or representation of the user's
goals or desires." ("The IBM Agent")
PAGE (Percepts, Actions, Goals, Environment) description
15. Cont.
Percepts: information acquired through the agent’s sensory system
Actions: operations performed by the agent on the environment through its actuators
Goals: desired outcome of the task with a measurable performance
Environment: surroundings beyond the control of the agent
17. Examples of Agents
human agent
eyes, ears, skin, taste buds, etc. for sensors
hands, fingers, legs, mouth, etc. for actuators
powered by muscles
robot
camera, infrared, bumper, etc. for sensors
grippers, wheels, lights, speakers, etc. for actuators
often powered by motors
software agent
functions as sensors
information provided as input to functions in the form of encoded
bit strings or symbols
functions as actuators
results deliver the output
18. Cont.
Mathematically speaking, we say that an agent’s behavior is
described by the agent function that maps any given percept
sequence to an action
f: P* → A
The agent program runs on the physical architecture to produce f
the agent function for an artificial agent will be implemented by an agent
program.
The agent function is an abstract mathematical description;
the agent program is a concrete implementation, running within some
physical system
Example: The vacuum-cleaner
19. Percepts: location and contents, e.g., [A,Dirty]
Actions: Left, Right, Suck, NoOp
20. THE CONCEPT OF
RATIONALITY
A rational agent is one that does the right thing—
conceptually speaking, every entry in the table for the agent
function is filled out correctly.
The right action is the one that will cause the agent to be
most successful
Q: what does it mean to do the right thing?
It may depend on the consequences of the agent’s behavior:
The agent performs a sequence of actions based on the percepts it receives.
This causes the environment to go through a sequence of states.
If the sequence is desirable, then the agent has performed well.
Desirability is captured by a performance measure, which evaluates any
given sequence of environment states.
21. Cont.
Example: a performance measure for a vacuum-cleaner agent.
One proposal: the amount of dirt cleaned up in a single eight-hour shift.
A better proposal: reward the agent for having a clean floor.
As a general rule, it is better to design performance measures
according to what one actually wants in the environment,
rather than according to how one thinks the agent should
behave.
With a rational agent, of course, what you ask for is what you get:
an agent could maximize the first measure by cleaning up the dirt,
then dumping it all on the floor, then cleaning it up again, and so on.
22. Rationality
What is rational at any given time depends on four things:
• The performance measure that defines the criterion of success.
• The agent’s prior knowledge of the environment.
• The actions that the agent can perform.
• The agent’s percept sequence to date
This leads to a definition of a rational agent:
For each possible percept sequence, a rational agent
should select an action that is expected to maximize its
performance measure, given the evidence provided by
the percept sequence and whatever built-in knowledge
the agent has.
23. Cont.
Notes:
Rationality is distinct from omniscience (“all knowing”).
We can behave rationally even when faced with
incomplete information.
Agents can perform actions in order to modify future
percepts so as to obtain useful information: information
gathering, exploration.
An agent is autonomous if its behavior is determined by
its own experience (with ability to learn and adapt).
24. Characterizing a Task
Environment
We must first specify the setting for intelligent (rational) agent design.
PEAS: Performance measure, Environment, Actuators, Sensors
Performance measure: used to evaluate how well an agent solves the task at hand
Environment: surroundings beyond the control of the agent
Actuators: determine the actions the agent can perform
Sensors: provide information about the current state of the environment
25. Cont.
Consider, e.g., the task of designing an
automated taxi driver:
Performance measure: Safe, fast, legal, comfortable trip,
maximize profits
Environment: Roads, other traffic, pedestrians, customers
Actuators: Steering wheel, accelerator, brake, signal, horn
Sensors: Cameras, sonar, speedometer, GPS, odometer,
engine
sensors, keyboard
26. Cont.
Agent: Medical diagnosis system
PEAS???
Performance measure: Healthy patient, minimize costs,
lawsuits
Environment: Patient, hospital, staff
Actuators: Screen display (questions, tests, diagnoses,
treatments, referrals)
Sensors: Keyboard (entry of symptoms, findings, patient's
answers)
27. Cont.
Agent: Interactive English tutor
Performance measure: Maximize student's score on
test
Environment: Set of students
Actuators: Screen display (exercises, suggestions,
corrections)
Sensors: Keyboard
28. Recap
Agents and Environment
PAGE- agent description
Rationality
PEAS- task environment description
29. Properties of Task Environment
The range of task environments is vast. However, we can identify a fairly
small number of dimensions along which to categorize them.
To a large extent, these dimensions determine:
The appropriate agent design
The applicability of each of the principal families of techniques for agent
implementation.
Fully observable vs. partially observable:
An environment is fully observable if the agent's sensors give it access to the
complete state of the environment at each point in time,
i.e., the sensors detect all aspects that are relevant to the choice of action
(relevance depends on the performance measure).
An environment may be partially observable because of noisy and inaccurate sensors, or
because parts of the state are simply missing from the sensor data.
30. Cont.
Deterministic vs. stochastic:
An environment is deterministic if its next state is completely determined by the
current state and the action executed by the agent; otherwise, it is
stochastic. (If the environment is deterministic except for the actions of
other agents, then the environment is strategic.)
Episodic vs. sequential:
In an episodic environment, the agent's experience is divided into atomic "episodes"
(each episode consists of the agent perceiving and then performing a single action),
and the choice of action in each episode depends only on the episode itself.
In a sequential environment, the agent needs to think ahead: the current decision
could affect future decisions.
Episodic environments are thus much simpler than sequential environments;
there is no need to think ahead.
31. Cont.
Static vs. dynamic:
If the environment is unchanged while an agent is deliberating
then, the env’t is static for that agent otherwise dynamic.
Static environments are easy to deal with because the agent need
not keep looking at the world while it is deciding on an action, nor
need it worry about the passage of time.
Dynamic environments are continuously asking the agent what it
wants to do; if it hasn’t decided yet, that counts as deciding to do
nothing.
The environment is semidynamic if the environment itself does
not change with the passage of time but the agent's
performance score does.
32. Cont.
Discrete vs. continuous:
A limited number of distinct, clearly defined
percepts and actions.
e.g. chess has discrete states, percepts and
actions
taxi driving is continuous state and continuous
time problem
Single agent vs. multiagent:
An agent operating by itself in an
environment.
33. Structure of the Agent
An agent’s strategy is a mapping from percept sequences to actions.
How do we encode an agent’s strategy?
A long list of what should be done for each possible percept sequence,
vs. a shorter specification (e.g. an algorithm).
function SKELETON-AGENT(percept) returns action
    static: memory, the agent’s memory of the world
    memory ← UPDATE-MEMORY(memory, percept)
    action ← CHOOSE-BEST-ACTION(memory)
    memory ← UPDATE-MEMORY(memory, action)
    return action
On each invocation, the agent’s memory is updated to reflect the new percept, the best
action is chosen, and the fact that the action was taken is also stored in the memory.
The memory persists from one invocation to the next.
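As an illustrative sketch, the pseudocode above can be written in Python. The memory structure and the trivial placeholder policy are assumptions for the example, not part of the slides:

```python
# Minimal Python sketch of SKELETON-AGENT. The memory representation
# (a list of events) and the toy policy are illustrative assumptions.

def update_memory(memory, item):
    """Record a percept or an action in the agent's memory."""
    return memory + [item]

def choose_best_action(memory):
    """Placeholder policy: react to the most recent percept."""
    last_percept = memory[-1]
    return "clean" if last_percept == "dirty" else "wait"

def skeleton_agent(memory, percept):
    """One invocation: fold in the percept, pick an action, record it."""
    memory = update_memory(memory, percept)   # memory <- UPDATE-MEMORY
    action = choose_best_action(memory)       # action <- CHOOSE-BEST-ACTION
    memory = update_memory(memory, action)    # remember the chosen action
    return memory, action

memory = []
memory, action = skeleton_agent(memory, "dirty")
```

Note how the memory returned by one invocation is passed into the next, matching the "memory persists" remark.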
35. Table-lookup driven
agents
Uses a percept sequence / action table in memory to find the next
action. Implemented as a (large) lookup table.
Drawbacks:
– Huge table (often simply too large)
– Takes a long time to build/learn the
table
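A hypothetical sketch of such an agent makes the drawback concrete: the table must be keyed by the entire percept sequence, so even a toy world needs one entry per possible history. The table contents below are invented for illustration:

```python
# Table-driven agent sketch: the whole policy is a lookup table keyed
# by the complete percept history. Table entries are invented.

def make_table_driven_agent(table):
    percepts = []                        # the full history, kept forever
    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")  # look up whole history
    return agent

# Toy vacuum-world table: a percept is (location, status). Note that a
# separate entry is needed for every *sequence*, not every percept.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Dirty"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
```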
36. Toy example: Vacuum world.
Percepts: the robot senses its location and “cleanliness.”
So, location and contents, e.g., [A, Dirty], [B, Clean].
With 2 locations, we get 4 different possible sensor inputs.
Actions: Left, Right, Suck, NoOp
37. Simple reflex agents
Agents do not have memory of past world states or percepts.
So, actions depend only on current percept.
Action becomes a “reflex.”
Uses condition-action rules.
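For the two-square vacuum world above, the condition-action rules can be sketched in a few lines (a minimal illustration, assuming a percept is a (location, status) pair):

```python
# Simple reflex agent for the two-square vacuum world: the action
# depends only on the CURRENT percept; no memory of past percepts.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":        # condition-action rule 1
        return "Suck"
    if location == "A":          # rule 2: clean square A -> move right
        return "Right"
    return "Left"                # rule 3: clean square B -> move left
```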
39. Model-based reflex agents
Key difference (wrt simple reflex agents):
Agents have internal state, which is used
to keep track of past states of the world.
Agents have the ability to represent
change in the World.
41. Goal-based agents
Key difference w.r.t. model-based agents:
In addition to state information, they have goal information
that describes desirable situations to be achieved.
Agents of this kind take future events into consideration.
What sequence of actions can I take to achieve certain
goals?
Choose actions so as to (eventually) achieve a (given or
computed) goal.
42. Cont.
The agent keeps track of the world state as well as a set
of goals it is trying to achieve, and chooses
actions that will (eventually) lead to the goal(s).
More flexible than reflex agents; may involve
search and planning.
43. Utility-based agents
When there are multiple possible alternatives, how to decide which
one is best?
Goals are qualitative: a goal specifies a crude distinction between a
happy and an unhappy state, but we often need a more general
performance measure that describes the “degree of happiness.”
Important for making tradeoffs: Allows decisions comparing choice
between conflicting goals, and choice between likelihood of success
and importance of goal (if achievement is uncertain).
Use decision theoretic models: e.g., faster vs. safer.
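A small sketch of the faster-vs-safer trade-off using expected utility. The routes, probabilities, and utility numbers are invented for illustration:

```python
# Utility-based action selection sketch: each outcome gets a utility,
# and the agent picks the action with highest EXPECTED utility.
# All numbers below are invented for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Faster but riskier route vs. slower but safe route.
actions = {
    "fast_route": [(0.9, 100), (0.1, -500)],  # small chance of disaster
    "safe_route": [(1.0, 60)],                # certain, modest payoff
}
best = max(actions, key=lambda a: expected_utility(actions[a]))
```

Here the qualitative goal "arrive" cannot distinguish the routes, but the expected utilities (40 vs. 60) can.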
46. To Summarize
(1) Table-driven agents
use a percept sequence/action table in memory to find the next action. They are implemented by a (large) lookup
table.
(2) Simple reflex agents
are based on condition-action rules, implemented with an appropriate production system. They are
stateless devices which do not have memory of past world states.
(3) Agents with memory - Model-based reflex agents
have internal state, which is used to keep track of past states of the world.
(4) Agents with goals – Goal-based agents
are agents that, in addition to state information, have goal information that describes desirable
situations. Agents of this kind take future events into consideration.
(5) Utility-based agents
base their decisions on classic axiomatic utility theory in order to act rationally.
(6) Learning agents
they have the ability to improve performance through learning.
47. Chapter Next:
Solving Problems By Searching
Problem-solving agents
Problem types
Problem formulation
Example problems
Basic search algorithms
48. Cont.
Goal-based agents consider future actions
and the desirability of their outcomes.
A problem-solving agent is one kind of goal-based
agent, one that uses atomic representations.
Goal-based agents that use more advanced
factored or structured representations are
usually called planning agents.
49. Problem Solving Agents
Goal formulation, based on the current situation and
the agent’s performance measure; first step in problem
solving.
We will consider a goal to be a set of world states—
exactly those states in which the goal is satisfied.
Problem formulation is the process of deciding what
actions and states to consider, given a goal.
Search for solution: Given the problem, search for
a solution --- a sequence of actions to achieve the
goal starting from the initial state.
Execution of the solution
50. Example: Path-Finding Problem
Formulate goal:
be in Bucharest (Romania)
Formulate problem:
action: drive between pairs of connected cities (direct road)
state: be in a city (20 world states)
Find solution:
a sequence of cities leading from the start to the goal state,
e.g., Arad, Sibiu, Fagaras, Bucharest
Execution:
drive from Arad to Bucharest according to the solution
Environment: fully observable (map), deterministic, and the
agent knows effects of each action. Is this really the case?
Note: Map is somewhat of a “toy” example. Our real
interest: Exponentially large spaces, with e.g. 10^100 or
more states. Far beyond full search. Humans can often
still handle those! One of the mysteries of cognition.
51. Problem and Solution
Definition
A problem can be defined formally by five components:
I. Initial State: the state that the agent starts in.
e.g. In(Arad).
II. Transition Model: a description of what each action does; RESULT(s, a) is a function
that returns the state that results from doing action a in state s.
A successor is any state reachable from a given state by a single action.
e.g. RESULT(In(Arad), Go(Zerind)) = In(Zerind).
III. State Space: the set of all states reachable from the initial state by any sequence
of actions,
i.e., implicitly defined by the initial state, actions, and transition model.
The state space forms a graph in which the nodes are states and the links between
nodes are actions.
A path in the state space is a sequence of states connected by a sequence of actions.
52. Cont.
IV. Goal Test: determines whether a given state is a goal state.
Explicit: there is a set of possible goal states, and the test checks whether the given
state is one of them.
V. Path Cost: a function that assigns a numeric cost to each path.
The agent chooses a cost function that reflects its own performance
measure.
Here we assume that the cost of a path can be described as the sum of the costs of the
individual actions along the path.
The step cost of taking action a in state s to reach state s’ is
denoted by c(s, a, s’).
Solution quality is measured by the path cost function, and an optimal solution
has the lowest path cost among all solutions.
Removing detail from a state description is called abstraction.
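The five components can be sketched as a small Python class for the two-square vacuum world. The class and method names are illustrative assumptions:

```python
# Sketch of the formal problem definition for the two-square vacuum
# world. State = (location, dirt_in_A, dirt_in_B); names are invented.

class VacuumProblem:
    def __init__(self):
        self.initial_state = ("A", True, True)   # I. initial state

    def actions(self, state):                    # available actions
        return ["Left", "Right", "Suck"]

    def result(self, state, action):             # II. transition model
        loc, a, b = state
        if action == "Suck":
            return (loc, False, b) if loc == "A" else (loc, a, False)
        if action == "Left":
            return ("A", a, b)
        return ("B", a, b)

    def goal_test(self, state):                  # IV. goal test
        return not state[1] and not state[2]     # no dirt anywhere

    def step_cost(self, s, action, s2):          # V. c(s, a, s')
        return 1
```

The state space (III) is implicit: it is whatever `result` can reach from `initial_state`.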
53. Ex .Prob.: Toy Prob.; Vacuum world
state space graph
states? The agent is in one of 8 possible world states.
actions? Left, Right, Suck [simplified: No-op left out]
goal test? No dirt at any location (i.e., in one of the bottom two states).
path cost? 1 per action
Minimum path from the start state to a goal state: 3 actions.
An alternative, longer plan: 4 actions.
Note: paths with thousands of steps before reaching the goal also exist.
54. Real world problems
Route-finding problems:
Route-finding algorithms are used in a variety of applications:
Websites
Routing video streams in computer networks
Airline travel-planning systems
Car systems (driving directions)
Military operations planning
Airline travel planning:
States: each location and time, status of flight, …
Actions: any flight from the current location, in any seat class,
leaving at any time, with enough time in the airport between flights
Goal test: be at the final destination specified by the user
Path cost: depends on monetary cost, waiting time, customs and
immigration procedures, seat quality, time of day, airplane type, …
55. Robotic Assembly
states?: real-valued coordinates of robot joint angles and parts of the object
to be assembled
actions?: continuous motions of robot joints
goal test?: complete assembly
path cost?: time to execute
56. Touring Problems (compare route-finding)
Its actions correspond to trips between adjacent cities.
State space: each state must include the set of cities the agent has visited, in addition
to the current location.
Goal test: checks whether the agent is in the goal state and all other cities have been
visited.
The traveling salesperson problem (TSP) is a touring problem in which each city must be
visited exactly once.
The aim is to find the shortest tour.
Other Examples:
Robot navigation/planning
VLSI layout: positioning millions of components and connections on a
chip to minimize area, circuit delays, etc.
Protein design: a sequence of amino acids that will fold into the
3-dimensional protein with the right properties.
Automatic assembly of complex objects
Literally thousands of combinatorial search / reasoning / parsing / matching
problems can be formulated as search problems in exponential size state
spaces.
58. Searching for Solutions
Search through the state space.
We will consider search techniques that use an
explicit search tree that is generated by the
initial state + successor function.
initialize (initial node)
loop:
    choose a node for expansion according to strategy
    if goal node → done
    expand node with successor function
59. Tree-search algorithms
Basic idea:
simulated exploration of state space by generating successors of
already-explored states (a.k.a. ~ expanding states)
Note: 1) Here we only check a node for possibly being a goal state after we select the
node for expansion.
2) A “node” is a data structure containing a state plus additional info (parent
node, etc.).
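The basic idea can be sketched as a tiny tree search over an explicit successor function. The toy graph, and the FIFO choice of "strategy", are assumptions for illustration:

```python
# Minimal tree search: the frontier holds whole paths; a node is tested
# for the goal only when selected for expansion (as noted above).
# FIFO pop order is one possible strategy; the graph is a toy example.

def tree_search(initial, goal, successors):
    frontier = [[initial]]                 # frontier of paths, not states
    while frontier:
        path = frontier.pop(0)             # choose node per strategy (FIFO)
        node = path[-1]
        if node == goal:                   # goal test at expansion time
            return path
        for child in successors.get(node, []):
            frontier.append(path + [child])  # expand with successor fn
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["G"], "D": []}
```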
62. Cont.
Selected for expansion.
Added to tree.
Note: Arad added (again) to
tree!
(reachable from Sibiu)
Not necessarily a problem, but
in Graph-Search, we will avoid
this by maintaining an
“explored” list.
64. Implementation: states
vs. nodes
A state is a representation of a physical configuration.
A node is a data structure constituting part of a search tree; it includes a
state, the tree parent node, the action (applied to the parent), the path cost
g(x) (from the initial state to the node), and the depth.
65. Search strategies
A search strategy is defined by picking the order of node expansion.
What criteria might be used to choose among the available algorithms? We can
evaluate an algorithm’s performance along four dimensions:
completeness: does it always find a solution if one exists?
time complexity: how long does it take to find a solution (number of
nodes generated)?
space complexity: maximum number of nodes in memory
optimality: does it always find a least-cost solution?
Time and space complexity are measured in terms of
b: maximum branching factor of the search tree (maximum number of
successors of any node)
d: the depth of the shallowest goal node (i.e., the number of steps
along the path from the root) (depth of the least-cost solution)
m: maximum depth of the state space (may be ∞)
66. Search Algorithms
Informed (heuristic): has some guidance on where to look
for solutions.
Uninformed (blind): given no information about the problem
other than its definition. Some of these can
solve any solvable problem, but not efficiently.
1. Breadth-first search
2. Uniform-cost search
3. Depth-first search
4. Depth-limited search
5. Iterative deepening search
6. Bidirectional search
Key issue: type of queue used for the fringe of the search tree
Fringe is the collection of nodes that have been generated but not (yet)
expanded. Each node of the fringe is a leaf node.
67. Breadth-First Search
Breadth-first search is a simple strategy in which the root
node is expanded first, then all the
successors of the root node are expanded next, then their
successors, and so on.
In general, all the nodes at a given depth in the search tree
are expanded before any nodes at the next level.
Breadth-first search is an instance of the general graph-
search algorithm in which the shallowest unexpanded
node is chosen for expansion.
The goal test is applied to each node when it is generated,
rather than when it is selected for expansion.
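A compact sketch of BFS with a FIFO fringe and the goal test applied at generation time, as described above. The explored set (to avoid revisiting states) and the toy graph are assumptions for the example:

```python
# BFS sketch: FIFO fringe of paths; goal test applied when a node is
# GENERATED, not when expanded. Toy graph; explored set added to avoid
# revisiting states, as in graph search.

from collections import deque

def breadth_first_search(start, goal, successors):
    if start == goal:
        return [start]
    frontier = deque([[start]])
    explored = {start}
    while frontier:
        path = frontier.popleft()            # shallowest node first
        for child in successors.get(path[-1], []):
            if child in explored:
                continue
            if child == goal:                # goal test at generation
                return path + [child]
            explored.add(child)
            frontier.append(path + [child])
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "E": ["G"]}
```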
68. Cont.
Implementation:
The fringe is a FIFO queue used to expand the frontier.
At the back of the queue: new nodes (always
deeper than their parents).
At the front of the queue: old nodes (shallower than
new nodes).
Fringe queue:
<A>
Select A from
queue and expand.
Gives
<B, C>
69. Cont.
Queue: <B, C>
Select B from the front and expand; put its children at the end.
Gives <C, D, E>
70. Cont.
Fringe queue: <C, D, E>
71. Cont.
Fringe queue: <D, E, F, G>
Assuming no further children, the queue becomes
<E, F, G>, <F, G>, <G>, <>. Each time, the node is checked
for the goal state.
72. Properties of breadth-first
search
b: maximum branching factor of
the search tree
d: depth of the least-cost solution
Complete? Yes (if b is finite)
Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
Note: check for
goal only when
node is expanded.
Space? O(b^(d+1)) (keeps every node in memory;
needed also to reconstruct the solution path)
Optimal soln. found?
Yes (if all step costs are identical)
Space is the bigger problem (more than time)
73. Uniform-cost search
Expand least-cost (of path to) unexpanded node
(e.g. useful for finding shortest path
on map)
Implementation:
fringe = queue ordered by path cost
Equivalent to breadth-first if step costs all equal
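A sketch of the priority-queue implementation. The edge weights and graph are invented; the `best_g` map (to skip worse re-discoveries of a state) is an assumption in the style of graph search:

```python
# Uniform-cost search sketch: the fringe is a priority queue ordered by
# path cost g. Toy weighted graph; best_g prunes dominated entries.

import heapq

def uniform_cost_search(start, goal, edges):
    frontier = [(0, start, [start])]              # (g, state, path)
    best_g = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)   # least-cost node first
        if node == goal:                          # goal test at expansion
            return g, path
        for child, cost in edges.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g, child, path + [child]))
    return None

edges = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": [("G", 1)]}
```

With all step costs equal to 1 this pops nodes in the same order as breadth-first search, matching the remark above.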
79. Uniform-cost search
Complete? Yes
Time? # of nodes with g ≤ cost of optimal solution,
O(b^(1+(C*/ε))), where C* is the cost of the optimal solution and ε is some
small positive constant bounding the cost of every step.
Space? # of nodes with g ≤ cost of optimal solution,
O(b^(C*/ε))
Optimal? Yes
80. Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
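The LIFO fringe can be sketched directly with a Python list used as a stack. The toy graph and the along-the-path cycle check are assumptions for the example:

```python
# DFS sketch: the fringe is a LIFO stack, so the deepest (most recently
# generated) node is expanded first. Toy graph; cycles avoided only
# along the current path.

def depth_first_search(start, goal, successors):
    frontier = [[start]]                    # stack of paths
    while frontier:
        path = frontier.pop()               # deepest node first (LIFO)
        node = path[-1]
        if node == goal:
            return path
        # push children in reverse so the leftmost child is popped first
        for child in reversed(successors.get(node, [])):
            if child not in path:           # avoid loops on this path
                frontier.append(path + [child])
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["G"], "E": ["G"]}
```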
92. Properties of depth-first search
Complete? No: fails in infinite-depth spaces and in spaces with loops
Modify to avoid repeated states along the path
⇒ complete in finite spaces
Time? O(b^m): terrible if m (maximum depth) is much larger than d (depth of
the shallowest solution),
but if solutions are dense, may be much faster than breadth-first
Space? O(bm), i.e., linear space!
Optimal? No
emnetzewdu2121@gmail.com
Introduction to AI 92
93. Depth-limited search
= depth-first search with depth limit l,
i.e., nodes at depth l have no successors
Recursive implementation:
emnetzewdu2121@gmail.com
Introduction to AI 93
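The recursive implementation mentioned above can be sketched as follows. This is a minimal illustration, not the course's reference code: `successors` and the toy `tree` are assumed helpers, and the string 'cutoff' distinguishes hitting the depth limit from genuine failure.

```python
def depth_limited_search(state, goal, successors, limit):
    """DFS that treats nodes at depth `limit` as having no successors.

    Returns a path to the goal, 'cutoff' if the limit was hit, or None.
    """
    if state == goal:
        return [state]
    if limit == 0:
        return 'cutoff'                      # ran out of depth here
    cutoff = False
    for child in successors(state):
        result = depth_limited_search(child, goal, successors, limit - 1)
        if result == 'cutoff':
            cutoff = True
        elif result is not None:
            return [state] + result          # prepend current node to path
    return 'cutoff' if cutoff else None

# Tiny tree: A -> B, C ; B -> D
tree = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
succ = lambda s: tree[s]
print(depth_limited_search('A', 'D', succ, 1))  # 'cutoff' (D is at depth 2)
print(depth_limited_search('A', 'D', succ, 2))  # ['A', 'B', 'D']
```

Iterative deepening simply calls this in a loop with limit = 0, 1, 2, ... until the result is neither None nor 'cutoff'.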
102. Repeated states
Do not return to the state you just
came from.
Do not create paths with cycles in
them.
Do not generate any state that was
ever generated before
(implemented efficiently by hashing the set of generated states)
emnetzewdu2121@gmail.com
Introduction to AI 102
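The third policy (never generate a state that was ever generated before) is typically implemented by hashing every generated state. A minimal sketch using breadth-first graph search; the `neighbors` callback and the toy cyclic graph are illustrative assumptions:

```python
from collections import deque

def breadth_first_graph_search(start, goal, neighbors):
    """BFS that hashes every generated state so no state is generated twice."""
    visited = {start}                 # hash table of generated states
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in visited:    # skip anything generated before
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

# A graph with a cycle: without the visited set this would loop forever.
g = {'A': ['B'], 'B': ['A', 'C'], 'C': []}
print(breadth_first_graph_search('A', 'C', lambda s: g[s]))  # ['A', 'B', 'C']
```

The `set` membership test costs O(1) on average, which is why hashing makes this strongest policy affordable.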
104. Summary
Problem formulation usually requires abstracting away real-
world details to define a state space that can feasibly be
explored
Variety of uninformed search strategies
Breadth First Search
Depth First Search
Uniform Cost Search (a special case of Best-First Search)
Depth Limited Search
Iterative Deepening
Bidirectional Search
emnetzewdu2121@gmail.com
Introduction to AI 104
106. Introduction
The objective of research into
intelligent machines is to produce
systems which can reason with
available knowledge and so behave
intelligently.
One of the major issues, then, is how to
incorporate knowledge into these
systems.
107. The Problem of Knowledge
Representation
How is the abstract concept of
knowledge reduced into forms which
can be written into a computer's
memory?
This is called the problem of Knowledge
Representation.
108. Fields of Knowledge
The concept of Knowledge is central to
a number of fields of established
academic study, including Philosophy,
Psychology, Logic, and Education.
Even in Mathematics and Physics people
like Isaac Newton and Leibniz reflected
that since physics has its foundations in
a mathematical formalism, the laws of
all nature should be similarly described.
111. What is a Logic?
• When most people say ‘logic’, they mean either propositional logic or first-
order predicate logic.
• However, the precise definition is quite broad, and literally hundreds of logics
have been studied by philosophers, computer scientists and mathematicians.
• Formal Logic is a classical approach of representing Knowledge.
• It was developed by philosophers and mathematicians as a calculus of the
process of making inferences from facts.
• The simplest logic formalism is that of Propositional Calculus, which is
effectively an equivalent form of Boolean Logic, the basis of Computing.
• We can make Statements or Propositions, which can either be atomic (i.e.
they can stand alone) or composite, i.e. sentences constructed from
atomic statements joined by logical connectives
• Any ‘formal system’ can be considered a logic if it has:
– a well-defined syntax;
– a well-defined semantics; and
– a well-defined proof-theory.
112. •The syntax of a logic defines the syntactically acceptable
objects of the language, which are properly called well-formed
formulae (wff). (We shall just call them formulae.)
•The semantics of a logic associate each formula with a
meaning.
•The proof theory is concerned with manipulating formulae
according to certain rules.
Introduction to AI
113. •The simplest, and most abstract logic we can study is called propositional
logic.
•Definition: A proposition is a statement that can be either true or false; it
must be one or the other, and it cannot be both.
•EXAMPLES. The following are propositions:
– the reactor is on;
– the wing-flaps are up;
– Abiy Ahmed is prime minister.
whereas the following are not:
– are you going out somewhere?
– 2+3
– Good Morning.
Introduction to AI
Propositional Logic
114. •It is possible to determine whether any given statement is a proposition by
prefixing it with:
It is true that . . .
and seeing whether the result makes grammatical sense.
•We now define atomic propositions. Intuitively, these are the set of smallest
propositions.
•Definition: An atomic proposition is one whose truth or falsity does not
depend on the truth or falsity of any other proposition.
•So all the above propositions are atomic.
Introduction to AI
115. •Now, rather than write out propositions in full, we will
abbreviate them by using propositional variables.
•It is standard practice to use the lower-case Latin letters
p, q, r, . . . to stand for propositions.
•If we do this, we must define what we mean by writing
something like:
Let p be Abiy Ahmed is Prime Minister.
•Another alternative is to write something like reactor is on, so
that the interpretation of the propositional variable becomes
obvious.
Introduction to AI
116. The Connectives
•Now, the study of atomic propositions is pretty boring. We therefore
now introduce a number of connectives which will allow us to build
up complex propositions.
•The connectives we introduce are:
∧ and (& or .)
∨ or (| or +)
¬ not (∼)
⇒ implies (⊃ or →)
⇔ iff
•(Some books use other notations; these are given in parentheses.)
Introduction to AI
117. And
•Any two propositions can be combined to form a third
proposition called the conjunction of the original
propositions.
•Definition: If p and q are arbitrary propositions, then the
conjunction of p and q is written
p ∧ q
and will be true iff both p and q are true.
Introduction to AI
118. •We can summarise the operation of ∧ in a truth table. The idea of a
truth table for some formula is that it describes the behaviour of the
formula under all possible interpretations of the primitive
propositions that are included in the formula.
•If there are n different atomic propositions in some formula, then
there are 2^n different lines in the truth table for that formula. (This is
because each proposition can take one of 2 values — true or false.)
•Let us write T for truth, and F for falsity. Then the truth table for p ∧ q is:
p  q  p ∧ q
F  F    F
F  T    F
T  F    F
T  T    T
Introduction to AI
119. Or
• Any two propositions can be combined by the word ‘or’ to form a third
proposition called the disjunction of the originals.
• Definition: If p and q are arbitrary propositions, then the disjunction of p and q is
written
p ∨ q
and will be true iff either p is true, or q is true, or both p and q are true.
The operation of ∨ is summarised in the following truth table:
p  q  p ∨ q
F  F    F
F  T    T
T  F    T
T  T    T
Introduction to AI
120. If. . . Then. . .
•Many statements, particularly in mathematics, are of the form:
if p is true then q is true.
Another way of saying the same thing is to write:
p implies q.
•In propositional logic, we have a connective that combines two
propositions into a new proposition called the conditional, or
implication of the originals, that attempts to capture the sense
of such a statement.
Introduction to AI
121. •Definition: If p and q are arbitrary propositions, then the
conditional of p and q is written
p ⇒ q
and will be true iff either p is false or q is true.
•The truth table for ⇒ is:
p  q  p ⇒ q
F  F    T
F  T    T
T  F    F
T  T    T
Introduction to AI
122. •The ⇒ operator is the hardest to understand of the operators we
have considered so far, and yet it is extremely important.
•If you find it difficult to understand, just remember that the p ⇒
q means ‘if p is true, then q is true’.
If p is false, then we don’t care about q, and by default, make p
⇒ q evaluate to T in this case.
•Terminology: if φ is the formula p ⇒ q, then p is the antecedent
of φ and q is the consequent.
Introduction to AI
123. Iff
•Another common form of statement in logic is:
p is true if, and only if, q is true.
•The sense of such statements is captured using the biconditional operator.
•Definition: If p and q are arbitrary propositions, then the biconditional of p and
q is written:
p ⇔ q
and will be true iff either:
1. p and q are both true; or
2. p and q are both false.
•The truth table for ⇔ is:
p  q  p ⇔ q
F  F    T
F  T    F
T  F    F
T  T    T
•If p ⇔ q is true, then p and q are said
to be logically equivalent. They will be
true under exactly the same circumstances.
124. Not
•All of the connectives we have considered so far have been binary: they have taken
two arguments.
•The final connective we consider here is
unary. It only takes one argument.
•Any proposition can be prefixed by the word ‘not’ to form a second proposition
called the negation of the original.
•Definition: If p is an arbitrary proposition then the negation of p is written
¬p
and will be true iff p is false.
•Truth table for ¬:
p ¬p
F T
T F
Introduction to AI
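Once the connectives are defined, truth tables like those above can be generated mechanically. A small Python sketch; the `truth_table` helper and its output format are illustrative, not from the slides:

```python
from itertools import product

def truth_table(variables, formula):
    """Print one row per interpretation; return the (valuation, result) rows.

    `formula` is a boolean function of len(variables) arguments.
    """
    rows = []
    fmt = lambda v: 'T' if v else 'F'
    print(' '.join(variables) + '  result')
    for values in product([False, True], repeat=len(variables)):
        result = formula(*values)             # evaluate under this valuation
        rows.append((values, result))
        print(' '.join(fmt(v) for v in values) + '    ' + fmt(result))
    return rows

# The conditional p ⇒ q is definable as (not p) or q
truth_table(['p', 'q'], lambda p, q: (not p) or q)
```

With n variables the loop visits exactly the 2^n lines mentioned earlier, in the same F-before-T order as the tables above.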
125. Comments
•We can nest complex formulae as deeply as we want.
•We can use parentheses i.e., ),(, to
disambiguate formulae.
•EXAMPLES. If p, q, r, s and t are atomic propositions, then all of
the following are formulae:
p ∧ q ⇒ r p ∧ (q ⇒ r)
(p ∧ (q ⇒ r)) ∨ s
((p ∧ (q ⇒ r)) ∨ s) ∧ t
whereas none of the following is:
p ∧
p ∧ q)
p¬
Introduction to AI
126. • When we consider formulae in terms of interpretations, it turns out that some have
interesting properties.
• Definition:
1. A formula is a tautology iff it is true under every valuation;
2. A formula is consistent iff it is true under at least one valuation;
3. A formula is inconsistent/Contradiction iff it is not made true under any
valuation.
• Now, each line in the truth table of a formula corresponds to a valuation.
• So, we can use truth tables to determine whether or not formulae are tautologies,
consistent, or inconsistent.
Tautologies , Contradiction & Consistency
Introduction to AI
129. Tautology and Contradiction
A tautology is a statement that is always true
p ∨ ¬p will always be true (Negation Law)
A contradiction is a statement that is always
false
p ∧ ¬p will always be false (Negation Law)
p  p ∨ ¬p  p ∧ ¬p
T    T       F
F    T       F
130. Logical Equivalence
A logical equivalence means that the two
sides always have the same truth values
Symbol is ≡ or ⇔ (we’ll use ≡)
131. Logical Equivalences of And
p ∧ T ≡ p    Identity law
p ∧ F ≡ F    Domination law
p  T  p ∧ T
T  T    T
F  T    F
p  F  p ∧ F
T  F    F
F  F    F
132. Logical Equivalences of And
p ∧ p ≡ p    Idempotent law
p ∧ q ≡ q ∧ p    Commutative law
p  p ∧ p
T    T
F    F
p  q  p ∧ q  q ∧ p
T  T    T      T
T  F    F      F
F  T    F      F
F  F    F      F
133. Logical Equivalences of And
(p ∧ q) ∧ r ≡ p ∧ (q ∧ r)    Associative law
p  q  r  p∧q  (p∧q)∧r  q∧r  p∧(q∧r)
T  T  T   T      T      T      T
T  T  F   T      F      F      F
T  F  T   F      F      F      F
T  F  F   F      F      F      F
F  T  T   F      F      T      F
F  T  F   F      F      F      F
F  F  T   F      F      F      F
F  F  F   F      F      F      F
134. Logical Equivalences of Or
p ∨ F ≡ p    Identity law
p ∨ T ≡ T    Domination law
p ∨ p ≡ p    Idempotent law
p ∨ q ≡ q ∨ p    Commutative law
(p ∨ q) ∨ r ≡ p ∨ (q ∨ r)    Associative law
135. Corollary of the Associative Law
(p ∨ q) ∨ r ≡ p ∨ q ∨ r
(p ∧ q) ∧ r ≡ p ∧ q ∧ r
Similar to (3+4)+5 = 3+4+5
Only works if ALL the operators are the same!
136. Logical Equivalences of Not
¬(¬p) ≡ p    Double negation law
p ∨ ¬p ≡ T    Negation law
p ∧ ¬p ≡ F    Negation law
137. DeMorgan’s Law
Probably the most important logical equivalence
To negate p ∧ q (or p ∨ q), you “flip” the sign, and
negate BOTH p and q
Thus, ¬(p ∧ q) ≡ ¬p ∨ ¬q
Thus, ¬(p ∨ q) ≡ ¬p ∧ ¬q
p  q  ¬p  ¬q  p∧q  ¬(p∧q)  ¬p∨¬q  p∨q  ¬(p∨q)  ¬p∧¬q
T  T   F   F   T     F       F     T     F       F
T  F   F   T   F     T       T     T     F       F
F  T   T   F   F     T       T     T     F       F
F  F   T   T   F     T       T     F     T       T
139. How to prove two propositions
are equivalent?
Two methods:
Using truth tables
Not good for long formulae
In this course, only allowed if specifically stated!
Using the logical equivalences
The preferred method
Example:
Show that: (p → r) ∨ (q → r) ≡ (p ∧ q) → r
140. Using Truth Tables
Show that (p → r) ∨ (q → r) ≡ (p ∧ q) → r: the columns for
(p→r) ∨ (q→r) and (p∧q) → r agree on every row.
p  q  r  p→r  q→r  (p→r)∨(q→r)  p∧q  (p∧q)→r
T  T  T   T    T        T        T      T
T  T  F   F    F        F        T      F
T  F  T   T    T        T        F      T
T  F  F   F    T        T        F      T
F  T  T   T    T        T        F      T
F  T  F   T    F        T        F      T
F  F  T   T    T        T        F      T
F  F  F   T    T        T        F      T
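A truth-table equivalence proof amounts to checking that two Boolean functions agree on every valuation, which is easy to automate. A sketch; the helper names (`equivalent`, `implies`) are illustrative:

```python
from itertools import product

def equivalent(f, g, n):
    """True iff boolean functions f and g of n arguments agree on every valuation."""
    return all(f(*vs) == g(*vs)
               for vs in product([False, True], repeat=n))

implies = lambda a, b: (not a) or b   # definition of implication

# (p → r) ∨ (q → r)  is equivalent to  (p ∧ q) → r
lhs = lambda p, q, r: implies(p, r) or implies(q, r)
rhs = lambda p, q, r: implies(p and q, r)
print(equivalent(lhs, rhs, 3))  # True

# De Morgan: ¬(p ∧ q) ≡ ¬p ∨ ¬q
print(equivalent(lambda p, q: not (p and q),
                 lambda p, q: (not p) or (not q), 2))  # True
```

This is exactly the truth-table method: `product` enumerates all 2^n valuations and `all` checks that no row distinguishes the two formulae.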
141. Using Logical Equivalences
(p ∧ q) → r
≡ ¬(p ∧ q) ∨ r            Definition of implication
≡ (¬p ∨ ¬q) ∨ r           DeMorgan’s Law
≡ ¬p ∨ ¬q ∨ r             Associativity of Or
≡ ¬p ∨ ¬q ∨ r ∨ r         Idempotent law (r ≡ r ∨ r)
≡ (¬p ∨ r) ∨ (¬q ∨ r)     Re-arranging
≡ (p → r) ∨ (q → r)       Definition of implication
144. Pros and cons of propositional
logic
Propositional logic is declarative
Propositional logic allows partial/disjunctive/negated information
(unlike most data structures and databases)
Propositional logic is compositional:
Meaning in propositional logic is context-independent
(unlike natural language, where meaning depends on context)
Propositional logic has very limited expressive power
146. First-order logic
Whereas propositional logic assumes the world contains facts,
first-order logic (like natural language) assumes the world contains
Objects, which are things with individual identities
Properties of objects that distinguish them from other objects
Relations that hold among sets of objects
Functions, which are a subset of relations where there is only
one “value” for any given “input”
Examples:
Objects: Students, lectures, companies, cars ...
Relations: Brother-of, bigger-than, outside, part-of, has-color,
occurs-after, owns, visits, precedes, ...
Properties: blue, oval, even, large, ...
Functions: father-of, best-friend, second-half, one-more-than ...
147. User provides
Constant symbols, which represent individuals in the
world
Mary
3
Green
Function symbols, which map individuals to individuals
father-of(Mary) = John
color-of(Sky) = Blue
Predicate symbols, which map individuals to truth
values
greater(5,3)
green(Grass)
color(Grass, Green)
148. FOL Provides
Variable symbols
E.g., x, y, foo
Connectives
Same as in PL: not (¬), and (∧), or (∨), implies (⇒), if
and only if (biconditional ⇔)
Quantifiers
Universal: ∀x or (Ax)
Existential: ∃x or (Ex)
149. Sentences are built from terms and atoms
A term (denoting a real-world individual) is a
constant symbol, a variable symbol, or an
n-place function of n terms.
x and f(x1, ..., xn) are terms, where each xi is a term.
A term with no variables is a ground term
An atomic sentence (which has value true or
false) is an n-place predicate of n terms
A complex sentence is formed from atomic
sentences connected by the logical
connectives:
¬P, P∧Q, P∨Q, P⇒Q, P⇔Q where P and Q are
sentences
150. Quantifiers
Universal quantification
(∀x)P(x) means that P holds for all values of x in the
domain associated with that variable
E.g., (∀x) dolphin(x) ⇒ mammal(x)
Existential quantification
(∃x)P(x) means that P holds for some value of x in
the domain associated with that variable
E.g., (∃x) mammal(x) ∧ lays-eggs(x)
Permits one to make a statement about some object
without naming it
151. Quantifiers
Universal quantifiers are often used with
“implies” to form “rules”:
(∀x) student(x) ⇒ smart(x) means “All students are
smart”
Universal quantification is rarely used to make
blanket statements about every individual in the
world:
(∀x) student(x) ∧ smart(x) means “Everyone in the world
is a student and is smart”
Existential quantifiers are usually used with
“and” to specify a list of properties about an individual
152. Quantifier Scope
Switching the order of universal quantifiers
does not change the meaning:
(∀x)(∀y)P(x,y) ↔ (∀y)(∀x)P(x,y)
Similarly, you can switch the order of
existential quantifiers:
(∃x)(∃y)P(x,y) ↔ (∃y)(∃x)P(x,y)
Switching the order of universals and
existentials does change meaning:
Everyone likes someone: (∀x)(∃y) likes(x,y)
Someone is liked by everyone: (∃y)(∀x) likes(x,y)
153. Connections between All and Exists
We can relate sentences involving ∀ and ∃
using De Morgan’s laws:
(∀x) ¬P(x) ↔ ¬(∃x) P(x)
¬(∀x) P(x) ↔ (∃x) ¬P(x)
(∀x) P(x) ↔ ¬(∃x) ¬P(x)
(∃x) P(x) ↔ ¬(∀x) ¬P(x)
154. Translating English to FOL
Every gardener likes the sun.
(∀x) gardener(x) ⇒ likes(x, Sun)
You can fool some of the people all of the time.
(∃x)(∀t) person(x) ∧ time(t) ⇒ can-fool(x,t)
You can fool all of the people some of the time.
(∀x)(∃t) (person(x) ⇒ time(t) ∧ can-fool(x,t))
(∀x) (person(x) ⇒ (∃t) (time(t) ∧ can-fool(x,t)))    [equivalent]
All purple mushrooms are poisonous.
(∀x) (mushroom(x) ∧ purple(x)) ⇒ poisonous(x)
No purple mushroom is poisonous.
¬(∃x) purple(x) ∧ mushroom(x) ∧ poisonous(x)
(∀x) (mushroom(x) ∧ purple(x)) ⇒ ¬poisonous(x)    [equivalent]
There are exactly two purple mushrooms.
(∃x)(∃y) mushroom(x) ∧ purple(x) ∧ mushroom(y) ∧ purple(y) ∧ ¬(x=y) ∧
(∀z) (mushroom(z) ∧ purple(z)) ⇒ ((x=z) ∨ (y=z))
Clinton is not tall.
¬tall(Clinton)
X is above Y iff X is directly on top of Y or there is a pile of one or more
other objects directly on top of one another starting with X and ending
with Y.
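Over a finite domain, the quantifiers have a direct computational reading: ∀ becomes `all()` and ∃ becomes `any()`. A sketch with a made-up two-person domain; all names and relations here are illustrative, not from the slides:

```python
# Finite domain: two people; relations as Python sets.
people = ['alice', 'bob']
gardener = {'alice'}
likes_sun = {'alice'}

# "Every gardener likes the sun":  (∀x) gardener(x) ⇒ likes(x, Sun)
every_gardener_likes_sun = all(
    (x not in gardener) or (x in likes_sun)   # implication as ¬p ∨ q
    for x in people
)
print(every_gardener_likes_sun)  # True: the only gardener, alice, likes the sun

# "Someone is liked by everyone":  (∃y)(∀x) likes(x, y)
likes = {('alice', 'bob'), ('bob', 'bob')}
someone_liked_by_all = any(
    all((x, y) in likes for x in people)      # inner ∀ over x
    for y in people                           # outer ∃ over y
)
print(someone_liked_by_all)  # True: everyone likes bob
```

Note how quantifier order shows up as loop nesting: swapping the `any` and `all` would instead express "everyone likes someone".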
156. emnetzewdu2121@gmail.com
Introduction to AI 156
Logical Equivalence
You might have noticed that the final column in the truth table
for ¬P∨Q is identical to the final column in the truth table
for P→Q:
P Q P→Q ¬P∨Q
T T  T   T
T F  F   F
F T  T   T
F F  T   T
This says that no matter what P and Q are, the statements ¬P∨Q and P→Q are either both
true or both false. We therefore say these statements are logically equivalent.
We can also prove this by showing via a truth table that (¬P∨Q) ↔ (P→Q) is a tautology.
159. “Thinking Rationally”
Computational models of human “thought”
processes
Computational models of human behavior
Computational systems that “think” rationally
Computational systems that behave rationally
emnetzewdu2121@gmail.com
Introduction to AI 159
160. Logical Agents
Reflex agents find their way from Arad to Bucharest
by dumb luck
Chess program calculates legal moves of its king, but
doesn’t know that no piece can be on 2 different
squares at the same time
Logical(Knowledge-Based) agents combine general
knowledge with current percepts to infer hidden
aspects of current state prior to selecting actions
Crucial in partially observable environments
emnetzewdu2121@gmail.com
Introduction to AI 160
161. Roadmap
Knowledge-based agents
Wumpus world
Logic in general
Propositional and first-order logic
Inference, validity, equivalence and satisfiability
Reasoning patterns
Resolution
Forward/backward chaining
emnetzewdu2121@gmail.com
Introduction to AI 161
163. Knowledge Base
Knowledge Base: set of sentences represented in a
knowledge representation language and represents
assertions about the world.
Inference rule: when one ASKs questions of the KB,
the answer should follow from what has been
TELLed to the KB previously.
emnetzewdu2121@gmail.com
Introduction to AI 163
165. Abilities KB agent
Agent must be able to:
Represent states and actions,
Incorporate new percepts
Update internal representation of the world
Deduce hidden properties of the world
Deduce appropriate actions
emnetzewdu2121@gmail.com
Introduction to AI 165
166. Description level
The KB agent is similar to agents with
internal state
Agents can be described at different levels
Knowledge level
What they know, regardless of the actual
implementation. (Declarative description)
Implementation level
Data structures in KB and algorithms that manipulate
them e.g propositional logic and resolution.
emnetzewdu2121@gmail.com
Introduction to AI 166
167. Types of Knowledge
Procedural, e.g.: functions
Such knowledge can only be used in
one way -- by executing it
Declarative, e.g.: constraints and rules
It can be used to perform many
different sorts of inferences
emnetzewdu2121@gmail.com
Introduction to AI 167
168. The Wumpus World
The Wumpus computer game
The agent explores a cave consisting of rooms
connected by passageways.
Lurking somewhere in the cave is the Wumpus, a
beast that eats any agent that enters its room.
Some rooms contain bottomless pits that trap any
agent that wanders into the room.
Occasionally, there is a heap of gold in a room.
The goal is to collect the gold and exit the world
without being eaten
emnetzewdu2121@gmail.com
Introduction to AI 168
169. Wumpus PEAS description
Performance measure:
gold +1000, death -1000,
-1 per step, -10 use arrow
Environment:
Squares adjacent to the wumpus are smelly (stench)
Squares adjacent to pit are breezy
Glitter iff gold is in the same square
Bump iff move into a wall
Woeful scream iff the wumpus is killed
Shooting kills wumpus if you are facing it
Shooting uses up the only arrow
Grabbing picks up gold if in same square
Releasing drops the gold in same square
Sensors: Stench, Breeze, Glitter, Bump, Scream
Actuators: Left turn, Right turn, Forward, Grab, Release, Shoot
emnetzewdu2121@gmail.com
Introduction to AI 169
170. A typical Wumpus world
The agent always
starts in [1,1].
The task of the
agent is to find the
gold, return to the
field [1,1] and climb
out of the cave.
emnetzewdu2121@gmail.com
Introduction to AI 170
172. Wumpus World Characterization
Observable?
No, only local perception
Deterministic?
Yes, outcome exactly specified
Episodic?
No, sequential at the level of actions
Static?
Yes, Wumpus and pits do not move
Discrete?
Yes
Single-agent?
Yes, Wumpus is essentially a natural feature.
emnetzewdu2121@gmail.com
Introduction to AI 172
173. The Wumpus agent’s first step
[1,1] The KB initially contains the rules of the environment. The first percept is
[none, none, none, none, none]; move to a safe cell, e.g. [2,1]
[2,1] Breeze, which indicates that there is a pit in [2,2] or [3,1]; return to [1,1] to try the
next safe cell emnetzewdu2121@gmail.com
Introduction to AI 173
175. Next….
[1,2] Stench in cell which means that wumpus is in [1,3] or [2,2]
YET … not in [1,1]
YET … not in [2,2] or stench would have been detected in [2,1]
THUS … wumpus is in [1,3]
THUS [2,2] is safe because of lack of breeze in [1,2]
THUS pit in [3,1]
move to next safe cell [2,2] emnetzewdu2121@gmail.com
Introduction to AI 175
176. Then…
[2,2] move to [2,3]
[2,3] detect glitter , smell, breeze
THUS pick up gold
THUS pit in [3,3] or [2,4]
emnetzewdu2121@gmail.com
Introduction to AI 176
177. What is a logic?
A formal language
Syntax – what expressions are legal (well-formed)
Semantics – what legal expressions mean
in logic the truth of each sentence with respect to each possible
world.
E.g. the language of arithmetic
x + 2 ≥ y is a sentence; x2 + y > is not a sentence
x + 2 ≥ y is true in a world where x = 7 and y = 1
x + 2 ≥ y is false in a world where x = 0 and y = 6
emnetzewdu2121@gmail.com
Introduction to AI 177
179. Entailment
One thing follows from another
KB ╞ α
KB entails sentence α if and only if α is true
in all worlds where KB is true.
E.g. x+y=4 entails 4=x+y
Entailment is a relationship between
sentences that is based on semantics.
emnetzewdu2121@gmail.com
Introduction to AI 179
180. Models
Models are formal definitions of possible
states of the world
We say m is a model of a sentence α if α is
true in m
M(α) is the set of all models of α
Then KB ╞ α if and only if M(KB) ⊆ M(α)
emnetzewdu2121@gmail.com
Introduction to AI 180
182. Wumpus models
KB = wumpus-world rules + observations
emnetzewdu2121@gmail.com
Introduction to AI 182
183. Wumpus models
KB = wumpus-world rules + observations
α1 = "[1,2] is safe", KB ╞ α1, proved by model checking
emnetzewdu2121@gmail.com
Introduction to AI 183
184. Wumpus models
KB = wumpus-world rules + observations
emnetzewdu2121@gmail.com
Introduction to AI 184
185. Wumpus models
KB = wumpus-world rules + observations
α2 = "[2,2] is safe", KB ╞ α2
emnetzewdu2121@gmail.com
Introduction to AI 185
186. Inference
KB ├i α = sentence α can be derived from KB by
procedure i
Soundness: i is sound if whenever KB ├i α, it is also true
that KB╞ α
Completeness: i is complete if whenever KB╞ α, it is also
true that KB ├i α
Preview: we will define a logic which is expressive
enough to say almost anything of interest, and for which
there exists a sound and complete inference procedure.
That is, the procedure will answer any question whose
answer follows from what is known by the KB.
(i is an algorithm that derives α from KB )
emnetzewdu2121@gmail.com
Introduction to AI 186
187. Propositional logic: Syntax
Propositional logic is the simplest logic – illustrates basic
ideas
The proposition symbols P1, P2 etc. are sentences
If S is a sentence, ¬S is a sentence (negation)
If S1 and S2 are sentences, S1 ∧ S2 is a sentence (conjunction)
If S1 and S2 are sentences, S1 ∨ S2 is a sentence (disjunction)
If S1 and S2 are sentences, S1 ⇒ S2 is a sentence (implication)
If S1 and S2 are sentences, S1 ⇔ S2 is a sentence (biconditional)
emnetzewdu2121@gmail.com
Introduction to AI 187
188. Propositional logic: Semantics
Each model specifies true/false for each proposition symbol
E.g. P1,2 = false, P2,2 = true, P3,1 = false
With these symbols, 8 possible models can be enumerated automatically.
Rules for evaluating truth with respect to a model m:
¬S is true iff S is false    (Negation)
S1 ∧ S2 is true iff S1 is true and S2 is true    (Conjunction)
S1 ∨ S2 is true iff S1 is true or S2 is true    (Disjunction)
S1 ⇒ S2 is true iff S1 is false or S2 is true    (Implication)
i.e., it is false iff S1 is true and S2 is false
S1 ⇔ S2 is true iff S1 ⇒ S2 is true and S2 ⇒ S1 is true    (Biconditional)
A simple recursive process evaluates an arbitrary sentence, e.g.,
¬P1,2 ∧ (P2,2 ∨ P3,1) = true ∧ (true ∨ false) = true ∧ true = true
emnetzewdu2121@gmail.com
Introduction to AI 188
189. Truth tables for connectives
emnetzewdu2121@gmail.com
Introduction to AI 189
190. Wumpus world sentences
Let Pi,j be true if there is a pit in [i, j].
Let Bi,j be true if there is a breeze in [i, j].
R1: ¬P1,1
R4: ¬B1,1
R5: B2,1
"Pits cause breezes in adjacent squares"
R2: B1,1 ⇔ (P1,2 ∨ P2,1)
R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
(R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5) is the state of the Wumpus World
emnetzewdu2121@gmail.com
Introduction to AI 190
191. Truth tables for inference
emnetzewdu2121@gmail.com
Introduction to AI 191
192. Order of Precedence
Order of precedence (highest to lowest): ¬, ∧, ∨, ⇒, ⇔
Example:
¬A ∧ B ∨ C is equivalent to ((¬A) ∧ B) ∨ C
emnetzewdu2121@gmail.com
Introduction to AI 192
193. Logical equivalence
Two sentences are logically equivalent iff true in
same models: α ≡ ß iff α╞ β and β╞ α
emnetzewdu2121@gmail.com
Introduction to AI 193
194. Validity & Satisfiability
A sentence is valid if it is true in all models,
e.g., True, A ∨ ¬A, A ⇒ A, (A ∧ (A ⇒ B)) ⇒ B
Tautologies are necessarily true statements
Validity is connected to inference via the Deduction Theorem:
KB ╞ α if and only if (KB ⇒ α) is valid
A sentence is satisfiable if it is true in some model
A sentence is unsatisfiable if it is true in no models
e.g., A ∧ ¬A
Satisfiability is connected to inference via the following:
KB ╞ α if and only if (KB ∧ ¬α) is unsatisfiable
emnetzewdu2121@gmail.com
Introduction to AI 194
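Entailment by model checking, as used in the Deduction Theorem above, enumerates every model over the symbols and verifies that α holds wherever KB holds. A sketch; this mirrors the idea of the textbook truth-table entailment procedure but is an illustrative simplification, with sentences encoded as Python functions of a model:

```python
from itertools import product

def tt_entails(kb, alpha, symbols):
    """KB ╞ α  iff  α is true in every model in which KB is true.

    kb and alpha are functions from a model (dict symbol -> bool) to bool.
    """
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False                 # found a model of KB where α fails
    return True

# KB = (A ∨ B) ∧ ¬A; does KB entail B? Does it entail A?
kb = lambda m: (m['A'] or m['B']) and not m['A']
print(tt_entails(kb, lambda m: m['B'], ['A', 'B']))  # True
print(tt_entails(kb, lambda m: m['A'], ['A', 'B']))  # False
```

The cost is 2^n models for n symbols, which is why the proof-search methods on the following slides matter in practice.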
195. Rules of Inference
• Valid Rules of Inference:
Modus Ponens
And-Elimination
And-Introduction
Or-Introduction
Double Negation
Unit Resolution
Resolution
emnetzewdu2121@gmail.com
Introduction to AI 195
197. Rules of Inference (continued)
And-Introduction: from α1, α2, …, αn infer α1 ∧ α2 ∧ … ∧ αn
Or-Introduction: from αi infer α1 ∨ α2 ∨ … ∨ αi ∨ … ∨ αn
Double Negation: from ¬¬α infer α
Unit Resolution (special case of resolution): from α ∨ β and ¬β infer α
emnetzewdu2121@gmail.com
Introduction to AI 197
198. Wumpus World KB
Proposition Symbols for each i,j:
Let Pi,j be true if there is a pit in square i,j
Let Bi,j be true if there is a breeze in square i,j
Sentences in KB
“There is no pit in square 1,1”
R1: ¬P1,1
“A square is breezy iff there is a pit in a neighboring square”
R2: B1,1 ⇔ (P1,2 ∨ P2,1)
R3: B2,1 ⇔ (P1,1 ∨ P3,1 ∨ P2,2)
“Square 1,1 has no breeze”, “Square 2,1 has a breeze”
R4: ¬B1,1
R5: B2,1
emnetzewdu2121@gmail.com
Introduction to AI 198
199. Inference in Wumpus World
Apply biconditional elimination to R2:
R6: (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
Apply and-elimination to R6:
R7: ((P1,2 ∨ P2,1) ⇒ B1,1)
Contrapositive of R7:
R8: (¬B1,1 ⇒ ¬(P1,2 ∨ P2,1))
Modus Ponens with R8 and R4 (¬B1,1):
R9: ¬(P1,2 ∨ P2,1)
De Morgan:
R10: ¬P1,2 ∧ ¬P2,1
emnetzewdu2121@gmail.com
Introduction to AI 199
200. Searching for Proofs
Finding proofs is exactly like finding solutions
to search problems.
Can search forward (forward chaining) to
derive goal or search backward (backward
chaining) from the goal.
Searching for proofs is not more efficient than
enumerating models, but in many practical
cases, it’s more efficient because we can
ignore irrelevant propositions
emnetzewdu2121@gmail.com
Introduction to AI 200
201. Full Resolution Rule Revisited
The Full Resolution Rule is a generalization
of the unit resolution rule. For clauses of length two:
from l1 ∨ l2 and ¬l2 ∨ l3, infer l1 ∨ l3
emnetzewdu2121@gmail.com
Introduction to AI 201
202. Resolution Applied to Wumpus
World
R11: ¬B1,2
R12: B1,2 ⇔ (P1,1 ∨ P2,2 ∨ P1,3)
At some point we determine the absence
of a pit in square 2,2:
R13: ¬P2,2
Biconditional elimination applied to R3
followed by modus ponens with R5:
R15: P1,1 ∨ P3,1 ∨ P2,2
Resolve R15 and R13:
R16: P1,1 ∨ P3,1
Resolve R16 and R1:
R17: P3,1
emnetzewdu2121@gmail.com
Introduction to AI 202
203. Resolution: Complete
Inference Procedure
Any complete search algorithm,
applying only the resolution rule, can
derive any conclusion entailed by any
knowledge base in propositional logic.
emnetzewdu2121@gmail.com
Introduction to AI 203
204. Cont.
The resolution rule applies only to clauses (that is,
disjunctions of literals), so it would seem to be
relevant only to knowledge bases and queries
consisting of clauses.
How, then, can it lead to a complete inference
procedure for all of propositional logic?
The answer is that every sentence of propositional logic is
logically equivalent to a conjunction of clauses.
A sentence expressed as a conjunction of clauses is
said to be in conjunctive normal form or CNF
emnetzewdu2121@gmail.com
Introduction to AI 204
205. Conjunctive Normal Form
Conjunctive Normal Form is a conjunction
of clauses (a clause is a disjunction of literals).
Example:
(A ∨ B ∨ C) ∧ (B ∨ D) ∧ (¬A) ∧ (B ∨ C)
emnetzewdu2121@gmail.com
Introduction to AI 205
207. Resolution Algorithm
To show KB ╞ α, we show (KB ∧ ¬α) is
unsatisfiable.
This is a proof by contradiction.
First convert (KB ∧ ¬α) into CNF. Then apply the
resolution rule to the resulting clauses.
The process continues until:
there are no new clauses that can be added
(KB does not entail α), or
two clauses resolve to yield the empty clause,
in which case KB entails α.
emnetzewdu2121@gmail.com
Introduction to AI 207
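The loop described above can be sketched as follows. This is a simplified illustration, not the textbook PL-RESOLUTION pseudocode: clauses are frozensets of literal strings with `~` marking negation, and the example clauses are a CNF encoding of R2 ∧ R4 from the earlier Wumpus slides.

```python
from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve(c1, c2):
    """All resolvents of two clauses (clauses are frozensets of literals)."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return resolvents

def pl_resolution(clauses, negated_query):
    """Proof by contradiction: add the negated query and resolve until the
    empty clause appears (entailed) or no new clauses can be added (not)."""
    clauses = set(clauses) | set(negated_query)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True              # empty clause: contradiction
                new.add(r)
        if new <= clauses:
            return False                     # nothing new: no entailment
        clauses |= new

# CNF of R2 ∧ R4: (~B11|P12|P21), (~P12|B11), (~P21|B11), (~B11)
kb = [frozenset(c) for c in [{'~B11', 'P12', 'P21'}, {'~P12', 'B11'},
                             {'~P21', 'B11'}, {'~B11'}]]
# Query ~P12: add its negation P12 and look for the empty clause.
print(pl_resolution(kb, [frozenset({'P12'})]))  # True: KB entails ~P12
```

Resolving {~P12, B11} with {P12} yields {B11}, which resolves with {~B11} to the empty clause, so the KB entails ~P12, matching the derivation of R10 earlier.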
208. Simple Inference in Wumpus
World
KB = R2 ∧ R4 = (B1,1 ⇔ (P1,2 ∨ P2,1)) ∧ ¬B1,1
Prove ¬P1,2 by adding its negation P1,2
Convert KB ∧ P1,2 to CNF
PL-RESOLUTION algorithm
emnetzewdu2121@gmail.com
Introduction to AI 208
209. Horn clauses and Definite clauses
Horn Form
definite clause: a disjunction of literals of which exactly one is positive.
e.g. (¬L1,1 ∨ ¬B2,2 ∨ B1,1) is, but (¬B1,1 ∨ P1,2 ∨ P2,1) is not
Horn clause: a disjunction of literals of which at most one is positive.
e.g. (¬B2,2 ∨ B1,1) and (¬L1,1 ∨ ¬B2,2) are, (¬B1,1 ∨ P1,2 ∨ P2,1) is not
So all definite clauses are Horn clauses
KBs containing only definite clauses are interesting since
every definite clause can be written as an implication,
e.g. (¬L1,1 ∨ ¬B2,2 ∨ B1,1) ≡ (L1,1 ∧ B2,2) ⇒ B1,1
Note: In Horn form, the premise is called the body and the conclusion is
called the head. A sentence consisting of a single positive literal,
such as L1,1, is called a fact.
emnetzewdu2121@gmail.com
Introduction to AI 209
210. Inference with Horn clauses can be done through forward-chaining
and backward-chaining algorithms. Both are easily understandable
by humans, and this type of inference is the basis for logic programming.
Deciding entailment with Horn clauses can be done in time that is linear
in the size of the knowledge base — a pleasant surprise
210
emnetzewdu2121@gmail.com
Introduction to AI
211. Forward Chaining
Fire any rule whose premises are
satisfied in the KB.
Add its conclusion to the KB, and repeat
until the query is found.
emnetzewdu2121@gmail.com
Introduction to AI 211
212. Forward Chaining Example
P ⇒ Q
L ∧ M ⇒ P
B ∧ L ⇒ M
A ∧ P ⇒ L
A ∧ B ⇒ L
A
B
emnetzewdu2121@gmail.com
Introduction to AI 212
213. Forward Chaining Example
P Q
L M P
B L M
A P L
A B L
A
B
emnetzewdu2121@gmail.com
Introduction to AI 213
214. Forward Chaining Example
P Q
L M P
B L M
A P L
A B L
A
B
emnetzewdu2121@gmail.com
Introduction to AI 214
215. Forward Chaining Example
P Q
L M P
B L M
A P L
A B L
A
B
emnetzewdu2121@gmail.com
Introduction to AI 215
216. Forward Chaining Example
P Q
L M P
B L M
A P L
A B L
A
B
emnetzewdu2121@gmail.com
Introduction to AI 216
217. Forward Chaining Example
P Q
L M P
B L M
A P L
A B L
A
B
emnetzewdu2121@gmail.com
Introduction to AI 217
218. Forward Chaining Example
P Q
L M P
B L M
A P L
A B L
A
B
emnetzewdu2121@gmail.com
Introduction to AI 218
219. Forward Chaining Example
P Q
L M P
B L M
A P L
A B L
A
B
emnetzewdu2121@gmail.com
Introduction to AI 219
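Forward chaining on this knowledge base can be sketched in Python, using the standard count-of-unsatisfied-premises bookkeeping; the function name `forward_chain` and the data layout are illustrative:

```python
# Rules are (premises, conclusion) pairs; the agenda starts from the
# known facts and each fired rule's conclusion joins the agenda.

from collections import deque

def forward_chain(rules, facts, query):
    count = {i: len(p) for i, (p, _) in enumerate(rules)}  # unsatisfied premises
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (premises, conclusion) in enumerate(rules):
            if p in premises:
                count[i] -= 1
                if count[i] == 0:          # all premises satisfied: fire rule
                    agenda.append(conclusion)
    return False

rules = [({"P"}, "Q"), ({"L", "M"}, "P"), ({"B", "L"}, "M"),
         ({"A", "P"}, "L"), ({"A", "B"}, "L")]
print(forward_chain(rules, ["A", "B"], "Q"))   # True: A,B -> L -> M -> P -> Q
```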
220. Backward Chaining
Motivation: we need goal-directed reasoning in order to keep from
getting overwhelmed with irrelevant consequences.
Main idea: work backwards from the query q.
To prove q:
Check if q is known already
Otherwise, prove by backward chaining all premises of
some rule concluding q
221. Backward Chaining Example
P ⇒ Q
L ∧ M ⇒ P
B ∧ L ⇒ M
A ∧ P ⇒ L
A ∧ B ⇒ L
A
B
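Backward chaining on the same knowledge base, as a recursive sketch. The `visited` set is an assumption of this sketch (not stated on the slides): it guards against goal loops such as L needing P, which in turn needs L.

```python
# To prove q: either q is a known fact, or some rule concluding q
# has all of its premises provable by backward chaining.

def backward_chain(rules, facts, query, visited=None):
    if visited is None:
        visited = set()
    if query in facts:
        return True
    if query in visited:           # already pursuing this goal: avoid loops
        return False
    visited = visited | {query}
    for premises, conclusion in rules:
        if conclusion == query:
            if all(backward_chain(rules, facts, p, visited) for p in premises):
                return True
    return False

rules = [({"P"}, "Q"), ({"L", "M"}, "P"), ({"B", "L"}, "M"),
         ({"A", "P"}, "L"), ({"A", "B"}, "L")]
print(backward_chain(rules, {"A", "B"}, "Q"))   # True
```

Note that only the rules relevant to Q are ever examined, which is exactly the goal-directed behaviour motivated above.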
232. Forward Chaining vs. Backward Chaining
FC is data-driven: it may do lots of work that is irrelevant to the goal.
BC is goal-driven: appropriate for problem-solving.
234. Outline
Why FOL?
Syntax and semantics of FOL
Using FOL
Wumpus world in FOL
235. Pros and cons of propositional
logic
Propositional logic is declarative
Propositional logic allows partial/disjunctive/negated
information
(unlike most data structures and databases)
Meaning in propositional logic is context-independent
(unlike natural language, where meaning depends on
context)
Propositional logic has very limited expressive power
(unlike natural language)
236. First-order logic
Whereas propositional logic assumes the world
contains facts,
first-order logic (like natural language) assumes the
world contains
Objects: people, houses, numbers, colors, baseball
games, wars, …
Relations: red, round, prime, brother of, bigger
than, part of, comes between, …
Functions: father of, best friend, one more than,
plus, …
237. Syntax of FOL: Basic elements
Constants KingJohn, 2, NUS, ...
Predicates Brother, >, ...
Functions Sqrt, LeftLegOf, ...
Variables x, y, a, b, ...
Connectives ¬, ∧, ∨, ⇒, ⇔
Equality =
Quantifiers ∀, ∃
238. Atomic sentences
Atomic sentence = predicate(term1, ..., termn) or term1 = term2
Term = function(term1, ..., termn) or constant or variable
E.g., Brother(KingJohn, RichardTheLionheart)
>(Length(LeftLegOf(Richard)), Length(LeftLegOf(KingJohn)))
239. Complex sentences
Complex sentences are made from atomic sentences using connectives:
¬S, S1 ∧ S2, S1 ∨ S2, S1 ⇒ S2, S1 ⇔ S2
E.g. Sibling(KingJohn,Richard) ⇒ Sibling(Richard,KingJohn)
>(1,2) ∨ ≤(1,2)
>(1,2) ∧ ¬>(1,2)
240. Truth in first-order logic
Sentences are true with respect to a model and an
interpretation
Model contains objects (domain elements) and relations among
them
Interpretation specifies referents for
constant symbols → objects
predicate symbols → relations
function symbols → functional relations
An atomic sentence predicate(term1,...,termn) is true
iff the objects referred to by term1,...,termn
are in the relation referred to by predicate
241. Models for FOL: Example
242. Universal quantification
∀<variables> <sentence>
Everyone at NUS is smart:
∀x At(x,NUS) ⇒ Smart(x)
∀x P is true in a model m iff P is true with x being
each possible object in the model
Roughly speaking, equivalent to the conjunction of instantiations of P:
(At(KingJohn,NUS) ⇒ Smart(KingJohn))
∧ (At(Richard,NUS) ⇒ Smart(Richard))
∧ (At(NUS,NUS) ⇒ Smart(NUS))
∧ ...
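The "conjunction of instantiations" reading can be checked over a tiny hand-built model. The sets below are illustrative extensions for At(x,NUS) and Smart(x), not from the slides:

```python
# Model-check  Ax At(x,NUS) => Smart(x)  over a three-object domain.

domain = ["KingJohn", "Richard", "NUS"]
at_nus = {"KingJohn", "Richard"}   # extension of At(x, NUS)
smart  = {"KingJohn", "Richard"}   # extension of Smart(x)

def implies(p, q):
    # material implication: p => q
    return (not p) or q

# The universal sentence is the conjunction over the whole domain;
# for x = NUS the implication holds vacuously (NUS is not at NUS).
forall = all(implies(x in at_nus, x in smart) for x in domain)
print(forall)   # True: everyone at NUS is smart in this model
```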
243. A common mistake to avoid
Typically, ⇒ is the main connective with ∀
Common mistake: using ∧ as the main connective with ∀:
∀x At(x,NUS) ∧ Smart(x)
means “Everyone is at NUS and everyone is smart”
244. Existential quantification
∃<variables> <sentence>
Someone at NUS is smart:
∃x At(x,NUS) ∧ Smart(x)
∃x P is true in a model m iff P is true with x being
some possible object in the model
Roughly speaking, equivalent to the disjunction of instantiations of P:
(At(KingJohn,NUS) ∧ Smart(KingJohn))
∨ (At(Richard,NUS) ∧ Smart(Richard))
∨ (At(NUS,NUS) ∧ Smart(NUS))
∨ ...
245. Another common mistake to avoid
Typically, ∧ is the main connective with ∃
Common mistake: using ⇒ as the main connective with ∃:
∃x At(x,NUS) ⇒ Smart(x)
is true if there is anyone who is not at NUS!
246. Properties of quantifiers
∀x ∀y is the same as ∀y ∀x
∃x ∃y is the same as ∃y ∃x
∃x ∀y is not the same as ∀y ∃x
∃x ∀y Loves(x,y)
“There is a person who loves everyone in the world”
∀y ∃x Loves(x,y)
“Everyone in the world is loved by at least one person”
Quantifier duality: each can be expressed using the other
∀x Likes(x,IceCream) ≡ ¬∃x ¬Likes(x,IceCream)
∃x Likes(x,Banana) ≡ ¬∀x ¬Likes(x,Banana)
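The duality laws hold over any finite domain, which Python's `all`/`any` make easy to verify; the predicate below is arbitrary and purely illustrative:

```python
# Quantifier duality over a finite domain:
#   Ax P(x)  ==  ~Ex ~P(x)      and      Ex P(x)  ==  ~Ax ~P(x)

domain = range(10)

def likes_icecream(x):
    return x % 2 == 0   # an arbitrary predicate for the check

forall_dual = all(likes_icecream(x) for x in domain) == \
              (not any(not likes_icecream(x) for x in domain))
exists_dual = any(likes_icecream(x) for x in domain) == \
              (not all(not likes_icecream(x) for x in domain))
print(forall_dual, exists_dual)   # True True
```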
247. Equality
term1 = term2 is true under a given interpretation
if and only if term1 and term2 refer to the same object
E.g., definition of Sibling in terms of Parent:
∀x,y Sibling(x,y) ⇔ [¬(x = y) ∧ ∃m,f ¬(m = f)
∧ Parent(m,x) ∧ Parent(f,x) ∧ Parent(m,y) ∧ Parent(f,y)]
248. Using FOL
The kinship domain:
Brothers are siblings
∀x,y Brother(x,y) ⇒ Sibling(x,y)
One's mother is one's female parent
∀m,c Mother(c) = m ⇔ (Female(m) ∧ Parent(m,c))
“Sibling” is symmetric
∀x,y Sibling(x,y) ⇔ Sibling(y,x)
249. Interacting with FOL KBs
Suppose a wumpus-world agent is using an FOL KB
and perceives a smell and a breeze (but no glitter) at t = 5:
Tell(KB, Percept([Smell,Breeze,None], 5))
Ask(KB, ∃a BestAction(a,5))
I.e., does the KB entail some best action at t = 5?
Answer: Yes, {a/Shoot}
250. Knowledge base for the wumpus world
Perception
∀t,s,b Percept([s,b,Glitter],t) ⇒ Glitter(t)
Reflex
∀t Glitter(t) ⇒ BestAction(Grab,t)
251. Deducing hidden properties
∀x,y,a,b Adjacent([x,y],[a,b]) ⇔
[a,b] ∈ {[x+1,y], [x-1,y], [x,y+1], [x,y-1]}
Properties of squares:
∀s,t At(Agent,s,t) ∧ Breeze(t) ⇒ Breezy(s)
Squares are breezy near a pit:
Diagnostic rule: infer cause from effect
∀s Breezy(s) ⇔ ∃r Adjacent(r,s) ∧ Pit(r)
Causal rule: infer effect from cause
∀r Pit(r) ⇒ [∀s Adjacent(r,s) ⇒ Breezy(s)]
252. Knowledge engineering in FOL
1. Identify the task
2. Assemble the relevant knowledge
3. Decide on a vocabulary of predicates,
functions, and constants
4. Encode general knowledge about the
domain
5. Encode a description of the specific problem
instance
6. Pose queries to the inference procedure and
get answers
7. Debug the knowledge base
253. Summary
First-order logic:
objects and relations are semantic
primitives
syntax: constants, functions, predicates,
equality, quantifiers
Increased expressive power: sufficient
to define wumpus world
255. Categories and Objects
The organization of objects into categories is a vital part of KR.
Important relationships are
subclass relation (AKO - a kind of)
<category> AKO <category>.
instance relation ( ISA - is a)
<object> ISA <category>.
256. The upper ontology (*)
Anything
  AbstractObjects
    Sets
    Numbers
    Representations
      Categories
      Sentences
      Measurements
        Times
        Weights
  GeneralizedEvents
    Intervals
      Moments
    Places
    PhysicalObjects
      Things
        Animals
          Humans
        Agents
          Humans
      Stuff
        Solid
        Liquid
        Gas
    Processes
(*) This is AIMA’s version. Other authors have other partitions.
See bus_semantics
257. Bus semantics
% thing top node
unkn ako thing.
event ako thing.
set ako thing.
cardinality ako number.
member ako thing.
abstract ako thing.
activity ako thing.
agent ako thing.
company ako agent.
contact ako thing.
content ako thing.
group ako thing.
identity ako thing.
information ako thing.
list ako thing.
measure ako thing.
place ako thing.
story ako thing.
study ako activity.
subset ako thing.
version ako abstract.
accident ako activity.
activation ako activity. %% start of it
addition ako abstract.
address ako place.
advice ako abstract.
age ako year. %% (measure)
agreement ako abstract.
analysis ako abstract.
animate ako agent.
application ako activity.
area ako measure.
% …..
% + (750 ako items)
258. Categories
• Category is a kind of set and denotes a set of objects.
• A category has a set of properties that is common to all its members.
• Categories are formally represented in logic as predicates, but we will
also regard categories as a special kind of object.
• We then actually introduce a restricted form of second-order logic, since
the terms that occur may be predicates.
Example: Elephants and Mammals are categories.
The set denoted by Elephants is a subset of the set denoted by Mammals.
The set of properties common to Elephants is a superset of the set of properties
common to Mammals.
259. Taxonomy
• Subcategory relations organize categories into a taxonomy or
taxonomic hierarchy.
Other names are type hierarchy or class hierarchy .
• We state that a category is a subcategory of another category by
using the notation for subsets
Basketball ⊂ Ball
We will also use the notation
ako(basketball,ball).
260. Category representations
There are two choices of representing categories in first order logic:
predicates and objects. That is, we can use the predicate Basketball(b)
or we can reify the category as an ”object” basketball. We could then
write
member(x,basketball) or
x ∈ basketball
We will also use the notation
isa(x,basketball).
Basketball is a subset or subcategory of Ball, which is abbreviated
Basketball ⊂ Ball
We will also use the notation
ako(basketball,ball).
261. Inheritance
Categories serve to organize and simplify the
knowledge base through inheritance.
e.g. If we say that all instances of Food are edible (edible is
in the property set of Food), and if we assert that Fruit
is a subcategory of Food and Apple is a subcategory of
Fruit, then we know that every apple is edible.
i.e. We say that the individual apples inherit the property
of edibility, in this case from their membership in the
Food category.
262. Reifying properties
• An individual object may have a property.
For example, a specific ball, BB9 can be round.
In ordinary FOL, we write
Round(BB9).
As for categories, we can regard Round as a higher-order object, and
say that BB9 has the property Round.
We will also use the notation
hasprop(BB9,round).
263. Reifying Property Values
Some properties are determined by an attribute and a value. For
example, my basketball BB9 has diameter 9.5:
Diameter(BB9) = 9.5
We can also use the notation
has(bb9,diameter,9.5).
An alternative representation for properties, when regarded as
Boolean attributes, is
has(bb9,round,true).
In the same manner, we can express that a red ball has colour = red:
has(bb9,colour,red).
264. Logical expressions on categories
• An object is a member of a category
isa(bb9,basketball).
• A category is a subclass of another category
ako(basketball,ball).
• All members of a category have some properties
isa(X,basketball) => hasprop(X,round).
• Members of a category can be recognized by some properties, for example:
hasprop(X,orange) and hasprop(X,round) and
has(X,diameter,9.5) and isa(X,ball) => isa(X,basketball)
• A category as a whole has some properties
isa(teacher,profession).
Here, it is a fallacy to conclude that
isa(tore,teacher) and isa(teacher,profession)=>isa(tore,profession).
265. Category Decompositions
• We can say that both Male and Female are subclasses of Animal,
but we have not said that a male cannot be a female. That is
expressed by
Disjoint({Male,Female})
• If we know that all animals are either male or female, (they
exhaust the possibilities)
Exhaustive({Male,Female},Animals).
• A disjoint exhaustive decomposition is known as a partition
Partition({Male,Female},Animals).
266. Physical Compositions
One object can be a part of another object.
Example, declaring direct parts:
part(bucharest,romania).
part(romania,eastern_europe).
part(eastern_europe,europe).
part(europe,earth).
We can make a transitive extension partof
part(Y,Z) and partof(X,Y) => partof(X,Z).
and a reflexive base case
partof(X,X).
Therefore we can conclude that partof(bucharest,earth)
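The part/partof rules amount to a reflexive-transitive closure, which can be computed as a small fixed point. The fact names follow the slide's notation; the function name is illustrative, and the eastern_europe-to-europe link (needed for the conclusion to follow) is assumed:

```python
# Reflexive-transitive closure of the direct-part relation.

part = {("bucharest", "romania"),
        ("romania", "eastern_europe"),
        ("eastern_europe", "europe"),   # assumed link completing the chain
        ("europe", "earth")}

def reflexive_transitive_closure(pairs):
    things = {t for p in pairs for t in p}
    closure = {(t, t) for t in things} | set(pairs)   # partof(X,X) base
    changed = True
    while changed:
        changed = False
        for (x, y) in list(closure):
            for (a, b) in pairs:
                # part(Y,Z) and partof(X,Y) => partof(X,Z)
                if a == y and (x, b) not in closure:
                    closure.add((x, b))
                    changed = True
    return closure

partof = reflexive_transitive_closure(part)
print(("bucharest", "earth") in partof)   # True
```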
267. Bunch
It is also useful to define composite objects with definite
parts but no particular structure. For example, we might say
”The apples in the bag weigh two pounds”
It is advised that we don’t regard these apples as the set of
(all) apples, but instead define them as a bunch of apples.
For example, if the apples are Apple1, Apple2 and Apple3,
then
BunchOf({Apple1,Apple2,Apple3})
denotes the composite object with the three apples as parts, not
elements.
268. Substances and Objects
• The real world can be seen as consisting of primitive objects (particles) and
composite objects. A common characteristic is that they can be counted
(individuated)
However, there are objects that cannot be individuated like
Butter, Water, Nitrogen, Wine, Grass, etc.
They are called stuff, and are denoted in English without articles or quantifiers
(not ”a water”).
Typically, they can be divided without losing their intrinsic properties. When
we take a part of a substance, we still have the same substance.
isa(X,butter) and partof(Y,X)=>isa(Y,butter).
We can say that butter melts at 30 degrees centigrade
isa(X,butter) =>has(X,meltingpoint,30).
269. Measures, Abstracts ,Mentals
• In the common world, objects have height, mass, cost and so on. The
values we assign for these properties are called measures.
• We imagine that the universe includes abstract ”measure objects” such as
length. Measure objects are given as a number of units, e.g. meters. Logically,
we can combine this with unit functions
Length(L1) = Inches(1.5) = Centimeters(3.81)
• Another way is to use predicates
Length(L1,1.5,inches)
Abstract concepts like ”autonomy”, ”quality” are difficult to represent
without seeking artificial measurements. (e.g. IQ).
Mental concepts are beliefs, thoughts, feelings etc.
270. Reasoning systems for categories
Semantic networks and Description Logics are two closely
related systems for reasoning with categories. Both can be
described using logic.
•Semantic networks provide graphical aids for visualizing the
knowledge base, together with efficient algorithms for inferring
properties of an object on the basis of its category
membership.
•Description logics provide a formal language for
constructing and combining category definitions, and efficient
algorithms for deciding subsets and superset relationships
between categories.
272. Link types in semantic nets
There are 3 types of entities in a semantic net: categories, objects and
values (anything other than these).
Then there could be 9 different types of relations between these. They are drawn
with certain conventions. Note that objects can act as values also.
category to category    ako(C1,C2)        every C1 is a.k.o. C2
category to category    haveatt(C1,R,C2)  every C1 has an R that is a.k.o. C2
category to value       have(C1,R,V)      every C1 has attribute value R = V
object to category      isa(O,C)          O is a C
object to object        has(O1,R,O2)      O1 has relation R to O2
object to value         has(O,R,V)        O has attribute value R = V
object to object        partof(O1,O2)     O1 is a part of O2
In addition, we have all kinds of relations between values, e.g. V1 > 2*V2 + 5
273. Further comments on link types
We know that persons have female persons as mothers, but we cannot draw
a HasMother link from Persons to FemalePersons, because HasMother is a
relation between a person and his or her mother, and categories do not have
mothers. For this reason, we use a special notation: the double-boxed link.
In logic, we have given it the name haveatt, e.g.
haveatt(person,mother,femaleperson).
Compare this to
haveatt(lion,mother,lioness).
We also want to express that persons normally have two legs. As before, we
must be careful not to assert that categories have legs; instead we use a
single-boxed link. In logic, we have given it the name have, e.g.
have(person,legs,2).
274. Content of semantic net
A paraphrase of the knowledge, with its logic representation:
All persons are mammals                ako(person,mammal).
All females are persons                ako(female,person).
Persons have a mother who is female    haveatt(person,mother,female).
Persons normally have 2 legs           have(person,legs,2).
John has 1 leg                         has(john,legs,1).
Mary is a female                       isa(mary,female).
John is a male                         isa(john,male).
John has a sister Mary                 has(john,sister,mary).
Mary has a brother John                has(mary,brother,john).
275. Inheritance and inference in semantic nets
The rules of inheritance can now be automated using our logic
representation of semantic nets.
isa(X,Y) and ako(Y,Z) => isa(X,Z).
have(Y,R,V) and isa(X,Y) => has(X,R,V).
haveatt(C,R,M) and isa(X,C) and has(X,R,V) =>
isa(V,M).
With these definitions, we can prove that Mary has two legs, even if
this information is not explicitly represented.
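The inheritance rules can be run directly in Python over the example knowledge base from the "Content of semantic net" slide. One caveat: the bare rules would also give John two legs; the sketch below adds a simple override so that a local fact beats the inherited default, a common default-reasoning refinement that the slides do not spell out. Fact names are the slides' own; function names are illustrative:

```python
# Semantic-net inheritance: derive has(mary, legs, 2) from
# isa/ako/have facts, with object-level facts winning over defaults.

ako  = {("person", "mammal"), ("female", "person"), ("male", "person")}
isa  = {("mary", "female"), ("john", "male")}
have = {("person", "legs", 2)}          # category-level defaults
has  = {("john", "legs", 1)}            # object-level facts

def categories_of(obj):
    """isa(X,Y) and ako(Y,Z) => isa(X,Z): obj's categories via the ako chain."""
    cats = {c for (o, c) in isa if o == obj}
    changed = True
    while changed:
        changed = False
        for (sub, sup) in ako:
            if sub in cats and sup not in cats:
                cats.add(sup)
                changed = True
    return cats

def derived_has(obj):
    """have(Y,R,V) and isa(X,Y) => has(X,R,V), with local facts winning."""
    facts = {(r, v) for (o, r, v) in has if o == obj}
    local = {r for (r, _) in facts}
    for (cat, r, v) in have:
        if cat in categories_of(obj) and r not in local:
            facts.add((r, v))
    return facts

print(derived_has("mary"))   # {('legs', 2)} -- inherited from person
print(derived_has("john"))   # {('legs', 1)} -- local fact overrides the default
```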
276. Example of inheritance
isa(X,Y) and ako(Y,Z) => isa(X,Z).
have(X,Y,Z) and isa(A1,X) => has(A1,Y,Z).
haveatt(X,Y,C) and isa(A1,X) and has(A1,Y,V) => isa(V,C).
t=>ako(female,person)
t=>ako(male,person)
t=>haveatt(person,mother,female)
t=>have(person,legs,2)
t=>isa(mary,female)
t=>isa(john,male)
t=>has(john,legs,1)
t=>has(john,sister,mary)
t=>has(mary,brother,john)
PROOF:
has(mary,legs,2) because
have(person,legs,2) and
isa(mary,person)
have(person,legs,2) is true
isa(mary,person) because
isa(mary,female) and
ako(female,person)
isa(mary,female) is true
ako(female,person) is true