3. Artificial Intelligence
The design and study of systems that behave intelligently
Focus on hard problems, often with no, or only a very inefficient,
full algorithmic solution
Focus on problems that require “reasoning” (“intelligence”)
and a large amount of knowledge about the world
Critical
Represent knowledge about the world
Reason with these representations to obtain meaningful
answers/solutions
2/67
4. Symbolic Knowledge Representation
Important objects (collections of objects) and their
relationships are represented explicitly by internal symbols
Symbolic manipulation of internal symbolic representations
achieves results meaningful in the real world
3/67
5. Goals of Knowledge Representation
Find representations that are:
Rich enough to express the important knowledge relevant to
the problem at hand
Close to problem at hand: compact, natural, maintainable
Amenable to efficient computation
4/67
6. Representational Adequacy
Consider the following facts:
Most children believe in Santa.
John will have to finish his assignment before he can start
working on his project.
All can be represented as strings! But it is then hard to
manipulate them and draw conclusions
How do we represent these formally in a way that can be
manipulated in a computer program?
5/67
7. Well-defined Syntax and Semantics
Precise syntax: what can be expressed in the language
Formal language, unlike natural language
Prerequisite for precise manipulation through computation
Precise semantics: formal meaning of expression
6/67
8. Naturalness of expression
Also helpful if our representation scheme is quite intuitive and
natural for human readers!
Could represent the fact that my car is red using the notation:
“xyzzy → Zing”
where xyzzy refers to redness, Zing refers to my car, and → is
used in some way to assign properties
But this would not be very helpful. . .
7/67
9. Inferential Adequacy
Representing knowledge not very interesting unless you can
use it to make inferences:
Draw new conclusions from existing facts.
“If it's raining, John never goes out” + “It is raining today”
so. . .
Come up with solutions to complex problems, using the
represented knowledge.
Inferential adequacy refers to how easy it is to draw inferences
using represented knowledge.
8/67
10. Inferential Efficiency
You may be able, in principle, to make complex deductions, but it
may be just too inefficient.
The basic tradeoff of all KR
Generally the more complex the possible deductions,
the less efficient will be the reasoning process (in the
worst case).
The eternal quest of KR
Need representation and inference system sufficient
for the task, without being hopelessly inefficient
9/67
11. Inferential Adequacy (2)
Representing everything as natural language strings has good
representational adequacy and naturalness, but very poor
inferential adequacy
10/67
12. Requirements for KR Languages
Representational Adequacy
Clear syntax/semantics
Inferential adequacy
Inferential efficiency
Naturalness
In practice no one language is perfect, and different languages are
suitable for different problems.
11/67
13. Why Reasoning?
Patient x is allergic to medication m
Anybody allergic to medication m is also allergic to
medication n
Is it ok to prescribe n for x?
Reasoning uncovers implicit knowledge not represented explicitly.
Beyond database systems technology
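The allergy example can be sketched as a tiny forward chainer (an illustrative encoding; the predicate and constant names are assumptions, not from any particular system):

```python
# Hedged sketch of the medication example: a one-rule forward chainer
# uncovers the implicit allergy that the stored facts never mention.
facts = {("allergic", "x", "m")}

def close(facts):
    """Apply 'allergic to m implies allergic to n' until a fixpoint."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (_, patient, med) in list(derived):
            new = ("allergic", patient, "n")
            if med == "m" and new not in derived:
                derived.add(new)
                changed = True
    return derived

# The database alone says nothing about n; reasoning does.
ok_to_prescribe_n = ("allergic", "x", "n") not in close(facts)
print(ok_to_prescribe_n)  # False
```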
12/67
14. Syntactic vs Semantic Reasoning
Semantic Reasoning
Sentences P1, …, Pn entail sentence P iff the
truth of P is implicit in the truth of P1, …, Pn
Or: if the world satisfies P1, …, Pn then it must
also satisfy P
Reasoning usually done by humans
Syntactic Reasoning
Sentences P1, …, Pn infer sentence P iff there is
a syntactic manipulation of P1, …, Pn that
results in P
Reasoning done by humans and machines
13/67
15. Reasoning: Soundness and Completeness
Sound (syntactic) reasoning:
If P is inferred from P1, …, Pn then it is also entailed semantically
Only semantically valid conclusions are drawn
Complete (syntactic) reasoning
If P is entailed semantically by P1, …, Pn then it can also be
inferred
All semantically valid conclusions can be drawn
Usually interested in sound and complete reasoning
But sometimes we have to give up one for the sake of efficiency
(usually completeness)
14/67
16. Main KR Approaches
Logic Based
Focus on clean, mathematical semantics: declaratively
Explainability
Frames / Semantic Networks / Objects
Focus on structure of objects
Rule-based systems
Focus on efficiency
A ⇒ B in logic and rule-based systems
15/67
17. The Landscape of KR
Predicate logic (first order logic) and its sublanguages
Logic programming, (pure) Prolog
Description logics
Web ontology languages
Predicate logic (first order logic) extensions
Modal and epistemic logics
Temporal logics
Spatial logics
Inconsistency-tolerant logics:
Paraconsistency
Nonmonotonic reasoning
Representing vagueness
Probabilistic logics
Bayesian networks
Markov chains
16/67
19. Being Lazy: Reasonable Results with Minimum Effort
Factual omniscience and (non-)monotonic reasoning
PhD → Uni
Weekend → ¬Uni
PublicHoliday → ¬Uni
Sick → ¬Uni
Weekend ∧ VICdeadline → Uni
VICdeadline ∧ PartnerBirthday → ¬Uni
PhD ∧ (¬Weekend ∨ (Weekend ∧ VICdeadline ∧ ¬PartnerBirthday)) ∧ ¬Sick ∧ … → Uni
17/67
20. Inconsistent Information
Classical logics “collapse” in the face of inconsistencies
Everything can be derived
But inconsistencies do happen in real settings
Common when integrating knowledge from various Web
sources
Nonmonotonic reasoning is inconsistency tolerant reasoning
18/67
21. Rules with Exceptions
Natural representation for policies and business rules.
Priority information is often implicitly or explicitly available to
resolve conflicts among rules.
Potential applications
Normative reasoning
Security policies
Business rules
Personalization
Brokering
Bargaining, automated agent negotiations
19/67
23. Basic Reasoning
Suppose you have one piece of evidence, Evidence A, suggesting
that the defendant is responsible.
Given: EvidenceA and the rule
EvidenceA ⇒ Responsible
Sceptical: Responsible
Credulous: Responsible
21/67
24. Conflict
Suppose that your legal system is based on the presumption of
innocence, and that somebody is guilty if responsibility is proved.
Given the rules
r1 : Responsible ⇒ Guilty
r2 : ⇒ ¬Guilty
Sceptical: ¬Guilty
Credulous: ¬Guilty
What if we have r1 > r2? (Same conclusions.)
22/67
25. Sceptical vs Credulous
Suppose you have two pieces of evidence: Evidence A, suggesting
that the defendant is responsible, and Evidence B, suggesting that
the defendant is not responsible.
Given: EvidenceA and EvidenceB and the rules
EvidenceA ⇒ Responsible
EvidenceB ⇒ ¬Responsible
Sceptical: no conclusions
Credulous: both conclusions
23/67
26. Sceptical vs Credulous and preference
Suppose you have two pieces of evidence: Evidence A, suggesting
that the defendant is responsible, and Evidence B, suggesting that
the defendant is not responsible. However, Evidence A is more
reliable than Evidence B.
Given: EvidenceA and EvidenceB and the rules
r1 : EvidenceA ⇒ Responsible
r2 : EvidenceB ⇒ ¬Responsible
r1 > r2
Sceptical: Responsible
Credulous: Responsible
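The two attitudes, with and without a superiority relation, can be sketched in a much-simplified encoding (`~` marks negation, rules are name → (body, head) pairs; real defeasible logic is richer than this):

```python
# Simplified sketch of sceptical vs credulous conclusions, with an
# optional superiority relation to break ties. Illustrative only.
def neg(p):
    return p[1:] if p.startswith("~") else "~" + p

def applicable(facts, rules):
    return {n: h for n, (b, h) in rules.items() if b <= facts}

def credulous(facts, rules):
    return set(applicable(facts, rules).values())

def sceptical(facts, rules, stronger=frozenset()):
    app = applicable(facts, rules)
    out = set()
    for name, head in app.items():
        rivals = [m for m, h in app.items() if h == neg(head)]
        # keep the conclusion only if every applicable rival is weaker
        if all((name, m) in stronger for m in rivals):
            out.add(head)
    return out

facts = {"EvidenceA", "EvidenceB"}
rules = {"r1": ({"EvidenceA"}, "Responsible"),
         "r2": ({"EvidenceB"}, "~Responsible")}

print(sceptical(facts, rules))                  # no conclusions
print(credulous(facts, rules))                  # both conclusions
print(sceptical(facts, rules, {("r1", "r2")}))  # {'Responsible'}
```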
24/67
27. Ambiguity Propagation vs Ambiguity Blocking
Suppose you have two pieces of evidence: Evidence A, suggesting
that the defendant is responsible, and Evidence B, suggesting that
the defendant is not responsible. If the defendant is responsible,
then he is guilty, and we have the presumption of innocence.
Given: EvidenceA and EvidenceB and the rules
EvidenceA ⇒ Responsible
EvidenceB ⇒ ¬Responsible
Responsible ⇒ Guilty
⇒ ¬Guilty
Ambiguity blocking concludes ¬Guilty
Ambiguity propagation does not conclude ¬Guilty, and also fails to
conclude Guilty.
25/67
28. Ambiguity Propagation vs Ambiguity Blocking
Suppose you have two pieces of evidence: Evidence A, suggesting that the
defendant is responsible, and Evidence B, suggesting that the defendant is
not responsible. If the defendant is responsible, then he is guilty, and we
have the presumption of innocence. If the defendant was wrongly accused
then he is entitled to compensation.
Given: EvidenceA and EvidenceB and the rules
EvidenceA ⇒ Responsible
EvidenceB ⇒ ¬Responsible
Responsible ⇒ Guilty
⇒ ¬Guilty
¬Guilty ⇒ Innocent
Innocent ⇒ Compensation
Ambiguity blocking concludes Compensation
Ambiguity propagation does not conclude Compensation
26/67
29. Team Defeat vs No Team Defeat
r1 : General ⇒ Attack
r2 : Captain ⇒ ¬Attack
r1 > r2
r3 : Bishop ⇒ Attack
r4 : Priest ⇒ ¬Attack
r3 > r4
Team Defeat concludes Attack
No Team Defeat does not conclude Attack
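The distinction can be checked mechanically in a simplified sketch (illustrative encoding, not a full defeasible-logic engine): with team defeat the attackers may be beaten by different pro rules; without it, a single pro rule must beat them all.

```python
# Hedged sketch: team defeat vs no team defeat on the Attack example.
def neg(p):
    return p[1:] if p.startswith("~") else "~" + p

facts = {"General", "Captain", "Bishop", "Priest"}
rules = {"r1": ({"General"}, "Attack"), "r2": ({"Captain"}, "~Attack"),
         "r3": ({"Bishop"}, "Attack"),  "r4": ({"Priest"}, "~Attack")}
stronger = {("r1", "r2"), ("r3", "r4")}

def sides(p):
    app = {n: h for n, (b, h) in rules.items() if b <= facts}
    pro = [n for n, h in app.items() if h == p]
    con = [n for n, h in app.items() if h == neg(p)]
    return pro, con

def team_defeat(p):
    pro, con = sides(p)
    # each attacker is beaten by SOME applicable pro rule
    return bool(pro) and all(any((s, t) in stronger for s in pro) for t in con)

def no_team_defeat(p):
    pro, con = sides(p)
    # ONE pro rule must beat every attacker
    return any(all((r, t) in stronger for t in con) for r in pro)

print(team_defeat("Attack"))     # True
print(no_team_defeat("Attack"))  # False
```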
27/67
30. Weak and Strong Support
Suppose that a drunk person testifies that the accused (not known to him)
was in a location different from the crime scene at the time of the crime.
Security footage from a high-definition camera shows the accused at the
crime scene at the time of the crime.
r1 : drunk ⇒ ¬CrimeScene
r2 : camera ⇒ CrimeScene
r3 : ¬CrimeScene ⇒ Alibi
r2 > r1
Do we have a scintilla of evidence to claim that the accused was at the
crime scene at the time of the crime?
Is it reasonable to say that we have substantial evidence supporting
the same claim?
Is it reasonable to claim that beyond any reasonable doubt the accused
has an alibi?
28/67
32. Defeasible Logic: Strength of Conclusions
Derive (plausible) conclusions with the minimum amount of
information.
Definite conclusions
Defeasible conclusions
Defeasible Theory
Facts
Strict rules (A → B)
Defeasible rules (A ⇒ B)
Defeaters (A ⇝ B)
Superiority relation over rules
30/67
33. Conclusions in Defeasible Logic
A proof is a finite sequence P = (P(1), …, P(n)) of tagged literals
satisfying four conditions
+∂p (−∂p): p is (not) derivable using ambiguity blocking,
team defeat
+∂*p: p is derivable using ambiguity blocking, no team defeat
+δp: p is derivable using ambiguity propagation, team defeat
+δ*p: p is derivable using ambiguity propagation, no team
defeat
+σp: p is a credulous conclusion using team defeat
+σ*p: p is a credulous conclusion using no team defeat
+ςp: p is a credulous weak conclusion
31/67
34. Proving Conclusions in Defeasible Logic
1 Give an argument for the conclusion you want to prove
2 Consider all possible counterarguments to it
3 Rebut all counterarguments
Defeat the argument by a stronger one
Undercut the argument by showing that some of the premises
do not hold
32/67
35. Example
Facts: A1, A2, B1, B2
Rules: r1 : A1 ⇒ C
r2 : A2 ⇒ C
r3 : B1 ⇒ ¬C
r4 : B2 ⇒ ¬C
r5 : B3 ⇒ ¬C
Superiority relation:
r1 > r3
r2 > r4
r5 > r1
Phase 1: Argument for C
A1 (Fact), r1 : A1 ⇒ C
Phase 2: Possible counterarguments
r3 : B1 ⇒ ¬C
r4 : B2 ⇒ ¬C
r5 : B3 ⇒ ¬C
Phase 3: Rebut the counterarguments
r3 weaker than r1
r4 weaker than r2
r5 is not applicable
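The three phases for this example can be checked mechanically (ambiguity blocking with team defeat; the encoding below is an illustrative sketch, with `~` marking negation):

```python
# Hedged sketch of the three proof phases for the slide's example.
facts = {"A1", "A2", "B1", "B2"}          # B3 is not a fact
rules = {"r1": ({"A1"}, "C"),  "r2": ({"A2"}, "C"),
         "r3": ({"B1"}, "~C"), "r4": ({"B2"}, "~C"),
         "r5": ({"B3"}, "~C")}
stronger = {("r1", "r3"), ("r2", "r4"), ("r5", "r1")}

def applicable(name):
    body, _ = rules[name]
    return body <= facts

# Phase 1: applicable arguments for C
pro = [n for n, (b, h) in rules.items() if h == "C" and applicable(n)]

# Phase 2: all possible counterarguments (rules for ~C)
counters = [n for n, (b, h) in rules.items() if h == "~C"]

# Phase 3: each counterargument is discarded or defeated by a pro rule
def rebutted(t):
    return (not applicable(t)) or any((s, t) in stronger for s in pro)

print(all(rebutted(t) for t in counters))  # True: C is defeasibly provable
```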
33/67
36. Derivations in Defeasible Logics: Ambiguity blocking
+∂p
1) ∃ an applicable rule r pro p
2) ∀ rules t con p, either:
2.1) t is not applicable
2.2) t is defeated by an applicable rule s pro p stronger than t
+∂*p
1) ∃ an applicable rule r pro p
2) ∀ rules t con p, either:
2.1) t is not applicable
2.2) t is defeated by r, where r is stronger than t
34/67
37. Derivations in Defeasible Logics: Ambiguity propagation
+∂p
1) ∃ an applicable rule r pro p
2) ∀ rules t con p, either:
2.1) t is not applicable
2.2) t is defeated by an applicable rule s pro p stronger than t
+δp
1) ∃ an applicable rule r pro p
2) ∀ rules t con p, either:
2.1) t is not applicable
2.2) t is defeated by a supported rule s pro p stronger than t
35/67
38. Derivations in Defeasible Logics: Support
+σp
1) ∃ a supported rule r pro p
2) ∀ rules s con p, either
2.1) s is not applicable using ambiguity propagation (i.e., δ, δ*)
2.2) s is not stronger than r
+ςp
∃ a supported rule r pro p
36/67
39. Properties of Defeasible Logic
Theorem
Defeasible logic is consistent. +∂a and +∂¬a cannot both be
derived, unless they are already known as certain knowledge (facts)
Theorem
Defeasible logic is coherent. For any tag #, +#a and −#a cannot
both be derived from the same knowledge base.
Theorem
Defeasible logic has linear complexity.
37/67
42. What is a rule?
A rule is a binary relationship between a set of ‘expressions’ and an
‘expression’
What’s the strength of the relationship?
What’s the type of the relationship?
39/67
43. Modal Defeasible Logic: Mode and Strength
1 The strength describes how strong the relationship is
between the antecedent and the consequent of a rule.
A1, …, An → B (B is an indisputable consequence of
A1, …, An)
A1, …, An ⇒ B (normally B if A1, …, An)
2 The mode qualifies the conclusion of a rule.
A1, …, An ⇒BEL B (an agent forms the belief B when
A1, …, An are the case)
A1, …, An ⇒INT B (an agent has the intention B when
A1, …, An are the case)
A1, …, An ⇒OBL B (an agent has the obligation B when
A1, …, An are the case)
40/67
44. Conclusions in Basic Modal Defeasible Logic
+Δ□i q, which is intended to mean that q is definitely
provable (i.e., using only facts and strict rules of mode □i);
−Δ□i q, which is intended to mean that we have proved that
q is not definitely provable in D;
+∂□i q, which is intended to mean that q is defeasibly
provable in D using rules of mode □i;
−∂□i q, which is intended to mean that we have proved that q
is not defeasibly provable in D using rules of mode □i.
We obtain □i p iff +∂□i p.
41/67
45. Recipe for Modal Defeasible Logics
Choose the appropriate modalities
Create a defeasible consequence relation for each modality
Identify relationships between modalities:
inclusion
□1 f → □2 f
conflicts
□1 f, □2 ¬f → ⊥
conversions from one modality to another modality
A1, …, An ⇒□1 B
□2 A1, …, □2 An ⊢ □2 B
Put in a mixer and shake well!
42/67
46. Proofs for Modal Defeasible Logic
Conflict: □1 f → ¬□2 ¬f
1 Give an argument for the conclusion you want to prove
2 Consider all possible counterarguments to it using rules for
both □1 and □2
3 Rebut all counterarguments
Defeat the argument by a stronger one
Undercut the argument by showing that some of the premises
do not hold
43/67
47. Proofs for Modal Defeasible Logic
Conversion □1 to □2
1 Give an argument for the conclusion you want to prove using
rules for either □2, or rules of mode □1 such that all premises
are provable with mode □2
2 Consider all possible counterarguments to it
3 Rebut all counterarguments
Defeat the argument by a stronger one (same as 1)
Undercut the argument by showing that some of the premises
do not hold (for rules of mode □1, show that the premises are
not provable with mode □2)
43/67
48. DL for cognitive agents
D = (F, R_BEL, R_DES, R_INT, R_OBL, >)
R_BEL: rules for belief →BEL, ⇒BEL, ⇝BEL
R_DES: rules for desire →DES, ⇒DES, ⇝DES
R_INT: rules for intention →INT, ⇒INT, ⇝INT
R_OBL: rules for obligation →OBL, ⇒OBL, ⇝OBL
For X ∈ {INT, DES, OBL}:
D ⊢ XA iff D ⊢ +∂X A
D ⊢ A iff D ⊢ +∂BEL A
44/67
50. Conversions
What do we conclude from
A1, A2 ⇒OBL C
and INTA1 and INTA2?
What about
A1, INTA2 ⇒OBL C
and INTA1 and INTA2?
47/67
51. Social Agents
Definition (Social agent)
An agent is social if, in case of a conflict between an obligation
and an intention, the agent prefers the obligation to her intention
IJCAIdeadline ⇒OBL Uni
SoccerWorldCup ⇒INT ¬Uni
49/67
52. BIO Logical Agents
A set of rules for beliefs:
a1, …, an ⇒BEL c
The agent derives BELc from a1, …, an
A set of rules for intentions:
a1, …, an ⇒INT c
The agent derives INTc from a1, …, an
A set of rules for obligations:
a1, …, an ⇒OBL c
The agent derives OBLc from a1, …, an
Belief rules are stronger than obligation rules, which in turn are
stronger than intention rules
50/67
53. From Beliefs to Intentions
If we want to model realistic agents, the model must conform with
the real world. According to current legal theories:
If an agent knows/believes that B is a consequence of A, and the
agent intends A, then the agent intends B (unless she has some
justification for not intending it).
From a1, …, an ⇒BEL c, and
INTa1, …, INTan derive
INTc
If an agent believes that dropping a glass will break it, and she
intends to drop the glass, she intends to break it.
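The conversion can be sketched as a closure computation (hypothetical encoding; `drop_glass`/`break_glass` are illustrative names, and belief rules fire in intention mode whenever all their premises are intended):

```python
# Hedged sketch of BEL-to-INT conversion: a belief rule contributes an
# intention when all of its premises are themselves intended.
bel_rules = [({"drop_glass"}, "break_glass")]   # drop_glass =>BEL break_glass
intentions = {"drop_glass"}

def intend_closure(intentions, bel_rules):
    out = set(intentions)
    changed = True
    while changed:
        changed = False
        for body, head in bel_rules:
            if body <= out and head not in out:
                out.add(head)   # INT premises convert the belief rule
                changed = True
    return out

print(sorted(intend_closure(intentions, bel_rules)))
# ['break_glass', 'drop_glass']
```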
51/67
54. Good News
Why should we use BIO logical agents?
Theorem
The complexity of defeasible logic for BIO logical agents is linear
52/67
56. Temporalised Defeasible Logic
Temporalised Defeasible Logic is an umbrella expression for a zoo
of variants of logics.
time points: A : t (A holds at time t)
intervals: A : [ts, te] (A holds from ts to te)
durations: A : d (A holds for d time units)
. . .
53/67
57. Persistent and Transient Conclusions
linear discrete timeline with a fixed granularity
propositions (literals) are associated with instants of time
C : t is persistent at time t if C continues to hold after t
unless some event occurs to terminate it.
C : t is transient at time t if C is guaranteed to hold at time t
only.
partition the rules into persistent rules and transient rules
ClapHands : t →t MakeSomeNoise : t
TearPaper : t →p ShreddedPaper : t
A1 : t1, …, An : tn ⇒x C : t
no constraints over t1, …, tn and t.
54/67
58. Proving Persistence
1 Generate an argument for the persistent conclusion now using
persistent rules.
Take a rule for the conclusion that is applicable now or
Show that there is a time in the past where the persistent
conclusion obtains.
2 Consider all possible counterarguments for the conclusion
Take all rules for its negation that obtain now
Take all rules for its negation that have obtained since the
time in the past.
3 Rebut the counterarguments
show that the rules have been discarded (not applicable or
defeated).
55/67
59. Example
Facts: A : t0, B : t2,
C : t2, D : t3
Rules: r1 : A : t ⇒p E : t
r2 : B : t ⇒p ¬E : t
r3 : C : t ⇝p E : t
r4 : D : t ⇒t ¬E : t
Superiority relation:
r3 > r2
r1 > r4
Conclusions at time t0
A, E using r1 (E is persistent)
Conclusions at time t1
E
Conclusions at time t2
B, C, E
Conclusions at time t3
D, ¬E
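The timeline above can be simulated with a small sketch. This is not a temporalised defeasible logic engine: it hard-codes the slide's conflict resolutions (the defeater r3 > r2 blocks termination at t2, and the transient r4 prevails at t3 because r1 is no longer applicable).

```python
# Simplified, assumption-laden sketch of the example's timeline.
facts = {0: {"A"}, 2: {"B", "C"}, 3: {"D"}}

def holds_E(t):
    active = False
    for u in range(t + 1):
        fs = facts.get(u, set())
        if "A" in fs:                    # r1: A:t =>p E:t starts persistence
            active = True
        if "B" in fs and "C" not in fs:  # r2: B:t =>p ~E:t terminates E,
            active = False               # unless r3 (C ~>p E, r3 > r2) blocks it
        if "D" in fs:                    # r4: D:t =>t ~E:t prevails now
            active = False
    return active

print([holds_E(t) for t in range(4)])  # [True, True, True, False]
```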
56/67
60. Linear Time
Theorem
The extension of a temporalised defeasible theory D can be
computed in O(|R| · |H| · |T|)
R is the set of rules in D
H is the Herbrand base of D, i.e., the set of distinct
propositional atoms
T is the set of distinct instants of time explicitly occurring in
D.
57/67
61. Running out of time: Deadlines
Many kinds of deadlines; different functions
Research Problem
How to represent deadlines (in contracts)?
What happens after the deadline?
Characterise types of deadlines
Approach:
Identify key parameters; template formulas
Temporalised Defeasible Deontic Logic
58/67
64. Basic Deadline + Sanction
Basic Deadline + Sanction
Customers must pay within 30 days after receiving the invoice.
Otherwise, a fine must be paid.
[Timeline: invoice received at t1; OBLpay holds from t1; if still unpaid
at t1+31, viol(inv) arises and the fine becomes obligatory]
inv_init : invoice : t1 ⇒OBL pay : [t1, max]
inv_term : OBLpay : t2, pay : t2 ⇝OBL ¬pay : t2+1
inv_viol : invoice : t1, OBLpay : t1+31 ⇒ viol(inv) : t1+31
inv_sanc : viol(inv) : t ⇒OBL fine : [t, max]
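The deadline-plus-sanction pattern can be simulated over a discrete timeline (an illustrative sketch; function and parameter names are assumptions, not part of the logic):

```python
# Hedged sketch: obligation to pay starts at the invoice, a violation
# arises at t1+31 if unpaid, and the violation triggers the fine.
def deadline_status(invoice_t, pay_t=None, horizon=60):
    obligations, violations = set(), set()
    for t in range(invoice_t, invoice_t + horizon):
        paid = pay_t is not None and pay_t <= t
        if not paid:
            obligations.add(("pay", t))       # inv_init, not yet terminated
        if t == invoice_t + 31 and not paid:
            violations.add("viol(inv)")       # inv_viol
            obligations.add(("fine", t))      # inv_sanc
    return obligations, violations

obls, viols = deadline_status(invoice_t=0, pay_t=10)
print("viol(inv)" in viols)   # False: paid within 30 days

obls, viols = deadline_status(invoice_t=0)
print("viol(inv)" in viols)   # True: the fine becomes obligatory
print(("fine", 31) in obls)   # True
```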
60/67
65. Maintenance
Maintenance Deadlines
Customers must keep a positive balance for 30 days after opening
a bank account.
[Timeline: account opened at t1; OBLpositive holds over [t1, t1+30];
a non-positive balance at t2 raises viol(pos)]
pos_init : openAccount : t1 ⇒OBL positive : [t1, t1+30]
pos_term :
pos_viol : ¬positive : t2, OBLpositive : t2 ⇒ viol(pos) : t2
61/67
66. Persistent
Persistent Obligation after Deadline
Customers must pay within 30 days after receiving the invoice.
[Timeline: invoice at t1; OBLpay persists past the violation at t1+31
until payment]
inv_init : invoice : t1 ⇒OBL pay : [t1, max]
inv_term : OBLpay : t2, pay : t2 ⇝OBL ¬pay : t2+1
inv_viol : invoice : t1, OBLpay : t1+31 ⇒ viol(inv) : t1+31
62/67
67. Non-persistent
Non-persistent Obligation after Deadline
A wedding cake must be delivered before the wedding party.
[Timeline: order at t1, wedding at t2; OBLcake holds over [t1, t2];
no delivery by t2 raises viol(wed)]
wed_init : order : t1, wedding : t2 ⇒OBL cake : [t1, t2]
wed_term : OBLcake : t3, cake : t3 ⇝OBL ¬cake : t3+1
wed_viol : wedding : t2, OBLcake : t2 ⇒ viol(wed) : t2
63/67
68. References for Defeasible Logic
Grigoris Antoniou, David Billington, Guido Governatori, and Michael J.
Maher.
Representation results for defeasible logic.
ACM Transactions on Computational Logic, 2(2):255–287, April 2001.
Grigoris Antoniou, David Billington, Guido Governatori, and Michael J.
Maher.
Embedding defeasible logic into logic programming.
Theory and Practice of Logic Programming, 6(6):703–735, November
2006.
David Billington, Grigoris Antoniou, Guido Governatori, and Michael J.
Maher.
An inclusion theorem for defeasible logic.
ACM Transactions on Computational Logic, 12(1): article 6.
Ho-Pun Lam and Guido Governatori.
The making of SPINdle.
In Guido Governatori, John Hall, and Adrian Paschke, editors, RuleML
2009, pages 315–322, Springer, 2009.
65/67
69. References for Modal Defeasible Logic
Guido Governatori and Antonino Rotolo.
BIO logical agents: Norms, beliefs, intentions in defeasible logic.
Journal of Autonomous Agents and Multi Agent Systems, 17(1):36–69,
2008.
Duy Hoang Pham, Guido Governatori, Simon Raboczi, Andrew Newman,
and Subhasis Thakur.
On extending RuleML for modal defeasible logic.
In Nick Bassiliades, Guido Governatori, and Adrian Paschke, editors,
RuleML 2008, pages 89–103, Springer, 2008.
66/67
70. References for Temporal Defeasible Logic
Guido Governatori and Antonino Rotolo.
Changing legal systems: legal abrogations and annulments in defeasible
logic.
Logic Journal of IGPL, 18(1):157–194, 2010.
Guido Governatori, Antonino Rotolo, and Giovanni Sartor.
Temporalised normative positions in defeasible logic.
In Anne Gardner, editor, 10th International Conference on Artificial
Intelligence and Law (ICAIL05), pages 25–34. ACM Press, June 6–11
2005.
67/67