DEALING WITH
INCONSISTENCIES
      ~ANKIT SHARMA
       M.Tech 3rd Sem
       ROLL No. 312
LOGIC-a brief intro
•   Logic is used to represent textual information in a formal way in
    order to give the information a precise meaning and to remove
    ambiguity.
•   For example, we might want to express that every day when it
    rains, the streets are wet.
•   In first-order logic (FOL) we might express the fact that it is
    raining on a specific day using a predicate rain with an
    argument representing the date, e.g., the fact that it is
    raining on August 24th, 2009 might be expressed with the
    predicate rain(24082009).
•   So we conclude:
                      ∀X : rain(X) → streets_wet(X)

    If we know rain(24082009) because we observed the rain on that date,
    then we infer streets_wet(24082009), which was not known before. Hence,
    logical reasoning can be used to derive new facts.
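The rule-application step above can be sketched in a few lines of Python; this is an illustrative forward-chaining sketch, not code from the slides, and the fact encoding is an assumption.

```python
# Minimal sketch: deriving streets_wet(d) from rain(d) via the rule
#   ∀X : rain(X) → streets_wet(X)
# Facts are encoded as (predicate, argument) tuples.

facts = {("rain", "24082009")}  # observed: it rained on 24.08.2009

def apply_rule(facts):
    """For every rain(X) fact, infer streets_wet(X) and add it."""
    derived = {("streets_wet", d) for (p, d) in facts if p == "rain"}
    return facts | derived

facts = apply_rule(facts)
print(("streets_wet", "24082009") in facts)  # True: a new fact was derived
```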
Certainty factor
• The Doorbell Problem

• The doorbell rang at 12 o'clock at midnight.
• - Was someone at the door?
• - Did Mohan wake up?

• -Proposition 1: atdoor(x) → doorbell.
• -Proposition 2: doorbell → wake(Mohan).
Reasoning about Door Bell
 Given doorbell, can we say
   atdoor(x), because atdoor(x) → doorbell?
 Abductive reasoning.
 But no: the doorbell might start ringing for
  other reasons:
 - short circuit.
 - wind.
 - animals.
Contd…
•   Given doorbell, can we say
•   wake(Mohan), because doorbell → wake(Mohan)?
•   Deductive reasoning.
•   Yes, but only if Proposition 2 is always true.
•   However, in general Mohan may not wake up even if
    the bell rings.
Contd…
• Therefore, we cannot answer either of the
  questions with certainty.
• Proposition 1 is incomplete, so we modify it:
• atdoor(x) ∨ shortcircuit ∨ wind ∨ … → doorbell.
• This doesn't help, because the list of possible causes
  on the left is huge (maybe infinite).
• The proposition is often true, but it is not a tautology.
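One standard way to weaken an "often true" rule is to attach a certainty factor (CF) to it. The sketch below uses a MYCIN-style CF combination; the numeric values and function names are illustrative assumptions, not from the slides.

```python
# Hedged sketch: Proposition 2 (doorbell → wake(Mohan)) held with a
# certainty factor instead of as an absolute truth.

def combine(cf1, cf2):
    """MYCIN-style combination of two positive CFs for the same conclusion."""
    return cf1 + cf2 * (1 - cf1)

cf_rule = 0.7       # assumed belief in: doorbell → wake(Mohan)
cf_evidence = 0.9   # assumed belief that the doorbell actually rang

# Belief in the conclusion is attenuated by belief in the evidence.
cf_wake = cf_rule * cf_evidence
print(round(cf_wake, 2))  # 0.63

# Two independent rules supporting the same conclusion reinforce it:
print(combine(0.5, 0.5))  # 0.75
```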
Any way out?
• However, problems like that of the doorbell are
  very common in real life.
• In AI we often need to reason under such
  circumstances.
• We solve this by properly modeling uncertainty
  and impreciseness, and by developing appropriate
  reasoning techniques.

  UNCERTAINTY: e.g., "It will rain in December."
  IMPRECISENESS: vague quantifiers and qualifiers such as
  "often", "rarely", "sometimes"; e.g., "the boy is very tall."
Non monotonic reasoning
• In first-order logic, adding new axioms increases
  the size of the knowledge base. The set of
  facts and inferences in such systems can only grow
  larger; it cannot be reduced, i.e., it grows
  monotonically.
• In nonmonotonic reasoning, by contrast, adding new
  facts to the database may contradict and invalidate
  old knowledge.
example
•   We first state that all birds can fly and that one bird is named Tweety.
•   ∀X : bird(X) → fly(X)
•   bird(tweety)
•   From these two pieces of knowledge we can derive that Tweety can fly, i.e.,
•   bird(tweety), ∀X : bird(X) → fly(X)  ⊢  fly(tweety)
•   If we add another fact, bird(tom), stating that Tom is also a bird, then we can
    derive that Tom is also able to fly. Hence, more knowledge allows us to derive more
    new rules and facts. As a consequence we conclude that classical reasoning (in FOL)
    is monotonic.
•   The situation changes if we add new knowledge like penguins cannot fly and that
    Tweety is a penguin.
•   ∀X : penguin(X) → ¬fly(X)
•   penguin(tweety)
•   In this case we derive a contradiction from which we can derive everything.
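The Tweety example can be sketched by modeling the flying rule as a default with an explicit exception, rather than as a classical FOL axiom (which, as the slide notes, becomes inconsistent). The encoding below is an illustrative assumption.

```python
# Sketch: birds fly by default, unless known to be penguins.
# Adding knowledge (penguin(tweety)) retracts an earlier conclusion --
# the hallmark of non-monotonic reasoning.

birds = {"tweety", "tom"}
penguins = set()

def flies(x):
    """Default rule with exception: bird(x) and not penguin(x) → fly(x)."""
    return x in birds and x not in penguins

print(flies("tweety"))   # True: derived from the default
penguins.add("tweety")   # new knowledge: penguin(tweety)
print(flies("tweety"))   # False: the earlier conclusion no longer holds
```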
Truth maintenance systems
•   Necessary when changes in the fact base lead
    to inconsistency / incorrectness among the facts
    (non-monotonic reasoning)
•   A Truth Maintenance System tries to adjust the
    Knowledge Base or Fact Base upon changes to
    keep it consistent and correct.
•   A TMS uses dependencies among facts to keep
    track of conclusions and allow revision /
    retraction of facts and conclusions.
Dependency…….example
•   Suppose the knowledge base KB contains only the propositions P and P → Q. From these, the
    inference engine (IE) would rightfully conclude Q and add this conclusion to the KB. Later, if it
    were learned that P was inappropriate, ¬P would be added to the KB, resulting in a contradiction.
    Consequently it would be necessary to remove P to eliminate the inconsistency. But with P now
    removed, Q is no longer a justified belief and should be removed too. This kind of job is done
    by the TMS.
•   Actually, the TMS does not simply discard conclusions like Q. That could be wasteful, since P
    may become valid again, and Q would then have to be re-derived. Instead, the TMS maintains
    dependency records for all such conclusions. These records determine which set of beliefs is
    current (i.e., to be used by the IE). Thus Q would be removed from the current belief set, but
    not deleted.



           [Diagram: architecture of the problem solver with TMS. The
            Inference Engine communicates with the TMS via TELL and ASK,
            and draws on the Knowledge Base.]
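The P, P → Q scenario above can be sketched as a tiny dependency-recording TMS. All class and method names here are invented for illustration; a real JTMS is considerably richer.

```python
# Minimal dependency-recording sketch of the P, P→Q example. Retracting
# P moves Q out of the current belief set without deleting its record,
# so Q need not be re-derived when P returns.

class TMS:
    def __init__(self):
        self.justifications = {}   # conclusion -> set of antecedent beliefs
        self.premises = set()      # currently asserted premises

    def assert_premise(self, p):
        self.premises.add(p)

    def retract(self, p):
        self.premises.discard(p)

    def justify(self, conclusion, *antecedents):
        """Record a dependency: conclusion holds if all antecedents do."""
        self.justifications[conclusion] = set(antecedents)

    def current(self, belief):
        """Is this belief in the current belief set?"""
        if belief in self.premises:
            return True
        ants = self.justifications.get(belief)
        return ants is not None and all(self.current(a) for a in ants)

tms = TMS()
tms.assert_premise("P")
tms.justify("Q", "P")      # record: Q depends on P
print(tms.current("Q"))    # True
tms.retract("P")           # P found inappropriate
print(tms.current("Q"))    # False: Q leaves the belief set; record kept
tms.assert_premise("P")
print(tms.current("Q"))    # True again, without re-derivation
```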
Structured knowledge

       PROFESSION(bob,professor)
        FACULTY(bob,engineering)
                      .
                      .
          MARRIED(bob,sandy)
        FATHER-OF(bob,sue,joey)
           OWNS(bob,house)




 When the quantity of information becomes large, the maintenance of
 knowledge becomes difficult. In such cases, some form of knowledge
 structuring is done.
ASSOCIATIVE NETWORKS
•   Network representations provide a means of structuring and
    exhibiting the structure in knowledge.
•   Network representations give a pictorial presentation of objects,
    their attributes, and the relationships that exist between them and
    other entities.

    [Fragment of an associative network: tweety -a-kind-of-> bird;
     bird -can-> fly; bird -has-> wings; tweety -color-> yellow.]
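The network fragment above has a direct encoding as (subject, relation, object) triples; this is a common sketch, and the helper function is an illustrative assumption.

```python
# The associative-network fragment as a set of labeled edges.

network = {
    ("bird", "can", "fly"),
    ("bird", "has", "wings"),
    ("tweety", "a-kind-of", "bird"),
    ("tweety", "color", "yellow"),
}

def attrs(entity):
    """All (relation, object) pairs directly attached to an entity."""
    return {(r, o) for (s, r, o) in network if s == entity}

print(sorted(attrs("tweety")))  # [('a-kind-of', 'bird'), ('color', 'yellow')]
```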
Understanding Frames
• Frames were first introduced by Marvin Minsky as
  a data structure to represent a mental model of a
  stereotypical situation such as driving a car,
  attending a meeting or eating in a restaurant.
• Knowledge about an object or event is stored
  together in memory as a unit; when a new
  situation is encountered, an appropriate frame is
  selected from memory for use in reasoning about
  the situation.
Understanding Frames – Facts

   Frames are record-like structures that have slots and
    slot-values for an entity
   Using frames, the knowledge about an object/event can be
    stored together in the KB as a unit
   A slot in a frame
       specifies a characteristic of the entity which the frame
        represents
       contains information as attribute-value pairs, default
        values, etc.
Frame syntax
•   (<frame name>
         (<slot1> (<facet1> <value1> … <valuen>)
                  (<facet2> <value1> … <valuem>)
                  …)
         (<slot2> (<facet1> <value1> … <valuem>)
                  …)
         …)
Understanding Frames - Examples

1.   An example frame:
•       (Tweety
•                        (SPECIES (VALUE bird))
•                        (COLOR          (VALUE yellow))
•                        (ACTIVITY       (VALUE fly)))


2.   Employee Details
•       ( Ruchi Sharma
•                        (PROFESSION     (VALUE Tutor))
•                        (EMPID          (VALUE 376074))
•                        (SUBJECT        (VALUE   Computers)))
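The Tweety frame above maps naturally onto nested dictionaries (slot → facet → values); this rendering and the accessor name are illustrative assumptions, not a standard API.

```python
# The Tweety frame as nested dicts: slot -> facet -> list of values.

tweety = {
    "SPECIES":  {"VALUE": ["bird"]},
    "COLOR":    {"VALUE": ["yellow"]},
    "ACTIVITY": {"VALUE": ["fly"]},
}

def get_values(frame, slot, facet="VALUE"):
    """Look up the values stored under a slot's facet; [] if absent."""
    return frame.get(slot, {}).get(facet, [])

print(get_values(tweety, "COLOR"))   # ['yellow']
print(get_values(tweety, "WEIGHT"))  # []: no such slot
```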
Graphical Representation
• Graphs are easy to store in a computer
• To be of any use, a formalism must be imposed
Conceptual Graphs
• Semantic network where each graph represents a
  single proposition
• Concept nodes can be
   – Concrete (visualisable) such as restaurant, my dog Spot
   – Abstract (not easily visualisable) such as anger
• Edges do not have labels
   – Instead, conceptual relation nodes
   – Easy to represent relations between multiple objects
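The bipartite structure described above (concept nodes connected through unlabeled edges to conceptual-relation nodes) can be sketched directly; the example proposition, node names, and relation name LOC are all invented for illustration.

```python
# Sketch of a conceptual graph: concept nodes and relation nodes are
# separate; edges carry no labels. Proposition: "Spot (a dog) is in
# a restaurant."

concepts = {"c1": "Dog: Spot", "c2": "Restaurant"}
relations = {"r1": ("LOC", ["c1", "c2"])}  # relation node linking concepts

def render(rel_id):
    """Show a relation node and the concept nodes it connects."""
    name, args = relations[rel_id]
    return f"({name}) -> " + ", ".join(concepts[c] for c in args)

print(render("r1"))  # (LOC) -> Dog: Spot, Restaurant
```

Because relations are nodes rather than edge labels, a relation can connect any number of concepts, which is what makes multi-argument relations easy to represent.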
Thank you
