
Artificial intelligence course work


Fundamentals of Artificial Intelligence: Introduction, A.I. Representation, Non-AI & AI Techniques, Representation of Knowledge, Knowledge Base Systems, State Space Search, Production Systems, Problem Characteristics, types of production systems, Intelligent Agents and Environments, concept of rationality, the nature of environments, structure of agents, problem solving agents, problem formulation. Searching & Planning: Blocks world, STRIPS, implementation using goal stack, Partial Order Planning, Hierarchical Planning and the least-commitment strategy, Conditional Planning, Continuous Planning, Machine Learning Algorithms. Knowledge Representation: Knowledge based agents, Wumpus world, Propositional Logic: Representation, Inference, Reasoning Patterns, Resolution; First Order Logic: Representation, Inference, Reasoning Patterns, Resolution, Forward and Backward Chaining; Basics of PROLOG: Representation, Structure, Backtracking, Expert System. Uncertainty: Non-Monotonic Reasoning, Logics for Non-Monotonic Reasoning, Forward rules and Backward rules, Justification-based Truth Maintenance Systems, Semantic Nets, Statistical Reasoning, Probability and Bayes’ theorem, Bayesian Network, Markov Networks, Hidden Markov Model, Basis of Utility Theory, Utility Functions.



  1. 1. Artificial Intelligence Course Work By: Mr. Ganesh Ingle
  2. 2. Artificial Intelligence Course (Fundamentals of Artificial Intelligence) By: Mr. Ganesh Ingle
  3. 3. Course Main Topic/Chapters Fundamentals of Artificial Intelligence Searching Planning Knowledge Representation Uncertainty
  4. 4. Course Main Topic/Chapters Section 1: Topics/Contents Fundamentals of Artificial Intelligence: Introduction, A.I. Representation, Non-AI & AI Techniques, Representation of Knowledge, Knowledge Base Systems, State Space Search, Production Systems, Problem Characteristics, types of production systems, Intelligent Agents and Environments, concept of rationality, the nature of environments, structure of agents, problem solving agents, problem formulation. Searching & Planning: Blocks world, STRIPS, Implementation using goal stack, Partial Order Planning, Hierarchical planning and least commitment strategy, Conditional Planning, Continuous Planning, Machine Learning Algorithms
  5. 5. Course Main Topic/Chapters Section 2: Topics/Contents Knowledge Representation: Knowledge based agents, Wumpus world, Propositional Logic: Representation, Inference, Reasoning Patterns, Resolution, First order Logic: Representation, Inference, Reasoning Patterns, Resolution, Forward and Backward Chaining. Basics of PROLOG: Representation, Structure, Backtracking, Expert System. Uncertainty: Non Monotonic Reasoning, Logics for Non Monotonic Reasoning, Forward rules and Backward rules, Justification based Truth Maintenance Systems, Semantic Nets, Statistical Reasoning, Probability and Bayes’ theorem, Bayesian Network, Markov Networks, Hidden Markov Model, Basis of Utility Theory, Utility Functions.
  6. 6. Representation of Knowledge 1.AI agents deal with knowledge (data) • Facts (believe & observe knowledge) • Procedures (how to knowledge) • Meaning (relate & define knowledge) 2.Right representation is crucial • Early realisation in AI • Wrong choice can lead to project failure • Active research area
  7. 7. Representation of Knowledge Choosing a Representation 1. For certain problem solving techniques • ‘Best’ representation already known • Often a requirement of the technique • Or a requirement of the programming language (e.g. Prolog) 2. Examples • First order theorem proving… first order logic • Inductive logic programming… logic programs • Neural networks learning… neural networks 3. Some general representation schemes • Suitable for many different (and new) AI applications
  8. 8. Representation of Knowledge Some General Representations 1. Logical Representations 2. Production Rules 3. Semantic Networks • Conceptual graphs, frame What is a Logic? • A language with concrete rules • No ambiguity in representation (may be other errors!) • Allows unambiguous communication and processing • Very unlike natural languages e.g. English • Many ways to translate between languages • A statement can be represented in different logics • And perhaps differently in same logic • Expressiveness of a logic • How much can we say in this language? • Not to be confused with logical reasoning • Logics are languages, reasoning is a process (may use logic)
  9. 9. Syntax and Semantics Representation of Knowledge 1.Syntax • Rules for constructing legal sentences in the logic • Which symbols we can use (English: letters, punctuation) • How we are allowed to combine symbols 2.Semantics • How we interpret (read) sentences in the logic • Assigns a meaning to each sentence 3.Example: “All lecturers are seven foot tall” • A valid sentence (syntax) • And we can understand the meaning (semantics) • This sentence happens to be false (there is a counterexample)
  10. 10. Propositional Logic Representation of Knowledge 1.Syntax • Propositions, e.g. “it is wet” • Connectives: and, or, not, implies, if (equivalent) • Brackets, T (true) and F (false) 2.Semantics (Classical AKA Boolean) • Define how connectives affect truth • “P and Q” is true if and only if P is true and Q is true • Use truth tables to work out the truth of statements
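The truth-table idea on this slide can be made concrete in a few lines of code. A minimal sketch (not part of the original deck), using Python booleans for the classical truth values:

```python
# Enumerate the truth table for "P and Q": true only when both P and Q are true.
from itertools import product

for P, Q in product([True, False], repeat=2):
    print(f"P={P!s:5}  Q={Q!s:5}  (P and Q)={P and Q}")
```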
  11. 11. Predicate Logic Representation of Knowledge 1.Predicate Logic 2. Propositional logic combines atoms • An atom contains no propositional connectives • Have no structure (today_is_wet, john_likes_apples) 3. Predicates allow us to talk about objects • Properties: is_wet(today) • Relations: likes(john, apples) • True or false 4. In predicate logic each atom is a predicate • e.g. first order logic, higher-order logic
  12. 12. Representation of Knowledge 1.First Order Logic 2.More expressive logic than propositional • Used in this course 3.Constants are objects: john, apples 4.Predicates are properties and relations: • likes(john, apples) 5.Functions transform objects: • likes(john, fruit_of(apple_tree)) 6.Variables represent any object: likes(X, apples) 7.Quantifiers qualify values of variables
  13. 13. Example: FOL Sentence Representation of Knowledge Example: FOL Sentence 1. “Every rose has a thorn” 2. For all X • if (X is a rose) • then there exists Y • (X has Y) and (Y is a thorn)
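Written out in standard first-order notation (this formula follows the quantifier structure described on the slide; it is not printed there):

```latex
% "Every rose has a thorn" as a first-order logic sentence:
\forall x\,\bigl(\mathit{rose}(x) \rightarrow \exists y\,(\mathit{has}(x,y) \land \mathit{thorn}(y))\bigr)
```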
  14. 14. Higher Order Logic Representation of Knowledge Higher Order Logic 1. More expressive than first order 2. Functions and predicates are also objects • Described by predicates: binary(addition) • Transformed by functions: differentiate(square) • Can quantify over both 3. E.g. define red functions as having zero at 17 4. Much harder to reason with
  15. 15. Beyond True and False Representation of Knowledge Beyond True and False 1. Multi-valued logics • More than two truth values • e.g., true, false & unknown • Fuzzy logic uses degrees of truth, truth value in [0,1] 2. Modal logics • Modal operators define mode for propositions • Epistemic logics (belief) • e.g. □p (necessarily p), ◇p (possibly p), … 3. Temporal logics (time) 1. e.g. □p (always p), ◇p (eventually p), …
  16. 16. Representation of Knowledge Non-Logical Representations? 1. Production rules 2. Semantic networks • Conceptual graphs • Frames 3. Logic representations have restrictions and can be hard to work with • Many AI researchers searched for better representations
  17. 17. Representation of Knowledge 1.Production Rules 2. Rule set of <condition,action> pairs • “if condition then action” 3. Match-resolve-act cycle • Match: Agent checks if each rule’s condition holds • Resolve: • Multiple production rules may fire at once (conflict set) • Agent must choose rule from set (conflict resolution) • Act: If so, rule “fires” and the action is carried out 4. Working memory: • rule can write knowledge to working memory • knowledge may match and fire other rules
  18. 18. Representation of Knowledge Production Rules Example 1. IF (at bus stop AND bus arrives) THEN action(get on the bus) 2. IF (on bus AND not paid AND have oyster card) THEN action(pay with oyster) AND add(paid) 3. IF (on bus AND paid AND empty seat) THEN sit down 4. conditions and actions must be clearly defined and can easily be expressed in first order logic!
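The match-resolve-act cycle from the previous slide can be traced on these bus rules. A minimal Python sketch, with an invented working memory and "first applicable rule wins" as the conflict-resolution strategy:

```python
# Working memory is a set of facts; each rule has positive conditions,
# negated conditions, an action name, and facts the action adds.
working_memory = {"at bus stop", "bus arrives", "have oyster card", "empty seat"}

rules = [
    ({"at bus stop", "bus arrives"},   set(),    "get on the bus",  {"on bus"}),
    ({"on bus", "have oyster card"},   {"paid"}, "pay with oyster", {"paid"}),
    ({"on bus", "paid", "empty seat"}, set(),    "sit down",        {"seated"}),
]

fired = set()
while True:
    # Match: which rules' conditions hold in working memory?
    conflict_set = [r for r in rules
                    if r[0] <= working_memory
                    and not (r[1] & working_memory)
                    and r[2] not in fired]
    if not conflict_set:
        break
    # Resolve: naive conflict resolution, take the first applicable rule.
    pos, neg, action, adds = conflict_set[0]
    # Act: fire the rule and write its conclusions back to working memory.
    print("firing:", action)
    fired.add(action)
    working_memory |= adds
```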
  19. 19. Graphical Representation 1. Humans draw diagrams all the time, e.g. 2. Causal relationships 1. And relationships between ideas
  20. 20. Representation of Knowledge Graphical Representation 1. Graphs easy to store in a computer 2. To be of any use must impose a formalism 3. Jason is 15, Bryan is 40, Arthur is 70, Jim is 74 4. How old is Julia?
  21. 21. Representation of Knowledge 1.Semantic Networks • Because the syntax is the same • We can guess that Julia’s age is similar to Bryan’s • Formalism imposes restricted syntax
  22. 22. Representation of Knowledge Semantic Networks 1. Graphical representation (a graph) • Links indicate subset, member, relation, ... 2. Equivalent to logical statements (usually FOL) • Easier to understand than FOL? • Specialised SN reasoning algorithms can be faster 3. Example: natural language understanding • Sentences with same meaning have same graphs • e.g. Conceptual Dependency Theory (Schank)
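One way to picture a semantic network in code is as a set of labelled edges, with property lookup that follows is_a links (inheritance). A minimal sketch; the nodes (Tweety, canary, bird) and relations are invented for illustration and are not from the slides:

```python
edges = [
    ("Tweety", "is_a", "canary"),
    ("canary", "is_a", "bird"),
    ("bird",   "has_part", "wings"),
    ("canary", "colour", "yellow"),
]

def lookup(node, relation):
    """Find a value for `relation`, climbing is_a links if needed."""
    while node is not None:
        for s, r, o in edges:
            if s == node and r == relation:
                return o
        node = next((o for s, r, o in edges if s == node and r == "is_a"), None)
    return None

print(lookup("Tweety", "colour"))    # yellow (inherited from canary)
print(lookup("Tweety", "has_part"))  # wings  (inherited via canary -> bird)
```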
  23. 23. 1.Conceptual Graphs 1. Semantic network where each graph represents a single proposition 2. Concept nodes can be • Concrete (visualisable) such as restaurant, my dog Spot • Abstract (not easily visualisable) such as anger 3. Edges do not have labels • Instead, conceptual relation nodes • Easy to represent relations between multiple objects Representation of Knowledge
  24. 24. Representation of Knowledge Frame Representations 1. Semantic networks where nodes have structure • Frame with a number of slots (age, height, ...) • Each slot stores specific item of information 2. When agent faces a new situation • Slots can be filled in (value may be another frame) • Filling in may trigger actions • May trigger retrieval of other frames 3. Inheritance of properties between frames • Very similar to objects in OOP
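A frame can be approximated as a dictionary of slots with defaults inherited from a parent frame, much like objects in OOP. A minimal sketch; the frame names and slot values are invented:

```python
frames = {
    "person":   {"is_a": None,       "legs": 2, "can_speak": True},
    "lecturer": {"is_a": "person",   "employer": "university"},
    "bryan":    {"is_a": "lecturer", "age": 40},
}

def get_slot(frame, slot):
    """Return a slot value, falling back to parent frames for defaults."""
    while frame is not None:
        if slot in frames[frame]:
            return frames[frame][slot]
        frame = frames[frame]["is_a"]
    return None

print(get_slot("bryan", "age"))   # 40, filled directly in this frame
print(get_slot("bryan", "legs"))  # 2, default inherited from "person"
```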
  25. 25. Representation of Knowledge Example: Frame Representation
  26. 26. Representation of Knowledge 1.Flexibility in Frames 2. Slots in a frame can contain • Information for choosing a frame in a situation • Relationships between this and other frames • Procedures to carry out after various slots filled • Default information to use where input is missing • Blank slots: left blank unless required for a task • Other frames, which gives a hierarchy 3. Can also be expressed in first order logic
  27. 27. Representation of Knowledge 1.Representation & Logic • AI wanted “non-logical representations” • Production rules • Semantic networks • Conceptual graphs, frames • But all can be expressed in first order logic! • Best of both worlds • Logical reading ensures representation well-defined • Representations specialised for applications • Can make reasoning easier, more intuitive
  28. 28. AI ,ML and Deep learning
  29. 29. AI Hard AI Soft AI Artificial Intelligence
  30. 30. Artificial Intelligence. Act Like a Human: Turing Test (playing chess by a machine and a human in different rooms, with the observer not knowing which room has the machine); the Complete Turing Test adds NLP (natural language processing), KR (knowledge representation) and Movement (robotics). Act Rationally: Rational Agent approach. Think Like a Human: Cognitive approach, covering ANN design, ANN training, activation functions and the weights of the ANN. Think Rationally: Laws of Thought approach, e.g. i/p 1 = Ram is human; i/p 2 = All humans are kind; o/p = Ram is kind.
  31. 31. AI ,ML and Deep learning
  32. 32. How do our brains work? Dendrites: Input Cell body: Processor Synapse: Link Axon: Output
  33. 33. Artificial Neuron?
  34. 34. How do ANNs work?
  35. 35. How do ANNs work? Inputs x1, x2, …, xm feed into a processing (summation) unit that produces the output y: ∑ = x1 + x2 + … + xm = y
  36. 36. How ANNs work? Inputs Weights Bias value Hidden layer Sigmoid function Activation function Back propagation Output Fitness function
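Putting the listed pieces together, a single neuron's forward pass is a weighted sum of the inputs plus a bias, passed through an activation function such as the sigmoid. A minimal sketch with invented weights and inputs (not values from the slides):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x = [0.5, 0.1, 0.9]      # inputs x1..xm
w = [0.4, -0.6, 0.2]     # weights (assumed values)
b = 0.1                  # bias value

z = sum(wi * xi for wi, xi in zip(w, x)) + b   # weighted sum plus bias
y = sigmoid(z)                                 # activation function
print(f"weighted sum z = {z:.3f}, output y = {y:.3f}")
```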
  37. 37. Learning by trial‐and‐error Continuous process of: Trial: Processing an input to produce an output (In terms of ANN: Compute the output function of a given input) Evaluate: Evaluating this output by comparing the actual output with the expected output. Adjust: Adjust the weights.
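The trial / evaluate / adjust cycle can be shown on a toy problem: one linear unit learning y = 2x. A minimal sketch; the data, learning rate and initial weight are all invented:

```python
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (input, expected output)
w, lr = 0.0, 0.05

for epoch in range(50):
    for x, target in samples:
        y = w * x                 # trial: compute the output for an input
        error = target - y        # evaluate: compare actual with expected output
        w += lr * error * x       # adjust: move the weight to reduce the error
print(f"learned weight w = {w:.3f} (expected about 2.0)")
```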
  38. 38. ANN Training 1. Supervised Training.  Inputs and the outputs are provided by the user.  The network compares the resulting outputs with the desired outputs.  Errors are back-propagated, causing the system to adjust the weights which control the network. 2. Unsupervised, or Adaptive, Training.  The input dataset is given but no desired outputs for comparison.  The system itself must decide what features it will use to group the input data.  At present, unsupervised learning is not well understood. Traditional computing (including expert systems) versus artificial neural networks: Processing style: sequential versus parallel. Functions: logical (left-brained), via rules, concepts and calculations, versus gestalt (right-brained), via images, pictures and controls. Learning method: by rules (didactically) versus by example (Socratically). Applications: accounting, word processing, math, inventory, digital communications versus sensor processing, speech recognition, pattern recognition, text recognition.
  39. 39. AI systems dependencies  AI takes decisions to generate true or false output (Mathematical logic / Propositional Logic, PL)  AI takes decisions with prediction: 1. Predicate logic / First order predicate logic (FOL) 2. Set theory  AI takes decisions to be or not to be: 1. Resolution 2. FOL to CNF  AI searches data for self-evolving/training (complete data structures). Blind Search (Uninformed): 1. BFS 2. DFS 3. DLS (Depth Limited Search) 4. Iterative Deepening Search 5. Bidirectional Search. Heuristic Search (Informed): 1. Best First Search 2. A* 3. Greedy Search
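As an example of a blind search from the list above, here is a minimal breadth-first search sketch over an invented state-space graph (not a problem taken from the course):

```python
from collections import deque

graph = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [], "E": ["G"], "F": ["G"], "G": [],
}

def bfs(start, goal):
    frontier = deque([[start]])          # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(bfs("A", "G"))   # ['A', 'B', 'E', 'G']
```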
  40. 40.  1. Heuristics Generic heuristics are Ant Colony Optimization and genetic algorithms. The first is based on how simple ants are able to work together to solve complex problems; the latter is based on the principle of survival of the fittest.  2. Support Vector Machines SVM classification models can also be found in image recognition, e.g. face recognition, or when handwriting is converted to text.  3. Artificial Neural Networks For image recognition purposes, typically Convolutional networks are used, in which only groups of neurons from one layer are connected to groups of neurons in the next layer. For speech recognition purposes, typically Recurrent networks are used, which allow for loops from neurons in a later layer back to an earlier layer. Non-AI & AI Techniques
  41. 41.  4. Markov Decision Process Based on the probabilities and rewards, a policy (function) can be made using the initial and final state. Inventory planning problem: a stock keeper or manager has to decide how many units have to be ordered each week. The inventory planning can be modeled as an MDP, where the states can be considered as positive inventory and shortages.  5. Natural Language Processing Analysing the grammar of the text and the way the words are arranged, so that the relationship between the words is clear. The Part-of-Speech tags from the lexical analysis are used and then grouped into small phrases, which in turn can also be combined with other phrases or words to make a slightly longer phrase. This is repeated until the goal is reached: every word in the sentence has been used. The rules of how the words can be grouped are called the grammar and can take a form like this: D + N = NP, which reads: a Determiner + Noun = Noun Phrase. The final result is depicted in the figure. Non-AI & AI Techniques
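The D + N = NP rule can be illustrated with a tiny chunker. A minimal sketch; the tagged sentence is invented for illustration:

```python
# Group a Determiner followed by a Noun into a Noun Phrase (D + N = NP).
tagged = [("the", "D"), ("dog", "N"), ("chased", "V"), ("a", "D"), ("cat", "N")]

phrases, i = [], 0
while i < len(tagged):
    if tagged[i][1] == "D" and i + 1 < len(tagged) and tagged[i + 1][1] == "N":
        phrases.append((f"{tagged[i][0]} {tagged[i + 1][0]}", "NP"))
        i += 2
    else:
        phrases.append(tagged[i])
        i += 1
print(phrases)  # [('the dog', 'NP'), ('chased', 'V'), ('a cat', 'NP')]
```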
  42. 42.  The techniques used within the domain of Artificial Intelligence are actually just advanced forms of statistical and mathematical models. All these models cleverly put together provide us with tools to compute tasks that were previously thought to be reserved for humans. In subsequent blogs we will dive deeper into business applications, some associated technology trends, and the top 5 risks and concerns. Non-AI & AI Techniques,
  43. 43.  What is a knowledge-based system? A system which is built around a knowledge base. i.e. a collection of knowledge, taken from a human, and stored in such a way that the system can reason with it.  What is knowledge? Knowledge is the sort of information that people use to solve problems. Knowledge includes: facts, concepts, procedures, models, heuristics, examples. Knowledge may be: specific or general, exact or fuzzy, procedural or declarative Knowledge Base Systems
  44. 44.  The task that an expert system performs will generally be regarded as difficult.  An expert system almost always operates in a rather narrow field of knowledge. The field of knowledge is called the knowledge domain of the system.  There are many fields where expert systems can usefully be built.  There are many fields where they can’t.  Note also that an expert can usually explain and justify his/her decisions. Expert Systems
  45. 45.  Developing an expert system usually costs a great deal of time & money  Historically, there has been a high failure rate in E.S. projects  The project may well fail during development - most likely during the “knowledge acquisition” phase.  The development may succeed, but the organisation may fail to accept and use the finished system. Disadvantages of Expert Systems
  46. 46.  Introduction Different searches that can be used to explore the search space in order to find a solution. Before an AI problem can be solved it must be represented as a state space. The state space is then searched to find a solution to the problem. A state space essentially consists of a set of nodes representing each state of the problem, arcs between nodes representing the legal moves from one state to another, an initial state and a goal state. Each state space takes the form of a tree or a graph. Factors that determine which search algorithm or technique will be used include the type of the problem and how the problem can be represented. Search  Techniques that will be examined in the course include: • Depth First Search • Depth First Search with Iterative Deepening • Breadth First Search • Best First Search • Hill Climbing • Branch and Bound Techniques • A* Algorithm State Space Search
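Among the informed techniques listed, A* combines path cost so far with a heuristic estimate of the remaining cost. A minimal sketch over an invented weighted graph with invented (assumed admissible) heuristic values:

```python
import heapq

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)],
         "B": [("G", 3)], "G": []}
h = {"S": 5, "A": 4, "B": 2, "G": 0}    # heuristic estimates (assumed admissible)

def a_star(start, goal):
    frontier = [(h[start], 0, start, [start])]   # (f = g + h, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h[nxt], g2, nxt, path + [nxt]))
    return None, float("inf")

print(a_star("S", "G"))   # (['S', 'A', 'B', 'G'], 6)
```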
  47. 47. Classic AI Problems Traveling Salesman Towers of Hanoi 8-Puzzle State Space Search
  48. 48. State Space Search
  49. 49. State Space Search
  50. 50. State Space Search
  51. 51. State Space Search
  52. 52. State Space Search
  53. 53. State Space Search
  54. 54. State Space Search
  55. 55. State Space Search
  56. 56. State Space Search
  57. 57. State Space Search
  58. 58. State Space Search
  59. 59. State Space Search
  60. 60. State Space Search
  61. 61. State Space Search
  62. 62. State Space Search
  63. 63. State Space Search
  64. 64. State Space Search
  65. 65. State Space Search
  66. 66. State Space Search
  67. 67. State Space Search
  68. 68. The production system is a model of computation that can be applied to implement search algorithms and to model human problem solving. Such problem-solving knowledge can be packaged in the form of little quanta called productions. A production is a rule consisting of a situation-recognition part and an action part. A production is a situation-action pair in which the left side is a list of things to watch for and the right side is a list of things to do. When productions are used in deductive systems, the situations that trigger productions are specified combinations of facts. The actions are restricted to assertions of new facts deduced directly from the triggering combination. Production systems may be called premise-conclusion pairs rather than situation-action pairs. Production Systems
  69. 69. A production system consists of the following components. (a) A set of production rules, which are of the form A → B. Each rule consists of a left hand side that represents the current problem state and a right hand side that represents an output state. A rule is applicable if its left hand side matches the current problem state. (b) A database, which contains all the appropriate information for the particular task. Some part of the database may be permanent while some may pertain only to the solution of the current problem. (c) A control strategy that specifies the order in which the rules will be compared to the database and a way of resolving the conflicts that arise when several rules match simultaneously. (d) A rule applier, which checks the applicability of a rule by matching the current state with the left hand side of the rule and finds the appropriate rule from the database of rules. Production Systems
  70. 70. The production system can be classified as monotonic, non-monotonic, partially commutative and commutative. Production Systems
  71. 71. Production Systems
  72. 72. Production Systems
  73. 73. Features of Production System • Simplicity • Modularity • Modifiability • Knowledge intensive Disadvantages of production system • Opacity • Inefficiency • Absence of learning • Conflict resolution Production Systems
  74. 74. Intelligent Agents and Environments
  75. 75. Intelligent Agents and Environments Simple reflex agents
  76. 76. Intelligent Agents and Environments Model-based reflex agents
  77. 77. Intelligent Agents and Environments Goal-based agents
  78. 78. Learning Agent  Learning element: It is responsible for making improvements by learning from the environment  Critic: Learning element takes feedback from critic which describes how well the agent is doing with respect to a fixed performance standard.  Performance element: It is responsible for selecting external action  Problem Generator: This component is responsible for suggesting actions that will lead to new and informative experiences. Intelligent Agents and Environments
  79. 79. Rationality is nothing but the status of being reasonable, sensible, and having a good sense of judgment. Rationality is concerned with expected actions and results depending upon what the agent has perceived. Performing actions with the aim of obtaining useful information is an important part of rationality. The rationality of an agent is measured by its performance measure, the prior knowledge it has, the environment it can perceive and the actions it can perform. Concept of rationality
  80. 80. A rational agent needs to be designed, keeping in mind the type of environment it will be used in. Below are the types: • Fully observable and partially observable • Deterministic and Stochastic • Static and Dynamic • Discrete and Continuous • Single agent and Multi-agent • Rational agents or Problem-solving agents in AI mostly use these search strategies or algorithms to solve a specific problem and provide the best result. Problem-solving agents are goal-based agents and use atomic representation. Concept of rationality
  81. 81. Problem-solving agents Goal Formulation-Set of one or more (desirable) world states.(eg.Checkmate in Chess) Problem Formulation-What actions and states to consider given a goal and an initial state Search for solution-Given the problem, search for a solution-- a sequence of actions to achieve the goal starting from initial state Execution of the solution Concept of rationality
  82. 82. Problem-solving agents Goal Formulation- Specify the objectives to be achieved • goal - a set of desirable world states in which the objectives have been achieved • current / initial situation - starting point for the goal formulation • actions - cause transitions between world states Concept of rationality
  83. 83. Problem-solving agents Problem Formulation Actions and states to consider states - possible world states accessibility - the agent can determine via its sensors in which state it is consequences of actions - the agent knows the results of its actions levels - problems and actions can be specified at various levels constraints - conditions that influence the problem-solving process performance - measures to be applied costs - utilization of resources Concept of rationality
  84. 84. Problem-solving agents Problem Formulation Problem Types Not all problems are created equal • Single-state problem • Multiple-state problem • Contingency problem • Exploration problem Concept of rationality
  85. 85. Artificial Intelligence Course Work (Searching Planning ) By: Mr. Ganesh Ingle
  86. 86. Blocks world
  87. 87. Blocks world
  88. 88. Blocks world
  89. 89. Blocks world
  90. 90. Blocks world
  91. 91. Blocks world
  92. 92. Blocks world
  93. 93. Blocks world
  94. 94. Blocks world
  95. 95. Blocks world
  96. 96. Partial Order Planning Example: self-repairing explorers. A mission-goal scenario with monitors, command dispatch, fault protection and attitude control; the system is self-commanding, self-diagnosing and self-repairing, with recovery commanded at the mission level and the engineering level.
  97. 97. Partial Order Planning Planning with Atomic Time • Operator-based planning as search • Declarative encoding of states and operators • Partial order planning • Planning problem • Partial order planning algorithm
  98. 98. Partial Order Planning Operator-based Planning Problem  Input  Set of world states  Action operators  Fn: world-state → world-state  Initial state of world  Goal  partial state (set of world states)  Output  Sequence of actions  What assumptions are implied?  Atomic time.  Agent is omniscient (no sensing necessary).  Agent is sole cause of change.  Actions have deterministic effects.  No indirect effects.  STRIPS Assumptions
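The declarative, STRIPS-style encoding of operators can be sketched as preconditions, an add list and a delete list over sets of facts. The blocks-world operator below is an invented example, not taken from the slides:

```python
from dataclasses import dataclass

@dataclass
class Operator:
    name: str
    preconds: frozenset   # facts that must hold before the action
    add_list: frozenset   # facts the action makes true
    del_list: frozenset   # facts the action makes false

    def applicable(self, state):
        return self.preconds <= state

    def apply(self, state):
        return (state - self.del_list) | self.add_list

stack_a_on_b = Operator(
    name="stack(A,B)",
    preconds=frozenset({"holding(A)", "clear(B)"}),
    add_list=frozenset({"on(A,B)", "clear(A)", "handempty"}),
    del_list=frozenset({"holding(A)", "clear(B)"}),
)

state = frozenset({"holding(A)", "clear(B)", "ontable(B)"})
if stack_a_on_b.applicable(state):
    print(sorted(stack_a_on_b.apply(state)))
```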
  99. 99. Partial Order Planning Planning with Atomic Time • Operator-based planning as search • Declarative encoding of state and operators • Partial order planning • Planning problem • Partial order planning algorithm Plan from goals, back to initial state Search through partial plans Representation: • Operators given in declarative representation, rather than black box functions. • Plans represent only relevant commitments (e.g., relevant ordering of operators, not total ordering)
  100. 100. Partial Order Planning POP(<A,O,L>, agenda, actions) • <A,O,L>: a partial plan to expand • Agenda: a queue of open conditions still to be satisfied: <p, a_need> • Actions: a set of actions that may be introduced to meet needs. • a_add: an action that produces the needed condition p for a_need • a_threat: an action that might threaten a causal link from a_producer to a_consumer
  101. 101. Partial Order Planning POP(<A,O,L>, agenda, actions) 1. Termination: If agenda is empty, return plan <A,O,L>. 2. Goal Selection: select and remove open condition <p, a_need> from agenda. 3. Action Selection: Choose new or existing action a_add that can precede a_need and whose effects include p. Link and order actions. 4. Update Agenda: If a_add is new, add its preconditions to agenda. 5. Threat Detection: For every action a_threat that might threaten some causal link from a_produce to a_consume, choose a consistent ordering: a) Demotion: Add a_threat < a_produce b) Promotion: Add a_consume < a_threat 6. Recurse: on modified plan and agenda
  102. 102. Artificial Intelligence Course Work (Knowledge Representation) By: Mr. Ganesh Ingle
  103. 103. Course Main Topic/Chapters Section 2: Topics/Contents Knowledge Representation: Knowledge based agents, Wumpus world, Propositional Logic: Representation, Inference, Reasoning Patterns, Resolution, First order Logic: Representation, Inference, Reasoning Patterns, Resolution, Forward and Backward Chaining. Basics of PROLOG: Representation, Structure, Backtracking, Expert System.
  104. 104. Representation of Knowledge
  105. 105. Representation of Knowledge Logical Representations ● They are mathematically precise, thus we can analyze their limitations, their properties, the complexity of inference etc. ● They are formal languages, thus computer programs can manipulate sentences in the language. ● They come with both a formal syntax and a formal semantics. ● Typically, have well developed proof theories: formal procedures for reasoning (achieved by manipulating sentences).
  106. 106. Representation of Knowledge 1.AI agents deal with knowledge (data) • Facts (believe & observe knowledge) • Procedures (how to knowledge) • Meaning (relate & define knowledge) 2.Right representation is crucial • Early realisation in AI • Wrong choice can lead to project failure • Active research area
  107. 107. Representation of Knowledge Choosing a Representation 1. For certain problem solving techniques • ‘Best’ representation already known • Often a requirement of the technique • Or a requirement of the programming language (e.g. Prolog) 2. Examples • First order theorem proving… first order logic • Inductive logic programming… logic programs • Neural networks learning… neural networks 3. Some general representation schemes • Suitable for many different (and new) AI applications
  108. 108. Representation of Knowledge Some General Representations 1. Logical Representations 2. Production Rules 3. Semantic Networks • Conceptual graphs, frame What is a Logic? • A language with concrete rules • No ambiguity in representation (may be other errors!) • Allows unambiguous communication and processing • Very unlike natural languages e.g. English • Many ways to translate between languages • A statement can be represented in different logics • And perhaps differently in same logic • Expressiveness of a logic • How much can we say in this language? • Not to be confused with logical reasoning • Logics are languages, reasoning is a process (may use logic)
  109. 109. Syntax and Semantics Representation of Knowledge 1.Syntax • Rules for constructing legal sentences in the logic • Which symbols we can use (English: letters, punctuation) • How we are allowed to combine symbols 2.Semantics • How we interpret (read) sentences in the logic • Assigns a meaning to each sentence 3.Example: “All lecturers are seven foot tall” • A valid sentence (syntax) • And we can understand the meaning (semantics) • This sentence happens to be false (there is a counterexample)
  110. 110. Propositional Logic Representation of Knowledge 1.Syntax • Propositions, e.g. “it is wet” • Connectives: and, or, not, implies, if (equivalent) • Brackets, T (true) and F (false) 2.Semantics (Classical AKA Boolean) • Define how connectives affect truth • “P and Q” is true if and only if P is true and Q is true • Use truth tables to work out the truth of statements
  111. 111. Predicate Logic Representation of Knowledge 1.Predicate Logic 2. Propositional logic combines atoms • An atom contains no propositional connectives • Have no structure (today_is_wet, john_likes_apples) 3. Predicates allow us to talk about objects • Properties: is_wet(today) • Relations: likes(john, apples) • True or false 4. In predicate logic each atom is a predicate • e.g. first order logic, higher-order logic
  112. 112. Representation of Knowledge 1.First Order Logic 2.More expressive logic than propositional • Used in this course 3.Constants are objects: john, apples 4.Predicates are properties and relations: • likes(john, apples) 5.Functions transform objects: • likes(john, fruit_of(apple_tree)) 6.Variables represent any object: likes(X, apples) 7.Quantifiers qualify values of variables
  113. 113. Example: FOL Sentence Representation of Knowledge Example: FOL Sentence 1. “Every rose has a thorn” 2. For all X • if (X is a rose) • then there exists Y • (X has Y) and (Y is a thorn)
  114. 114. Higher Order Logic Representation of Knowledge Higher Order Logic 1. More expressive than first order 2. Functions and predicates are also objects • Described by predicates: binary(addition) • Transformed by functions: differentiate(square) • Can quantify over both 3. E.g. define red functions as having zero at 17 4. Much harder to reason with
  115. 115. Beyond True and False Representation of Knowledge Beyond True and False 1. Multi-valued logics • More than two truth values • e.g., true, false & unknown • Fuzzy logic uses degrees of truth, truth value in [0,1] 2. Modal logics • Modal operators define mode for propositions • Epistemic logics (belief) • e.g. □p (necessarily p), ◇p (possibly p), … 3. Temporal logics (time) 1. e.g. □p (always p), ◇p (eventually p), …
  116. 116. Representation of Knowledge Non-Logical Representations? 1. Production rules 2. Semantic networks • Conceptual graphs • Frames 3. Logic representations have restrictions and can be hard to work with • Many AI researchers searched for better representations
  117. 117. Representation of Knowledge 1.Production Rules 2. Rule set of <condition,action> pairs • “if condition then action” 3. Match-resolve-act cycle • Match: Agent checks if each rule’s condition holds • Resolve: • Multiple production rules may fire at once (conflict set) • Agent must choose rule from set (conflict resolution) • Act: If so, rule “fires” and the action is carried out 4. Working memory: • rule can write knowledge to working memory • knowledge may match and fire other rules
  118. 118. Representation of Knowledge Production Rules Example 1. IF (at bus stop AND bus arrives) THEN action(get on the bus) 2. IF (on bus AND not paid AND have oyster card) THEN action(pay with oyster) AND add(paid) 3. IF (on bus AND paid AND empty seat) THEN sit down 4. conditions and actions must be clearly defined and can easily be expressed in first order logic!
  119. 119. Graphical Representation 1. Humans draw diagrams all the time, e.g. 2. Causal relationships 1. And relationships between ideas
  120. 120. Representation of Knowledge Graphical Representation 1. Graphs easy to store in a computer 2. To be of any use must impose a formalism 3. Jason is 15, Bryan is 40, Arthur is 70, Jim is 74 4. How old is Julia?
  121. 121. Representation of Knowledge 1.Semantic Networks • Because the syntax is the same • We can guess that Julia’s age is similar to Bryan’s • Formalism imposes restricted syntax
  122. 122. Representation of Knowledge Semantic Networks 1. Graphical representation (a graph) • Links indicate subset, member, relation, ... 2. Equivalent to logical statements (usually FOL) • Easier to understand than FOL? • Specialised SN reasoning algorithms can be faster 3. Example: natural language understanding • Sentences with same meaning have same graphs • e.g. Conceptual Dependency Theory (Schank)
  123. 123. 1.Conceptual Graphs 1. Semantic network where each graph represents a single proposition 2. Concept nodes can be • Concrete (visualisable) such as restaurant, my dog Spot • Abstract (not easily visualisable) such as anger 3. Edges do not have labels • Instead, conceptual relation nodes • Easy to represent relations between multiple objects Representation of Knowledge
  124. 124. Representation of Knowledge Frame Representations 1. Semantic networks where nodes have structure • Frame with a number of slots (age, height, ...) • Each slot stores specific item of information 2. When agent faces a new situation • Slots can be filled in (value may be another frame) • Filling in may trigger actions • May trigger retrieval of other frames 3. Inheritance of properties between frames • Very similar to objects in OOP
  125. 125. Representation of Knowledge Example: Frame Representation
  126. 126. Representation of Knowledge 1.Flexibility in Frames 2. Slots in a frame can contain • Information for choosing a frame in a situation • Relationships between this and other frames • Procedures to carry out after various slots filled • Default information to use where input is missing • Blank slots: left blank unless required for a task • Other frames, which gives a hierarchy 3. Can also be expressed in first order logic
  127. 127. Representation of Knowledge 1.Representation & Logic • AI wanted “non-logical representations” • Production rules • Semantic networks • Conceptual graphs, frames • But all can be expressed in first order logic! • Best of both worlds • Logical reading ensures representation well-defined • Representations specialised for applications • Can make reasoning easier, more intuitive
  128. 128. Forward and Backward Chaining
  129. 129. Forward and Backward Chaining
  130. 130. Forward and Backward Chaining
  131. 131. Forward and Backward Chaining
  132. 132. Introduction to Prolog What is Prolog? (continued) • traditional programming languages are said to be procedural • procedural programmer must specify in detail how to solve a problem: • mix ingredients; • beat until smooth; • bake for 20 minutes in a moderate oven; • remove tin from oven; • put on bench; • close oven; • turn off oven; • in purely declarative languages, the programmer only states what the problem is and leaves the rest to the language system • We'll see specific, simple examples of cases where Prolog fits really well shortly
  133. 133. Introduction to Prolog Applications of Prolog • intelligent data base retrieval • natural language understanding • expert systems • specification language • machine learning • robot planning • automated reasoning • problem solving
  134. 134. Introduction to Prolog • Backtracking in Prolog • Who does Codd teach? ?- lectures(codd, Course), studies(Student, Course). Course = 9311 Student = jack ; Course = 9314 Student = jill ; Course = 9314 Student = henry ; • Prolog solves this problem by proceeding left to right and then backtracking. • When given the initial query, Prolog starts by trying to solve lectures(codd, Course) • There are six lectures clauses, but only two have codd as their first argument. • Prolog uses the first clause that refers to codd: lectures(codd, 9311). • With Course = 9311, it tries to satisfy the next goal, studies(Student, 9311). • It finds the fact studies(jack, 9311). and hence the first solution: (Course = 9311, Student = jack)
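To keep the code examples in this write-up in a single language, the same left-to-right, backtracking enumeration can be mimicked in Python with nested loops over the two goals. Only the codd facts mentioned on the slide are reproduced; the remaining lectures clauses are omitted:

```python
lectures = [("codd", 9311), ("codd", 9314)]          # assumed facts
studies  = [("jack", 9311), ("jill", 9314), ("henry", 9314)]

def solutions():
    for lecturer, course in lectures:                 # first goal, left to right
        if lecturer != "codd":
            continue
        for student, c in studies:                    # second goal
            if c == course:
                yield course, student                 # a solution; backtrack for more

for course, student in solutions():
    print(f"Course = {course}, Student = {student}")
# Course = 9311, Student = jack
# Course = 9314, Student = jill
# Course = 9314, Student = henry
```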
  135. 135. Artificial Intelligence Course Work (Uncertainty) By: Mr. Ganesh Ingle
  136. 136. Course Main Topic/Chapters Section 2: Topics/Contents Uncertainty: Non Monotonic Reasoning, Logics for Non Monotonic Reasoning, Forward rules and Backward rules, Justification based Truth Maintenance Systems, Semantic Nets, Statistical Reasoning, Probability and Bayes’ theorem, Bayesian Network, Markov Networks, Hidden Markov Model, Basis of Utility Theory, Utility Functions.
  137. 137. Probability and Bayes’ theorem, Bayesian Network Probability is a numerical measure of the likelihood of the occurrence of an event. Questions: • what is a good general size for artifact samples? • what proportion of populations of interest should we be attempting to sample? • how do we evaluate the absence of an artifact type in our collections?
  138. 138. Probability and Bayes’ theorem, Bayesian Network “frequentist” approach: probability should be assessed in purely objective terms; no room for subjectivity on the part of individual researchers; knowledge about probabilities comes from the relative frequency of a large number of trials. This is a good model for coin tossing, but not so useful for archaeology, where many of the events that interest us are unique…
  139. 139. Bayesian approach Bayes Theorem Thomas Bayes 18th century English clergyman concerned with integrating “prior knowledge” into calculations of probability problematic for frequentists prior knowledge = bias, subjectivity… Probability and Bayes’ theorem, Bayesian Network
  140. 140. Basic concepts probability of event = p 0 <= p <= 1 0 = certain non-occurrence 1 = certain occurrence .5 = even odds .1 = 1 chance out of 10 Probability and Bayes’ theorem, Bayesian Network
  141. 141. Basic concepts probability of event = p 0 <= p <= 1 0 = certain non-occurrence 1 = certain occurrence .5 = even odds .1 = 1 chance out of 10 if A and B are mutually exclusive events: P(A or B) = P(A) + P(B) ex., die roll: P(1 or 6) = 1/6 + 1/6 = .33 possibility set: sum of all possible outcomes ~A = anything other than A P(A or ~A) = P(A) + P(~A) = 1 Probability and Bayes’ theorem, Bayesian Network
  142. 142. Basic concepts probability of event = p 0 <= p <= 1 0 = certain non-occurrence 1 = certain occurrence .5 = even odds .1 = 1 chance out of 10 if A and B are mutually exclusive events: P(A or B) = P(A) + P(B) ex., die roll: P(1 or 6) = 1/6 + 1/6 = .33 Probability and Bayes’ theorem, Bayesian Network Basic concepts possibility set: sum of all possible outcomes ~A = anything other than A P(A or ~A) = P(A) + P(~A) = 1 discrete vs. continuous probabilities discrete finite number of outcomes continuous outcomes vary along continuous scale
  143. 143. Discrete probabilities Probability and Bayes’ theorem, Bayesian Network Continuous probabilities
  144. 144. Independent events one event has no influence on the outcome of another event if events A & B are independent then P(A&B) = P(A)*P(B) if P(A&B) = P(A)*P(B) then events A & B are independent coin flipping if P(H) = P(T) = .5 then P(HTHTH) = P(HHHHH) = .5*.5*.5*.5*.5 = .5^5 ≈ .03 if you are flipping a coin and it has already come up heads 6 times in a row, what are the odds of a 7th head? = .5 note that P(10H) ≠ P(4H,6T) lots of ways to achieve the 2nd result (therefore much more probable) Probability and Bayes’ theorem, Bayesian Network
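A quick check of the arithmetic above (the probability of a specific five-flip sequence, and why 4 heads and 6 tails in 10 flips is far more likely than 10 heads), as a minimal sketch:

```python
from math import comb

p_specific_5 = 0.5 ** 5
p_10_heads   = 0.5 ** 10
p_4h_6t      = comb(10, 4) * 0.5 ** 10     # 210 orderings give 4H,6T

print(f"P(one specific 5-flip sequence) = {p_specific_5:.5f}")   # 0.03125
print(f"P(10 heads)                     = {p_10_heads:.5f}")     # 0.00098
print(f"P(4 heads and 6 tails)          = {p_4h_6t:.5f}")        # 0.20508
```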
  145. 145. 1. Mutually exclusive events are not independent 2. Rather, the most dependent kinds of events • if not heads, then tails • joint probability of 2 mutually exclusive events is 0 • P(A&B)=0 Conditional probability concerns the odds of one event occurring, given that another event has occurred P(A|B)=Prob of A, given B P(B|A) = P(A&B)/P(A) if A and B are independent, then P(B|A) = P(A)*P(B)/P(A) P(B|A) = P(B) Probability and Bayes’ theorem, Bayesian Network
  146. 146. Bayes Theorem Can be derived from the basic equation for conditional probabilities Probability and Bayes’ theorem, Bayesian Network: P(B|A) = P(B)·P(A|B) / [ P(B)·P(A|B) + P(~B)·P(A|~B) ]
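Applying the formula with invented numbers (a hypothetical test with hit rate P(A|B) = 0.9, false-positive rate P(A|~B) = 0.1 and prior P(B) = 0.05), as a minimal sketch:

```python
p_b, p_a_given_b, p_a_given_not_b = 0.05, 0.9, 0.1

posterior = (p_b * p_a_given_b) / (
    p_b * p_a_given_b + (1 - p_b) * p_a_given_not_b)
print(f"P(B|A) = {posterior:.3f}")   # about 0.321
```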
  147. 147. 1. Mutually exclusive events are not independent 2. Rather, the most dependent kinds of events • if not heads, then tails • joint probability of 2 mutually exclusive events is 0 • P(A&B)=0 Conditional probability concerns the odds of one event occurring, given that another event has occurred P(A|B)=Prob of A, given B P(B|A) = P(A&B)/P(A) if A and B are independent, then P(B|A) = P(A)*P(B)/P(A) P(B|A) = P(B) Probability and Bayes’ theorem, Bayesian Network
  148. 148. THANK YOU Image Source searchenterpriseai.techtarget.com wikipedia
