Learning Agents Learning Agents Laboratory Computer Science Department George Mason University Prof. Gheorghe Tecuci
Overview Basic bibliography Learning strategies Instructable agents: the Disciple approach
Learning strategies Inductive learning from examples Deductive (explanation-based) learning Introduction Analogical learning Abductive learning Multistrategy learning
Machine Learning is the domain of Artificial Intelligence concerned with building adaptive computer systems that are able to improve their competence and/or efficiency through learning from input data or from their own problem-solving experience. What is Machine Learning What does it mean to improve competence? What does it mean to improve efficiency?
Two complementary dimensions for learning A system is improving its competence if it learns to solve a broader class of problems, and to make fewer mistakes in problem solving. A system is improving its efficiency, if it learns to solve the problems from its area of competence faster or by using fewer resources. Competence Efficiency
The architecture of a learning agent Ontology Rules/Cases/Methods Problem Solving Engine Learning Agent User/ Environment Output/ Sensors Effectors Input/ Knowledge Base Learning Engine Implements learning methods  for extending and refining  the knowledge  base to improve agent’s competence and/or efficiency in problem solving. Implements a general problem solving method that uses the knowledge from the knowledge base to interpret the input and provide an appropriate output. Data structures that represent the objects from the application domain,  general laws governing them, actions that can be performed with them, etc.
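A minimal sketch of this architecture in Python. All names are illustrative; this is not the Disciple implementation, just the loop the slide describes (a problem solving engine that uses the KB, and a learning engine that extends it):

class LearningAgent:
    def __init__(self):
        # Knowledge base: ontology plus rules/cases/methods.
        self.kb = {"ontology": {}, "rules": []}

    def solve(self, problem):
        # Problem solving engine: use the knowledge in the KB to
        # interpret the input and provide an appropriate output.
        for condition, action in self.kb["rules"]:
            if condition(problem):
                return action(problem)
        return None

    def learn(self, condition, action):
        # Learning engine: extend/refine the KB to improve the agent's
        # competence and/or efficiency in problem solving.
        self.kb["rules"].append((condition, action))

agent = LearningAgent()
agent.learn(lambda p: p == "2+2", lambda p: 4)
print(agent.solve("2+2"))  # 4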
Learning strategies A Learning Strategy is a basic form of learning characterized by the employment of a certain type of inference (like deduction, induction or analogy) and a certain type of computational or representational mechanism (like rules, trees, neural networks, etc.).
Successful applications of Machine Learning
Basic ontological elements: instances and concepts An instance is a representation of a particular entity from the application domain. A concept is a representation of a set of instances. government_of_Britain_1943 government_of_US_1943 state_government instance_of instance_of “state_government” represents the set of all entities that are governments of states. This set includes “government_of_US_1943” and “government_of_Britain_1943”, which are called positive examples. “instance_of” is the relationship between an instance and the concept to which it belongs. An entity which is not an instance of a concept is called a negative example of that concept.
Concept generality A concept P is more general than another concept Q if and only if the set of instances represented by P includes the set of instances represented by Q. Example: state_government democratic_government representative_democracy totalitarian_government parliamentary_democracy “subconcept_of” is the relationship between a concept and a more general concept. democratic_government subconcept_of state_government
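Under this definition, generality can be checked extensionally as set inclusion. A minimal sketch (the instance sets below are illustrative, not the full ontology):

state_government = {"government_of_US_1943", "government_of_Britain_1943",
                    "government_of_Germany_1943"}
representative_democracy = {"government_of_US_1943",
                            "government_of_Britain_1943"}

def more_general_than(p, q):
    # P is more general than Q iff P's instance set includes Q's.
    return q <= p

print(more_general_than(state_government, representative_democracy))  # True
print(more_general_than(representative_democracy, state_government))  # False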
A generalization hierarchy (tree diagram, flattened): feudal_god_king_government, totalitarian_government, democratic_government, theocratic_government, state_government, military_dictatorship, police_state, religious_dictatorship, representative_democracy, parliamentary_democracy, theocratic_democracy, monarchy, governing_body, other_state_government, dictator, deity_figure, chief_and_tribal_council, autocratic_leader, democratic_council_or_board, group_governing_body, other_group_governing_body, government_of_Italy_1943, government_of_Germany_1943, government_of_US_1943, government_of_Britain_1943, ad_hoc_governing_body, established_governing_body, other_type_of_governing_body, fascist_state, communist_dictatorship, religious_dictatorship, government_of_USSR_1943
Learning strategies Inductive learning from examples Deductive (explanation-based) learning Introduction Analogical learning Abductive learning Multistrategy learning
Approach: Compare the positive and the negative examples of a concept, in terms of their similarities and differences, and learn the concept as a generalized description of the similarities of the positive examples.  This allows the agent to recognize other entities as being instances of the learned concept.  Illustration Positive examples of cups:   P1  P2  ... Negative examples of cups:   N1  … A description of the cup concept:   has-handle(x), ... Empirical inductive concept learning from examples Given Learn
The learning problem Given •  a language of instances; •   a language of generalizations; •   a set of positive examples (E1, ..., En) of a concept •  a set of negative examples (C1, ... , Cm) of the same concept •  a learning bias •  other background knowledge Determine •  a concept description which is a generalization of the positive    examples that does not cover any of the negative examples Purpose of concept learning: Predict if an instance is an example of the learned concept.
Generalization and specialization rules A generalization rule transforms an expression into a more general expression; a specialization rule performs the inverse transformation, producing a less general expression. Indicate various generalizations of the following sentence: Students who have lived in Fairfax for 3 years.
Generalization (and specialization) rules Climbing the generalization hierarchy Dropping condition Generalizing numbers Adding alternatives Turning constants into variables
Climbing/descending the generalization hierarchy Generalizes an expression by replacing a concept with a more general one. ?O1 is single_state_force has_as_governing_body ?O2 ?O2 is representative_democracy generalization specialization representative_democracy → democratic_government The set of single state forces governed by representative democracies democratic_government → representative_democracy ?O1 is single_state_force has_as_governing_body ?O2 ?O2 is democratic_government The set of single state forces governed by democracies democratic_government representative_democracy parliamentary_democracy
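A sketch of the climbing rule over a small parent-link hierarchy (the hierarchy fragment is taken from the slides; the list representation of an expression is an assumption):

# Climbing the generalization hierarchy: replace a concept in an
# expression with its direct parent.
parent = {"representative_democracy": "democratic_government",
          "parliamentary_democracy": "democratic_government",
          "democratic_government": "state_government"}

def climb(expression, concept):
    # Generalize: substitute the concept's parent for each occurrence.
    return [parent.get(t, t) if t == concept else t for t in expression]

expr = ["single_state_force", "has_as_governing_body",
        "representative_democracy"]
print(climb(expr, "representative_democracy"))
# ['single_state_force', 'has_as_governing_body', 'democratic_government']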
Basic idea of version space concept learning Consider the examples E1, …, En in sequence. UB + Initialize the lower bound to the first positive example (LB=E1) and the upper bound (UB) to the most general generalization of E1. LB + + UB LB If the next example is a positive one, then generalize LB as little as possible to cover it. _ + + UB LB If the next example is a negative one, then specialize UB as little as possible to uncover it and to remain more general than LB. _ + + UB=LB _ _ + + … Repeat the above two steps with the rest of examples until UB=LB. This is the learned concept.
The candidate elimination algorithm Initialize S to the first positive example and G to the most general hypothesis. For each new positive example, remove from G the hypotheses that do not cover it and minimally generalize the elements of S to cover it. For each new negative example, remove from S the hypotheses that cover it and minimally specialize the elements of G to exclude it, keeping only the specializations that remain more general than some element of S. Repeat until S = G, which is the learned concept.
Illustration of the candidate elimination algorithm Language of instances: (shape, size); shape: {ball, brick, cube}, size: {large, small}. Language of generalizations: (shape, size); shape: {ball, brick, cube, any-shape}, size: {large, small, any-size}. Learning process, with the input examples considered in sequence:
1. +(ball, large): S = {(ball, large)}, G = {(any-shape, any-size)}
2. –(brick, small): G = {(ball, any-size), (any-shape, large)}
3. –(cube, large): G = {(ball, any-size)}
4. +(ball, small): S = {(ball, any-size)} = G
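A runnable sketch of this exact run, for the conjunctive (shape, size) language only; it assumes the first example is positive, as in the slide:

ANY = "any"
VALUES = [("ball", "brick", "cube"), ("large", "small")]

def covers(h, x):                 # h covers x (x may itself be a pattern)
    return all(a == ANY or a == b for a, b in zip(h, x))

def min_generalize(s, x):         # generalize S as little as possible
    return tuple(a if a == b else ANY for a, b in zip(s, x))

def min_specializations(g, x):    # specialize g minimally to exclude x
    return [g[:i] + (v,) + g[i+1:]
            for i, a in enumerate(g) if a == ANY
            for v in VALUES[i] if v != x[i]]

examples = [(("ball", "large"), True), (("brick", "small"), False),
            (("cube", "large"), False), (("ball", "small"), True)]

S, G = examples[0][0], [(ANY, ANY)]
for x, positive in examples:
    if positive:
        S = min_generalize(S, x)
        G = [g for g in G if covers(g, x)]
    else:
        new_G = [s for g in G if covers(g, x)
                 for s in min_specializations(g, x) if covers(s, S)]
        new_G += [g for g in G if not covers(g, x)]
        # Keep only the maximally general members of G.
        G = [g for g in new_G
             if not any(h != g and covers(h, g) for h in new_G)]
    print(("+" if positive else "-") + str(x), "S =", S, "G =", G)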
General features of the empirical inductive methods  Require many examples. Do not need much domain knowledge. Improve the competence of the agent.   The version space method relies on an exhaustive bi-directional search and is computationally intensive. This limits its practical applicability.   Practical empirical inductive learning methods (such as ID3) rely on heuristic search to hypothesize the concept.
Learning strategies Inductive learning from examples Deductive (explanation-based) learning Introduction Analogical learning Abductive learning Multistrategy learning
Explanation-based learning problem Given A training example cup(o1): color(o1, white), made-of(o1, plastic), light-mat(plastic), has-handle(o1), has-flat-bottom(o1), up-concave(o1), ... Learning goal A specification of the desirable features of the concept to be learned (e.g. the learned concept should have only features from the example) Background knowledge Complete knowledge that allows proving that the training example represents the concept: cup(x) ← liftable(x), stable(x), open-vessel(x). liftable(x) ← light(x), graspable(x). stable(x) ← has-flat-bottom(x). … Determine A concept definition representing a deductive generalization of the training example that satisfies the learning goal: cup(x) ← made-of(x, y), light-mat(y), has-handle(x), has-flat-bottom(x), up-concave(x). Purpose of learning Improve the problem solving efficiency of the agent.
Explanation-based learning method 1. Construct an explanation that proves that the training example is an example of the concept to be learned. 2. Generalize the explanation as much as possible so that the proof still holds, and extract from it a concept definition that satisfies the learning goal. An example of a cup cup(o1): color(o1, white), made-of(o1, plastic), light-mat(plastic), has-handle(o1), has-flat-bottom(o1), up-concave(o1), ... cup(o1) stable(o1) liftable(o1) graspable(o1) light(o1) light-mat(plastic) made-of(o1,plastic) ... ... has-handle(o1) cup(x) stable(x) liftable(x) graspable(x) light(x) light-mat(y) made-of(x,y) ... ... has-handle(x) The proof identifies the characteristic features: • has-handle(o1) is needed to prove cup(o1) • color(o1,white) is not needed to prove cup(o1) • made-of(o1, plastic) is needed to prove cup(o1) Proof generalization generalizes them: • made-of(o1, plastic) is generalized to made-of(x, y); • the material need not be plastic.
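A minimal sketch of the two EBL steps for the cup example. It works directly at the variable level, so the generalization step is implicit; real EBL first proves the ground example cup(o1) and then generalizes (regresses) that proof. The rule encoding as a dict is an assumption:

rules = {  # Horn clauses from the background knowledge: head <- body
    "cup(x)": ["liftable(x)", "stable(x)", "open-vessel(x)"],
    "liftable(x)": ["light(x)", "graspable(x)"],
    "light(x)": ["made-of(x,y)", "light-mat(y)"],
    "graspable(x)": ["has-handle(x)"],
    "stable(x)": ["has-flat-bottom(x)"],
    "open-vessel(x)": ["up-concave(x)"],
}

def operational_leaves(goal):
    # Expand the proof tree; the leaves (features with no rule) are the
    # operational features kept in the learned concept definition.
    if goal not in rules:
        return [goal]
    return [leaf for sub in rules[goal] for leaf in operational_leaves(sub)]

print("cup(x) <-", ", ".join(operational_leaves("cup(x)")))
# cup(x) <- made-of(x,y), light-mat(y), has-handle(x),
#           has-flat-bottom(x), up-concave(x)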
Discussion How does this learning method improve the efficiency of the problem solving process? A cup is recognized by using a single rule rather than by building a proof tree. Do we need a training example to learn an operational definition of the concept? Why? The learner does not need a training example. It can simply build proof trees top-down, starting with an abstract definition of the concept and growing the tree until the leaves are operational features. However, without a training example the learner will learn many operational definitions; the training example focuses the learner on the most typical example.
General features of explanation-based learning • Needs only one example. • Requires complete knowledge about the concept (which makes this learning strategy impractical). • Improves agent's efficiency in problem solving. • Shows the importance of explanations in learning.
Learning strategies Inductive learning from examples Deductive (explanation-based) learning Introduction Analogical learning Abductive learning Multistrategy learning
Learning by analogy Learning by analogy means acquiring new knowledge about an input entity by transferring it from a known similar entity. The hydrogen atom is like our solar system. The Sun has a greater mass than the Earth and attracts it, causing the Earth to revolve around the Sun. The nucleus also has a greater mass than the electron and attracts it. Therefore it is plausible that the electron also revolves around the nucleus.
Discussion What is the central intuition supporting learning by analogy? If two entities are similar in some respects then they could be similar in other respects as well. Examples of analogies: Pressure drop is like voltage drop. A variable in a programming language is like a box. Provide other examples of analogies.
Learning by analogy: the learning problem Given: • A partially known target entity T and a goal concerning it. • Background knowledge containing known entities. Find: • New knowledge about T obtained from a source entity S belonging to the background knowledge. Partially understood structure of the hydrogen atom under study. Knowledge from different domains, including astronomy, geography, etc. In a hydrogen atom the electron revolves around the nucleus, in a way similar to that in which a planet revolves around the sun.
Learning by analogy: the learning method •  ACCESS: find a known entity that is analogous with the input entity. •  MATCHING: match the two entities and hypothesize knowledge. •  EVALUATION: test the hypotheses. •  LEARNING: store or generalize the new knowledge. Store that, in a hydrogen atom, the electron revolves around the nucleus. By generalization from the solar system and the hydrogen atom, learn the abstract concept that a central force can cause revolution. Based on what is known about the hydrogen atom and the solar system identify the solar system as a source for the hydrogen atom. One may map the nucleus to the sun and the electron to the planet, allowing one to infer that the electron revolves around the nucleus because the nucleus attracts the electron and the mass of the nucleus is greater than the mass of the electron. A specially designed experiment shows that indeed the electron revolves around the nucleus.
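A sketch of MATCHING plus hypothesis transfer on the solar-system/hydrogen-atom example. ACCESS is stubbed (the source entity and the mapping are given), and EVALUATION would test the hypotheses, e.g. by experiment; the triple encoding of relations is an assumption:

source = {("greater_mass", "sun", "earth"),
          ("attracts", "sun", "earth"),
          ("revolves_around", "earth", "sun")}
target = {("greater_mass", "nucleus", "electron"),
          ("attracts", "nucleus", "electron")}
mapping = {"sun": "nucleus", "earth": "electron"}  # MATCHING

def transfer(source, target, mapping):
    # Hypothesize, for the target, the source relations it lacks.
    return {(r, mapping[a], mapping[b]) for r, a, b in source
            if (r, mapping[a], mapping[b]) not in target}

print(transfer(source, target, mapping))
# {('revolves_around', 'electron', 'nucleus')}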
Discussion How could we define problem solving by analogy? How does analogy help to solve new problems? Why not just study the structure of the hydrogen atom to discover that new knowledge? We need to perform an experiment anyway to test that the electron revolves around the nucleus; analogy reduces finding the solution to verifying a hypothesized solution.
General features of analogical learning • Requires a huge amount of background knowledge from a wide variety of domains. • Improves agent's competence through knowledge base refinement. • Is a very powerful reasoning method with incomplete knowledge. • Reduces solution finding to solution verification.
Learning strategies Inductive learning from examples Deductive (explanation-based) learning Introduction Analogical learning Abductive learning Multistrategy learning
Abductive learning Finds the hypothesis that best explains an observation (or collection of data) and assumes it to be true. The learning problem: Let D be a collection of data. The learning method: • Find all the hypotheses that (causally) explain D • Find the hypothesis H that explains D better than the other hypotheses • Assert that H is true Illustration: There is smoke in the East building. Fire causes smoke. Hypothesize that there is a fire in the East building.
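A sketch of abduction as choosing the best-scoring causal explanation. The rules, the alternative hypothesis, and the plausibility scores are illustrative:

causes = [  # hypothesis, effect it explains, assumed prior plausibility
    ("fire in the East building", "smoke in the East building", 0.7),
    ("fog machine in the East building", "smoke in the East building", 0.1),
]

def abduce(observation):
    # Keep the hypotheses that (causally) explain the observation,
    # then assert the best one as true.
    candidates = [(h, p) for h, effect, p in causes if effect == observation]
    return max(candidates, key=lambda c: c[1])[0] if candidates else None

print(abduce("smoke in the East building"))
# fire in the East building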
Abduction (cont.) Consider the observation: “University Dr. is wet”. Use abductive learning. Raining causes the streets to be wet. I hypothesize that it has rained on University Dr. What are other potential explanations? Provide other examples of abductive reasoning. What real world applications of abductive reasoning can you imagine?
Learning strategies Inductive learning from examples Deductive (explanation-based) learning Introduction Analogical learning Abductive learning Multistrategy learning
Multistrategy learning Multistrategy learning is concerned with developing learning agents that synergistically integrate two or more learning strategies in order to solve learning tasks that are beyond the capabilities of the individual learning strategies that are integrated.
Complementariness of learning strategies:
Learning from examples: examples needed: many; knowledge needed: very little knowledge; type of inference: induction; effect on agent's behavior: improves competence.
Explanation-based learning: examples needed: one; knowledge needed: complete knowledge; type of inference: deduction; effect on agent's behavior: improves efficiency.
Multistrategy learning: examples needed: several; knowledge needed: incomplete knowledge; type of inference: induction and/or deduction; effect on agent's behavior: improves competence and/or efficiency.
Overview Basic bibliography Learning strategies Instructable agents: the Disciple approach
Instructable agents: the Disciple approach Modeling expert’s reasoning Rule learning Introduction Application and evaluation Multi-agent problem solving and learning  Integrated modeling, learning, solving
How are agents built and why is it hard? The knowledge engineer attempts to understand how the subject matter expert reasons and solves problems and then encodes the acquired expertise into the system's knowledge base. Why is this hard? This modeling and representation of expert knowledge is long, painful and inefficient.
The Disciple approach: Problem statement Elaborate a theory and methodology for the mixed-initiative, end-to-end development of knowledge bases and agents by subject matter experts, with limited assistance from knowledge engineers. Approach Mixed-initiative reasoning between an expert that has the knowledge to be formalized and a learning agent shell that knows how to formalize it. The expert teaches the agent to perform various tasks in a way that resembles how the expert would teach a person. The agent learns from the expert, building, verifying and improving its knowledge base. How does this approach address the knowledge acquisition bottleneck? Interface Problem Solving Learning Ontology + Rules
Mainframe Computers Software systems developed and used by computer experts Vision on the evolution of computer systems Personal Computers Software systems developed by computer experts and used by persons that are not computer experts Learning Agents Software systems developed and used by persons that are not computer experts
General architecture of Disciple-RKF (block diagram): an Intelligent User Interface and a Mixed-Initiative Manager on top of modules for Teaching, Learning and Problem Solving (Modeling, Task Learning, Rule Learning, Rule Refinement, Scenario Elicitation), Mixed-Initiative Problem Solving, Autonomous Problem Solving, Natural Language Generation, and Knowledge Base Management (Ontology Import, Ontology Editors and Browsers), all operating over a Knowledge Base of Ontology (instances) and Problem Solving Rules.
Knowledge base: Object ontology + reasoning rules Object ontology A hierarchical representation of the types of objects from the COG domain. A hierarchical representation of the types of features of the objects from the COG domain.
Task reduction rule IF: Test whether the will of the people can make a state accept the strategic goal of an opposing force The will of the people is ?O1 The state is ?O2 The opposing force is ?O3 The goal is ?O4 Plausible Upper Bound Condition ?O1 is will_of_agent ?O2 is force has_as_people ?O6 has_as_governing_body ?O5 ?O3 is (strategic_COG_relevant_factor agent) ?O4 is force_goal ?O5 is representative_democracy has_as_will ?O7 ?O6 is people has_as_will ?O1 ?O7 is will_of_agent reflects ?O1 Plausible Lower Bound Condition ?O1 is will_of_the_people_of_US_1943 ?O2 is US_1943 has_as_people ?O6 has_as_governing_body ?O5 ?O3 is European_Axis_1943 ?O4 is Dominance_of_Europe_by_European_Axis ?O5 is government_of_US_1943 has_as_will ?O7 ?O6 is people_of_US_1943 has_as_will ?O1 ?O7 is will_of_the_government_of_US_1943 reflects ?O1 THEN: Test whether the will of the people, that controls the government, can make a state accept the strategic goal of an opposing force The will of the people is ?O1 The government is ?O5 The state is ?O2 The opposing force is ?O3 The goal is ?O4
The developed Disciple approach Modeling the problem solving process of the subject matter expert and development of the object ontology of the agent. Teaching of the agent by the subject matter expert.
Instructable agents: the Disciple approach Modeling expert’s reasoning Rule learning Introduction Application and evaluation Multi-agent problem solving and learning  Integrated modeling, learning, solving
Application domain The center of gravity of an entity (state, alliance, coalition, or group) is the foundation of capability, the hub of all power and movement, upon which everything depends, the point against which all the energies should be directed. Carl Von Clausewitz, “On War,” 1832. If a combatant eliminates or influences the enemy’s strategic center of gravity, then the enemy will lose control of its power and resources and will eventually fall to defeat. If the combatant fails to adequately protect his own strategic center of gravity, he invites disaster. Identification of strategic Center of Gravity candidates in war scenarios
Modeling the identification of center of gravity candidates I need to Identify and test a strategic COG candidate for the Sicily_1943 scenario What kind of scenario is Sicily_1943? Sicily_1943 is a war scenario Therefore I need to Identify and test a strategic COG candidate for Sicily_1943 which is a war scenario Which is an opposing force in the Sicily_1943 scenario? Allied_Forces_1943 Therefore I need to Identify and test a strategic COG candidate for Allied_Forces_1943 … European_Axis_1943 Therefore I need to Identify and test a strategic COG candidate for European_Axis_1943 … Is Allied_Forces_1943 a single-member force or a multi-member force? Allied_Forces_1943 is a multi-member force Therefore I need to Identify and test a strategic COG candidate for Allied_Forces_1943 which is a multi-member force …
I need to Test whether the will of the people of US_1943 can make US_1943 accept the strategic goal of European_Axis_1943 which is ‘Dominance of Europe by European Axis’ What is the strategic goal of European_Axis_1943? ‘Dominance of Europe by European Axis’ Therefore I need to Test the will of the people of US_1943 which is a strategic COG candidate with respect to the people of US_1943 Let us assume that the people of US_1943 would accept ‘Dominance of Europe by European Axis’. Could the people of US_1943 make the government of US_1943 accept ‘Dominance of Europe by European Axis’? Yes, because US_1943 is a representative democracy and the government of US_1943 reflects the will of the people of US_1943 Therefore I need to Test whether the will of the people of US_1943 that controls the government of US_1943 can make US_1943 accept the strategic goal of European_Axis_1943 which is ‘Dominance of Europe by European Axis’ Let us assume that the people of US_1943 would accept ‘Dominance of Europe by European Axis’. Could the people of US_1943 make the military of US_1943 accept ‘Dominance of Europe by European Axis’? Yes, because US_1943 is a representative democracy and the will of the military of US_1943 reflects the will of the people of US_1943 Therefore I conclude that The will of the people of US_1943 is a strategic COG candidate that cannot be eliminated …
Modeling advisor: helping the expert to express his reasoning
Instructable agents: the Disciple approach Modeling expert’s reasoning Rule learning Introduction Application and evaluation Multi-agent problem solving and learning  Integrated modeling, learning, solving
IF Identify and test a strategic COG candidate corresponding to a member of the ?O1 Question Which is a member of ?O1 ? Answer ?O2 THEN Identify and test a strategic COG candidate for ?O2 Natural Language Logic Rule learning IF Identify and test a strategic COG candidate corresponding to a member of a force The force is ?O1 THEN Identify and test a strategic COG candidate for a force The force is ?O2 Plausible Upper Bound Condition  ?O1 is multi_member_force has_as_member  ?O2 ?O2  is force Plausible Lower Bound Condition ?O1 is equal_partners_multi_state_alliance has_as_member  ?O2 ?O2 is single_state_force explanation ?O1 has_as_member ?O2 Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943  US_1943 Identify and test a strategic COG candidate for US_1943 Which is a member of Allied_Forces_1943? I need to  Therefore I need to  INFORMAL STRUCTURE OF THE RULE FORMAL STRUCTURE OF THE RULE
1. Formalize the tasks Identify and test a strategic COG candidate for  US_1943 I need to  Therefore I need to  Identify and test a strategic COG candidate corresponding to a member of the  Allied_Forces_1943   Identify and test a strategic COG candidate for a force The force is  US_1943 I need to  Therefore I need to  Identify and test a strategic COG candidate corresponding to a member of a force The force is  Allied_Forces_1943
Task formalization interface
2. Find an explanation of why the example is correct: Allied_Forces_1943 has_as_member US_1943 The explanation is the best possible approximation of the question and the answer, in the object ontology. I need to Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943 Which is a member of Allied_Forces_1943? US_1943 Therefore I need to Identify and test a strategic COG candidate for US_1943
Explanation generation and selection interface
3. Generate the plausible version space rule Allied_Forces_1943 has_as_member US_1943 Rewrite as Condition ?O1 is Allied_Forces_1943 has_as_member ?O2 ?O2 is US_1943 Most specific generalization: Plausible Lower Bound Condition ?O1 is equal_partners_multi_state_alliance has_as_member ?O2 ?O2 is single_state_force Most general generalization: Plausible Upper Bound Condition ?O1 is multi_member_force has_as_member ?O2 ?O2 is force IF Identify and test a strategic COG candidate corresponding to a member of a force The force is ?O1 explanation ?O1 has_as_member ?O2 THEN Identify and test a strategic COG candidate for a force The force is ?O2 Which is the justification for these generalizations?
Analogical reasoning initial example: I need to Identify and test a strategic COG candidate corresponding to a member of a force The force is Allied_Forces_1943 Therefore I need to Identify and test a strategic COG candidate for a force The force is US_1943 explanation: Allied_Forces_1943 has_as_member US_1943 Analogy criterion (each explanation is less general than it): ?O1 has_as_member ?O2, where ?O1 is an instance_of multi_member_force and ?O2 is an instance_of force similar explanation (explains the similar example): European_Axis_1943 has_as_member Germany_1943 similar example: I need to Identify and test a strategic COG candidate corresponding to a member of a force The force is European_Axis_1943 Therefore I need to Identify and test a strategic COG candidate for a force The force is Germany_1943
Generalization by analogy Knowledge-base constraints on the generalization: Any value of ?O1 should be an instance of: DOMAIN(has_as_member) ∩ RANGE(The force is) = multi_member_force ∩ force = multi_member_force Any value of ?O2 should be an instance of: RANGE(has_as_member) ∩ RANGE(The force is) = force ∩ force = force initial example: I need to Identify and test a strategic COG candidate corresponding to a member of a force The force is Allied_Forces_1943 Therefore I need to Identify and test a strategic COG candidate for a force The force is US_1943 explanation: Allied_Forces_1943 has_as_member US_1943 generalization: ?O1 has_as_member ?O2 (?O1 instance_of multi_member_force, ?O2 instance_of force), which explains the generalized example: I need to Identify and test a strategic COG candidate corresponding to a member of a force The force is ?O1 Therefore I need to Identify and test a strategic COG candidate for a force The force is ?O2
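The bound computation is an intersection of concepts in the generalization hierarchy: when one concept subsumes the other, their intersection is the more specific one. A sketch, with the hierarchy fragment assumed from the forces-hierarchy slide:

parents = {"multi_member_force": "force",
           "single_member_force": "force",
           "multi_state_force": "multi_member_force",
           "single_state_force": "single_member_force"}

def ancestors(c):
    out = {c}
    while c in parents:
        c = parents[c]
        out.add(c)
    return out

def intersect(c1, c2):
    if c1 in ancestors(c2): return c2   # c1 subsumes c2
    if c2 in ancestors(c1): return c1   # c2 subsumes c1
    return None                         # disjoint branches

# Upper bound for ?O1: DOMAIN(has_as_member) ∩ RANGE(The force is)
print(intersect("multi_member_force", "force"))  # multi_member_force
# Upper bound for ?O2: RANGE(has_as_member) ∩ RANGE(The force is)
print(intersect("force", "force"))               # force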
Instructable agents: the Disciple approach Modeling expert’s reasoning Rule learning Introduction Integrated modeling, learning, solving Application and evaluation Multi-agent problem solving and learning
Control of modeling, learning and solving (flow diagram): an Input Task goes to Mixed-Initiative Problem Solving over the Ontology + Rules, which produces a Generated Reduction; the expert may Accept the Reduction (leading to Rule Refinement), Reject the Reduction (also leading to Rule Refinement), or provide a New Reduction (leading to Modeling, Formalization, Task Refinement and Learning), ultimately producing the Solution.
1. The expert provides an example: I need to Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943 Which is a member of Allied_Forces_1943? US_1943 Therefore I need to Identify and test a strategic COG candidate for US_1943 2. The agent learns Rule_15. 3. The agent applies Rule_15: I need to Identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943 Which is a member of European_Axis_1943? Germany_1943 Therefore I need to Identify and test a strategic COG candidate for Germany_1943 … 4. The expert accepts the example. 5. The agent refines Rule_15.
Rule refinement with a positive example Condition satisfied by positive example ?O1   is   European_Axis_1943    has_as_member  ?O3 ?O2   is   Germany_1943 less general than Positive example that satisfies the upper bound explanation European_Axis_1943 has_as_member  Germany_1943 IF Identify and test a strategic COG candidate corresponding to a member of a force The force is ?O1 THEN Identify and test a strategic COG candidate for a force The force is ?O2 Plausible Upper Bound Condition  ?O1 is multi_member_force has_as_member  ?O2 ?O2  is force Plausible Lower Bound Condition ?O1 is equal_partners_multi_state_alliance has_as_member  ?O2 ?O2 is single_state_force explanation ?O1 has_as_member ?O2 Identify and test a strategic COG candidate for  Germany_1943 I need to  Therefore I need to  Identify and test a strategic COG candidate corresponding to a member of the  European_Axis_1943
Condition satisfied by the positive example ?O1   is   European_Axis_1943   has_as_member  ?O2 ?O2   is   Germany_1943 Plausible Upper Bound Condition ?O1  is   multi_member_force   has_as_member  ?O2 ?O2  is   force Plausible Lower Bound Condition (from rule) ?O1  is  equal_partners_multi_state_alliance  has_as_member  ?O2 ?O2  is  single_state_force Minimal generalization of the plausible lower bound minimal generalization less general than (or at most as general as) New Plausible Lower Bound Condition ?O1   is  multi_state_alliance   has_as_member  ?O2 ?O2   is  single_state_force
Forces hierarchy (tree diagram, flattened): single_state_force, single_group_force, multi_state_force, multi_group_force, multi_state_alliance, multi_state_coalition, equal_partners_multi_state_alliance, dominant_partner_multi_state_alliance, equal_partners_multi_state_coalition, dominant_partner_multi_state_coalition, composition_of_forces, multi_member_force, single_member_force, force; instances: Allied_Forces_1943, European_Axis_1943, Germany_1943, US_1943. multi_state_alliance is the minimal generalization of equal_partners_multi_state_alliance that covers European_Axis_1943 …
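Minimal generalization of the lower bound amounts to climbing this hierarchy from the old bound until it covers the new instance's concept. A sketch; the parent links are assumed from the flattened diagram above:

parents = {"equal_partners_multi_state_alliance": "multi_state_alliance",
           "dominant_partner_multi_state_alliance": "multi_state_alliance",
           "multi_state_alliance": "multi_state_force",
           "multi_state_coalition": "multi_state_force",
           "multi_state_force": "multi_member_force",
           "multi_member_force": "force"}

def minimal_generalization(bound, new_concept):
    # Concepts that cover the new instance's concept: itself + ancestors.
    covering, c = {new_concept}, new_concept
    while c in parents:
        c = parents[c]
        covering.add(c)
    # Climb from the old bound until it reaches one of them.
    c = bound
    while c not in covering:
        c = parents[c]
    return c

# European_Axis_1943 is a multi_state_alliance, not an equal-partners one:
print(minimal_generalization("equal_partners_multi_state_alliance",
                             "multi_state_alliance"))
# multi_state_alliance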
Refined rule less general than IF Identify and test a strategic COG candidate corresponding to a member of a force The force is ?O1 THEN Identify and test a strategic COG candidate for a force The force is ?O2 Plausible Upper Bound Condition  ?O1 is multi_member_force has_as_member  ?O2 ?O2  is force Plausible Lower Bound Condition ?O1 is equal_partners_multi_state_alliance has_as_member  ?O2 ?O2 is single_state_force explanation ?O1 has_as_member ?O2 IF Identify and test a strategic COG candidate corresponding to a member of a force The force is ?O1 THEN Identify and test a strategic COG candidate for a force The force is ?O2 Plausible Upper Bound Condition  ?O1 is multi_member_force has_as_member  ?O2 ?O2  is force Plausible Lower Bound Condition ?O1 is multi_state_alliance has_as_member  ?O2 ?O2 is single_state_force explanation ?O1 has_as_member ?O2
What are some learning strategies used by Disciple? Rule learning. Rule refinement.
Instructable agents: the Disciple approach Modeling expert’s reasoning Rule learning Introduction Application and evaluation Multi-agent problem solving and learning  Integrated modeling, learning, solving
Use of Disciple in Army War College courses 319jw Case Studies in Center of Gravity Analysis (COG), Term II and Term III: students develop input scenarios and use Disciple as an intelligent assistant that supports them to develop a Center of Gravity analysis report for a war scenario. 589 Military Applications of Artificial Intelligence (MAAI), Term III: students develop agents, acting as subject matter experts that teach personal Disciple agents their own reasoning in Center of Gravity analysis.
COG Winter 2002: Expert assessments The use of Disciple is an assignment that is well suited to the course's learning objectives. The use of Disciple was a useful learning experience. Disciple helped me to learn to perform a strategic center of gravity analysis of a scenario. Disciple should be used in future versions of this course.
589 MAAI Spring 2002: SME assessment I think that a subject matter expert can use Disciple to build an agent, with limited assistance from a knowledge engineer.
Instructable agents: the Disciple approach Modeling expert’s reasoning Rule learning Introduction Application and evaluation Multi-agent problem solving and learning  Integrated modeling, learning, solving
Modeling, problem solving, and learning as collaborative agents (diagram): Modeling, Problem Solving and Learning cooperate through mixed initiative, exchanging: an informal description of the reasoning and a reasoning tree similar to the current model; rules similar to the current example and partially learned rules with informal and formal descriptions; new situations that need to be modeled; new correct examples for learning and new positive and negative examples for rule refinement; and an analysis of the instantiations of a rule. The collaborating agents include the Example Analyzer, Example Composer, Rule Analyzer, Explanation Generator, Rule Analogy Engine, Solution Analyzer, Reasoning Tree Analyzer, and Mixed-Initiative PS.
Question-answering based task reduction (diagram: a tree of tasks T, questions Q, answers A, and solutions S) Let T1 be the problem solving task to be performed. Finding a solution is an iterative process where, at each step, we consider some relevant information that leads us to reduce the current task to a simpler task or to several simpler tasks. The question Q associated with the current task identifies the type of information to be considered. The answer A identifies that piece of information and leads us to the reduction of the current task.
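A sketch of this question/answer reduction loop. The reduction table and the canned answer function are illustrative, modeled on the Sicily_1943 example from the earlier slides:

reductions = {
    "Identify and test a strategic COG candidate for Sicily_1943":
        ("What kind of scenario is Sicily_1943?",
         {"war scenario":
              ["Identify and test a strategic COG candidate for "
               "Allied_Forces_1943",
               "Identify and test a strategic COG candidate for "
               "European_Axis_1943"]}),
}

def solve(task, ask):
    if task not in reductions:      # elementary task: solve it directly
        print("solve directly:", task)
        return
    question, answers = reductions[task]
    for subtask in answers[ask(question)]:
        solve(subtask, ask)         # the answer selects the reduction

solve("Identify and test a strategic COG candidate for Sicily_1943",
      lambda q: "war scenario")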
Intelligence Analysis
A possible multi-agent architecture for Disciple (diagram): each EXPERT works with a PERSONALIZED LEARNING ASSISTANT that maintains the expert's model and a local KB (ontology + rules), supported by External-Expertise, Information-Interaction, Awareness and Support Agents with their own KBs; the local KBs are connected through a Shared KB and a Global KB.
Bibliography Mitchell T.M.,  Machine Learning,  McGraw Hill, 1997. Shavlik J.W. and Dietterich T. (Eds.),  Readings in Machine Learning , Morgan Kaufmann, 1990. Buchanan B., Wilkins D. (Eds.),  Readings in Knowledge Acquisition and Learning: Automating the Construction and the Improvement of Programs , Morgan Kaufmann, 1992. Langley P.,  Elements of   Machine Learning,  Morgan Kaufmann, 1996. Michalski R.S., Carbonell J.G., Mitchell T.M. (Eds),  Machine Learning: An Artificial Intelligence Approach,  Morgan Kaufmann, 1983 (Vol. 1), 1986 (Vol. 2). Kodratoff Y. and Michalski R.S. (Eds.)  Machine Learning: An Artificial Intelligence Approach  (Vol. 3), Morgan Kaufmann Publishers, Inc., 1990. Michalski R.S. and Tecuci G. (Eds.),  Machine Learning: A Multistrategy Approach  (Vol. 4), Morgan Kaufmann Publishers, San Mateo, CA, 1994. Tecuci G. and Kodratoff Y. (Eds.),  Machine Learning and Knowledge Acquisition: Integrated Approaches,  Academic Press, 1995. Tecuci G.,  Building Intelligent Agents: An Apprenticeship Multistrategy Learning  Theory, Methodology, Tool and Case Studies , Academic Press, 1998.

Contenu connexe

Tendances

Proposed entrancetestsyllabus
Proposed entrancetestsyllabusProposed entrancetestsyllabus
Proposed entrancetestsyllabus
bikram ...
 
Moore_slides.ppt
Moore_slides.pptMoore_slides.ppt
Moore_slides.ppt
butest
 
Supervised Corpus-based Methods for Word Sense Disambiguation
Supervised Corpus-based Methods for Word Sense DisambiguationSupervised Corpus-based Methods for Word Sense Disambiguation
Supervised Corpus-based Methods for Word Sense Disambiguation
butest
 
Metamathematics of contexts
Metamathematics of contextsMetamathematics of contexts
Metamathematics of contexts
Hossam Saraya
 
slides
slidesslides
slides
butest
 

Tendances (18)

Discrete Mathematics
Discrete MathematicsDiscrete Mathematics
Discrete Mathematics
 
Structured Knowledge Representation
Structured Knowledge RepresentationStructured Knowledge Representation
Structured Knowledge Representation
 
AI Lesson 09
AI Lesson 09AI Lesson 09
AI Lesson 09
 
Artificial intelligence cs607 handouts lecture 11 - 45
Artificial intelligence   cs607 handouts lecture 11 - 45Artificial intelligence   cs607 handouts lecture 11 - 45
Artificial intelligence cs607 handouts lecture 11 - 45
 
Knowledge Representation in Artificial intelligence
Knowledge Representation in Artificial intelligence Knowledge Representation in Artificial intelligence
Knowledge Representation in Artificial intelligence
 
Truth as a logical connective?
Truth as a logical connective?Truth as a logical connective?
Truth as a logical connective?
 
Discrete Mathematics
Discrete MathematicsDiscrete Mathematics
Discrete Mathematics
 
Proposed entrancetestsyllabus
Proposed entrancetestsyllabusProposed entrancetestsyllabus
Proposed entrancetestsyllabus
 
Frames
FramesFrames
Frames
 
DETERMINING CUSTOMER SATISFACTION IN-ECOMMERCE
DETERMINING CUSTOMER SATISFACTION IN-ECOMMERCEDETERMINING CUSTOMER SATISFACTION IN-ECOMMERCE
DETERMINING CUSTOMER SATISFACTION IN-ECOMMERCE
 
Artificial Intelligence
Artificial IntelligenceArtificial Intelligence
Artificial Intelligence
 
Moore_slides.ppt
Moore_slides.pptMoore_slides.ppt
Moore_slides.ppt
 
Ai lecture 09(unit03)
Ai lecture  09(unit03)Ai lecture  09(unit03)
Ai lecture 09(unit03)
 
Supervised Corpus-based Methods for Word Sense Disambiguation
Supervised Corpus-based Methods for Word Sense DisambiguationSupervised Corpus-based Methods for Word Sense Disambiguation
Supervised Corpus-based Methods for Word Sense Disambiguation
 
artificial intelligence
artificial intelligenceartificial intelligence
artificial intelligence
 
Metamathematics of contexts
Metamathematics of contextsMetamathematics of contexts
Metamathematics of contexts
 
Logic
LogicLogic
Logic
 
slides
slidesslides
slides
 

En vedette (15)

Agents of Learning
Agents of LearningAgents of Learning
Agents of Learning
 
Learning agents
Learning agentsLearning agents
Learning agents
 
Highly Autonomous Self Learning AI Agent
Highly Autonomous Self Learning AI AgentHighly Autonomous Self Learning AI Agent
Highly Autonomous Self Learning AI Agent
 
E learning agents
E learning agentsE learning agents
E learning agents
 
Lecture3 - Machine Learning
Lecture3 - Machine LearningLecture3 - Machine Learning
Lecture3 - Machine Learning
 
#3 formal methods – propositional logic
#3 formal methods – propositional logic#3 formal methods – propositional logic
#3 formal methods – propositional logic
 
Process Synchronization And Deadlocks
Process Synchronization And DeadlocksProcess Synchronization And Deadlocks
Process Synchronization And Deadlocks
 
Machine Learning
Machine LearningMachine Learning
Machine Learning
 
Chap19
Chap19Chap19
Chap19
 
Learning
LearningLearning
Learning
 
Syntax and semantics of propositional logic
Syntax and semantics of propositional logicSyntax and semantics of propositional logic
Syntax and semantics of propositional logic
 
Planning
PlanningPlanning
Planning
 
AI: Planning and AI
AI: Planning and AIAI: Planning and AI
AI: Planning and AI
 
AI: Learning in AI
AI: Learning in AI AI: Learning in AI
AI: Learning in AI
 
Propositional And First-Order Logic
Propositional And First-Order LogicPropositional And First-Order Logic
Propositional And First-Order Logic
 

Similaire à Learning Agents by Prof G. Tecuci

Basics of Machine Learning
Basics of Machine LearningBasics of Machine Learning
Basics of Machine Learning
butest
 
Introduction to Machine Learning.
Introduction to Machine Learning.Introduction to Machine Learning.
Introduction to Machine Learning.
butest
 
Machine Learning and Inductive Inference
Machine Learning and Inductive InferenceMachine Learning and Inductive Inference
Machine Learning and Inductive Inference
butest
 
课堂讲义(最后更新:2009-9-25)
课堂讲义(最后更新:2009-9-25)课堂讲义(最后更新:2009-9-25)
课堂讲义(最后更新:2009-9-25)
butest
 

Similaire à Learning Agents by Prof G. Tecuci (20)

week3.pdf
week3.pdfweek3.pdf
week3.pdf
 
Introduction to Machine Learning
Introduction to Machine LearningIntroduction to Machine Learning
Introduction to Machine Learning
 
Basics of Machine Learning
Basics of Machine LearningBasics of Machine Learning
Basics of Machine Learning
 
S10
S10S10
S10
 
S10
S10S10
S10
 
Fundamentals of Quantitative Analysis
Fundamentals of Quantitative AnalysisFundamentals of Quantitative Analysis
Fundamentals of Quantitative Analysis
 
Introduction to Machine Learning.
Introduction to Machine Learning.Introduction to Machine Learning.
Introduction to Machine Learning.
 
Bloom's Taxonomy
Bloom's Taxonomy Bloom's Taxonomy
Bloom's Taxonomy
 
Poggi analytics - ebl - 1
Poggi   analytics - ebl - 1Poggi   analytics - ebl - 1
Poggi analytics - ebl - 1
 
Machine learning
Machine learningMachine learning
Machine learning
 
Discrete Mathematics Lecture Notes
Discrete Mathematics Lecture NotesDiscrete Mathematics Lecture Notes
Discrete Mathematics Lecture Notes
 
Bloom
BloomBloom
Bloom
 
Machine Learning and Inductive Inference
Machine Learning and Inductive InferenceMachine Learning and Inductive Inference
Machine Learning and Inductive Inference
 
syllabus-CBR.pdf
syllabus-CBR.pdfsyllabus-CBR.pdf
syllabus-CBR.pdf
 
Generalization abstraction
Generalization abstractionGeneralization abstraction
Generalization abstraction
 
课堂讲义(最后更新:2009-9-25)
课堂讲义(最后更新:2009-9-25)课堂讲义(最后更新:2009-9-25)
课堂讲义(最后更新:2009-9-25)
 
Outcomes based teaching learning plan (obtlp) number theory 2
Outcomes based teaching learning plan (obtlp) number theory 2Outcomes based teaching learning plan (obtlp) number theory 2
Outcomes based teaching learning plan (obtlp) number theory 2
 
Machine Learning
Machine LearningMachine Learning
Machine Learning
 
Learning sets of rules, Sequential Learning Algorithm,FOIL
Learning sets of rules, Sequential Learning Algorithm,FOILLearning sets of rules, Sequential Learning Algorithm,FOIL
Learning sets of rules, Sequential Learning Algorithm,FOIL
 
RuleML2015: Rule Generalization Strategies in Incremental Learning of Disjunc...
RuleML2015: Rule Generalization Strategies in Incremental Learning of Disjunc...RuleML2015: Rule Generalization Strategies in Incremental Learning of Disjunc...
RuleML2015: Rule Generalization Strategies in Incremental Learning of Disjunc...
 

Plus de butest

EL MODELO DE NEGOCIO DE YOUTUBE
EL MODELO DE NEGOCIO DE YOUTUBEEL MODELO DE NEGOCIO DE YOUTUBE
EL MODELO DE NEGOCIO DE YOUTUBE
butest
 
1. MPEG I.B.P frame之不同
1. MPEG I.B.P frame之不同1. MPEG I.B.P frame之不同
1. MPEG I.B.P frame之不同
butest
 
LESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIALLESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIAL
butest
 
Timeline: The Life of Michael Jackson
Timeline: The Life of Michael JacksonTimeline: The Life of Michael Jackson
Timeline: The Life of Michael Jackson
butest
 
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
butest
 
LESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIALLESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIAL
butest
 
Com 380, Summer II
Com 380, Summer IICom 380, Summer II
Com 380, Summer II
butest
 
The MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
The MYnstrel Free Press Volume 2: Economic Struggles, Meet JazzThe MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
The MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
butest
 
MICHAEL JACKSON.doc
MICHAEL JACKSON.docMICHAEL JACKSON.doc
MICHAEL JACKSON.doc
butest
 
Social Networks: Twitter Facebook SL - Slide 1
Social Networks: Twitter Facebook SL - Slide 1Social Networks: Twitter Facebook SL - Slide 1
Social Networks: Twitter Facebook SL - Slide 1
butest
 
Facebook
Facebook Facebook
Facebook
butest
 
Executive Summary Hare Chevrolet is a General Motors dealership ...
Executive Summary Hare Chevrolet is a General Motors dealership ...Executive Summary Hare Chevrolet is a General Motors dealership ...
Executive Summary Hare Chevrolet is a General Motors dealership ...
butest
 
Welcome to the Dougherty County Public Library's Facebook and ...
Welcome to the Dougherty County Public Library's Facebook and ...Welcome to the Dougherty County Public Library's Facebook and ...
Welcome to the Dougherty County Public Library's Facebook and ...
butest
 
NEWS ANNOUNCEMENT
NEWS ANNOUNCEMENTNEWS ANNOUNCEMENT
NEWS ANNOUNCEMENT
butest
 
C-2100 Ultra Zoom.doc
C-2100 Ultra Zoom.docC-2100 Ultra Zoom.doc
C-2100 Ultra Zoom.doc
butest
 
MAC Printing on ITS Printers.doc.doc
MAC Printing on ITS Printers.doc.docMAC Printing on ITS Printers.doc.doc
MAC Printing on ITS Printers.doc.doc
butest
 
Mac OS X Guide.doc
Mac OS X Guide.docMac OS X Guide.doc
Mac OS X Guide.doc
butest
 
WEB DESIGN!
WEB DESIGN!WEB DESIGN!
WEB DESIGN!
butest
 

Plus de butest (20)

EL MODELO DE NEGOCIO DE YOUTUBE
EL MODELO DE NEGOCIO DE YOUTUBEEL MODELO DE NEGOCIO DE YOUTUBE
EL MODELO DE NEGOCIO DE YOUTUBE
 
1. MPEG I.B.P frame之不同
1. MPEG I.B.P frame之不同1. MPEG I.B.P frame之不同
1. MPEG I.B.P frame之不同
 
LESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIALLESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIAL
 
Timeline: The Life of Michael Jackson
Timeline: The Life of Michael JacksonTimeline: The Life of Michael Jackson
Timeline: The Life of Michael Jackson
 
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
 
LESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIALLESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIAL
 
Com 380, Summer II
Com 380, Summer IICom 380, Summer II
Com 380, Summer II
 
PPT
PPTPPT
PPT
 
The MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
The MYnstrel Free Press Volume 2: Economic Struggles, Meet JazzThe MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
The MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
 
MICHAEL JACKSON.doc
MICHAEL JACKSON.docMICHAEL JACKSON.doc
MICHAEL JACKSON.doc
 
Social Networks: Twitter Facebook SL - Slide 1
Social Networks: Twitter Facebook SL - Slide 1Social Networks: Twitter Facebook SL - Slide 1
Social Networks: Twitter Facebook SL - Slide 1
 
Facebook
Facebook Facebook
Facebook
 
Executive Summary Hare Chevrolet is a General Motors dealership ...
Executive Summary Hare Chevrolet is a General Motors dealership ...Executive Summary Hare Chevrolet is a General Motors dealership ...
Executive Summary Hare Chevrolet is a General Motors dealership ...
 
Welcome to the Dougherty County Public Library's Facebook and ...
Welcome to the Dougherty County Public Library's Facebook and ...Welcome to the Dougherty County Public Library's Facebook and ...
Welcome to the Dougherty County Public Library's Facebook and ...
 
NEWS ANNOUNCEMENT
NEWS ANNOUNCEMENTNEWS ANNOUNCEMENT
NEWS ANNOUNCEMENT
 
C-2100 Ultra Zoom.doc
C-2100 Ultra Zoom.docC-2100 Ultra Zoom.doc
C-2100 Ultra Zoom.doc
 
MAC Printing on ITS Printers.doc.doc
MAC Printing on ITS Printers.doc.docMAC Printing on ITS Printers.doc.doc
MAC Printing on ITS Printers.doc.doc
 
Mac OS X Guide.doc
Mac OS X Guide.docMac OS X Guide.doc
Mac OS X Guide.doc
 
hier
hierhier
hier
 
WEB DESIGN!
WEB DESIGN!WEB DESIGN!
WEB DESIGN!
 

Learning Agents by Prof G. Tecuci

  • 1. Learning Agents Learning Agents Laboratory Computer Science Department George Mason University Prof. Gheorghe Tecuci
  • 2. Overview Basic bibliography Learning strategies Instructable agents: the Disciple approach
  • 3. Learning strategies Inductive learning from examples Deductive (explanation-based) learning Introduction Analogical learning Abductive learning Multistrategy learning
  • 4. Machine Learning is the domain of Artificial Intelligence which is concerned with building adaptive computer systems that are able to improve their competence and/or efficiency through learning from input data or from their own problem solving experience. What is Machine Learning What does it mean to improve competence? What does it mean to improve efficiency?
  • 5. Two complementary dimensions for learning A system is improving its competence if it learns to solve a broader class of problems, and to make fewer mistakes in problem solving. A system is improving its efficiency, if it learns to solve the problems from its area of competence faster or by using fewer resources. Competence Efficiency
  • 6. The architecture of a learning agent Ontology Rules/Cases/Methods Problem Solving Engine Learning Agent User/ Environment Output/ Sensors Effectors Input/ Knowledge Base Learning Engine Implements learning methods for extending and refining the knowledge base to improve agent’s competence and/or efficiency in problem solving. Implements a general problem solving method that uses the knowledge from the knowledge base to interpret the input and provide an appropriate output. Data structures that represent the objects from the application domain, general laws governing them, actions that can be performed with them, etc.
  • 7.
  • 8.
  • 9. Basic ontological elements: instances and concepts An instance is a representation of a particular entity from the application domain. A concept is a representation of a set of instances. government_of_Britain_1943 government_of_US_1943 state_government instance_of instance_of “ state_government” represents the set of all entities that are governments of states. This set includes “government_of_US_1943” and “government_of_Britain_1943” which are called positive examples. government_of_US_1943 government_of_Britain_1943 state_government “ instance_of” is the relationship between an instance and the concept to which it belongs. An entity which is not an instance of a concept is called a negative example of that concept.
  • 10. Concept generality A concept P is more general than another concept Q if and only if the set of instances represented by P includes the set of instances represented by Q. state_government democratic_government representative_ democracy totalitarian_ government parliamentary_ democracy Example: “ subconcept_of” is the relationship between a concept and a more general concept. state_government subconcept_of democratic_government
  • 11. A generalization hierarchy feudal_god_ king_government totalitarian_ government democratic_ government theocratic_ government state_government military_ dictatorship police_ state religious_ dictatorship representative_ democracy parliamentary_ democracy theocratic_ democracy monarchy governing_body other_state_ government dictator deity_figure chief_and_ tribal_council autocratic_ leader democratic_ council_ or_board group_governing_body other_ group_ governing_ body government_ of_Italy_1943 government_ of_Germany_1943 government_ of_US_1943 government_ of_Britain_1943 ad_hoc_ governing_body established_ governing_body other_type_of_ governing_body fascist_ state communist_ dictatorship religious_ dictatorship government_ of_USSRy_1943
  • 12. Learning strategies Inductive learning from Examples Deductive (explanation-based) learning Introduction Analogical learning Abductive learning Multistrategy learning
  • 13. Approach: Compare the positive and the negative examples of a concept, in terms of their similarities and differences, and learn the concept as a generalized description of the similarities of the positive examples. This allows the agent to recognize other entities as being instances of the learned concept. Illustration Positive examples of cups: P1 P2 ... Negative examples of cups: N1 … A description of the cup concept: has-handle(x), ... Empirical inductive concept learning from examples Given Learn
  • 14. The learning problem Given • a language of instances; • a language of generalizations; • a set of positive examples (E1, ..., En) of a concept • a set of negative examples (C1, ... , Cm) of the same concept • a learning bias • other background knowledge Determine • a concept description which is a generalization of the positive examples that does not cover any of the negative examples Purpose of concept learning: Predict if an instance is an example of the learned concept.
  • 15.
  • 16. Generalization (and specialization) rules Climbing the generalization hierarchy Dropping condition Generalizing numbers Adding alternatives Turning constants into variables
  • 17. Climbing/descending the generalization hierarchy Generalizes an expression by replacing a concept with a more general one. ?O1 is single_state_force has_as_governing_body ?O2 ?O2 is representative_democracy generalization specialization representative_democracy  democratic_government The set of single state forces governed by representative democracies democratic_government  representative_democracy ?O1 is single_state_force has_as_governing_body ?O2 ?O2 is democratic_government The set of single state forces governed by democracies democratic_government representative_democracy parliamentary_democracy
  • 18. Basic idea of version space concept learning Consider the examples E1, … , E2 in sequence. UB + Initialize the lower bound to the first positive example (LB=E1) and the upper bound (UB) to the most general generalization of E1. LB + + UB LB If the next example is a positive one, then generalize LB as little as possible to cover it. _ + + UB LB If the next example is a negative one, then specialize UB as little as possible to uncover it and to remain more general than LB. _ + + UB=LB _ _ + + … Repeat the above two steps with the rest of examples until UB=LB. This is the learned concept.
  • 19.
  • 20. Illustration of the candidate elimination algorithm Language of instances: (shape, size) shape: {ball, brick, cube} size: {large, small} Language of generalizations: (shape, size) shape: {ball, brick, cube, any-shape} size: {large, small, any-size} Learning process: Input examples : ball small + shape size class ball large + brick small – cube large – -(brick, small) G = {(ball, any-size) (any-shape, large)} 2 -(cube, large) G = {(ball, any-size)} 3 G = {(any-shape, any-size)} S = {(ball, large)} +(ball, large) +(ball, large) 1 1 +(ball, small) S = {(ball, any-size)} || 4
  • 21. General features of the empirical inductive methods Require many examples. Do not need much domain knowledge. Improve the competence of the agent. The version space method relies on an exhaustive bi-directional search and is computationally intensive. This limits its practical applicability. Practical empirical inductive learning methods (such as ID3) rely on heuristic search to hypothesize the concept.
  • 22. Learning strategies Inductive learning from Examples Deductive (explanation-based) learning Introduction Analogical learning Abductive learning Multistrategy learning
  • 23. Explanation-based learning problem Given A training example cup(o1)  color(o1, white), made-of(o1, plastic), light-mat(plastic), has-handle(o1), has-flat-bottom(o1), up-concave(o1),... Learning goal A specification of the desirable features of the concept to be learned (e.g. the learned concept should have only features from the example) Background knowledge Complete knowledge that allows proving that the training example represents the concept: cup(x)  liftable(x), stable(x), open-vessel(x). liftable(x)  light(x), graspable(x). stable(x)  has-flat-bottom(x). … Determine A concept definition representing a deductive generalization of the training example that satisfies the learning goal. cup(x)  made-of(x, y),light-mat(y),has-handle(x),has-flat-bottom(x),up-concave(x). Purpose of learning Improve the problem solving efficiency of the agent.
  • 24. Explanation-based learning method 1. Construct an explanation that proves that the training example is an example of the concept to be learned. 2. Generalize the explanation as much as possible so that the proof still holds, and extract from it a concept definition that satisfies the learning goal. A example of a cup cup(o1): color(o1, white), made-of(o1, plastic), light-mat(plastic), has-handle(o1), has-flat-bottom(o1), up-concave(o1),... cup(o1) stable(o1) liftable(o1) graspable(o1) light(o1) light-mat(plastic) made-of(o1,plastic) ... ... has-handle(o1) cup(x) stable(x) liftable(x) graspable(x) light(x) light-mat(y) made-of(x,y) ... ... has-handle(x) The proof identifies the characteristic features: Proof generalization generalizes them: • has-handle(o1) is needed to prove cup(o1) • color(o1,white) is not needed to prove cup(o1) • made-of(o1, plastic) is needed to prove cup(o1) • made-of(o1, plastic) is generalized to made-of(x, y); • the material needs not be plastic.
  • 25. Discussion
How does this learning method improve the efficiency of the problem solving process? The proof identifies the characteristic features and proof generalization generalizes them; a cup is then recognized by using a single rule rather than by building a proof tree.
Do we need a training example to learn an operational definition of the concept? Why? The learner does not need a training example: it can build proof trees top-down, starting with an abstract definition of the concept and growing the tree until the leaves are operational features. However, without a training example the learner would learn many operational definitions; the training example focuses the learner on the most typical case.
[Figure: the proof tree for cup(o1) and its generalization for cup(x), as on the previous slide.]
  • 26. General features of explanation-based learning
• Needs only one example.
• Requires complete knowledge about the concept (which makes this learning strategy impractical).
• Improves the agent's efficiency in problem solving.
• Shows the importance of explanations in learning.
  • 27. Learning strategies Introduction Inductive learning from examples Deductive (explanation-based) learning Analogical learning Abductive learning Multistrategy learning
  • 28. Learning by analogy Learning by analogy means acquiring new knowledge about an input entity by transferring it from a known similar entity. The hydrogen atom is like our solar system. The Sun has a greater mass than the Earth and attracts it, causing the Earth to revolve around the Sun. The nucleus also has a greater mass than the electron and attracts it. Therefore it is plausible that the electron also revolves around the nucleus.
  • 29. Discussion What is the central intuition supporting learning by analogy? If two entities are similar in some respects, then they could be similar in other respects as well. Examples of analogies: pressure drop is like voltage drop; a variable in a programming language is like a box. Provide other examples of analogies.
  • 30. Learning by analogy: the learning problem
Given:
• A partially known target entity T and a goal concerning it (e.g. the partially understood structure of the hydrogen atom under study).
• Background knowledge containing known entities (e.g. knowledge from different domains, including astronomy, geography, etc.).
Find:
• New knowledge about T obtained from a source entity S belonging to the background knowledge (e.g. in a hydrogen atom the electron revolves around the nucleus, in a way similar to that in which a planet revolves around the sun).
  • 31. Learning by analogy: the learning method
• ACCESS: find a known entity that is analogous with the input entity. (Based on what is known about the hydrogen atom and the solar system, identify the solar system as a source for the hydrogen atom.)
• MATCHING: match the two entities and hypothesize knowledge. (One may map the nucleus to the sun and the electron to the planet, allowing one to infer that the electron revolves around the nucleus because the nucleus attracts the electron and the mass of the nucleus is greater than the mass of the electron.)
• EVALUATION: test the hypotheses. (A specially designed experiment shows that indeed the electron revolves around the nucleus.)
• LEARNING: store or generalize the new knowledge. (Store that, in a hydrogen atom, the electron revolves around the nucleus; by generalization from the solar system and the hydrogen atom, learn the abstract concept that a central force can cause revolution.)
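The ACCESS/MATCHING/transfer steps can be illustrated with a toy structure-mapping sketch in Python. The relation tuples and the brute-force mapping search are illustrative assumptions, not the slides' method:

```python
from itertools import permutations

# Source entity: relations known about the solar system.
SOURCE = {("greater-mass", "sun", "earth"),
          ("attracts", "sun", "earth"),
          ("revolves-around", "earth", "sun")}

# Target entity: the partially known hydrogen atom.
TARGET = {("greater-mass", "nucleus", "electron"),
          ("attracts", "nucleus", "electron")}

def score(mapping):
    """Count source relations that, once mapped, already hold in the target."""
    mapped = {(p, mapping[a], mapping[b]) for p, a, b in SOURCE}
    return len(mapped & TARGET)

# MATCHING: pick the entity mapping that maximizes shared relations.
best = max((dict(zip(("sun", "earth"), perm))
            for perm in permutations(("nucleus", "electron"))), key=score)

# Transfer: mapped source relations not yet known in the target
# become plausible hypotheses about the target.
hypotheses = {(p, best[a], best[b]) for p, a, b in SOURCE} - TARGET
print(best)        # {'sun': 'nucleus', 'earth': 'electron'}
print(hypotheses)  # {('revolves-around', 'electron', 'nucleus')}
```

EVALUATION and LEARNING remain outside the code: the transferred relation is only a plausible hypothesis until it is tested, and once confirmed it can be stored or generalized.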
  • 32. Discussion How could we define problem solving by analogy? How does analogy help to solve new problems? Why not just study the structure of the hydrogen atom to discover that new knowledge? We would in any case need to perform an experiment, but now only to test whether the electron revolves around the nucleus: analogy reduces finding the solution to verifying it.
  • 33. General features of analogical learning
• Requires a huge amount of background knowledge from a wide variety of domains.
• Is a very powerful reasoning method with incomplete knowledge.
• Improves the agent's competence through knowledge base refinement.
• Reduces solution finding to solution verification.
  • 34. Learning strategies Introduction Inductive learning from examples Deductive (explanation-based) learning Analogical learning Abductive learning Multistrategy learning
  • 35. Abductive learning Abductive learning finds the hypothesis that best explains an observation (or a collection of data) and assumes it to be true.
The learning problem: let D be a collection of data.
The learning method:
• Find all the hypotheses that (causally) explain D.
• Find the hypothesis H that explains D better than the other hypotheses.
• Assert that H is true.
Illustration: There is smoke in the East building. Fire causes smoke. Hypothesize that there is a fire in the East building.
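A minimal sketch of this method in Python, assuming hand-coded causal rules with illustrative prior scores; the rules and numbers are assumptions for the example, not part of the slides:

```python
# Causal rules as (cause, effect, prior plausibility of the cause).
CAUSAL_RULES = [
    ("fire",           "smoke",      0.6),
    ("smoke-machine",  "smoke",      0.1),
    ("rain",           "wet-street", 0.7),
    ("street-washing", "wet-street", 0.2),
]

def abduce(observation):
    """Return all candidate explanations of the observation, best first."""
    candidates = [(cause, p) for cause, effect, p in CAUSAL_RULES
                  if effect == observation]
    return sorted(candidates, key=lambda c: c[1], reverse=True)

explanations = abduce("smoke")
print(explanations)                    # [('fire', 0.6), ('smoke-machine', 0.1)]
print("Assert:", explanations[0][0])   # hypothesize fire in the East building
```

In a real application the scores would come from a domain model or from probabilities; the point here is only the abductive step itself: reasoning from an observed effect back to its most plausible cause.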
  • 36. Abduction (cont.) Consider the observation: “University Dr. is wet”. Using abductive learning: raining causes the streets to be wet, so hypothesize that it has rained on University Dr. What are other potential explanations? Provide other examples of abductive reasoning. What real-world applications of abductive reasoning can you imagine?
  • 37. Learning strategies Introduction Inductive learning from examples Deductive (explanation-based) learning Analogical learning Abductive learning Multistrategy learning
  • 38. Multistrategy learning Multistrategy learning is concerned with developing learning agents that synergistically integrate two or more learning strategies in order to solve learning tasks that are beyond the capabilities of the individual strategies.
  • 39. Complementariness of learning strategies

                             Learning from examples   Explanation-based learning   Multistrategy learning
Examples needed              many                     one                          several
Knowledge needed             very little knowledge    complete knowledge           incomplete knowledge
Type of inference            induction                deduction                    induction and/or deduction
Effect on agent's behavior   improves competence      improves efficiency          improves competence and/or efficiency
  • 40. Overview Basic bibliography Learning strategies Instructable agents: the Disciple approach
  • 41. Instructable agents: the Disciple approach Modeling expert’s reasoning Rule learning Introduction Application and evaluation Multi-agent problem solving and learning Integrated modeling, learning, solving
  • 42. How agents are built and why it is hard The knowledge engineer attempts to understand how the subject matter expert reasons and solves problems and then encodes the acquired expertise into the system's knowledge base. Why is this hard? Because this modeling and representation of expert knowledge is long, painful and inefficient.
  • 43. The Disciple approach: Problem statement Elaborate a theory and methodology for the mixed-initiative, end-to-end development of knowledge bases and agents by subject matter experts, with limited assistance from knowledge engineers.
Approach: mixed-initiative reasoning between an expert who has the knowledge to be formalized and a learning agent shell that knows how to formalize it. The expert teaches the agent to perform various tasks in a way that resembles how the expert would teach a person. The agent learns from the expert, building, verifying and improving its knowledge base.
How does this approach address the knowledge acquisition bottleneck?
[Figure: learning agent shell with Interface, Problem Solving, Learning, and Ontology + Rules.]
  • 44. Mainframe Computers Software systems developed and used by computer experts Vision on the evolution of computer systems Personal Computers Software systems developed by computer experts and used by persons that are not computer experts Learning Agents Software systems developed and used by persons that are not computer experts
  • 45. General architecture of Disciple-RKF [Architecture diagram organized into three main components: an intelligent user interface; a teaching, learning and problem solving component; and a knowledge base management component. The modules shown include the Mixed-Initiative Manager, Natural Language Generation, Modeling, Scenario Elicitation, Task Learning, Rule Learning, Rule Refinement, Mixed-Initiative Problem Solving, Autonomous Problem Solving, Ontology Editors and Browsers, Ontology Import, and a knowledge base of ontology (instances) and problem solving rules.]
  • 46. Knowledge base: Object ontology + reasoning rules The object ontology is a hierarchical representation of the types of objects from the COG domain and of the types of features of those objects.
  • 47. Task reduction rule
IF: Test whether the will of the people can make a state accept the strategic goal of an opposing force. The will of the people is ?O1; the state is ?O2; the opposing force is ?O3; the goal is ?O4.
Plausible Lower Bound Condition:
?O1 is will_of_the_people_of_US_1943
?O2 is US_1943, has_as_people ?O6, has_as_governing_body ?O5
?O3 is European_Axis_1943
?O4 is Dominance_of_Europe_by_European_Axis
?O5 is government_of_US_1943, has_as_will ?O7
?O6 is people_of_US_1943, has_as_will ?O1
?O7 is will_of_the_government_of_US_1943, reflects ?O1
Plausible Upper Bound Condition:
?O1 is will_of_agent
?O2 is force, has_as_people ?O6, has_as_governing_body ?O5
?O3 is (strategic_COG_relevant_factor agent)
?O4 is force_goal
?O5 is representative_democracy, has_as_will ?O7
?O6 is people, has_as_will ?O1
?O7 is will_of_agent, reflects ?O1
THEN: Test whether the will of the people, that controls the government, can make a state accept the strategic goal of an opposing force. The will of the people is ?O1; the government is ?O5; the state is ?O2; the opposing force is ?O3; the goal is ?O4.
  • 48. The developed Disciple approach Modeling the problem solving process of the subject matter expert and development of the object ontology of the agent. Teaching of the agent by the subject matter expert.
  • 49. Instructable agents: the Disciple approach Modeling expert’s reasoning Rule learning Introduction Application and evaluation Multi-agent problem solving and learning Integrated modeling, learning, solving
  • 50. Application domain The center of gravity of an entity (state, alliance, coalition, or group) is the foundation of capability, the hub of all power and movement, upon which everything depends, the point against which all the energies should be directed. Carl Von Clausewitz, “On War,” 1832. If a combatant eliminates or influences the enemy’s strategic center of gravity, then the enemy will lose control of its power and resources and will eventually fall to defeat. If the combatant fails to adequately protect his own strategic center of gravity, he invites disaster. Identification of strategic Center of Gravity candidates in war scenarios
  • 51. Modeling the identification of center of gravity candidates
I need to: Identify and test a strategic COG candidate for the Sicily_1943 scenario.
What kind of scenario is Sicily_1943? Sicily_1943 is a war scenario.
Therefore I need to: Identify and test a strategic COG candidate for Sicily_1943 which is a war scenario.
Which is an opposing force in the Sicily_1943 scenario? Allied_Forces_1943; European_Axis_1943.
Therefore I need to: Identify and test a strategic COG candidate for Allied_Forces_1943 (and, similarly, for European_Axis_1943). …
Is Allied_Forces_1943 a single-member force or a multi-member force? Allied_Forces_1943 is a multi-member force.
Therefore I need to: Identify and test a strategic COG candidate for Allied_Forces_1943 which is a multi-member force. …
  • 52. I need to: Test the will of the people of US_1943, which is a strategic COG candidate, with respect to the people of US_1943.
What is the strategic goal of European_Axis_1943? ‘Dominance of Europe by European Axis’.
Therefore I need to: Test whether the will of the people of US_1943 can make US_1943 accept the strategic goal of European_Axis_1943, which is ‘Dominance of Europe by European Axis’.
Let us assume that the people of US_1943 would accept ‘Dominance of Europe by European Axis’. Could the people of US_1943 make the government of US_1943 accept it? Yes, because US_1943 is a representative democracy and the government of US_1943 reflects the will of the people of US_1943.
Therefore I need to: Test whether the will of the people of US_1943, that controls the government of US_1943, can make US_1943 accept the strategic goal of European_Axis_1943, which is ‘Dominance of Europe by European Axis’.
Let us assume that the people of US_1943 would accept ‘Dominance of Europe by European Axis’. Could the people of US_1943 make the military of US_1943 accept it? Yes, because US_1943 is a representative democracy and the will of the military of US_1943 reflects the will of the people of US_1943.
Therefore conclude that: The will of the people of US_1943 is a strategic COG candidate that cannot be eliminated. …
  • 53. Modeling advisor: helping the expert to express his reasoning
  • 54. Instructable agents: the Disciple approach Modeling expert’s reasoning Rule learning Introduction Application and evaluation Multi-agent problem solving and learning Integrated modeling, learning, solving
  • 55. Rule learning
INFORMAL STRUCTURE OF THE RULE (natural language):
IF: Identify and test a strategic COG candidate corresponding to a member of the ?O1.
Question: Which is a member of ?O1? Answer: ?O2.
THEN: Identify and test a strategic COG candidate for ?O2.
FORMAL STRUCTURE OF THE RULE (logic):
IF: Identify and test a strategic COG candidate corresponding to a member of a force. The force is ?O1.
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition: ?O1 is multi_member_force, has_as_member ?O2; ?O2 is force.
Plausible Lower Bound Condition: ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2; ?O2 is single_state_force.
THEN: Identify and test a strategic COG candidate for a force. The force is ?O2.
Example: I need to: Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943. Which is a member of Allied_Forces_1943? US_1943. Therefore I need to: Identify and test a strategic COG candidate for US_1943.
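As a point of reference for the learning steps that follow, the formal rule above could be encoded as a simple data structure. The names come from the slide; this Python rendering of the representation is an assumption:

```python
# A plausible version space rule: one IF/THEN task pair, an explanation
# pattern, and two conditions bounding the space of correct applications.
rule_15 = {
    "if":   ("Identify and test a strategic COG candidate corresponding "
             "to a member of a force. The force is ?O1."),
    "then": ("Identify and test a strategic COG candidate for a force. "
             "The force is ?O2."),
    "explanation": ("?O1", "has_as_member", "?O2"),
    "plausible_lower_bound": {           # minimal generalization of the example
        "?O1": "equal_partners_multi_state_alliance",
        "?O2": "single_state_force",
    },
    "plausible_upper_bound": {           # most general generalization
        "?O1": "multi_member_force",
        "?O2": "force",
    },
}
```

An instantiation that satisfies the lower bound is almost certainly a correct application of the rule; one that satisfies only the upper bound is merely plausible and is submitted to the expert, whose accept/reject answer moves the two bounds toward each other.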
  • 56. 1. Formalize the tasks
Informal: I need to: Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943. Therefore I need to: Identify and test a strategic COG candidate for US_1943.
Formal: I need to: Identify and test a strategic COG candidate corresponding to a member of a force. The force is Allied_Forces_1943. Therefore I need to: Identify and test a strategic COG candidate for a force. The force is US_1943.
  • 58. 2. Find an explanation of why the example is correct
I need to: Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943. Which is a member of Allied_Forces_1943? US_1943. Therefore I need to: Identify and test a strategic COG candidate for US_1943.
Explanation: Allied_Forces_1943 has_as_member US_1943.
The explanation is the best possible approximation of the question and the answer, in the object ontology.
  • 59. Explanation generation and selection interface
  • 60. 3. Generate the plausible version space rule
The explanation Allied_Forces_1943 has_as_member US_1943 is rewritten with variables as the condition: ?O1 is Allied_Forces_1943, has_as_member ?O2; ?O2 is US_1943.
Its most specific generalization becomes the Plausible Lower Bound Condition: ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2; ?O2 is single_state_force.
Its most general generalization becomes the Plausible Upper Bound Condition: ?O1 is multi_member_force, has_as_member ?O2; ?O2 is force.
Resulting rule: IF Identify and test a strategic COG candidate corresponding to a member of a force (the force is ?O1), explanation: ?O1 has_as_member ?O2, THEN Identify and test a strategic COG candidate for a force (the force is ?O2).
What is the justification for these generalizations?
  • 61. Analogical reasoning
Initial example (explained by the explanation): I need to: Identify and test a strategic COG candidate corresponding to a member of a force. The force is Allied_Forces_1943. Therefore I need to: Identify and test a strategic COG candidate for a force. The force is US_1943.
Explanation: Allied_Forces_1943 has_as_member US_1943.
Analogy criterion (less general than both explanations): ?O1 instance_of multi_member_force, has_as_member ?O2; ?O2 instance_of force.
Similar explanation: European_Axis_1943 has_as_member Germany_1943.
Similar example (explained by analogy): I need to: Identify and test a strategic COG candidate corresponding to a member of a force. The force is European_Axis_1943. Therefore I need to: Identify and test a strategic COG candidate for a force. The force is Germany_1943.
  • 62. Generalization by analogy
Knowledge-base constraints on the generalization:
Any value of ?O1 should be an instance of: DOMAIN(has_as_member) ∩ RANGE(The force is) = multi_member_force ∩ force = multi_member_force
Any value of ?O2 should be an instance of: RANGE(has_as_member) ∩ RANGE(The force is) = force ∩ force = force
Initial example (explained by Allied_Forces_1943 has_as_member US_1943): I need to: Identify and test a strategic COG candidate corresponding to a member of a force. The force is Allied_Forces_1943. Therefore I need to: Identify and test a strategic COG candidate for a force. The force is US_1943.
Generalization (explained by ?O1 has_as_member ?O2, with ?O1 instance_of multi_member_force and ?O2 instance_of force): I need to: Identify and test a strategic COG candidate corresponding to a member of a force. The force is ?O1. Therefore I need to: Identify and test a strategic COG candidate for a force. The force is ?O2.
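The type intersections above can be computed with a subsumption test over the ontology. A small sketch, where the PARENTS table is an assumed fragment for illustration:

```python
# Fragment of the type taxonomy: child -> parent.
PARENTS = {"multi_member_force": "force",
           "single_member_force": "force"}

def ancestors(t):
    """The type t together with all of its ancestors in the taxonomy."""
    result = {t}
    while t in PARENTS:
        t = PARENTS[t]
        result.add(t)
    return result

def intersect_types(t1, t2):
    """Intersection of two type constraints in a tree-shaped taxonomy."""
    if t1 in ancestors(t2):   # t1 subsumes t2 -> the intersection is t2
        return t2
    if t2 in ancestors(t1):   # t2 subsumes t1 -> the intersection is t1
        return t1
    return None               # incomparable types: empty intersection

# ?O1: DOMAIN(has_as_member) = multi_member_force, RANGE(The force is) = force
print(intersect_types("multi_member_force", "force"))  # multi_member_force
# ?O2: RANGE(has_as_member) = force, RANGE(The force is) = force
print(intersect_types("force", "force"))               # force
```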
  • 63. Instructable agents: the Disciple approach Modeling expert’s reasoning Rule learning Introduction Integrated modeling, learning, solving Application and evaluation Multi-agent problem solving and learning
  • 64. Control of modeling, learning and solving [Flow diagram: the expert formulates an Input Task; the Mixed-Initiative Problem Solving module, using the Ontology + Rules, proposes a Generated Reduction; if the expert accepts the reduction, it becomes a positive example for Rule Refinement and Task Refinement; if the expert rejects it, the rule is specialized (Rule Refinement) and the expert provides a New Reduction through Modeling, Formalization and Learning; the process continues until a Solution is obtained.]
  • 65. Steps of the interaction:
1. The expert provides an example: I need to: Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943. Which is a member of Allied_Forces_1943? US_1943. Therefore I need to: Identify and test a strategic COG candidate for US_1943.
2. The agent learns Rule_15.
3. The agent applies Rule_15: I need to: Identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943. Which is a member of European_Axis_1943? Germany_1943. Therefore I need to: Identify and test a strategic COG candidate for Germany_1943. …
4. The expert accepts the example.
5. The agent refines Rule_15.
  • 66. Rule refinement with a positive example
Positive example that satisfies the upper bound: I need to: Identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943. Therefore I need to: Identify and test a strategic COG candidate for Germany_1943.
Explanation: European_Axis_1943 has_as_member Germany_1943.
Condition satisfied by the positive example (less general than the plausible upper bound): ?O1 is European_Axis_1943, has_as_member ?O2; ?O2 is Germany_1943.
Rule as learned: IF Identify and test a strategic COG candidate corresponding to a member of a force (the force is ?O1), explanation: ?O1 has_as_member ?O2, Plausible Upper Bound Condition: ?O1 is multi_member_force, has_as_member ?O2, ?O2 is force; Plausible Lower Bound Condition: ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2, ?O2 is single_state_force, THEN Identify and test a strategic COG candidate for a force (the force is ?O2).
  • 67. Minimal generalization of the plausible lower bound
Plausible Lower Bound Condition (from rule): ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2; ?O2 is single_state_force.
Condition satisfied by the positive example: ?O1 is European_Axis_1943, has_as_member ?O2; ?O2 is Germany_1943.
Minimal generalization covering both, the New Plausible Lower Bound Condition: ?O1 is multi_state_alliance, has_as_member ?O2; ?O2 is single_state_force.
The new lower bound remains less general than (or at most as general as) the plausible upper bound.
  • 68. Forces hierarchy
force
  single_member_force
    single_state_force (e.g., US_1943, Germany_1943)
    single_group_force
  multi_member_force
    multi_state_force
      multi_state_alliance
        equal_partners_multi_state_alliance (e.g., Allied_Forces_1943)
        dominant_partner_multi_state_alliance (e.g., European_Axis_1943)
      multi_state_coalition
        equal_partners_multi_state_coalition
        dominant_partner_multi_state_coalition
    multi_group_force
  composition_of_forces
multi_state_alliance is the minimal generalization of equal_partners_multi_state_alliance that covers European_Axis_1943 …
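In code, this minimal generalization is a climb in the hierarchy above. A sketch over that taxonomy follows; placing European_Axis_1943 under dominant_partner_multi_state_alliance is an inference from the slide, and the INSTANCE_OF table is otherwise illustrative:

```python
# Child -> parent links of the forces hierarchy.
PARENTS = {
    "equal_partners_multi_state_alliance": "multi_state_alliance",
    "dominant_partner_multi_state_alliance": "multi_state_alliance",
    "equal_partners_multi_state_coalition": "multi_state_coalition",
    "dominant_partner_multi_state_coalition": "multi_state_coalition",
    "multi_state_alliance": "multi_state_force",
    "multi_state_coalition": "multi_state_force",
    "multi_state_force": "multi_member_force",
    "multi_group_force": "multi_member_force",
    "multi_member_force": "force",
    "single_state_force": "single_member_force",
    "single_group_force": "single_member_force",
    "single_member_force": "force",
}

INSTANCE_OF = {
    "Allied_Forces_1943": "equal_partners_multi_state_alliance",
    "European_Axis_1943": "dominant_partner_multi_state_alliance",  # inferred
}

def path_to_root(concept):
    """The concept and all of its ancestors, from most to least specific."""
    path = [concept]
    while concept in PARENTS:
        concept = PARENTS[concept]
        path.append(concept)
    return path

def minimal_generalization(lower_bound, new_instance):
    """Climb from the lower bound to the first concept covering the instance."""
    covered = set(path_to_root(INSTANCE_OF[new_instance]))
    for concept in path_to_root(lower_bound):
        if concept in covered:
            return concept

print(minimal_generalization("equal_partners_multi_state_alliance",
                             "European_Axis_1943"))  # multi_state_alliance
```

The climb stops at multi_state_alliance, the first ancestor of the old lower bound that also covers the new positive example, exactly as on the next slide.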
  • 69. Refined rule
IF: Identify and test a strategic COG candidate corresponding to a member of a force. The force is ?O1.
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition (unchanged): ?O1 is multi_member_force, has_as_member ?O2; ?O2 is force.
Plausible Lower Bound Condition (generalized, still less general than the upper bound): ?O1 is multi_state_alliance (was: equal_partners_multi_state_alliance), has_as_member ?O2; ?O2 is single_state_force.
THEN: Identify and test a strategic COG candidate for a force. The force is ?O2.
  • 70. What are some learning strategies used by Disciple? Rule learning. Rule refinement.
  • 71. Instructable agents: the Disciple approach Modeling expert’s reasoning Rule learning Introduction Application and evaluation Multi-agent problem solving and learning Integrated modeling, learning, solving
  • 72. Use of Disciple in Army War College courses
319jw Case Studies in Center of Gravity Analysis (COG), Term II and Term III: students use Disciple as an intelligent assistant that supports them in developing a Center of Gravity analysis report for a war scenario (students develop input scenarios).
589 Military Applications of Artificial Intelligence (MAAI), Term III: students act as subject matter experts who teach personal Disciple agents their own reasoning in Center of Gravity analysis (students develop agents).
  • 73. COG Winter 2002: Expert assessments
• The use of Disciple is an assignment that is well suited to the course's learning objectives.
• The use of Disciple was a useful learning experience.
• Disciple helped me to learn to perform a strategic center of gravity analysis of a scenario.
• Disciple should be used in future versions of this course.
  • 74. 589 MAAI Spring 2002: SME assessment “I think that a subject matter expert can use Disciple to build an agent, with limited assistance from a knowledge engineer.”
  • 75. Instructable agents: the Disciple approach Modeling expert’s reasoning Rule learning Introduction Application and evaluation Multi-agent problem solving and learning Integrated modeling, learning, solving
  • 76. Modeling, problem solving, and learning as collaborative agents [Diagram: Modeling, Problem Solving and Learning collaborate through mixed initiative, exchanging: a reasoning tree similar to the current model; new situations that need to be modeled; new positive and negative examples for rule refinement; analyses of the instantiations of a rule; partially learned rules with informal and formal descriptions; rules similar to the current example; informal descriptions of the reasoning; and new correct examples for learning. Component agents include the Example Analyzer, Example Composer, Rule Analyzer, Explanation Generator, Rule Analogy Engine, Solution Analyzer, Reasoning Tree Analyzer, and Mixed-Initiative PS.]
  • 77.
  • 79. A possible multi-agent architecture for Disciple [Diagram: each expert works with a personalized learning assistant containing the expert's model and a local KB of ontology and rules; the assistants are supported by support agents, an external-expertise agent, an information-interaction agent and an awareness agent; the local KBs are connected to a shared, global KB.]
  • 80. Bibliography
Mitchell T.M., Machine Learning, McGraw Hill, 1997.
Shavlik J.W. and Dietterich T. (Eds.), Readings in Machine Learning, Morgan Kaufmann, 1990.
Buchanan B., Wilkins D. (Eds.), Readings in Knowledge Acquisition and Learning: Automating the Construction and the Improvement of Programs, Morgan Kaufmann, 1992.
Langley P., Elements of Machine Learning, Morgan Kaufmann, 1996.
Michalski R.S., Carbonell J.G., Mitchell T.M. (Eds.), Machine Learning: An Artificial Intelligence Approach, Morgan Kaufmann, 1983 (Vol. 1), 1986 (Vol. 2).
Kodratoff Y. and Michalski R.S. (Eds.), Machine Learning: An Artificial Intelligence Approach (Vol. 3), Morgan Kaufmann, 1990.
Michalski R.S. and Tecuci G. (Eds.), Machine Learning: A Multistrategy Approach (Vol. 4), Morgan Kaufmann, San Mateo, CA, 1994.
Tecuci G. and Kodratoff Y. (Eds.), Machine Learning and Knowledge Acquisition: Integrated Approaches, Academic Press, 1995.
Tecuci G., Building Intelligent Agents: An Apprenticeship Multistrategy Learning Theory, Methodology, Tool and Case Studies, Academic Press, 1998.

Editor's notes

  1. To learn a good concept description through this learning strategy requires a very large set of positive and negative examples. On the other hand, this is the only information that the agent needs; it does not require prior knowledge to perform this type of learning. The result of this learning strategy is an increase in the problem solving competence of the agent: the agent learns to do things it was not able to do before, for instance to recognize cups. Most empirical inductive learning algorithms use heuristic methods to search an extremely large space of possible concepts. The version space method, which performs an exhaustive rather than heuristic search and assumes that the representation language is complete and that a single target concept exists, has very important theoretical value but more limited practical relevance.
  2. The goal of this learning strategy is to improve the efficiency in problem solving. The agent is able to perform some task, but in an inefficient way; we would like to teach the agent to perform the task faster. Consider, for instance, an agent that is able to recognize cups. The agent receives a description of a cup that includes many features, and it recognizes that this object is a cup by performing a complex reasoning process based on its prior knowledge. This process is illustrated by the proof tree on the left hand side of this slide: the object o1 is made of plastic, which is a light material, therefore o1 is light; o1 has a handle and therefore it is graspable; being light and graspable, it is liftable; and so on, being liftable, stable and an open vessel, it is a cup. However, the agent can learn from this process to recognize a cup faster. Notice that the agent used the fact that o1 has a handle in order to prove that o1 is a cup; this means that having a handle is an important feature. On the other hand, the agent did not use the color of o1 to prove that o1 is a cup; this means that color is not important. Notice how the agent reaches the same conclusions as in learning from examples, but through a different line of reasoning and based on a different type of information. The next step in the learning process is to generalize the tree on the left hand side into the tree on the right hand side. While the left tree proves that the specific object o1 is a cup, the right tree shows that any object x that is made of some light material y, has a handle and has the other identified features is a cup. Therefore, to recognize that an object o2 is a cup, the agent only needs to check for the presence of the features discovered as important; it no longer needs to build a complex proof tree, so cup recognition is done much faster. Finally, notice that the agent needs only one example to learn from, but it needs a lot of prior knowledge to prove that this example is a cup, and providing such prior knowledge to the agent is a very complex task.
  3. Disciple-RKF is the most recent shell in the Disciple family. Its architecture is organized into three main components: 1) the intelligent user interface, 2) the teaching, learning and problem solving component, and 3) the knowledge base management component. The intelligent user interface component allows the experts to communicate with Disciple in a manner as close as possible to the way they communicate in their own environment. The teaching, learning and problem solving component is responsible for knowledge formation and problem solving, and contains specific modules for domain modeling, rule learning and refinement, as well as for mixed-initiative and autonomous problem solving. The knowledge base management component contains modules for managing the knowledge base of Disciple and for importing knowledge from external knowledge repositories.
  4. This slide shows the interaction between the expert and the agent when the agent has already learned some rules. This interaction is governed by the mixed-initiative problem solver. The expert formulates the initial task, and the agent attempts to reduce it using the previously learned rules. Suppose the agent succeeds in proposing a reduction of the current task. The expert has to accept it if it is correct, or reject it if it is incorrect. If the reduction proposed by the agent is accepted by the expert, the rule that generated it and its component tasks are generalized; the process then resumes, with the agent attempting to reduce the new task. If the reduction is rejected, the agent has to specialize the rule, and possibly its component tasks, and the expert has to indicate the correct reduction, going through the normal steps of modeling, formalization and learning. Similarly, when the agent cannot propose a reduction of the current task, the expert has to indicate it, again going through the steps of modeling, formalization and learning. The control of this interaction is done by the mixed-initiative problem solving tool.