Explanation in the Semantic Web
1. Explanation in Semantic Web: an
overview
Rakebul Hasan
PhD student, INRIA Sophia Antipolis-Méditerranée
2. • PhD topic: Solving upstream and downstream problems of a
distributed query on the semantic web
– Task 4: Traces and explanations
• Task 4.2: Opening query-solving mechanisms.
• 2009: MSc in Computer Science, University of Trento, Italy
– CliP-MoKi: a collaborative tool for the modeling of clinical guidelines
• Previous employer: Semantic Technology Institute
Innsbruck, Austria
– Information diversity in the Web
3. • Early research on expert systems
• Explanation in the Semantic Web
• Future work
4. Explanation
“An information processing operation that takes
the operation of an information processing
system as input and generates a description of
that processing operation as an output.”
- Wick and Thompson, 1992
5. Early research on explanation facilities
• Reasons that first gave rise to explanation
facilities
– Debugging expert systems
– Assuring that the reasoning process was correct
– Understanding the problem domain
– Convincing the human users
7. Expert systems should be able to provide
information about how answers were
obtained if users are expected to
understand, trust, and use the conclusions.
8. First generation of expert systems
• MYCIN and its derivatives
(GUIDON, NEOMYCIN)
– Why and how explanations
– Explanation based on invoked rule trace
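The invoked-rule-trace style of "how" explanation can be sketched with a toy backward chainer that records the rules it fires on the way to a conclusion; a "why" explanation would walk the same goal tree upward instead. The rule names and medical facts below are invented for illustration, and MYCIN's certainty factors are omitted.

```python
# Toy backward chainer that records invoked rules, MYCIN-style.
# Rule names and facts are hypothetical illustrations.

RULES = {
    "R1": (["fever", "stiff_neck"], "meningitis_suspected"),
    "R2": (["meningitis_suspected", "bacterial_culture"], "bacterial_meningitis"),
}

def backward_chain(goal, facts, trace):
    """Prove `goal` from `facts`, appending each successfully fired rule to `trace`."""
    if goal in facts:
        return True
    for name, (premises, conclusion) in RULES.items():
        if conclusion == goal and all(backward_chain(p, facts, trace) for p in premises):
            trace.append(name)
            return True
    return False

def explain_how(goal, facts):
    """Answer a "how" question: render the invoked-rule trace behind a conclusion."""
    trace = []
    if not backward_chain(goal, facts, trace):
        return []
    return [f"{r}: {' & '.join(RULES[r][0])} -> {RULES[r][1]}" for r in trace]

print(explain_how("bacterial_meningitis",
                  {"fever", "stiff_neck", "bacterial_culture"}))
```

As the next slide notes, such a raw rule trace is readable mainly by users who already know the rule base.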
10. • Useful for knowledgeable users (e.g., an experienced programmer)
• Little justification for less knowledgeable users
11. • The reasoning strategies employed by
programs do not form a good basis for
understandable explanations
• Categorization of knowledge and explicit
representation of linkages between different
types of knowledge are important
12. Explainable Expert System (EES)
• Explicit representation of “strategic”
knowledge
– Relation between goals and plans -> capability
descriptions
• Explicit representation of design rationale
– ‘Good’ explanations/justifications
• Abstract explanations of the reasoning process
W. Swartout et al. Explanations in knowledge systems: Design for explainable
expert systems. IEEE Expert: Intelligent Systems and Their
Applications, 6(3):58–64, 1991.
13. Reconstructive Explainer (Rex)
• Reasoning and explanation construction are
done separately
• Representation of domain knowledge along
with domain rule knowledge (causality)
• A causal chain of explanation is constructed
M. R. Wick. Second generation expert system explanation. In Second Generation
Expert Systems, pages 614–640. 1993.
14. Reconstructive Explainer (Rex)
We have a concrete dam under an excessive load. I attempted to find the cause of the
excessive load. Not knowing the solution and based on the broken pipes in the
foundation of the dam, and the downstream sliding of the dam, and the high uplift
pressures acting on the dam, and the slow drainage of water from the upstream side of
the dam to the downstream side I was able to make an initial hypothesis. To achieve this
I used the strategy of striving to simply determine causal relationships. In attempting to
determine causes, I found that the internal erosion of soil from under the dam causes
broken pipes causing slow drainage resulting in uplift and in turn sliding. This led me to
hypothesize that internal erosion was the cause of the excessive load. Feeling confident
in this solution, I concluded that the internal erosion of soil from under the dam was the
cause of the excessive load.
The story teller tree
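Rex's reconstructive approach, reasoning first and explanation afterwards from a separate causal knowledge base, can be sketched roughly as follows. The cause/effect links mirror the dam story above and are heavily simplified.

```python
# A separate causal KB, consulted only after reasoning, Rex-style.
# cause -> effect links, simplified from the dam example above.
CAUSES = {
    "internal_erosion": "broken_pipes",
    "broken_pipes": "slow_drainage",
    "slow_drainage": "uplift_pressure",
    "uplift_pressure": "downstream_sliding",
}

def causal_chain(hypothesis, observation):
    """Reconstruct the explanation path from the hypothesised cause to the observation."""
    chain = [hypothesis]
    while chain[-1] != observation:
        effect = CAUSES.get(chain[-1])
        if effect is None:
            return None  # no causal path in the explanation KB
        chain.append(effect)
    return chain

print(" causes ".join(causal_chain("internal_erosion", "downstream_sliding")))
```

The chain is then rendered as narrative text, much like the "story teller" output above.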
15. DesignExpert
• A second knowledge representation
– Communication domain knowledge (CDK): domain
knowledge needed only for communication, not for reasoning
– Domain communication knowledge (DCK): knowledge about
how to communicate in the domain
– The purpose is to communicate explanations
• This representation is populated by the expert
systems as it reasons, not in a separate
process afterwards
R. Barzilay et al. A new approach to expert system explanations. In
9th International Workshop on Natural Language Generation, pages 78–87. 1998.
17. • Categorization of knowledge and explicit
representation of problem solving steps are
necessary for generating natural and complete
explanation
• Explanation should be able to change its
content according to the varying users and
context
18. Explanation in Semantic Web
• Query answering:
– The traditional Web: retrieval of explicitly stored
information
– The Semantic Web:
• requires more processing steps than database retrieval
• results often require inference capabilities
• mashups, multiple sources, distributed services, etc.
19. As with expert systems, Semantic Web
applications should be able to provide
information on how results are obtained if
users are expected to understand, trust, and
use the conclusions.
20. – Distributed data
– Openness
“Linking Open Data cloud diagram, by Richard Cyganiak and Anja Jentzsch. http://lod-cloud.net/”
22. “Oh, yeah?” button to support the user in
assessing the reliability of information
encountered on the Web
– Tim Berners-Lee
23. Explanation criteria in Semantic Web
• Types of explanations
– Justifications
– Provenance
• Trust
• Consumption of explanations
– Machine consumption
– Human consumption
• User expertise
D. L. McGuinness et al. Explaining Semantic Web Applications. In Semantic
Web Engineering in the Knowledge Society. 2008.
24. Semantic Web Features
(an explanation perspective)
• Collaboration
• Autonomy
• Ontologies
25. Collaboration
• Interaction and sharing of knowledge between
agents
• The flow of information should be explained
• Provenance-based explanation will add
transparency
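The provenance idea above can be sketched minimally: if each triple carries its source, every answer can also say where it came from. The quads and source URLs below are hypothetical; a real system would use RDF named graphs.

```python
# Quads: (subject, predicate, object, source) — a minimal stand-in for
# RDF named graphs. All data and source URLs below are invented.
TRIPLES = [
    ("ex:Paris", "rdf:type", "ex:City", "http://example.org/sourceA"),
    ("ex:Paris", "ex:population", "2150000", "http://example.org/wikiB"),
]

def answer_with_provenance(subject, predicate):
    """Return each matching object together with the source that contributed it."""
    return [(o, src) for s, p, o, src in TRIPLES
            if s == subject and p == predicate]

for value, source in answer_with_provenance("ex:Paris", "ex:population"):
    print(f"{value} (according to {source})")
```

In a collaborative setting, the source annotation is exactly what lets users see which contributor a piece of knowledge flowed from.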
26. Autonomy
• The ability of an agent to act independently
• Reasoning process should be explained
28. Inference Web (IW)
• A knowledge provenance infrastructure
– Provenance, metadata about sources
– Explanation, manipulation trace information
– Trust, rating the sources
29. • Proof Markup Language (PML) Ontology
– Proof interlingua
– Representation of justifications
– Representation of provenance information
– Representation of trust information
30. • IWBase
– Registry of meta-information related to proofs and
explanations
• Inference rules; ontologies; inference engines
• IW Toolkit
– Tools aimed at human users to
browse, debug, explain, and abstract the
knowledge encoded in PML.
31. Abstraction of a piece of a proof; step-by-step view focusing on one step with a list of follow-up actions.
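The abstraction shown on this slide can be sketched as collapsing a multi-step rdfs:subClassOf proof into a single class-transitivity step. The class names follow the crab/seafood example discussed in the Inference Web work; the abstraction rule itself is heavily simplified here.

```python
# Toy RDFS subclass proof with an Inference Web-style abstraction rule.
# Class names follow the crab/seafood example; everything else is simplified.
SUBCLASS = [("Crab", "Shellfish"), ("Shellfish", "Seafood")]

def proof_steps(sub, sup):
    """Chain direct subClassOf links from `sub` up to `sup`, one proof step per link."""
    steps, current = [], sub
    while current != sup:
        link = next(((a, b) for a, b in SUBCLASS if a == current), None)
        if link is None:
            return None  # no proof found
        steps.append(f"{link[0]} subClassOf {link[1]}")
        current = link[1]
    return steps

def abstract(steps, sub, sup):
    """Abstraction rule: collapse a multi-step chain into one transitivity step."""
    if steps and len(steps) > 1:
        return [f"{sub} subClassOf {sup} (by class transitivity)"]
    return steps

print(abstract(proof_steps("Crab", "Seafood"), "Crab", "Seafood"))
```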
32. Accountability In RDF (AIR)
A Semantic Web-based rule language focusing
on generation and tracking of explanation for
inferences and actions.
L. Kagal et al. Gasping for AIR: why we need linked rules and justifications on the
Semantic Web. Technical report MIT-CSAIL-TR-2011-023, Massachusetts Institute of
Technology, 2011.
33. AIR Features
• Coping with logical inconsistencies
• Scoped contextualized reasoning
• Capturing and tracking provenance
– Deduction traces or justification
• Linked Rules which allow rules to be linked
and re-used
34. AIR Ontology
• Two independent ontologies
– An ontology for specifying AIR rules
– An ontology for describing justifications
35. Given as input:
– a set of AIR rules
– an RDF graph
an AIR reasoner produces
justifications for the inferences made
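This input/output contract can be sketched as one forward-chaining step that records, for every inferred triple, the input triples that justify it. Real AIR rules are written in N3 and justifications are described with the AIR justification ontology; the ad hoc Python rule and data below are only stand-ins.

```python
# Hypothetical graph: who works where, and where that organisation is.
GRAPH = {("alice", "worksFor", "inria"), ("inria", "locatedIn", "france")}

def run_rule(graph):
    """One rule: (?x worksFor ?y) & (?y locatedIn ?z) => (?x basedIn ?z),
    recording the matched input triples as the justification."""
    justifications = {}
    for (s1, p1, o1) in graph:
        for (s2, p2, o2) in graph:
            if p1 == "worksFor" and p2 == "locatedIn" and o1 == s2:
                inferred = (s1, "basedIn", o2)
                justifications[inferred] = [(s1, p1, o1), (s2, p2, o2)]
    return justifications

for triple, because_of in run_rule(GRAPH).items():
    print(triple, "because", because_of)
```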
36. Proof Explanation in Semantic Web
A nonmonotonic rule system based on
defeasible logic to extract and represent
explanations on the Semantic Web
G. Antoniou et al. Proof Explanation for the Semantic Web Using Defeasible Logic. In Zili
Zhang and Jörg Siekmann, editors, Knowledge Science, Engineering and
Management, volume 4798 of Lecture Notes in Computer Science, pages 186–197.
Springer Berlin / Heidelberg, 2007.
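The nonmonotonicity that defeasible logic handles can be illustrated with the textbook bird/penguin rules: adding knowledge withdraws a previously drawn conclusion. The rules and superiority relation below are invented for illustration and ignore most of defeasible logic (strict rules, defeaters, full conflict handling).

```python
# Minimal defeasible-style reasoner: a rule's conclusion is withdrawn when a
# superior conflicting rule also fires. Rules and superiority are invented.
RULES = [
    ("r1", {"bird"}, "flies"),         # birds normally fly
    ("r2", {"penguin"}, "not_flies"),  # penguins do not
]
SUPERIOR = {("r2", "r1")}  # r2 overrides r1 when both fire

def conclusions(facts):
    """Return the undefeated conclusions of all rules whose premises hold."""
    fired = [(name, concl) for name, premises, concl in RULES if premises <= facts]
    result = set()
    for name, concl in fired:
        opposite = concl[4:] if concl.startswith("not_") else "not_" + concl
        defeated = any((other, name) in SUPERIOR and other_concl == opposite
                       for other, other_concl in fired)
        if not defeated:
            result.add(concl)
    return result

print(conclusions({"bird"}))             # {'flies'}
print(conclusions({"bird", "penguin"}))  # learning more withdraws 'flies'
```

A proof explanation in this setting must mention not only the fired rule but also which conflicting rules were considered and why they were (or were not) overridden.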
37. • Extension of RuleML
– Formal representation of explanation of defeasible
logic based reasoning
• Automatic generation of explanation
– Proof tree represented using the RuleML
extension
39. Remarks on Explanation in Semantic
Web
• Justification (rule trace) based explanation
– Abstraction not researched enough
• User adaptation
• Understanding of domain knowledge is
difficult
• Representation, computation, combination, and
presentation of trust not researched enough
in this context
40. Future work at Edelweiss (Outline)
• Corese 3.0
– implements RDF, RDFS, SPARQL and Inference
Rules
• SPARQL with RDFS entailment
• SPARQL with Rules
41. • Justification explanation
– RDFS entailments
– SPARQL Rules
• Abstraction of justification explanation
• User adaptation
– User modelling
42. • Communication
– Presentation and provision mechanisms of
explanation
• Provenance explanation
• Domain understanding
– Explanation based on term definitions
43. References
• K.W. Darlington. Designing for Explanation in
Health Care Applications of Expert
Systems, SAGE Open, SAGE Publications, 2011
• S.R. Haynes. Explanation in Information
Systems: A Design Rationale Approach. PhD
thesis, The London School of Economics, 2001
• J. D. Moore and W. R. Swartout. Explanation
in Expert Systems: A Survey. USC Information
Sciences Institute, 1988
Opening query-solving mechanisms to users: explaining the query-searching process, the inferences, and the errors encountered; suggesting changes to queries and alternative queries; explaining performance; helping users formulate queries and understand the results and the resolution process. Handling and explaining the distribution of a query over several sources: decomposing and routing sub-queries, and following the process. Using this approach to detect conflicts between different contributors.
Reasoning process was correct -> knowledgeable users. Understanding the problem domain -> naïve users.
Reasons why explanation capabilities are crucial for the success of expert systems: explanations enable an understanding of the content of the knowledge base and of the reasoning process -> usability; they educate users about the domain and the capabilities of the system; they support debugging of the system during development. The idea was to persuade users that the obtained results are correct, hence enabling trust in the reasoning capabilities of the systems -> acceptability, especially in domains such as safety-critical ones.
“Why” question: how is the information useful to me? – ascend the goal tree and display each rule that was fired until the top-level goal is reached. “How” question: how did you arrive at this conclusion or rule? – descend the goal tree from the conclusion through the rules fired during the session. Trace: very relevant to Kolflow Task 4.1 (Alter Ego Assistant: reasoning over interaction traces).
In these works, explanation of expert systems relied on structural, strategic, and support knowledge captured in a rule base, but the linkages between these levels, an essential element of system functionality, were not explicitly represented and were therefore unavailable to programs attempting to construct complete and natural explanations.
- Strategic knowledge: the problem-solving steps and heuristics
- Structural knowledge: classification of the rules and the methods defining how rules can be combined
- Support knowledge: low-level detailed information that relates a rule to the underlying causal process in the world; the facts that justify the existence of a given rule
Extra info: Structural knowledge can be seen as the bridge between generic problem-solving and knowledge-representation strategies at the strategic level and the domain-specific hypotheses, goals, and rules of a particular knowledge base. Support knowledge was used to justify a rule, to connect it to observed phenomena, and to empirically support generalisation in the problem domain. Support knowledge thus plays a central role in the translation of domain knowledge into a system model and of the system model into a system structure. “Why” explanations provide the justification for how the system model is formed relative to the problem domain and for how the translation from the system model to the system structure is performed. Methods for representing the structural-strategic-support relationship were not explicated; the role of support knowledge beyond giving essential insights was not developed.
Explicit “strategic” knowledge: knowledge about how to reason, plus domain-specific knowledge -> the problem-solving steps and heuristics. Strategic knowledge: how does a particular action relate to the overall goal? Representation of design rationale: why are actions reasonable in view of domain goals? Terminological domain knowledge: definition of terms.
Different from previous work: in EES, knowledge engineers had to consider explanation while designing the domain knowledge and the problem-solving knowledge. In Rex, a separate knowledge representation called the “functional representation” is used to generate explanations in a separate process from reasoning. Two different kinds of knowledge are used: domain knowledge, and domain rule knowledge (mainly causality), which is used to derive an “explanation path” through the domain knowledge representation. Given a conclusion, a path to the empty hypothesis is generated, which maps into the second KB (the textbook KB).
A second knowledge representation separate from the expert system’s domain knowledge. CDK: domain knowledge that is only needed for communication (knowledge that will be communicated), not for reasoning. DCK: knowledge about the communication medium.
Additional characteristics: Distributed data: users need to understand where the information is coming from. Openness -> conflicts in the knowledge; anyone can say anything about anything. Transparency of the process of obtaining a result is important to enable understanding of the obtained result.
Whenever a user encounters a piece of information that they would like to verify, pressing such a button would produce an explanation of the trustworthiness of the displayed information.
Justification: an understandable explanation based on an abstraction of the justifications (transition log of the manipulation steps). Provenance: metadata allows providing another kind of explanation, giving details on the information sources. Trust: in a distributed setting, what enables trust; how to compute, represent, present, and combine trust. Machine consumption: interoperable representation of justification, provenance, and trust. Human consumption: human-computer interface (HCI) issues, such as the level of user expertise and the context of the problem, should be considered in explanations aimed at human consumption.
How these criteria relate to the Semantic Web application characteristics:
Collaboration involves interaction and sharing of knowledge between agents that are dedicated to solving a particular problem. Examples: Semantic Wikis, multi-agent systems, and composition of Semantic Web services.
Autonomy of an individual agent can be seen as the ability to act independently. Explanation plays an important role in applications with a lower degree of autonomy as well. For example, in search engines, which have a lower degree of autonomy, explanation facilitates improved query refinement by enabling users to better understand the process of obtaining search results.
Ontologies can be effectively used to develop an interlingua that enables interoperable explanations.
Proof Markup Language (PML) Ontology: a Semantic Web-based representation for exchanging explanations, including provenance information (annotating the sources of knowledge), justification information (annotating the steps for deriving conclusions or executing workflows), and trust information (annotating trustworthiness assertions about knowledge and sources).
In this case, the reasoner used a number of steps to derive that crab is a subclass of seafood. This portion of the proof is displayed in the DAG style in the middle of Figure 4 (inside the blue round-angled box). The user may specify an abstraction rule to reduce the multi-step proof fragment into a one-step proof fragment (class-transitivity inference), shown on the left side of Figure 4.
Coping with logical inconsistencies by allowing isolation of reasoning results that can cause inconsistencies in the global state. Scoped re-use: in certain cases, such as when a rule or its creator is not completely trusted, or when not all inferences of a rule are of equal quality, executing a rule in its entirety and accepting all its inferences is not feasible. AIR allows rules to be executed recursively against a certain context and their conclusions to be selectively queried.
Given as input a set of AIR rules, described using the AIR rule ontology, and an RDF graph (or an empty graph), the AIR reasoner produces a set of justifications for the inferences made, described using the AIR justification ontology.
Defeasible logic allows a simple rule-based approach to reasoning with inconsistent knowledge items. Intuitively, monotonicity indicates that learning a new piece of knowledge cannot reduce the set of what is known; nonmonotonicity indicates that new information in the knowledge base can reduce the set of what is known.
Justification: existence of justification knowledge / explicit representation of the rules that can justify the existence of derived knowledge.
Built on top of the KGRAM generic SPARQL interpreter.
Abstraction of justification: experimenting with different methodologies, such as the ones presented in DesignExpert (having a second knowledge model for explanation and populating it during the process of reasoning) and Rex (generating the explanation after the reasoning process is finished, from the obtained result and a separate explanation knowledge model, through a separate reasoning process).
User adaptation: the knowledge and expectations of the recipient of the explanations should be considered. Examples: dialogue planning for interactive explanation and dynamic follow-up questions; different types and representations of explanation with content varying according to user models; tutoring systems that trace users’ learning progress and adapt system explanations accordingly.
User modelling: understanding user needs and preferences, and the different kinds of presentation suited to different types of users. The system must be aware of users’ skill levels and goals and adapt explanation content based on that. User modelling would also be useful for providing alternative query suggestions and analysing query errors.
Provenance explanation: who, when, and where information adds more natural elements to the explanation (a journalistic approach, Slagle 1989 (JOE)); explanation of collaborative interaction traces and distributed data sources.
Terminological explanation: terminological explanations provide knowledge of the concepts and relationships of a domain that domain experts use to communicate with each other. Their inclusion is sometimes necessary because, in order to understand a domain, one must understand the terms used to describe it. Explanations before advice, that is, during the question-input phase, could also be generated; these are called feedforward explanations. Terminological explanations are a category of explanations frequently used with feedforward explanations and would often be implemented using canned text. They are more likely to be used by novice or nonclinical users, such as patients, than by more knowledgeable users, and they provide generic rather than case-specific knowledge.