1
Introduction to the Semantic Web
(tutorial)
Lucas Kaim Thelen
July 2018
Lucas.Kaim@maine.edu
Modified from Ivan Herman, W3C
talk
2
Let’s organize a trip to Budapest using the
Web!
3
You try to find a proper flight with …
4
… a big, reputable airline, or …
5
… the airline of the target country, or …
6
… or a low cost one
7
You have to find a hotel, so you look for…
8
… a really cheap accommodation, or …
9
… or a really luxurious one, or …
10
… an intermediate one …
11
oops, that is no good: the page is in
Hungarian, a language that almost nobody
understands, but…
12
… this one could work
13
Of course, you could decide to trust a
specialized site…
14
… like this one, or…
15
… or this one
16
You may want to know something about
Budapest; look for some photographs…
17
… on flickr …
18
… on Google …
19
… or you can look at mine
20
… or a (social) travel site
21
What happened here?
•
You had to consult a large number of sites, all
different in style, purpose, possibly language…
•
You had to mentally integrate all that information
to achieve your goals
•
We all know that, sometimes, this is a long and
tedious process!
22
•
All those pages are only the tips of their respective icebergs:
•
the real data is hidden somewhere in databases, XML files,
Excel sheets, …
•
you have only access to what the Web page designers allow
you to see
23
•
Specialized sites (Expedia, TripAdvisor) do a bit
more:
•
they gather and combine data from other sources (usually
with the approval of the data owners)
•
but they still control how you see those sources
•
But sometimes you want to personalize: access the
original data and combine it yourself!
24
Here is another example…
25
Another example: social sites. I have a list
of “friends” by…
26
… Dopplr,
27
… Twine,
28
… LinkedIn,
29
… and, of course, Facebook
30
•
I had to type in and connect with friends again and
again for each site independently
•
This is even worse than before: I feed the icebergs,
but I still do not have easy access to the data…
31
What would we like to have?
•
Use the data on the Web the same way as we do
with documents:
•
be able to link to data (independently of their presentation)
•
use that data the way I want (present it, mine it, etc)
•
agents, programs, scripts, etc, should be able to interpret
part of that data
32
Put it another way…
•
We would like to extend the current Web to a “Web
of data”:
•
allow for applications to exploit the data directly
33
But wait! Isn’t that what mashup sites are
already doing?
34
A “mashup” example:
35
•
In some ways, yes, and that shows the huge power
of what such a Web of data provides
•
But mashup sites are forced to do very ad-hoc jobs
•
various data sources expose their data via Web Services
•
each with a different API, a different logic, different structure
•
these sites are forced to reinvent the wheel many times
because there is no standard way of doing things
36
Put it another way (again)…
•
We would like to extend the current Web to a
standard way for a “Web of data”
37
But what does this mean?
•
What makes the current (document) Web work?
•
people create different documents
•
they give it an address (ie, a URI) and make it accessible
to others on the Web
38
Steven’s site on Amsterdam
(done for some visiting friends)
39
Then some magic happens…
•
Others discover the site and they link to it
•
The more they link to it, the more important and
well known the page becomes
•
remember, this is what, eg, Google exploits!
•
This is the “Network effect”: some pages become
important, and others begin to rely on them even if
the author did not expect it…
40
This could be expected…
41
but this one, from the other side of the Globe,
was not…
42
What would that mean for a Web of Data?
•
Lessons learned: we should be able to:
•
“publish” the data to make it known on the Web
•
standard ways should be used instead of ad-hoc approaches
•
the analogous approach to documents: give URI-s to the data
•
make it possible to “link” to that URI from other sources of
data (not only Web pages)
•
ie, applications should not be forced to make targeted
developments to access the data
•
generic, standard approaches should suffice
•
and let the network effect work its way…
43
But it is a little bit more complicated
•
On the traditional Web, humans are implicitly taken
into account
•
A Web link has a “context” that a person may use
44
Eg: address field on my page:
45
… leading to this page
46
•
A human understands that this is my institution’s
home page
•
He/she knows what it means (realizes that it is a
research institute in Amsterdam)
•
On a Web of Data, something is missing; machines
can’t make sense of the link alone
47
•
New lesson learned:
•
extra information (“label”) must be added to a link: “this links
to my institution, which is a research institute”
•
this information should be machine readable
•
this is a characterization (or “classification”) of both the link
and its target
•
in some cases, the classification should allow for some
limited “reasoning”
48
Let us put it together
•
What we need for a Web of Data:
•
use URI-s to publish data, not only full documents
•
allow the data to link to other data
•
characterize/classify the data and the links (the “terms”) to
convey some extra meaning
•
and use standards for all these!
49
Information Theory
What is the difference between information and
data? Information is usable data: data by itself
does not have meaning, which is why it is not yet
information.
No one really cares about the data as such; the
information that one can learn from the data is what
truly matters.
So what can this tell us about the future?
50
So What is the Semantic Web?
51
It is a collection of standard technologies
to realize a Web of Data
52
•
It is that simple…
•
Of course, the devil is in the details
•
a common model has to be provided for machines to
describe, query, etc, the data and their connections
•
the “classification” of the terms can become very complex
for specific knowledge areas: this is where ontologies,
thesauri, etc, enter the game…
53
In what follows…
•
We will use a simplistic example to introduce the
main technical concepts
•
The details will be for later during the course
54
The rough structure of data integration
1. Map the various data onto an abstract data
representation
•
make the data independent of its internal representation…
2. Merge the resulting representations
3. Start making queries on the whole!
•
queries that could not have been done on the individual data
sets
55
Predicate logic
Predicate logic is the foundation for stating facts
about the data.
Facts are represented as subject, predicate, object:
the predicate is the relationship between the
subject and the object.
By adding more facts we can link data to other data.
By linking data together with facts we are really
building a graph that can be used to answer key
questions that would be hard or impossible to answer
without a semantic web.
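To make this concrete, here is a minimal sketch using Apache Jena (the Java toolkit that appears later in this deck); every URI and property name in it is invented for illustration:

import org.apache.jena.rdf.model.*;

public class Facts {
    public static void main(String[] args) {
        Model m = ModelFactory.createDefaultModel();
        Resource book   = m.createResource("http://example.org/book/GlassPalace");
        Resource author = m.createResource("http://example.org/person/AmitavGhosh");
        Property writtenBy = m.createProperty("http://example.org/terms/", "writtenBy");
        Property livesIn   = m.createProperty("http://example.org/terms/", "livesIn");

        // two subject-predicate-object facts; the shared "author" node links them into a graph
        m.add(book, writtenBy, author);
        m.add(author, livesIn, m.createResource("http://example.org/place/SomeCity"));

        m.write(System.out, "TURTLE");   // print the resulting little graph
    }
}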
56
A simplified bookstore data (dataset “A”)
ID                   Author   Title              Publisher   Year
ISBN 0-00-651409-X   id_xyz   The Glass Palace   id_qpr      2000

ID       Name            Home Page
id_xyz   Ghosh, Amitav   http://www.amitavghosh.com

ID       Publ. Name       City
id_qpr   Harper Collins   London
57
1st: export your data as a set of relations
58
Some notes on exporting the data
•
Relations form a graph
•
the nodes refer to the “real” data or contain some literal
•
how the graph is represented in the machine is immaterial for
now
•
Data export does not necessarily mean physical
conversion of the data
•
relations can be generated on-the-fly at query time
•
via SQL “bridges”
•
scraping HTML pages
•
extracting data from Excel sheets
•
etc.
•
One can export part of the data
59
Another bookstore data (dataset “F”)
(an Excel sheet; the Auteur and Traducteur cells refer to the names in rows 12 and 13)

Row 1:  ID                   Titre                   Original
Row 2:  ISBN 2020386682      Le Palais des miroirs   ISBN-0-00-651409-X

Row 6:  ID                   Auteur   Traducteur
Row 7:  ISBN-0-00-651409-X   A12      A13

Row 11: Nom
Row 12: Ghosh, Amitav
Row 13: Besse, Christianne
60
2nd: export your second set of data
61
3rd: start merging your data
62
3rd: start merging your data (cont.)
63
3rd: merge identical resources
64
Start making queries…
•
User of data “F” can now ask queries like:
•
“give me the title of the original”
•
well, … « donnes-moi le titre de l’original »
•
This information is not in the dataset “F”…
•
…but can be retrieved by merging with dataset “A”!
65
Resource Description Framework
RDF and OWL
RDF is the way we write the facts. Facts can be
added as new information is collected or new
relationships are created.
A relational database has one big flaw: it presupposes
that you know the relationships before you build the
database.
In a semantic web you can add facts about your data
as new information is collected.
https://www.w3.org/TR/rdf-concepts/
https://www.w3.org/2001/sw/wiki/OWL
66
Parliament and GeoSPARQL
Parliament is a triple store where you store your
facts.
http://parliament.semwebcentral.org/
GeoSPARQL is a geographic query language that
allows you to query the RDF data. It supports
spatio-temporal reasoning.
http://www.opengeospatial.org/standards/geosparql
67
Demo
http://data.ordnancesurvey.co.uk/datasets/opennames
68
However, more can be achieved…
•
We “feel” that a:author and f:auteur should be
the same
•
But an automatic merge does not know that!
•
Let us add some extra information to the merged
data:
•
a:author same as f:auteur
•
both identify a “Person”
•
a term that a community may have already defined:
•
a “Person” is uniquely identified by his/her name and, say,
homepage
•
it can be used as a “category” for certain types of resources
69
3rd revisited: use the extra knowledge
70
Start making richer queries!
•
User of dataset “F” can now query:
•
“donnes-moi la page d’accueil de l’auteur de l’originale”
•
well… “give me the home page of the original’s ‘auteur’”
•
The information is not in datasets “F” or “A”…
•
…but was made available by:
•
merging datasets “A” and datasets “F”
•
adding three simple extra statements as an extra “glue”
71
Combine with different datasets
•
Using, e.g., the “Person”, the dataset can be
combined with other sources
•
For example, data in Wikipedia can be extracted
using dedicated tools
•
e.g., the “dbpedia” project can extract the “infobox”
information from Wikipedia already…
72
Merge with Wikipedia data
73
Merge with Wikipedia data
74
Merge with Wikipedia data
75
Is that surprising?
•
It may look like it but, in fact, it should not be…
•
What happened via automatic means is done every
day by Web users!
•
The difference: a bit of extra rigour so that
machines could do this, too
76
What did we do?
•
We combined different datasets that
•
are somewhere on the web
•
are of different formats (MySQL, Excel sheets, XHTML, etc)
•
have different names for relations
•
We could combine the data because some URI-s
were identical (the ISBN-s in this case)
•
We could add some simple additional information
(the “glue”), possibly using common terminologies
that a community has produced
•
As a result, new relations could be found and
retrieved
77
It could become even more powerful
•
We could add extra knowledge to the merged
datasets
•
e.g., a full classification of various types of library data
•
geographical information
•
etc.
•
This is where ontologies, extra rules, etc, come in
•
ontologies/rule sets can be relatively simple and small, or
huge, or anything in between…
•
Even more powerful queries can be asked as a
result
78
What did we do? (cont)
79
The Basis: RDF
80
RDF triples
•
Let us begin to formalize what we did!
•
we “connected” the data…
•
but a simple connection is not enough… data should be
named somehow
•
hence the RDF Triples: a labelled connection between two
resources
81
RDF triples (cont.)
•
An RDF Triple (s,p,o) is such that:
•
“s”, “p” are URI-s, ie, resources on the Web; “o” is a URI or
a literal
•
“s”, “p”, and “o” stand for “subject”, “property”, and “object”
•
here is the complete triple:
•
RDF is a general model for such triples (with
machine readable formats like RDF/XML, Turtle, N3,
RXR, …)
(<http://…isbn…6682>, <http://…/original>, <http://…isbn…409X>)
82
RDF triples (cont.)
•
Resources can use any URI, e.g.:
•
http://www.example.org/file.xml#element(home)
•
http://www.example.org/file.html#home
•
http://www.example.org/file2.xml#xpath1(//q[@a=b])
•
URI-s can also denote non Web entities:
•
http://www.ivan-herman.net/me is me
•
not my home page, not my publication list, but me
•
RDF triples form a directed, labelled graph
83
A simple RDF example (in RDF/XML)
<rdf:Description rdf:about="http://…/isbn/2020386682">
<f:titre xml:lang="fr">Le palais des miroirs</f:titre>
<f:original rdf:resource="http://…/isbn/000651409X"/>
</rdf:Description>
(Note: namespaces are used to simplify the URI-s)
84
A simple RDF example (in Turtle)
<http://…/isbn/2020386682>
f:titre "Le palais des miroirs"@fr ;
f:original <http://…/isbn/000651409X> .
85
“Internal” nodes
•
Consider the following statement:
•
“the publisher is a «thing» that has a name and an address”
•
Until now, nodes were identified with a URI. But…
•
…what is the URI of «thing»?
86
Internal identifier (“blank nodes”)
•
Syntax is serialization dependent
•
A234 is invisible from outside (it is not a “real”
URI!); it is an internal identifier for a resource
<rdf:Description rdf:about="http://…/isbn/000651409X">
<a:publisher rdf:nodeID="A234"/>
</rdf:Description>
<rdf:Description rdf:nodeID="A234">
<a:p_name>HarpersCollins</a:p_name>
<a:city>London</a:city>
</rdf:Description>
<http://…/isbn/2020386682> a:publisher _:A234.
_:A234 a:p_name "HarpersCollins".
87
Blank nodes: the system can also do it
•
Let the system create a “nodeID” internally (you do
not really care about the name…)
<rdf:Description rdf:about="http://…/isbn/000651409X">
<a:publisher>
<rdf:Description>
<a:p_name>HarpersCollins</a:p_name>
…
</rdf:Description>
</a:publisher>
</rdf:Description>
88
Same in Turtle
<http://…/isbn/000651409X> a:publisher [
a:p_name "HarpersCollins";
…
].
89
Blank nodes: some more remarks
•
Blank nodes require attention when merging
•
blank nodes with identical nodeID-s in different graphs are
different
•
implementations must be careful…
•
Many applications prefer not to use blank nodes
and define new URI-s “on-the-fly”
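A small sketch of the two options, assuming Apache Jena; the publisher URI pattern and property name are invented:

import java.util.UUID;
import org.apache.jena.rdf.model.*;

public class BlankOrMinted {
    public static void main(String[] args) {
        Model m = ModelFactory.createDefaultModel();
        Property pName = m.createProperty("http://example.org/terms/", "p_name");

        // option 1: a blank node -- no global identifier, only meaningful inside this graph
        Resource blankPublisher = m.createResource();
        m.add(blankPublisher, pName, "HarpersCollins");

        // option 2: mint a URI "on-the-fly", so other graphs can link to it later
        Resource mintedPublisher =
            m.createResource("http://example.org/publisher/" + UUID.randomUUID());
        m.add(mintedPublisher, pName, "HarpersCollins");

        m.write(System.out, "TURTLE");
    }
}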
90
RDF in programming practice
•
For example, using Java+Jena (HP’s Bristol Lab):
•
a “Model” object is created
•
the RDF file is parsed and results stored in the Model
•
the Model offers methods to retrieve:
•
triples
•
(property,object) pairs for a specific subject
•
(subject,property) pairs for specific object
•
etc.
•
the rest is conventional programming…
•
Similar tools exist in Python, PHP, etc.
91
Jena example
// create a model
Model model = new ModelMem();
Resource subject = model.createResource("URI_of_Subject");
// 'in' refers to the input file
model.read(new InputStreamReader(in));
StmtIterator iter = model.listStatements(subject, null, null);
while (iter.hasNext()) {
    Statement st = iter.next();
    Property  p  = st.getPredicate();
    RDFNode   o  = st.getObject();
    do_something(p, o);
}
92
Merge in practice
•
Environments merge graphs automatically
•
e.g., in Jena, the Model can load several files
•
the load merges the new statements automatically
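A minimal sketch of such a merge, assuming a current Apache Jena; the two file names are placeholders:

import org.apache.jena.rdf.model.*;

public class MergeExample {
    public static void main(String[] args) {
        Model merged = ModelFactory.createDefaultModel();

        // each read() parses a file and adds its triples to the same model;
        // identical URIs end up as the same node, so the graphs are merged
        merged.read("file:datasetA.ttl", "TURTLE");
        merged.read("file:datasetF.ttl", "TURTLE");

        System.out.println("triples after merge: " + merged.size());
    }
}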
93
Example: integrate experimental data
•
Goal: reuse of older
experimental data
•
Keep data in
databases or XML,
just export key “fact”
as RDF
•
Use a faceted
browser to visualize
and interact with the
result
Courtesy of Nigel Wilkinson, Lee Harland, Pfizer Ltd, Melliyal Annamalai, Oracle (SWEO Case Study)
94
One level higher up
(RDFS, Datatypes)
95
Need for RDF schemas
•
First step towards the “extra knowledge”:
•
define the terms we can use
•
what restrictions apply
•
what extra relationships are there?
•
Officially: “RDF Vocabulary Description Language”
•
the term “Schema” is retained for historical reasons…
96
Classes, resources, …
•
Think of well known traditional ontologies or
taxonomies:
•
use the term “novel”
•
“every novel is a fiction”
•
“«The Glass Palace» is a novel”
•
etc.
•
RDFS defines resources and classes:
•
everything in RDF is a “resource”
•
“classes” are also resources, but…
•
…they are also a collection of possible resources (i.e.,
“individuals”)
•
“fiction”, “novel”, …
97
Classes, resources, … (cont.)
•
Relationships are defined among classes and
resources:
•
“typing”: an individual belongs to a specific class
•
“«The Glass Palace» is a novel”
•
to be more precise: “«http://.../000651409X» is a novel”
•
“subclassing”: all instances of one are also the instances of
the other (“every novel is a fiction”)
•
RDFS formalizes these notions in RDF
98
Classes, resources in RDF(S)
•
RDFS defines the meaning of these terms
•
(these are all special URI-s, we just use the namespace
abbreviation)
99
Schema example in RDF/XML
•
The schema part:
<rdf:Description rdf:ID="Novel">
<rdf:type
rdf:resource="http://www.w3.org/2000/01/rdf-schema#Class"/>
</rdf:Description>
•
The RDF data on a specific novel:
<rdf:Description rdf:about="http://…/isbn/000651409X">
<rdf:type rdf:resource="http://…/bookSchema.rdf#Novel"/>
</rdf:Description>
100
Further remarks on types
•
A resource may belong to several classes
•
rdf:type is just a property…
•
“«The Glass Palace» is a novel, but «The Glass Palace» is
also an «inventory item»…”
•
i.e., it is not like a datatype!
•
The type information may be very important for
applications
•
e.g., it may be used for a categorization of possible nodes
•
probably the most frequently used RDF property…
•
(remember the “Person” in our example?)
101
Inferred properties
•
the triple
(<http://…/isbn/000651409X> rdf:type #Fiction)
is not in the original RDF data…
•
…but can be inferred from the RDFS rules
•
RDFS environments return that triple, too
102
Inference: let us be formal…
•
The RDF Semantics document has a list of (33)
entailment rules:
•
“if such and such triples are in the graph, add this and this”
•
do that recursively until the graph does not change
•
The relevant rule for our example:
If:
uuu rdfs:subClassOf xxx .
vvv rdf:type uuu .
Then add:
vvv rdf:type xxx .
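A small sketch of this rule in action, assuming Apache Jena’s built-in RDFS reasoner; the Novel/Fiction URIs are invented to mirror the book example:

import org.apache.jena.rdf.model.*;
import org.apache.jena.vocabulary.RDF;
import org.apache.jena.vocabulary.RDFS;

public class RdfsEntailment {
    public static void main(String[] args) {
        String ns = "http://example.org/bookSchema#";
        Model data = ModelFactory.createDefaultModel();
        Resource novel   = data.createResource(ns + "Novel");
        Resource fiction = data.createResource(ns + "Fiction");
        Resource book    = data.createResource("http://example.org/isbn/000651409X");

        data.add(novel, RDFS.subClassOf, fiction);   // uuu rdfs:subClassOf xxx
        data.add(book, RDF.type, novel);             // vvv rdf:type uuu

        // wrap the data in an RDFS inference model
        InfModel inf = ModelFactory.createRDFSModel(data);
        System.out.println(inf.contains(book, RDF.type, fiction));   // entailed triple: true
    }
}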
103
Properties
•
Property is a special class (rdf:Property)
•
properties are also resources identified by URI-s
•
There is also a possibility for a “sub-property”
•
all resources bound by the “sub” are also bound by the other
•
Range and domain of properties can be specified
•
i.e., what type of resources serve as object and subject
104
Property specification serialized
•
In RDF/XML:
<rdf:Property rdf:ID="title">
<rdfs:domain rdf:resource="#Fiction"/>
<rdfs:range rdf:resource="http://...#Literal"/>
</rdf:Property>
•
In Turtle:
:title
rdf:type rdf:Property;
rdfs:domain :Fiction;
rdfs:range rdfs:Literal.
105
What does this mean?
•
Again, new relations can be deduced. Indeed, if
:title
rdf:type rdf:Property;
rdfs:domain :Fiction;
rdfs:range rdfs:Literal.
<http://…/isbn/000651409X> :title "The Glass Palace" .
•
then the system can infer that:
<http://…/isbn/000651409X> rdf:type :Fiction .
106
Literals
•
Literals may have a data type
•
floats, integers, booleans, etc, defined in XML Schemas
•
full XML fragments
•
(Natural) language can also be specified
107
Examples for datatypes
<http://…/isbn/000651409X>
:page_number "543"^^xsd:integer ;
:publ_date "2000"^^xsd:gYear ;
:price "6.99"^^xsd:float .
108
A bit of RDFS can take you far…
•
Remember the power of merge?
•
We could have used, in our example:
•
f:auteur is a subproperty of a:author and vice versa
(although we will see other ways to do that…)
•
Of course, in some cases, more complex
knowledge is necessary (see later…)
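A sketch of how such “glue” could be added and exploited, again assuming Jena’s RDFS reasoner; the a:/f: namespaces are placeholders standing in for the two bookstore vocabularies:

import org.apache.jena.rdf.model.*;
import org.apache.jena.vocabulary.RDFS;

public class SubPropertyGlue {
    public static void main(String[] args) {
        Model data = ModelFactory.createDefaultModel();
        Property aAuthor = data.createProperty("http://bookstoreA.example/terms/", "author");
        Property fAuteur = data.createProperty("http://bookstoreF.example/terms/", "auteur");
        Resource book    = data.createResource("http://example.org/isbn/000651409X");
        Resource person  = data.createResource("http://www.amitavghosh.com");

        data.add(book, fAuteur, person);                 // the French data only uses f:auteur
        data.add(fAuteur, RDFS.subPropertyOf, aAuthor);  // the "glue" statement

        InfModel inf = ModelFactory.createRDFSModel(data);
        System.out.println(inf.contains(book, aAuthor, person));   // true: a:author is inferred
    }
}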
109
Example: find the right experts at NASA
•
Expertise locator for nearly 70,000 NASA civil
servants, using RDF integration techniques over 6
or 7 geographically distributed databases, data
sources, and web services…
Michael Grove, Clark & Parsia, LLC, and Andrew Schain, NASA, (SWEO Case Study)
110
How to get RDF Data?
(Microformats, GRDDL, RDFa)
111
Simple approach
•
Write RDF/XML or Turtle “manually”
•
In some cases that is necessary, but it really does
not scale…
112
RDF with XHTML
•
Obviously, a huge source of information
•
By adding some “meta” information, the same
source can be reused for, eg, data integration,
better mashups, etc
•
typical example: your personal information, like address,
should be readable for humans and processable by
machines
•
Two solutions have emerged:
•
extract the structure from the page and convert the content
into RDF
•
add RDF statements directly into XHTML via RDFa
113
Extract RDF
•
Use intelligent “scrapers” or “wrappers” to extract a
structure (hence RDF) from Web pages or XML
files…
•
… and then generate RDF automatically (e.g., via
an XSLT script)
114
Formalizing the scraper approach: GRDDL
•
GRDDL formalizes the scraper approach. For
example:
<html xmlns="http://www.w3.org/1999/">
<head profile="http://www.w3.org/2003/g/data-view">
<title>Some Document</title>
<link rel="transformation" href="http:…/dc-extract.xsl"/>
<meta name="DC.Subject" content="Some subject"/>
...
</head>
...
<span class="date">2006-01-02</span>
...
</html>
•
yields, through dc-extract.xsl:
<>
dc:subject "Some subject";
dc:date "2006-01-02" .
115
GRDDL
•
The transformation itself has to be provided for
each set of conventions
•
A more general syntax is defined for XML formats
in general (e.g., via the namespace document)
•
a method to get data in other formats to RDF (e.g., XBRL)
116
Example for “structure”: microformats
•
Not a Semantic Web specification, originally
•
there is a separate microformat community
•
Approach: re-use (X)HTML attributes and elements
to add “meta” information
•
typically @abbr, @class, @title, …
•
different community agreements for different applications
117
RDFa
•
RDFa extends (X)HTML a bit by:
•
defining general attributes to add metadata to any element
•
providing an almost complete “serialization” of RDF in
XHTML
•
It is a bit like the microformats/GRDDL approach
but fully generic
118
RDFa example
•
For example:
<div about="http://uri.to.newsitem">
<span property="dc:date">March 23, 2004</span>
<span property="dc:title">Rollers hit casino for £1.3m</span>
By <span property="dc:creator">Steve Bird</span>. See
<a href="http://www.a.b.c/d.avi" rel="dcmtype:MovingImage">
also video footage</a>…
</div>
•
yields, through an RDFa processor:
<http://uri.to.newsitem>
dc:date "March 23, 2004";
dc:title "Rollers hit casino for £1.3m";
dc:creator "Steve Bird";
dcmtype:MovingImage <http://www.a.b.c/d.avi>.
119
Example: Yahoo’s SearchMonkey
•
Search based results may be customized via small
applications
•
Metadata in pages (in RDFa, microformats etc) are
reused
Courtesy of Peter Mika, Yahoo! Research, (SWEO Case Study)
120
Example: RDFa data by the
London Gazette
121
Example: RDFa data by the
London Gazette
122
Bridge to relational databases
•
Data on the Web are mostly stored in databases
•
“Bridges” are being defined:
•
a layer between RDF and the relational data
•
RDB tables are “mapped” to RDF graphs, possibly on the fly
•
different mapping approaches are being used
•
a number of RDB systems offer this facility already (eg,
Oracle, OpenLink, …)
•
A survey on mapping techniques has been
published at W3C
•
W3C plans to engage in a standardization work in
this area
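The sketch below is not one of the standard bridges the slide refers to; it is only an ad-hoc illustration of the underlying idea, assuming JDBC access to a hypothetical books table and using Apache Jena:

import java.sql.*;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

public class NaiveRdbBridge {
    public static void main(String[] args) throws Exception {
        Model m = ModelFactory.createDefaultModel();
        Property title = m.createProperty("http://example.org/terms/", "title");
        Property year  = m.createProperty("http://example.org/terms/", "year");

        try (Connection c = DriverManager.getConnection("jdbc:...");   // placeholder JDBC URL
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery("SELECT isbn, title, year FROM books")) {
            while (rs.next()) {
                // the primary key becomes the URI of the subject; each column becomes a property
                Resource book = m.createResource("http://example.org/isbn/" + rs.getString("isbn"));
                m.add(book, title, rs.getString("title"));
                m.add(book, year,  rs.getString("year"));
            }
        }
        m.write(System.out, "TURTLE");
    }
}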
123
Linking Data
124
Linking Open Data Project
•
Goal: “expose” open datasets in RDF
•
Set RDF links among the data items from different
datasets
•
Set up query endpoints
•
Altogether billions of triples, millions of links…
125
Example data source: DBpedia
•
DBpedia is a community effort to
•
extract structured (“infobox”) information from Wikipedia
•
provide a query endpoint to the dataset
•
interlink the DBpedia dataset with other datasets on the
Web
126
Extracting Wikipedia structured data
@prefix dbpedia: <http://dbpedia.org/resource/>.
@prefix dbterm: <http://dbpedia.org/property/>.
dbpedia:Amsterdam
dbterm:officialName "Amsterdam" ;
dbterm:longd "4" ;
dbterm:longm "53" ;
dbterm:longs "32" ;
...
dbterm:leaderTitle "Mayor" ;
dbterm:leaderName dbpedia:Job_Cohen ;
...
dbterm:areaTotalKm "219" ;
...
dbpedia:ABN_AMRO
dbterm:location dbpedia:Amsterdam ;
...
127
Automatic links among open datasets
<http://dbpedia.org/resource/Amsterdam>
owl:sameAs <http://rdf.freebase.com/ns/...> ;
owl:sameAs <http://sws.geonames.org/2759793> ;
...
<http://sws.geonames.org/2759793>
owl:sameAs <http://dbpedia.org/resource/Amsterdam> ;
wgs84_pos:lat "52.3666667" ;
wgs84_pos:long "4.8833333" ;
geo:inCountry <http://www.geonames.org/countries/#NL> ;
...
Processors can switch automatically from one to the other…
128
The LOD “cloud”, March 2008
129
The LOD “cloud”, September 2008
130
The LOD “cloud”, March 2009
131
Example: mapping application on an iPhone
Courtesy of Chris Bizer and Christian Becker, Freie Universität, Berlin
132
Example: mapping application on an iPhone
Courtesy of Chris Bizer and Christian Becker, Freie Universität, Berlin
133
Query RDF Data
(SPARQL)
134
RDF data access
•
How do I query the RDF data?
•
e.g., how do I get to the DBpedia data?
135
Querying RDF graphs
•
Remember the Jena idiom:
StmtIterator iter = model.listStatements(subject, null, null);
while (iter.hasNext()) {
    Statement st = iter.next();
    Property p = st.getPredicate(); RDFNode o = st.getObject();
    do_something(p, o);
}
•
In practice, more complex queries into the RDF
data are necessary
•
something like: “give me the (a,b) pair of resources, for
which there is an x such that (x parent a) and (b brother x)
holds” (ie, return the uncles)
•
these rules may become quite complex
•
The goal of SPARQL (Query Language for RDF)
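The “uncle” query above could look like this; a sketch assuming Apache Jena’s query engine, with hypothetical :parent and :brother properties and a placeholder data file:

import org.apache.jena.query.*;
import org.apache.jena.rdf.model.*;

public class UncleQuery {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        model.read("file:family.ttl", "TURTLE");      // placeholder data file

        String q =
            "PREFIX : <http://example.org/family#> " +
            "SELECT ?a ?b WHERE { ?x :parent ?a . ?b :brother ?x . }";   // (x parent a), (b brother x)

        try (QueryExecution qe = QueryExecutionFactory.create(q, model)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.nextSolution();
                System.out.println(row.get("b") + " is an uncle of " + row.get("a"));
            }
        }
    }
}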
136
Analyse the Jena example
StmtIterator iter = model.listStatements(subject, null, null);
while (iter.hasNext()) {
    Statement st = iter.next();
    Property p = st.getPredicate(); RDFNode o = st.getObject();
    do_something(p, o);
}
•
The (subject,?p,?o) is a pattern for what we
are looking for (with ?p and ?o as “unknowns”)
137
General: graph patterns
•
The fundamental idea: use graph patterns
•
the pattern contains unbound symbols
•
by binding the symbols, subgraphs of the RDF graph are
selected
•
if there is such a selection, the query returns bound
resources
138
Our Jena example in SPARQL
SELECT ?p ?o
WHERE {subject ?p ?o}
•
The triples in WHERE define the graph pattern,
with ?p and ?o “unbound” symbols
•
The query returns all p,o pairs
139
Simple SPARQL example
SELECT ?isbn ?price ?currency # note: not ?x!
WHERE {?isbn a:price ?x. ?x rdf:value ?price. ?x p:currency ?currency.}
140
Simple SPARQL example
•
Returns:
[[<..49X>,33,£], [<..49X>,50,€], [<..6682>,60,€],
[<..6682>,78,$]]
SELECT ?isbn ?price ?currency # note: not ?x!
WHERE {?isbn a:price ?x. ?x rdf:value ?price. ?x p:currency ?currency.}
141
Pattern constraints
SELECT ?isbn ?price ?currency # note: not ?x!
WHERE { ?isbn a:price ?x. ?x rdf:value ?price. ?x p:currency ?currency.
FILTER(?currency = :€) }
•
Returns: [[<..409X>,50,€], [<..6682>,60,€]]
142
Other SPARQL features
•
Limit the number of returned results; remove
duplicates, sort them, …
•
Optional branches in the query
•
Specify several data sources (via URI-s) within the
query (essentially, a merge!)
•
Construct a graph combining a separate pattern
and the query results
•
Use datatypes and/or language tags when
matching a pattern
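A sketch pulling several of these features into one (invented) query over the two bookstore vocabularies; only the query string matters here:

public class FeatureQuery {
    // DISTINCT, several named sources, an OPTIONAL branch, a language-tag filter,
    // ordering and a result limit, all in one SELECT query
    static final String QUERY =
        "PREFIX a: <http://bookstoreA.example/terms/> " +
        "PREFIX f: <http://bookstoreF.example/terms/> " +
        "SELECT DISTINCT ?book ?title ?translator " +
        "FROM <http://example.org/datasetA.ttl> " +
        "FROM <http://example.org/datasetF.ttl> " +
        "WHERE { " +
        "  ?book a:title ?title . " +
        "  OPTIONAL { ?book f:traducteur ?translator } " +
        "  FILTER ( lang(?title) = \"en\" ) " +
        "} ORDER BY ?title LIMIT 10";

    public static void main(String[] args) {
        System.out.println(QUERY);
    }
}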
143
SPARQL usage in practice
•
SPARQL is usually used over the network
•
separate documents define the protocol and the result
format
•
SPARQL Protocol for RDF with HTTP and SOAP bindings
•
SPARQL results in XML or JSON formats
•
Big datasets usually offer “SPARQL endpoints”
using this protocol
•
typical example: SPARQL endpoint to DBpedia
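A sketch of calling such an endpoint from a program, assuming Apache Jena; the endpoint URL is DBpedia’s public one, and the query simply asks for the English abstract of Amsterdam:

import org.apache.jena.query.*;

public class EndpointQuery {
    public static void main(String[] args) {
        String q =
            "PREFIX dbo: <http://dbpedia.org/ontology/> " +
            "SELECT ?abstract WHERE { " +
            "  <http://dbpedia.org/resource/Amsterdam> dbo:abstract ?abstract . " +
            "  FILTER ( lang(?abstract) = \"en\" ) } LIMIT 1";

        // the query is shipped over HTTP using the SPARQL protocol
        try (QueryExecution qe =
                 QueryExecutionFactory.sparqlService("http://dbpedia.org/sparql", q)) {
            ResultSet rs = qe.execSelect();
            while (rs.hasNext()) {
                System.out.println(rs.nextSolution().getLiteral("abstract").getString());
            }
        }
    }
}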
144
SPARQL as a unifying point
145
Example: integrate Chinese medical data
•
Integration of a large number of TCM databases
•
around 80 databases, around 200,000 records each
•
A visual tool to map databases to the semantic
layer using a specialized ontology
•
Form based query interface for end users
Courtesy of Huajun Chen, Zhejiang University, (SWEO Case Study)
146
Ontologies
(OWL)
147
Ontologies
•
RDFS is useful, but does not solve all possible
requirements
•
Complex applications may want more possibilities:
•
characterization of properties
•
identification of objects with different URI-s
•
disjointness or equivalence of classes
•
construct classes, not only name them
•
can a program reason about some terms? E.g.:
•
“if «Person» resources «A» and «B» have the same
«foaf:email» property, then «A» and «B» are identical”
•
etc.
148
Ontologies (cont.)
•
The term “ontology” is used in this respect:
“defines the concepts and relationships used to describe
and represent an area of knowledge”
•
RDFS can be considered as a simple ontology
language
•
Languages should be a compromise between
•
rich semantics for meaningful applications
•
feasibility, implementability
149
Web Ontology Language = OWL
•
OWL is an extra layer, a bit like RDF Schemas
•
own namespace, own terms
•
it relies on RDF Schemas
•
It is a separate recommendation
•
actually… there is a 2004 version of OWL (“OWL 1”)
•
and there is an update (“OWL 2”) that should be finalized in
2009
•
you will surely hear about it at the conference…
150
OWL is complex…
•
OWL is a large set of additional terms
•
We will not cover the whole thing here…
151
Term equivalences
•
For classes:
•
owl:equivalentClass: two classes have the same
individuals
•
owl:disjointWith: no individuals in common
•
For properties:
•
owl:equivalentProperty
•
remember the a:author vs. f:auteur
•
owl:propertyDisjointWith
•
For individuals:
•
owl:sameAs: two URIs refer to the same concept
(“individual”)
•
owl:differentFrom: negation of owl:sameAs
152
Connecting to French…
153
Typical usage of owl:sameAs
•
Linking our example of Amsterdam from one data
set (DBpedia) to the other (Geonames):
<http://dbpedia.org/resource/Amsterdam>
owl:sameAs <http://sws.geonames.org/2759793>;
•
This is the main mechanism of “Linking” in the
Linking Open Data project
154
Property characterization
•
In OWL, one can characterize the behaviour of
properties (symmetric, transitive, functional, inverse
functional…)
•
One property may be the inverse of another
•
OWL also separates data and object properties
•
“datatype property” means that its range are typed literals
155
What this means is…
•
If the following holds in our triples:
:email rdf:type owl:InverseFunctionalProperty.
<A> :email "mailto:a@b.c".
<B> :email "mailto:a@b.c".
then, processed through OWL, the following holds,
too:
<A> owl:sameAs <B>.
•
I.e., new relationships were discovered again
(beyond what RDFS could do)
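A sketch of this inference, assuming Jena’s built-in OWL rule reasoner covers inverse functional properties; the :email property and the <A>/<B> URIs are invented:

import org.apache.jena.rdf.model.*;
import org.apache.jena.reasoner.ReasonerRegistry;
import org.apache.jena.vocabulary.OWL;
import org.apache.jena.vocabulary.RDF;

public class SameAsFromIfp {
    public static void main(String[] args) {
        Model data = ModelFactory.createDefaultModel();
        Property email = data.createProperty("http://example.org/terms/", "email");
        Resource a = data.createResource("http://example.org/people/A");
        Resource b = data.createResource("http://example.org/people/B");

        data.add(email, RDF.type, OWL.InverseFunctionalProperty);
        data.add(a, email, "mailto:a@b.c");
        data.add(b, email, "mailto:a@b.c");

        // wrap the data in an OWL inference model
        InfModel inf = ModelFactory.createInfModel(ReasonerRegistry.getOWLReasoner(), data);
        System.out.println(inf.contains(a, OWL.sameAs, b));   // expected: true
    }
}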
156
Classes in OWL
•
In RDFS, you can subclass existing classes…
that’s all
•
In OWL, you can construct classes from existing
ones:
•
enumerate its content
•
through intersection, union, complement
•
Etc
157
Classes in OWL (cont)
•
OWL makes a stronger conceptual distinction
between classes and individuals
•
there is a separate term for owl:Class, to make the
difference (a specialization of the RDFS class)
•
individuals are separated into a special class called
owl:Thing
•
Eg, a precise classification would be:
ex:Person rdf:type owl:Class.
<uri-for-Amitav-Ghosh>
rdf:type owl:Thing;
rdf:type ex:Person .
158
Classes contents can be enumerated
•
I.e., the class consists of exactly those
individuals
:£ rdf:type owl:Thing.
:€ rdf:type owl:Thing.
:$ rdf:type owl:Thing.
:Currency
rdf:type owl:Class;
owl:oneOf (:€ :£ :$).
159
Union of classes can be defined
•
Other possibilities: complementOf,
intersectionOf, …
:Novel rdf:type owl:Class.
:Short_Story rdf:type owl:Class.
:Poetry rdf:type owl:Class.
:Literature rdf:type owl:Class;
owl:unionOf (:Novel :Short_Story :Poetry).
160
For example…
If:
:Novel rdf:type owl:Class.
:Short_Story rdf:type owl:Class.
:Poetry rdf:type owl:Class.
:Literature rdf:type owl:Class;
owl:unionOf (:Novel :Short_Story :Poetry).
<myWork> rdf:type :Novel .
then the following holds, too:
<myWork> rdf:type :Literature .
161
It can be a bit more complicated…
If:
:Novel rdf:type owl:Class.
:Short_Story rdf:type owl:Class.
:Poetry rdf:type owl:Class.
:Literature rdf:type owl:Class;
owl:unionOf (:Novel :Short_Story :Poetry).
fr:Roman owl:equivalentClass :Novel .
<myWork> rdf:type fr:Roman .
then, through the combination of different terms,
the following still holds:
<myWork> rdf:type :Literature .
162
What we have so far…
•
The OWL features listed so far are already fairly
powerful
•
E.g., various databases can be linked via
owl:sameAs, functional or inverse functional
properties, etc.
•
Many inferred relationships can be found using a
traditional rule engine
163
However… that may not be enough
•
Very large vocabularies might require even more
complex features
•
typical usage example: definition of all concepts in a health
care environment
•
a major issue: the way classes (i.e., “concepts”) are defined
•
OWL includes those extra features but… the
inference engines become (much) more complex
164
Property value restrictions
•
Classes are created by restricting the property
values on its individuals
•
For example: how would I characterize a “listed
price”?
•
it is a price (which may be a general term), but one that is
given in one of the “allowed” currencies (say, €, £, or $)
•
more formally:
•
the value of “p:currency”, when applied to a resource on
listed price, must be of one of those values…
•
…thereby defining the class of “listed price”
165
Restrictions formally
•
Defines a class of type owl:Restriction with a
•
reference to the property that is constrained
•
definition of the constraint itself
•
One can, e.g., subclass from this node when
defining a particular class
:Listed_Price rdfs:subClassOf [
rdf:type owl:Restriction;
owl:onProperty p:currency;
owl:allValuesFrom :Currency
].
166
Possible usage…
If:
:Listed_Price rdfs:subClassOf [
rdf:type owl:Restriction;
owl:onProperty p:currency;
owl:allValuesFrom :Currency
].
:price rdf:type :Listed_Price .
:price p:currency <something> .
then the following holds:
<something> rdf:type :Currency .
167
Other restrictions
•
allValuesFrom could be replaced by:
•
someValuesFrom
•
e.g., I could have said: there should be a price given in at
least one of those currencies
•
hasValue, when restricted to one specific value
•
Cardinality restrictions: instead of looking at the
values of properties, their number is considered
•
eg, a specific property should occur exactly once
168
But: OWL is hard!
•
The combination of class constructions with various
restrictions is extremely powerful
•
What we have so far follows the same logic as
before
•
extend the basic RDF and RDFS possibilities with new
features
•
define their semantics, ie, what they “mean” in terms of
relationships
•
expect to infer new relationships based on those
•
However… a full inference procedure is hard
•
not implementable with simple rule engines, for example
169
OWL “species”
•
OWL species come to the fore:
•
restricting which terms can be used and under what
circumstances (restrictions)
•
if one abides to those restrictions, then simpler inference
engines can be used
•
They reflect compromises: expressibility vs.
implementability
170
OWL Full
•
No constraints on any of the constructs
•
owl:Class is just syntactic sugar for rdfs:Class
•
owl:Thing is equivalent to rdfs:Resource
•
this means that:
•
Class can also be an individual, a URI can denote a property
as well as a Class
•
e.g., it is possible to talk about class of classes, apply properties
on them
•
etc
•
etc.
•
Extension of RDFS in all respects
•
But: no system may exist that infers everything one
might expect
171
OWL Full usage
•
Nevertheless OWL Full is essential
•
it gives a generic framework to express many things
•
some applications just need to express and interchange
terms (with possible scruffiness)
•
Applications may control what terms are used and
how
•
in fact, they may define their own sub-language via, eg, a
vocabulary
•
thereby ensuring a manageable inference procedure
172
OWL DL
•
A number of restrictions are defined
•
classes, individuals, object and datatype properties, etc, are
fairly strictly separated
•
object properties must be used with individuals
•
i.e., properties are really used to create relationships between
individuals
•
no characterization of datatype properties
•
…
•
But: well known inference algorithms exist!
173
Examples for restrictions
•
The following is not “legal” OWL DL:
<q> rdf:type <A>.        # A is a class, q is an individual
<r> rdf:type <q>.        # error: q cannot be used for a class, too
<A> ex:something <B>.    # error: properties are for individuals only
<q> ex:something <s>.    # error: same property cannot be used as
<p> ex:something "54".   #        object and datatype property
174
OWL DL usage
•
Abiding to the restrictions means that very large
ontologies can be developed that require precise
procedures
•
eg, in the medical domain, biological research, energy
industry, financial services (eg, XBRL), etc
•
the number of classes and properties described this way
can go up to the many thousands
•
OWL DL has become a language of choice to
define and manage formal ontologies in general
•
even if their usage is not necessarily on the Web
175
OWL 2 defines further species
a.k.a. “profiles”
•
Further restrictions on how terms can be used and
what inferences can be expected
•
Classification and instance queries in polynomial time:
OWL-EL
•
Implementable on top of conventional relational database
engines: OWL-QL
•
Implementable on top of traditional rule engines: OWL-RL
176
Ontology development
•
The hard work is to create the ontologies
•
requires a good knowledge of the area to be described
•
some communities have good expertise already (e.g.,
librarians)
•
OWL is just a tool to formalize ontologies
•
large scale ontologies are often developed in a community
process
•
Ontologies should be shared and reused
•
can be via the simple namespace mechanisms…
•
…or via explicit import
177
Must I use large ontologies?
•
NO!!!
•
Many applications are possible with RDFS and
just a little bit of OWL
•
a few terms, whose meaning is defined in OWL, and that
application can handle directly
•
OWL RL is a step to create such a generic OWL level
•
Big ontologies can be expensive (both in time and
money); use them only when really necessary!
178
Ontologies examples
•
eClassOwl: eBusiness ontology for products and
services, 75,000 classes and 5,500 properties
•
National Cancer Institute’s ontology: about 58,000
classes
•
Open Biomedical Ontologies Foundry: a collection
of ontologies, including the Gene Ontology to
describe gene and gene product attributes in any
organism or protein sequence and annotation
terminology and data (UniProt)
•
BioPAX: for biological pathway data
179
Example: improved search via ontology
•
Search results are re-ranked using ontologies
•
Related terms are highlighted, usable for further
search
180
Example: improved search via ontology
•
Same dataset, different ontology
•
(ontology is on non-animal experimentation)
181
Example: Eli Lilly’s target assessment tool
•
Prioritization of
drug target,
integrating data
from different
sources and
formats
•
Integration,
search via
ontologies
(proprietary and
public)
Courtesy of Susie Stephens, Eli Lilly (SWEO Case Study)
182
What have we achieved?
(putting all this together)
183
Other SW technologies
•
There are other technologies that we do not have
time for here
•
find RDF data associated with general URI-s: POWDER
•
bridge to thesauri, glossaries, etc: SKOS
•
use Rule engines on RDF data
184
Remember the integration example?
185
Same with what we learned
186
Example: personalized tourist itinerary
•
Integration of
relevant data in
Zaragoza (using
RDF and ontologies)
•
Use rules on the
RDF data to provide
a proper itinerary
Courtesy of Jesús Fernández, Mun. of Zaragoza, and Antonio Campos, CTIC (SWEO Use Case)
187
Available documents, resources
188
Available specifications: Primers, Guides
•
The “RDF Primer” and the “OWL Guide” give a
formal introduction to RDF(S) and OWL
•
GRDDL and RDFa Primers have also been
published
•
The W3C Semantic Web Activity Homepage has
links to all the specifications:
•
http://www.w3.org/2001/sw/
189
“Core” vocabularies
•
There are also a number of widely used “core
vocabularies”
•
Dublin Core: about information resources, digital libraries,
with extensions for rights, permissions, digital right
management
•
FOAF: about people and their organizations
•
DOAP: on the descriptions of software projects
•
SIOC: Semantically-Interlinked Online Communities
•
vCard in RDF
•
…
•
One should never forget: ontologies/vocabularies
must be shared and reused!
190
Some books
•
G. Antoniou and F. van Harmelen: Semantic Web
Primer, 2nd edition, 2008
•
D. Allemang and J. Hendler: Semantic Web for the
Working Ontologist, 2008
•
Jeffrey Pollock: Semantic Web for Dummies, 2009
•
…
See the separate Wiki page collecting book references:
http://esw.w3.org/topic/SwBooks
191
Further information
•
Planet RDF aggregates a number of SW blogs:
•
http://planetrdf.com/
•
Semantic Web Interest Group
•
a forum for developers with an archived (and public) mailing
list, and a constant IRC presence on freenode.net#swig
•
anybody can sign up on the list:
•
http://www.w3.org/2001/sw/interest/
192
Lots of Tools (not an exhaustive list!)
•
Categories:
•
Triple Stores
•
Inference engines
•
Converters
•
Search engines
•
Middleware
•
CMS
•
Semantic Web browsers
•
Development environments
•
Semantic Wikis
•
…
•
Some names:
•
Jena, AllegroGraph, Mulgara,
Sesame, flickurl, …
•
TopBraid Suite, Virtuoso
environment, Falcon, Drupal 7,
Redland, Pellet, …
•
Disco, Oracle 11g, RacerPro,
IODT, Ontobroker, OWLIM, Tallis
Platform, …
•
RDF Gateway, RDFLib, Open
Anzo, DartGrid, Zitgist, Ontotext,
Protégé, …
•
Thetus publisher, SemanticWorks,
SWI-Prolog, RDFStore…
•
…
193
Conclusions
•
The Semantic Web is about creating a Web of
Data
•
There is a great and very active user and
developer community, with new applications
•
witness the size and diversity of this event
194
Thank you for your attention!
These slides are also available on the Web:
http://www.w3.org/2009/Talks/0615-SanJose-tutorial-IH/
195
Work Cited:
https://www.w3.org/2009/Talks/0615-SanJose-tutorial-
https://my.usgs.gov/confluence/display/cdi/Semantic+W
http://data.ordnancesurvey.co.uk/datasets/opennames
Plats mosaic presentationPlats mosaic presentation
Plats mosaic presentationGIS Colorado
 
Upgrading a web map framework
Upgrading a web map frameworkUpgrading a web map framework
Upgrading a web map frameworkGIS Colorado
 
Spike Photo Measurement and Mobile Data Collection (James Pardue)
Spike Photo Measurement and Mobile Data Collection (James Pardue)Spike Photo Measurement and Mobile Data Collection (James Pardue)
Spike Photo Measurement and Mobile Data Collection (James Pardue)GIS Colorado
 
Using GIS to Visualize and Analyze Environmental Time-Series Data as Raster M...
Using GIS to Visualize and Analyze Environmental Time-Series Data as Raster M...Using GIS to Visualize and Analyze Environmental Time-Series Data as Raster M...
Using GIS to Visualize and Analyze Environmental Time-Series Data as Raster M...GIS Colorado
 
Street Centerline Maintenance and NG9-1-1 (Walker Feigal-Henry)
Street Centerline Maintenance and NG9-1-1 (Walker Feigal-Henry)Street Centerline Maintenance and NG9-1-1 (Walker Feigal-Henry)
Street Centerline Maintenance and NG9-1-1 (Walker Feigal-Henry)GIS Colorado
 
Pitkin Maps & More (Mary Lakner)
Pitkin Maps & More (Mary Lakner)Pitkin Maps & More (Mary Lakner)
Pitkin Maps & More (Mary Lakner)GIS Colorado
 
Flask, ArcPy, and You (Mike Davlantes)
Flask, ArcPy, and You (Mike Davlantes)Flask, ArcPy, and You (Mike Davlantes)
Flask, ArcPy, and You (Mike Davlantes)GIS Colorado
 
Automating Regional Data Integration with Python & ArcPy (Heather Widlund)
Automating Regional Data Integration with Python & ArcPy (Heather Widlund)Automating Regional Data Integration with Python & ArcPy (Heather Widlund)
Automating Regional Data Integration with Python & ArcPy (Heather Widlund)GIS Colorado
 
The San Luis Valley GIS/GPS Authority (Pete Magee)
The San Luis Valley GIS/GPS Authority (Pete Magee)The San Luis Valley GIS/GPS Authority (Pete Magee)
The San Luis Valley GIS/GPS Authority (Pete Magee)GIS Colorado
 
GIS in Local Government (Dave Murray)
GIS in Local Government (Dave Murray)GIS in Local Government (Dave Murray)
GIS in Local Government (Dave Murray)GIS Colorado
 
Highland Garden Village Community GIS Data Viewer (Andrea Santoro, Laura Atki...
Highland Garden Village Community GIS Data Viewer (Andrea Santoro, Laura Atki...Highland Garden Village Community GIS Data Viewer (Andrea Santoro, Laura Atki...
Highland Garden Village Community GIS Data Viewer (Andrea Santoro, Laura Atki...GIS Colorado
 
The Shape of the 2020 Census (Jim Castagneri)
The Shape of the 2020 Census (Jim Castagneri)The Shape of the 2020 Census (Jim Castagneri)
The Shape of the 2020 Census (Jim Castagneri)GIS Colorado
 
Giant State Maps (Rebecca Theobold)
Giant State  Maps (Rebecca Theobold)Giant State  Maps (Rebecca Theobold)
Giant State Maps (Rebecca Theobold)GIS Colorado
 
Unique & Powerful Geospatial Field Data Collection Solutions (Zach Edwards)
Unique & Powerful Geospatial Field Data Collection Solutions (Zach Edwards)Unique & Powerful Geospatial Field Data Collection Solutions (Zach Edwards)
Unique & Powerful Geospatial Field Data Collection Solutions (Zach Edwards)GIS Colorado
 

More from GIS Colorado (20)

GISCO Fall 2018: Colorado 811: Changes and Challenges – Brian Collison
GISCO Fall 2018: Colorado 811: Changes and Challenges – Brian CollisonGISCO Fall 2018: Colorado 811: Changes and Challenges – Brian Collison
GISCO Fall 2018: Colorado 811: Changes and Challenges – Brian Collison
 
GISCO Fall 2018: Senate Bill 18-167 and GIS – Dave Murray
GISCO Fall 2018: Senate Bill 18-167 and GIS – Dave MurrayGISCO Fall 2018: Senate Bill 18-167 and GIS – Dave Murray
GISCO Fall 2018: Senate Bill 18-167 and GIS – Dave Murray
 
GISCO Fall 2018: Bike Network Equity: A GIS and Qualitative Analysis of Ameri...
GISCO Fall 2018: Bike Network Equity: A GIS and Qualitative Analysis of Ameri...GISCO Fall 2018: Bike Network Equity: A GIS and Qualitative Analysis of Ameri...
GISCO Fall 2018: Bike Network Equity: A GIS and Qualitative Analysis of Ameri...
 
State GIS Update
State GIS UpdateState GIS Update
State GIS Update
 
Locating patented mining claims in San Juan County, Colorado
Locating patented mining claims in San Juan County, ColoradoLocating patented mining claims in San Juan County, Colorado
Locating patented mining claims in San Juan County, Colorado
 
PMP Certification for GIS
PMP Certification for GISPMP Certification for GIS
PMP Certification for GIS
 
Plats mosaic presentation
Plats mosaic presentationPlats mosaic presentation
Plats mosaic presentation
 
Upgrading a web map framework
Upgrading a web map frameworkUpgrading a web map framework
Upgrading a web map framework
 
Spike Photo Measurement and Mobile Data Collection (James Pardue)
Spike Photo Measurement and Mobile Data Collection (James Pardue)Spike Photo Measurement and Mobile Data Collection (James Pardue)
Spike Photo Measurement and Mobile Data Collection (James Pardue)
 
Using GIS to Visualize and Analyze Environmental Time-Series Data as Raster M...
Using GIS to Visualize and Analyze Environmental Time-Series Data as Raster M...Using GIS to Visualize and Analyze Environmental Time-Series Data as Raster M...
Using GIS to Visualize and Analyze Environmental Time-Series Data as Raster M...
 
Street Centerline Maintenance and NG9-1-1 (Walker Feigal-Henry)
Street Centerline Maintenance and NG9-1-1 (Walker Feigal-Henry)Street Centerline Maintenance and NG9-1-1 (Walker Feigal-Henry)
Street Centerline Maintenance and NG9-1-1 (Walker Feigal-Henry)
 
Pitkin Maps & More (Mary Lakner)
Pitkin Maps & More (Mary Lakner)Pitkin Maps & More (Mary Lakner)
Pitkin Maps & More (Mary Lakner)
 
Flask, ArcPy, and You (Mike Davlantes)
Flask, ArcPy, and You (Mike Davlantes)Flask, ArcPy, and You (Mike Davlantes)
Flask, ArcPy, and You (Mike Davlantes)
 
Automating Regional Data Integration with Python & ArcPy (Heather Widlund)
Automating Regional Data Integration with Python & ArcPy (Heather Widlund)Automating Regional Data Integration with Python & ArcPy (Heather Widlund)
Automating Regional Data Integration with Python & ArcPy (Heather Widlund)
 
The San Luis Valley GIS/GPS Authority (Pete Magee)
The San Luis Valley GIS/GPS Authority (Pete Magee)The San Luis Valley GIS/GPS Authority (Pete Magee)
The San Luis Valley GIS/GPS Authority (Pete Magee)
 
GIS in Local Government (Dave Murray)
GIS in Local Government (Dave Murray)GIS in Local Government (Dave Murray)
GIS in Local Government (Dave Murray)
 
Highland Garden Village Community GIS Data Viewer (Andrea Santoro, Laura Atki...
Highland Garden Village Community GIS Data Viewer (Andrea Santoro, Laura Atki...Highland Garden Village Community GIS Data Viewer (Andrea Santoro, Laura Atki...
Highland Garden Village Community GIS Data Viewer (Andrea Santoro, Laura Atki...
 
The Shape of the 2020 Census (Jim Castagneri)
The Shape of the 2020 Census (Jim Castagneri)The Shape of the 2020 Census (Jim Castagneri)
The Shape of the 2020 Census (Jim Castagneri)
 
Giant State Maps (Rebecca Theobold)
Giant State  Maps (Rebecca Theobold)Giant State  Maps (Rebecca Theobold)
Giant State Maps (Rebecca Theobold)
 
Unique & Powerful Geospatial Field Data Collection Solutions (Zach Edwards)
Unique & Powerful Geospatial Field Data Collection Solutions (Zach Edwards)Unique & Powerful Geospatial Field Data Collection Solutions (Zach Edwards)
Unique & Powerful Geospatial Field Data Collection Solutions (Zach Edwards)
 

Recently uploaded

"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Mark Simos
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionDilum Bandara
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningLars Bell
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii SoldatenkoFwdays
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxLoriGlavin3
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxhariprasad279825
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteDianaGray10
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024Lonnie McRorey
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsMiki Katsuragi
 

Recently uploaded (20)

"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An Introduction
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine Tuning
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptx
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 

Introduction to the Semantic Web

  • 1. 1 Introduction to the Semantic Web (tutorial) Lucas Kaim Thelen July 2018 Lucas.Kaim@maine.edu Modified from Ivan Herman, W3C talk
  • 2. 2 Let’s organize a trip to Budapest using the Web!
  • 3. 3 You try to find a proper flight with …
  • 4. 4 … a big, reputable airline, or …
  • 5. 5 … the airline of the target country, or …
  • 6. 6 … or a low cost one
  • 7. 7 You have to find a hotel, so you look for…
  • 8. 8 … a really cheap accommodation, or …
  • 9. 9 … or a really luxurious one, or …
  • 11. 11 oops, that is no good, the page is in Hungarian that almost nobody understands, but…
  • 12. 12 … this one could work
  • 13. 13 Of course, you could decide to trust a specialized site…
  • 14. 14 … like this one, or…
  • 16. 16 You may want to know something about Budapest; look for some photographs…
  • 19. 19 … or you can look at mine
  • 20. 20 … or a (social) travel site
  • 21. 21 What happened here? • You had to consult a large number of sites, all different in style, purpose, possibly language… • You had to mentally integrate all those information to achieve your goals • We all know that, sometimes, this is a long and tedious process!
  • 22. 22 • All those pages are only tips of respective icebergs: • the real data is hidden somewhere in databases, XML files, Excel sheets, … • you have only access to what the Web page designers allow you to see
  • 23. 23 • Specialized sites (Expedia, TripAdvisor) do a bit more: • they gather and combine data from other sources (usually with the approval of the data owners) • but they still control how you see those sources • But sometimes you want to personalize: access the original data and combine it yourself!
  • 24. 24 Here is another example…
  • 25. 25 Another example: social sites. I have a list of “friends” by…
  • 29. 29 … and, of course, Facebook
  • 30. 30 • I had to type in and connect with friends again and again for each site independently • This is even worse then before: feed the icebergs, but I still do not have an easy access to data…
  • 31. 31 What would we like to have? • Use the data on the Web the same way as we do with documents: • be able to link to data (independently of their presentation) • use that data the way I want (present it, mine it, etc) • agents, programs, scripts, etc, should be able to interpret part of that data
  • 32. 32 Put it another way… • We would like to extend the current Web to a “Web of data”: • allow for applications to exploit the data directly
  • 33. 33 But wait! Isn’t what mashup sites are already doing?
  • 35. 35 • In some ways, yes, and that shows the huge power of what such Web of data provides • But mashup sites are forced to do very ad-hoc jobs • various data sources expose their data via Web Services • each with a different API, a different logic, different structure • these sites are forced to reinvent the wheel many times because there is no standard way of doing things
  • 36. 36 Put it another way (again)… • We would like to extend the current Web to a standard way for a “Web of data”
  • 37. 37 But what does this mean? • What makes the current (document) Web work? • people create different documents • they give an address to it (ie, a URI) and make it accessible to others on the Web
  • 38. 38 Steven’s site on Amsterdam (done for some visiting friends)
  • 39. 39 Then some magic happens… • Others discover the site and they link to it • The more they link to it, the more important and well known the page becomes • remember, this is what, eg, Google exploits! • This is the “Network effect”: some pages become important, and others begin to rely on it even if the author did not expect it…
  • 40. 40 This could be expected…
  • 41. 41 but this one, from the other side of the Globe, was not…
  • 42. 42 What would that mean for a Web of Data? • Lessons learned: we should be able to: • “publish” the data to make it known on the Web • standard ways should be used instead of ad-hoc approaches • the analogous approach to documents: give URI-s to the data • make it possible to “link” to that URI from other sources of data (not only Web pages) • ie, applications should not be forced to make targeted developments to access the data • generic, standard approaches should suffice • and let the network effect work its way…
  • 43. 43 But it is a little bit more complicated • On the traditional Web, humans are implicitly taken into account • A Web link has a “context” that a person may use
  • 44. 44 Eg: address field on my page:
  • 45. 45 … leading to this page
  • 46. 46 • A human understands that this is my institution’s home page • He/she knows what it means (realizes that it is a research institute in Amsterdam) • On a Web of Data, something is missing; machines can’t make sense of the link alone
  • 47. 47 • New lesson learned: • extra information (“label”) must be added to a link: “this links to my institution, which is a research institute” • this information should be machine readable • this is a characterization (or “classification”) of both the link and its target • in some cases, the classification should allow for some limited “reasoning”
  • 48. 48 Let us put it together • What we need for a Web of Data: • use URI-s to publish data, not only full documents • allow the data to link to other data • characterize/classify the data and the links (the “terms”) to convey some extra meaning • and use standards for all these!
  • 49. 49 Information Theory What is the difference between information and data? Information is usable data. Data by itself does not have meaning, which is why it is not information. No one really cares about the data as such; the information that can be learned from the data is what truly matters. So what can this tell us about the future?
  • 50. 50 So What is the Semantic Web?
  • 51. 51 It is a collection of standard technologies to realize a Web of Data
  • 52. 52 • It is that simple… • Of course, the devil is in the details • a common model has to be provided for machines to describe, query, etc, the data and their connections • the “classification” of the terms can become very complex for specific knowledge areas: this is where ontologies, thesauri, etc, enter the game…
  • 53. 53 In what follows… • We will use a simplistic example to introduce the main technical concepts • The details will be for later during the course
  • 54. 54 The rough structure of data integration 1. Map the various data onto an abstract data representation • make the data independent of its internal representation… 2. Merge the resulting representations 3. Start making queries on the whole! • queries that could not have been done on the individual data sets
  • 55. 55 Predicate logic Predicate logic is the foundation for building facts about the data. Facts are represented as subject, predicate, object; the predicate is the relationship between the subject and the object. By adding more facts we can link data to other data. By linking data together with facts we are really building a graph that can be used to answer questions that would be hard or impossible to answer without a semantic web.
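    A minimal sketch of this idea in plain Python (the facts below are purely illustrative, not part of the tutorial's data): each fact is a (subject, predicate, object) tuple, and following the links between facts answers a question that no single fact contains.

      # Facts as (subject, predicate, object) tuples; together they form a small graph.
      facts = [
          ("The_Glass_Palace", "written_by", "Amitav_Ghosh"),
          ("Amitav_Ghosh",     "born_in",    "Calcutta"),
          ("Calcutta",         "located_in", "India"),
      ]

      def objects(subject, predicate):
          # All o such that (subject, predicate, o) is a known fact.
          return [o for s, p, o in facts if s == subject and p == predicate]

      # Walk two links: in which country was the author of "The Glass Palace" born?
      for author in objects("The_Glass_Palace", "written_by"):
          for city in objects(author, "born_in"):
              print(author, city, objects(city, "located_in"))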
  • 56. 56 A simplified bookstore data (dataset “A”)
     Books:       ID                 | Author | Title            | Publisher | Year
                  ISBN 0-00-651409-X | id_xyz | The Glass Palace | id_qpr    | 2000
     Authors:     ID     | Name          | Home Page
                  id_xyz | Ghosh, Amitav | http://www.amitavghosh.com
     Publishers:  ID     | Publ. Name     | City
                  id_qpr | Harper Collins | London
  • 57. 57 1st : export your data as a set of relations
  • 58. 58 Some notes on exporting the data • Relations form a graph • the nodes refer to the “real” data or contain some literal • how the graph is represented in machine is immaterial for now • Data export does not necessarily mean physical conversion of the data • relations can be generated on-the-fly at query time • via SQL “bridges” • scraping HTML pages • extracting data from Excel sheets • etc. • One can export part of the data
  • 59. 59 Another bookstore data (dataset “F”)
     Books:    ID              | Titre                 | Traducteur | Original
               ISBN 2020386682 | Le Palais des miroirs | A13        | ISBN 0-00-651409-X
     Authors:  ID                 | Auteur
               ISBN 0-00-651409-X | A12
     Persons:  ID  | Nom
               A12 | Ghosh, Amitav
               A13 | Besse, Christianne
  • 60. 60 2nd : export your second set of data
  • 62. 62 3rd : start merging your data (cont.)
  • 64. 64 Start making queries… • User of data “F” can now ask queries like: • “give me the title of the original” • well, … « donnes-moi le titre de l’original » • This information is not in the dataset “F”… • …but can be retrieved by merging with dataset “A”!
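    A minimal sketch of this merge-and-query step using Python's rdflib (the URIs and prefixes below are invented stand-ins for the example's elided identifiers, not the tutorial's actual data):

      from rdflib import Graph

      DATA_F = """
      @prefix f: <http://books.example/fr/> .
      <http://books.example/isbn/2020386682>
          f:titre "Le Palais des miroirs"@fr ;
          f:original <http://books.example/isbn/000651409X> .
      """

      DATA_A = """
      @prefix a: <http://books.example/en/> .
      <http://books.example/isbn/000651409X>
          a:title "The Glass Palace" ;
          a:year "2000" .
      """

      g = Graph()
      g.parse(data=DATA_F, format="turtle")   # dataset "F"
      g.parse(data=DATA_A, format="turtle")   # dataset "A"; parsing into the same graph merges them

      # The user of dataset "F" asks for the title of the original.
      QUERY = """
      PREFIX f: <http://books.example/fr/>
      PREFIX a: <http://books.example/en/>
      SELECT ?title WHERE {
          <http://books.example/isbn/2020386682> f:original ?orig .
          ?orig a:title ?title .
      }
      """
      for row in g.query(QUERY):
          print(row.title)    # -> The Glass Palace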
  • 65. 65 Resource Description Framework RDF and OWL RDF is the way we write the facts. Facts can be added as new information is collected or new relationships are created. Relational database theory has one big flaw: it presupposes that you know the relationships before you build the database. In a semantic web you can add facts about your data as new information is collected. https://www.w3.org/TR/rdf-concepts/ https://www.w3.org/2001/sw/wiki/OWL
  • 66. 66 Parliament and GeoSPARQL Parliament is a triple store where you store your facts. http://parliament.semwebcentral.org/ GeoSPARQL is a geographic query language that allows you to query the RDF data. This supports spatio-temporal reasoning. http://www.opengeospatial.org/standards/geosparql
  • 68. 68 However, more can be achieved… • We “feel” that a:author and f:auteur should be the same • But an automatic merge does not know that! • Let us add some extra information to the merged data: • a:author same as f:auteur • both identify a “Person” • a term that a community may have already defined: • a “Person” is uniquely identified by his/her name and, say, homepage • it can be used as a “category” for certain type of resources
  • 69. 69 3rd revisited: use the extra knowledge
  • 70. 70 Start making richer queries! • User of dataset “F” can now query: • “donnes-moi la page d’accueil de l’auteur de l’originale” • well… “give me the home page of the original’s ‘auteur’” • The information is not in datasets “F” or “A”… • …but was made available by: • merging datasets “A” and “F” • adding three simple extra statements as an extra “glue”
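    A sketch of the “glue” at work, again with rdflib and invented URIs; the equivalence of a:author and f:auteur is encoded here, for simplicity, as two rdfs:subPropertyOf statements plus one naive inference pass (a real setup would hand this to an RDFS/OWL reasoner):

      from rdflib import Graph, Namespace, RDFS

      A = Namespace("http://books.example/en/")   # hypothetical vocabulary of dataset "A"
      F = Namespace("http://books.example/fr/")   # hypothetical vocabulary of dataset "F"

      g = Graph()
      g.parse(data="""
      @prefix a: <http://books.example/en/> .
      @prefix f: <http://books.example/fr/> .

      # dataset "A": the English record knows the author and the home page
      <http://books.example/isbn/000651409X> a:author <http://books.example/person/ghosh> .
      <http://books.example/person/ghosh>    a:homepage <http://www.amitavghosh.com> .

      # dataset "F": the French record only points at the original
      <http://books.example/isbn/2020386682> f:original <http://books.example/isbn/000651409X> .
      """, format="turtle")

      # The "glue": a:author and f:auteur mean the same thing.
      g.add((A.author, RDFS.subPropertyOf, F.auteur))
      g.add((F.auteur, RDFS.subPropertyOf, A.author))

      # One naive forward-chaining pass over the sub-property glue.
      for sub, _, sup in list(g.triples((None, RDFS.subPropertyOf, None))):
          for s, _, o in list(g.triples((None, sub, None))):
              g.add((s, sup, o))

      # A French user can now ask for the home page of the original's f:auteur.
      q = """
      PREFIX f: <http://books.example/fr/>
      PREFIX a: <http://books.example/en/>
      SELECT ?hp WHERE {
          <http://books.example/isbn/2020386682> f:original ?orig .
          ?orig f:auteur ?auth .
          ?auth a:homepage ?hp .
      }
      """
      for row in g.query(q):
          print(row.hp)   # -> http://www.amitavghosh.com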
  • 71. 71 Combine with different datasets • Using, e.g., the “Person”, the dataset can be combined with other sources • For example, data in Wikipedia can be extracted using dedicated tools • e.g., the “dbpedia” project can extract the “infobox” information from Wikipedia already…
  • 75. 75 Is that surprising? • It may look like it but, in fact, it should not be… • What happened via automatic means is done every day by Web users! • The difference: a bit of extra rigour so that machines could do this, too
  • 76. 76 What did we do? • We combined different datasets that • are somewhere on the web • are of different formats (mysql, excel sheet, XHTML, etc) • have different names for relations • We could combine the data because some URI-s were identical (the ISBN-s in this case) • We could add some simple additional information (the “glue”), possibly using common terminologies that a community has produced • As a result, new relations could be found and retrieved
  • 77. 77 It could become even more powerful • We could add extra knowledge to the merged datasets • e.g., a full classification of various types of library data • geographical information • etc. • This is where ontologies, extra rules, etc, come in • ontologies/rule sets can be relatively simple and small, or huge, or anything in between… • Even more powerful queries can be asked as a result
  • 78. 78 What did we do? (cont)
  • 80. 80 RDF triples • Let us begin to formalize what we did! • we “connected” the data… • but a simple connection is not enough… data should be named somehow • hence the RDF Triples: a labelled connection between two resources
  • 81. 81 RDF triples (cont.) • An RDF Triple (s,p,o) is such that: • “s”, “p” are URI-s, ie, resources on the Web; “o” is a URI or a literal • “s”, “p”, and “o” stand for “subject”, “property”, and “object” • here is the complete triple: (<http://…isbn…6682>, <http://…/original>, <http://…isbn…409X>) • RDF is a general model for such triples (with machine readable formats like RDF/XML, Turtle, N3, RXR, …)
  • 82. 82 RDF triples (cont.) • Resources can use any URI, e.g.: • http://www.example.org/file.xml#element(home) • http://www.example.org/file.html#home • http://www.example.org/file2.xml#xpath1(//q[@a=b]) • URI-s can also denote non Web entities: • http://www.ivan-herman.net/me is me • not my home page, not my publication list, but me • RDF triples form a directed, labelled graph
  • 83. 83 A simple RDF example (in RDF/XML)
     <rdf:Description rdf:about="http://…/isbn/2020386682">
       <f:titre xml:lang="fr">Le palais des miroirs</f:titre>
       <f:original rdf:resource="http://…/isbn/000651409X"/>
     </rdf:Description>
     (Note: namespaces are used to simplify the URI-s)
  • 84. 84 A simple RDF example (in Turtle)
     <http://…/isbn/2020386682> f:titre "Le palais des miroirs"@fr ;
                                f:original <http://…/isbn/000651409X> .
  • 85. 85 “Internal” nodes • Consider the following statement: • “the publisher is a «thing» that has a name and an address” • Until now, nodes were identified with a URI. But… • …what is the URI of «thing»?
  • 86. 86 Internal identifier (“blank nodes”) • Syntax is serialization dependent • A234 is invisible from outside (it is not a “real” URI!); it is an internal identifier for a resource
     <rdf:Description rdf:about="http://…/isbn/000651409X">
       <a:publisher rdf:nodeID="A234"/>
     </rdf:Description>
     <rdf:Description rdf:nodeID="A234">
       <a:p_name>HarpersCollins</a:p_name>
       <a:city>London</a:city>
     </rdf:Description>
     <http://…/isbn/2020386682> a:publisher _:A234.
     _:A234 a:p_name "HarpersCollins".
  • 87. 87 Blank nodes: the system can also do it • Let the system create a “nodeID” internally (you do not really care about the name…)
     <rdf:Description rdf:about="http://…/isbn/000651409X">
       <a:publisher>
         <rdf:Description>
           <a:p_name>HarpersCollins</a:p_name>
           …
         </rdf:Description>
       </a:publisher>
     </rdf:Description>
  • 88. 88 Same in Turtle
     <http://…/isbn/000651409X> a:publisher [ a:p_name "HarpersCollins"; … ].
  • 89. 89 Blank nodes: some more remarks • Blank nodes require attention when merging • blank nodes with identical nodeID-s in different graphs are different • implementations must be careful… • Many applications prefer not to use blank nodes and define new URI-s “on-the-fly”
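    A tiny rdflib sketch of the pitfall (namespace and labels invented): naively copying triples from two different graphs while reusing the same nodeID conflates what should remain two distinct blank nodes.

      from rdflib import Graph, BNode, Literal, Namespace

      EX = Namespace("http://example.org/")   # hypothetical namespace

      g1, g2 = Graph(), Graph()
      g1.add((BNode("A234"), EX.p_name, Literal("HarpersCollins")))
      g2.add((BNode("A234"), EX.city,   Literal("London")))

      merged = Graph()
      for t in g1:          # a naive, label-preserving copy...
          merged.add(t)
      for t in g2:
          merged.add(t)

      # ...collapses the two blank nodes into one; a correct merge would rename one of them.
      print(len(set(s for s, _, _ in merged)))   # 1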
  • 90. 90 RDF in programming practice • For example, using Java+Jena (HP’s Bristol Lab): • a “Model” object is created • the RDF file is parsed and results stored in the Model • the Model offers methods to retrieve: • triples • (property,object) pairs for a specific subject • (subject,property) pairs for specific object • etc. • the rest is conventional programming… • Similar tools exist in Python, PHP, etc.
  • 91. 91 Jena example
     // create a model
     Model model = new ModelMem();
     Resource subject = model.createResource("URI_of_Subject");
     // 'in' refers to the input file
     model.read(new InputStreamReader(in));
     StmtIterator iter = model.listStatements(subject,null,null);
     while(iter.hasNext()) {
       st = iter.next();
       p  = st.getProperty();
       o  = st.getObject();
       do_something(p,o);
     }
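    Roughly the same idiom in Python with rdflib (file name, subject URI and the processing function are placeholders, as in the Jena snippet):

      from rdflib import Graph, URIRef

      def do_something(p, o):          # stand-in for real processing
          print(p, o)

      model = Graph()
      model.parse("input.rdf")         # hypothetical RDF/XML input file
      subject = URIRef("http://example.org/URI_of_Subject")

      for _, p, o in model.triples((subject, None, None)):
          do_something(p, o)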
  • 92. 92 Merge in practice • Environments merge graphs automatically • e.g., in Jena, the Model can load several files • the load merges the new statements automatically
  • 93. 93 Example: integrate experimental data • Goal: reuse of older experimental data • Keep data in databases or XML, just export key “fact” as RDF • Use a faceted browser to visualize and interact with the result Courtesy of Nigel Wilkinson, Lee Harland, Pfizer Ltd, Melliyal Annamalai, Oracle (SWEO Case Study)
  • 94. 94 One level higher up (RDFS, Datatypes)
  • 95. 95 Need for RDF schemas • First step towards the “extra knowledge”: • define the terms we can use • what restrictions apply • what extra relationships are there? • Officially: “RDF Vocabulary Description Language” • the term “Schema” is retained for historical reasons…
  • 96. 96 Classes, resources, … • Think of well known traditional ontologies or taxonomies: • use the term “novel” • “every novel is a fiction” • “«The Glass Palace» is a novel” • etc. • RDFS defines resources and classes: • everything in RDF is a “resource” • “classes” are also resources, but… • …they are also a collection of possible resources (i.e., “individuals”) • “fiction”, “novel”, …
  • 97. 97 Classes, resources, … (cont.) • Relationships are defined among classes and resources: • “typing”: an individual belongs to a specific class • “«The Glass Palace» is a novel” • to be more precise: “«http://.../000651409X» is a novel” • “subclassing”: all instances of one are also the instances of the other (“every novel is a fiction”) • RDFS formalizes these notions in RDF
  • 98. 98 Classes, resources in RDF(S) • RDFS defines the meaning of these terms • (these are all special URI-s, we just use the namespace abbreviation)
  • 99. 99 Schema example in RDF/XML • The schema part:
     <rdf:Description rdf:ID="Novel">
       <rdf:type rdf:resource="http://www.w3.org/2000/01/rdf-schema#Class"/>
     </rdf:Description>
     • The RDF data on a specific novel:
     <rdf:Description rdf:about="http://…/isbn/000651409X">
       <rdf:type rdf:resource="http://…/bookSchema.rdf#Novel"/>
     </rdf:Description>
  • 100. 100 Further remarks on types • A resource may belong to several classes • rdf:type is just a property… • “«The Glass Palace» is a novel, but «The Glass Palace» is also an «inventory item»…” • i.e., it is not like a datatype! • The type information may be very important for applications • e.g., it may be used for a categorization of possible nodes • probably the most frequently used RDF property… • (remember the “Person” in our example?)
  • 101. 101 Inferred properties • The triple (<http://…/isbn/000651409X> rdf:type #Fiction) is not in the original RDF data… • …but can be inferred from the RDFS rules • RDFS environments return that triple, too
  • 102. 102 Inference: let us be formal… • The RDF Semantics document has a list of (33) entailment rules: • “if such and such triples are in the graph, add this and this” • do that recursively until the graph does not change • The relevant rule for our example:
     If:       uuu rdfs:subClassOf xxx .
               vvv rdf:type uuu .
     Then add: vvv rdf:type xxx .
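    As a sketch, the same rule can be applied by hand with rdflib (the schema and ISBN URIs are stand-ins for the example's elided ones); a real RDFS environment does this, together with the other entailment rules, for you:

      from rdflib import Graph, RDF, RDFS

      g = Graph()
      g.parse(data="""
      @prefix :     <http://books.example/schema#> .
      @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
      :Novel rdfs:subClassOf :Fiction .
      <http://books.example/isbn/000651409X> a :Novel .
      """, format="turtle")

      # Apply "uuu rdfs:subClassOf xxx . vvv rdf:type uuu . => vvv rdf:type xxx ."
      # until the graph stops growing.
      changed = True
      while changed:
          changed = False
          for cls, _, supercls in list(g.triples((None, RDFS.subClassOf, None))):
              for inst in list(g.subjects(RDF.type, cls)):
                  if (inst, RDF.type, supercls) not in g:
                      g.add((inst, RDF.type, supercls))
                      changed = True

      # The graph now also contains: <http://books.example/isbn/000651409X> rdf:type :Fiction .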
  • 103. 103 Properties • Property is a special class (rdf:Property) • properties are also resources identified by URI-s • There is also a possibility for a “sub-property” • all resources bound by the “sub” are also bound by the other • Range and domain of properties can be specified • i.e., what type of resources serve as object and subject
  • 104. 104 Property specification serialized • In RDF/XML:
     <rdf:Property rdf:ID="title">
       <rdfs:domain rdf:resource="#Fiction"/>
       <rdfs:range rdf:resource="http://...#Literal"/>
     </rdf:Property>
     • In Turtle:
     :title rdf:type rdf:Property;
            rdfs:domain :Fiction;
            rdfs:range rdfs:Literal.
  • 105. 105 What does this mean? • Again, new relations can be deduced. Indeed, if
     :title rdf:type rdf:Property;
            rdfs:domain :Fiction;
            rdfs:range rdfs:Literal.
     <http://…/isbn/000651409X> :title "The Glass Palace" .
     • then the system can infer that:
     <http://…/isbn/000651409X> rdf:type :Fiction .
  • 106. 106 Literals • Literals may have a data type • floats, integers, booleans, etc, defined in XML Schemas • full XML fragments • (Natural) language can also be specified
  • 107. 107 Examples for datatypes
     <http://…/isbn/000651409X>
         :page_number "543"^^xsd:integer ;
         :publ_date "2000"^^xsd:gYear ;
         :price "6.99"^^xsd:float .
  • 108. 108 A bit of RDFS can take you far… • Remember the power of merge? • We could have used, in our example: • f:auteur is a subproperty of a:author and vice versa (although we will see other ways to do that…) • Of course, in some cases, more complex knowledge is necessary (see later…)
  • 109. 109 Example: find the right experts at NASA • Expertise locater for nearly 70,000 NASA civil servants, using RDF integration techniques over 6 or 7 geographically distributed databases, data sources, and web services… Michael Grove, Clark & Parsia, LLC, and Andrew Schain, NASA, (SWEO Case Study)
  • 110. 110 How to get RDF Data? (Microformats, GRDDL, RDFa)
  • 111. 111 Simple approach • Write RDF/XML or Turtle “manually” • In some cases that is necessary, but it really does not scale…
  • 112. 112 RDF with XHTML • Obviously, a huge source of information • By adding some “meta” information, the same source can be reused for, eg, data integration, better mashups, etc • typical example: your personal information, like address, should be readable for humans and processable by machines • Two solutions have emerged: • extract the structure from the page and convert the content into RDF • add RDF statements directly into XHTML via RDFa
  • 113. 113 Extract RDF • Use intelligent “scrapers” or “wrappers” to extract a structure (hence RDF) from Web pages or XML files… • … and then generate RDF automatically (e.g., via an XSLT script)
  • 114. 114 Formalizing the scraper approach: GRDDL • GRDDL formalizes the scraper approach. For example:
     <html xmlns="http://www.w3.org/1999/xhtml">
       <head profile="http://www.w3.org/2003/g/data-view">
         <title>Some Document</title>
         <link rel="transformation" href="http:…/dc-extract.xsl"/>
         <meta name="DC.Subject" content="Some subject"/>
         ...
       </head>
       ...
       <span class="date">2006-01-02</span>
       ...
     </html>
     • yields, through dc-extract.xsl:
     <> dc:subject "Some subject"; dc:date "2006-01-02" .
  • 115. 115 GRDDL • The transformation itself has to be provided for each set of conventions • A more general syntax is defined for XML formats in general (e.g., via the namespace document) • a method to get data in other formats to RDF (e.g., XBRL)
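    One way to run such a transformation in Python is lxml's XSLT support (a sketch with placeholder file names): parse the well-formed XHTML page, apply the stylesheet the page points at, and the output is the extracted RDF.

      from lxml import etree

      page      = etree.parse("some-document.xhtml")        # the (well-formed) XHTML page
      transform = etree.XSLT(etree.parse("dc-extract.xsl"))  # the transformation the page links to
      rdf_data  = transform(page)                            # the extracted RDF

      print(str(rdf_data))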
  • 116. 116 Example for “structure”: microformats • Not a Semantic Web specification, originally • there is a separate microformat community • Approach: re-use (X)HTML attributes and elements to add “meta” information • typically @abbr, @class, @title, … • different community agreements for different applications
  • 117. 117 RDFa • RDFa extends (X)HTML a bit by: • defining general attributes to add metadata to any elements • provides an almost complete “serialization” of RDF in XHTML • It is a bit like the microformats/GRDDL approach but fully generic
  • 118. 118 RDFa example • For example:
     <div about="http://uri.to.newsitem">
       <span property="dc:date">March 23, 2004</span>
       <span property="dc:title">Rollers hit casino for £1.3m</span>
       By <span property="dc:creator">Steve Bird</span>.
       See <a href="http://www.a.b.c/d.avi" rel="dcmtype:MovingImage">also video footage</a>…
     </div>
     • yields, through an RDFa processor:
     <http://uri.to.newsitem>
         dc:date "March 23, 2004";
         dc:title "Rollers hit casino for £1.3m";
         dc:creator "Steve Bird";
         dcmtype:MovingImage <http://www.a.b.c/d.avi>.
  • 119. 119 Example: Yahoo’s SearchMonkey • Search based results may be customized via small applications • Metadata in pages (in RDFa, microformats etc) are reused Courtesy of Peter Mika, Yahoo! Research, (SWEO Case Study)
  • 120. 120 Example: RDFa data by the London Gazette
  • 121. 121 Example: RDFa data by the London Gazette
  • 122. 122 Bridge to relational databases • Data on the Web are mostly stored in databases • “Bridges” are being defined: • a layer between RDF and the relational data • RDB tables are “mapped” to RDF graphs, possibly on the fly • different mapping approaches are being used • a number of RDB systems offer this facility already (eg, Oracle, OpenLink, …) • A survey on mapping techniques has been published at W3C • W3C plans to engage in a standardization work in this area
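    A minimal sketch of such a bridge in Python (the database, table and vocabulary names are invented): rows of a relational table are turned into RDF triples on the fly.

      import sqlite3
      from rdflib import Graph, Literal, Namespace, URIRef

      BOOK = Namespace("http://books.example/en/")       # hypothetical vocabulary

      db = sqlite3.connect("books.db")                   # hypothetical database
      g = Graph()
      for isbn, title, year in db.execute("SELECT isbn, title, year FROM books"):
          s = URIRef("http://books.example/isbn/" + str(isbn))
          g.add((s, BOOK.title, Literal(title)))
          g.add((s, BOOK.year,  Literal(year)))

      print(g.serialize(format="turtle"))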
  • 124. 124 Linking Open Data Project • Goal: “expose” open datasets in RDF • Set RDF links among the data items from different datasets • Set up query endpoints • Altogether billions of triples, millions of links…
  • 125. 125 Example data source: DBpedia • DBpedia is a community effort to • extract structured (“infobox”) information from Wikipedia • provide a query endpoint to the dataset • interlink the DBpedia dataset with other datasets on the Web
  • 126. 126 Extracting Wikipedia structured data
     @prefix dbpedia <http://dbpedia.org/resource/>.
     @prefix dbterm <http://dbpedia.org/property/>.
     dbpedia:Amsterdam
         dbterm:officialName “Amsterdam” ;
         dbterm:longd “4” ;
         dbterm:longm “53” ;
         dbterm:longs “32” ;
         ...
         dbterm:leaderTitle “Mayor” ;
         dbterm:leaderName dbpedia:Job_Cohen ;
         ...
         dbterm:areaTotalKm “219” ;
         ...
     dbpedia:ABN_AMRO
         dbterm:location dbpedia:Amsterdam ;
         ...
  • 127. 127 Automatic links among open datasets
     <http://dbpedia.org/resource/Amsterdam>
         owl:sameAs <http://rdf.freebase.com/ns/...> ;
         owl:sameAs <http://sws.geonames.org/2759793> ;
         ...
     <http://sws.geonames.org/2759793>
         owl:sameAs <http://dbpedia.org/resource/Amsterdam> ;
         wgs84_pos:lat “52.3666667” ;
         wgs84_pos:long “4.8833333” ;
         geo:inCountry <http://www.geonames.org/countries/#NL> ;
         ...
     Processors can switch automatically from one to the other…
  • 129. 129 The LOD “cloud”, September 2008
  • 131. 131 Example: mapping application on an iPhone Courtesy of Chris Bizer and Christian Becker, Freie Universität, Berlin
  • 132. 132 Example: mapping application on an iPhone Courtesy of Chris Bizer and Christian Becker, Freie Universität, Berlin
  • 134. 134 RDF data access • How do I query the RDF data? • e.g., how do I get to the DBpedia data?
  • 135. 135 Querying RDF graphs • Remember the Jena idiom:
     StmtIterator iter=model.listStatements(subject,null,null);
     while(iter.hasNext()) {
       st = iter.next();
       p  = st.getProperty();
       o  = st.getObject();
       do_something(p,o);
     }
     • In practice, more complex queries into the RDF data are necessary • something like: “give me the (a,b) pair of resources, for which there is an x such that (x parent a) and (b brother x) holds” (ie, return the uncles) • these rules may become quite complex • The goal of SPARQL (Query Language for RDF)
  • 136. 136 Analyse the Jena example
     StmtIterator iter=model.listStatements(subject,null,null);
     while(iter.hasNext()) {
       st = iter.next();
       p  = st.getProperty();
       o  = st.getObject();
       do_something(p,o);
     }
     • The (subject,?p,?o) is a pattern for what we are looking for (with ?p and ?o as “unknowns”)
  • 137. 137 General: graph patterns • The fundamental idea: use graph patterns • the pattern contains unbound symbols • by binding the symbols, subgraphs of the RDF graph are selected • if there is such a selection, the query returns bound resources
  • 138. 138 Our Jena example in SPARQL
     SELECT ?p ?o
     WHERE {subject ?p ?o}
     • The triples in WHERE define the graph pattern, with ?p and ?o “unbound” symbols • The query returns all p,o pairs
  • 139. 139 Simple SPARQL example
     SELECT ?isbn ?price ?currency   # note: not ?x!
     WHERE {?isbn a:price ?x. ?x rdf:value ?price. ?x p:currency ?currency.}
  • 140. 140 Simple SPARQL example
     SELECT ?isbn ?price ?currency   # note: not ?x!
     WHERE {?isbn a:price ?x. ?x rdf:value ?price. ?x p:currency ?currency.}
     • Returns: [[<..49X>,33,£], [<..49X>,50,€], [<..6682>,60,€], [<..6682>,78,$]]
  • 141. 141 Pattern constraints
     SELECT ?isbn ?price ?currency   # note: not ?x!
     WHERE {
        ?isbn a:price ?x.
        ?x rdf:value ?price.
        ?x p:currency ?currency.
        FILTER (?currency = "€")
     }
     • Returns: [[<..409X>,50,€], [<..6682>,60,€]]
  • 142. 142 Other SPARQL features • Limit the number of returned results; remove duplicates, sort them, … • Optional branches in the query • Specify several data sources (via URI-s) within the query (essentially, a merge!) • Construct a graph combining a separate pattern and the query results • Use datatypes and/or language tags when matching a pattern
  • 143. 143 SPARQL usage in practice • SPARQL is usually used over the network • separate documents define the protocol and the result format • SPARQL Protocol for RDF with HTTP and SOAP bindings • SPARQL results in XML or JSON formats • Big datasets usually offer “SPARQL endpoints” using this protocol • typical example: SPARQL endpoint to DBpedia
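    A bare-bones SPARQL-protocol client, as a sketch: the query is sent to the endpoint over HTTP and the result is requested in the standard JSON format (DBpedia's public endpoint is used here, assuming it is reachable; the query itself is just an illustration).

      import json
      import urllib.parse
      import urllib.request

      QUERY = """
      SELECT ?abstract WHERE {
        <http://dbpedia.org/resource/Amsterdam> <http://dbpedia.org/ontology/abstract> ?abstract .
      } LIMIT 1
      """

      url = "https://dbpedia.org/sparql?" + urllib.parse.urlencode({"query": QUERY})
      req = urllib.request.Request(url, headers={"Accept": "application/sparql-results+json"})
      with urllib.request.urlopen(req) as resp:
          results = json.load(resp)

      for binding in results["results"]["bindings"]:
          print(binding["abstract"]["value"])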
  • 144. 144 SPARQL as a unifying point
  • 145. 145 Example: integrate Chinese medical data • Integration of a large number of TCM databases • around 80 databases, around 200,000 records each • A visual tool to map databases to the semantic layer using a specialized ontology • Form based query interface for end users Courtesy of Huajun Chen, Zhejiang University, (SWEO Case Study)
  • 147. 147 Ontologies • RDFS is useful, but does not solve all possible requirements • Complex applications may want more possibilities: • characterization of properties • identification of objects with different URI-s • disjointness or equivalence of classes • construct classes, not only name them • can a program reason about some terms? E.g.: • “if «Person» resources «A» and «B» have the same «foaf:email» property, then «A» and «B» are identical” • etc.
  • 148. 148 Ontologies (cont.) • The term ontologies is used in this respect: “defines the concepts and relationships used to describe and represent an area of knowledge” • RDFS can be considered as a simple ontology language • Languages should be a compromise between • rich semantics for meaningful applications • feasibility, implementability
  • 149. 149 Web Ontology Language = OWL • OWL is an extra layer, a bit like RDF Schemas • own namespace, own terms • it relies on RDF Schemas • It is a separate recommendation • actually… there is a 2004 version of OWL (“OWL 1”) • and there is an update (“OWL 2”) that should be finalized in 2009 • you will surely hear about it at the conference…
  • 150. 150 OWL is complex… • OWL is a large set of additional terms • We will not cover the whole thing here…
  • 151. 151 Term equivalences • For classes: • owl:equivalentClass: two classes have the same individuals • owl:disjointWith: no individuals in common • For properties: • owl:equivalentProperty • remember the a:author vs. f:auteur • owl:propertyDisjointWith • For individuals: • owl:sameAs: two URIs refer to the same concept (“individual”) • owl:differentFrom: negation of owl:sameAs
  • 153. 153 Typical usage of owl:sameAs • Linking our example of Amsterdam from one data set (DBpedia) to the other (Geonames):
     <http://dbpedia.org/resource/Amsterdam> owl:sameAs <http://sws.geonames.org/2759793> .
     • This is the main mechanism of “Linking” in the Linking Open Data project
  • 154. 154 Property characterization • In OWL, one can characterize the behaviour of properties (symmetric, transitive, functional, inverse functional…) • One property may be the inverse of another • OWL also separates data and object properties • “datatype property” means that its range consists of typed literals
  • 155. 155 What this means is… • If the following holds in our triples:
     :email rdf:type owl:InverseFunctionalProperty.
     <A> :email "mailto:a@b.c".
     <B> :email "mailto:a@b.c".
     then, processed through OWL, the following holds, too:
     <A> owl:sameAs <B>.
     • I.e., new relationships were discovered again (beyond what RDFS could do)
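    A sketch of that inference step with rdflib (the namespace is invented; :email, :A and :B mirror the toy example above; a real OWL reasoner performs this as part of its standard rule set):

      from collections import defaultdict
      from rdflib import Graph, RDF, OWL

      g = Graph()
      g.parse(data="""
      @prefix :    <http://example.org/> .
      @prefix owl: <http://www.w3.org/2002/07/owl#> .
      :email a owl:InverseFunctionalProperty .
      :A :email "mailto:a@b.c" .
      :B :email "mailto:a@b.c" .
      """, format="turtle")

      # For an inverse functional property, subjects sharing the same value denote the same individual.
      for prop in list(g.subjects(RDF.type, OWL.InverseFunctionalProperty)):
          by_value = defaultdict(list)
          for s, _, o in g.triples((None, prop, None)):
              by_value[o].append(s)
          for subjects in by_value.values():
              for other in subjects[1:]:
                  g.add((subjects[0], OWL.sameAs, other))   # e.g. :A owl:sameAs :B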
  • 156. 156 Classes in OWL • In RDFS, you can subclass existing classes… that’s all • In OWL, you can construct classes from existing ones: • enumerate its content • through intersection, union, complement • Etc
  • 157. 157 Classes in OWL (cont) • OWL makes a stronger conceptual distinction between classes and individuals • there is a separate term for owl:Class, to make the difference (a specialization of the RDFS class) • individuals are separated into a special class called owl:Thing • Eg, a precise classification would be:
     ex:Person rdf:type owl:Class.
     <uri-for-Amitav-Ghosh> rdf:type owl:Thing;
                            rdf:type ex:Person .
  • 158. 158 Class contents can be enumerated • I.e., the class consists of exactly those individuals
     :£ rdf:type owl:Thing.
     :€ rdf:type owl:Thing.
     :$ rdf:type owl:Thing.
     :Currency rdf:type owl:Class;
               owl:oneOf (:€ :£ :$).
  • 159. 159 Union of classes can be defined: :Novel rdf:type owl:Class. :Short_Story rdf:type owl:Class. :Poetry rdf:type owl:Class. :Literature rdf:type owl:Class; owl:unionOf (:Novel :Short_Story :Poetry). • Other possibilities: complementOf, intersectionOf, …
  • 160. 160 For example… If: :Novel rdf:type owl:Class. :Short_Story rdf:type owl:Class. :Poetry rdf:type owl:Class. :Literature rdf:type owl:Class; owl:unionOf (:Novel :Short_Story :Poetry). <myWork> rdf:type :Novel . then the following holds, too: <myWork> rdf:type :Literature .
  • 161. 161 It can be a bit more complicated… If: :Novel rdf:type owl:Class. :Short_Story rdf:type owl:Class. :Poetry rdf:type owl:Class. :Literature rdf:type owl:Class; owl:unionOf (:Novel :Short_Story :Poetry). fr:Roman owl:equivalentClass :Novel . <myWork> rdf:type fr:Roman . then, through the combination of different terms, the following still holds: <myWork> rdf:type :Literature .
  • 162. 162 What we have so far… • The OWL features listed so far are already fairly powerful • E.g., various databases can be linked via owl:sameAs, functional or inverse functional properties, etc. • Many inferred relationships can be found using a traditional rule engine
  • 163. 163 However… that may not be enough • Very large vocabularies might require even more complex features • typical usage example: definition of all concepts in a health care environment • a major issue: the way classes (i.e., “concepts”) are defined • OWL includes those extra features but… the inference engines become (much) more complex
  • 164. 164 Property value restrictions • Classes are created by restricting the property values on their individuals • For example: how would I characterize a “listed price”? • it is a price (which may be a general term), but one that is given in one of the “allowed” currencies (say, €, £, or $) • more formally: • the value of “p:currency”, when applied to a listed-price resource, must be one of those values… • …thereby defining the class of “listed price”
  • 165. 165 Restrictions formally • Defines a class of type owl:Restriction with a • reference to the property that is constrained • definition of the constraint itself • One can, e.g., subclass from this node when defining a particular class: :Listed_Price rdfs:subClassOf [ rdf:type owl:Restriction; owl:onProperty p:currency; owl:allValuesFrom :Currency ].
  • 166. 166 Possible usage… If: :Listed_Price rdfs:subClassOf [ rdf:type owl:Restriction; owl:onProperty p:currency; owl:allValuesFrom :Currency ]. :price rdf:type :Listed_Price . :price p:currency <something> . then the following holds: <something> rdf:type :Currency .
  • 167. 167 Other restrictions • allValuesFrom could be replaced by: • someValuesFrom • e.g., I could have said: there should be a price given in at least one of those currencies • hasValue, when restricted to one specific value • Cardinality restrictions: instead of looking at the values of properties, their number is considered • eg, a specific property should occur exactly once
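  Sticking with the (hypothetical) listed-price example and the prefixes used on the previous slides, a someValuesFrom and a cardinality restriction could look roughly like this (p:amount is invented here purely for illustration):
    @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
    # at least one p:currency value must be a :Currency
    :Listed_Price rdfs:subClassOf [ rdf:type owl:Restriction; owl:onProperty p:currency; owl:someValuesFrom :Currency ].
    # exactly one p:amount value (a cardinality restriction)
    :Listed_Price rdfs:subClassOf [ rdf:type owl:Restriction; owl:onProperty p:amount; owl:cardinality "1"^^xsd:nonNegativeInteger ].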
  • 168. 168 But: OWL is hard! • The combination of class constructions with various restrictions is extremely powerful • What we have so far follows the same logic as before • extend the basic RDF and RDFS possibilities with new features • define their semantics, ie, what they “mean” in terms of relationships • expect to infer new relationships based on those • However… a full inference procedure is hard • not implementable with simple rule engines, for example
  • 169. 169 OWL “species” • This is where OWL “species” come in: • restricting which terms can be used and under what circumstances (restrictions) • if one abides by those restrictions, then simpler inference engines can be used • They reflect compromises: expressiveness vs. implementability
  • 170. 170 OWL Full • No constraints on any of the constructs • owl:Class is just syntactic sugar for rdfs:Class • owl:Thing is equivalent to rdfs:Resource • this means that: • a class can also be an individual, and a URI can denote a property as well as a class • e.g., it is possible to talk about classes of classes and apply properties to them • etc. • Extension of RDFS in all respects • But: no system can exist that infers everything one might expect
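  A tiny (assumed) example of the kind of metamodeling OWL Full allows but OWL DL forbids, namely using the same name as both a class and an individual:
    :Species rdf:type owl:Class .
    :Eagle   rdf:type owl:Class ;      # :Eagle is a class…
             rdf:type :Species .       # …and, at the same time, an individual of :Species
    :harry   rdf:type :Eagle .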
  • 171. 171 OWL Full usage • Nevertheless, OWL Full is essential • it gives a generic framework to express many things • some applications just need to express and interchange terms (with possible scruffiness) • Applications may control what terms are used and how • in fact, they may define their own sub-language via, eg, a vocabulary • thereby ensuring a manageable inference procedure
  • 172. 172 OWL DL • A number of restrictions are defined • classes, individuals, object and datatype properties, etc, are fairly strictly separated • object properties must be used with individuals • i.e., properties are really used to create relationships between individuals • no characterization of datatype properties • … • But: well known inference algorithms exist!
  • 173. 173 Examples for restrictions • The following is not “legal” OWL DL: <q> rdf:type <A>. # A is a class, q is an individual <r> rdf:type <q>. # error: q cannot be used for a class, too <A> ex:something <B>. # error: properties are for individuals only <q> ex:something <s>. # error: same property cannot be used as <p> ex:something "54". # object and datatype property
  • 174. 174 OWL DL usage • Abiding by the restrictions means that very large ontologies can be developed that require precise procedures • eg, in the medical domain, biological research, energy industry, financial services (eg, XBRL), etc • the number of classes and properties described this way can go up to many thousands • OWL DL has become a language of choice to define and manage formal ontologies in general • even if their usage is not necessarily on the Web
  • 175. 175 OWL 2 defines further species a.k.a. “profiles” • Further restrictions on how terms can be used and what inferences can be expected • Classification and instance queries in polynomial time: OWL-EL • Implementable on top of conventional relational database engines: OWL-QL • Implementable on top of traditional rule engines: OWL-RL
  • 176. 176 Ontology development • The hard work is to create the ontologies • requires a good knowledge of the area to be described • some communities have good expertise already (e.g., librarians) • OWL is just a tool to formalize ontologies • large scale ontologies are often developed in a community process • Ontologies should be shared and reused • can be via the simple namespace mechanisms… • …or via explicit import
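  Explicit import is a single triple in the ontology header; a sketch with made-up URIs:
    <http://example.org/myOntology> rdf:type owl:Ontology ;
        owl:imports <http://example.org/sharedVocabulary> .   # pull in a shared, reused ontology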
  • 177. 177 Must I use large ontologies? • NO!!! • Many applications are possible with RDFS and just a little bit of OWL • a few terms, whose meaning is defined in OWL, and that applications can handle directly • OWL RL is a step to create such a generic OWL level • Big ontologies can be expensive (both in time and money); use them only when really necessary!
  • 178. 178 Ontologies examples • eClassOwl: eBusiness ontology for products and services, 75,000 classes and 5,500 properties • National Cancer Institute’s ontology: about 58,000 classes • Open Biomedical Ontologies Foundry: a collection of ontologies, including the Gene Ontology (gene and gene product attributes in any organism) and the UniProt protein sequence and annotation terminology and data • BioPAX: for biological pathway data
  • 179. 179 Example: improved search via ontology • Search results are re-ranked using ontologies • Related terms are highlighted, usable for further search
  • 180. 180 Example: improved search via ontology • Same dataset, different ontology • (ontology is on non-animal experimentation)
  • 181. 181 Example: Eli Lilly’s target assessment tool • Prioritization of drug targets, integrating data from different sources and formats • Integration and search via ontologies (proprietary and public) Courtesy of Susie Stephens, Eli Lilly (SWEO Case Study)
  • 182. 182 What have we achieved? (putting all this together)
  • 183. 183 Other SW technologies • There are other technologies that we do not have time for here • find RDF data associated with general URI-s: POWDER • bridge to thesauri, glossaries, etc: SKOS • use Rule engines on RDF data
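  Just to give a flavour of the SKOS bridge to thesauri (concept names invented for the example), a thesaurus entry is only a handful of triples:
    @prefix skos: <http://www.w3.org/2004/02/skos/core#> .
    ex:Mammals rdf:type skos:Concept ;
        skos:prefLabel "mammals"@en ;
        skos:broader  ex:Animals ;     # broader/narrower links, thesaurus style
        skos:related  ex:Zoology .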
  • 185. 185 Same with what we learned
  • 186. 186 Example: personalized tourist itinerary • Integration of relevant data in Zaragoza (using RDF and ontologies) • Use rules on the RDF data to provide a proper itinerary Courtesy of Jesús Fernández, Mun. of Zaragoza, and Antonio Campos, CTIC (SWEO Use Case)
  • 188. 188 Available specifications: Primers, Guides • The “RDF Primer” and the “OWL Guide” give a formal introduction to RDF(S) and OWL • GRDDL and RDFa Primers have also been published • The W3C Semantic Web Activity Homepage has links to all the specifications: • http://www.w3.org/2001/sw/
  • 189. 189 “Core” vocabularies • There are also a number of widely used “core vocabularies” • Dublin Core: about information resources, digital libraries, with extensions for rights, permissions, digital rights management • FOAF: about people and their organizations • DOAP: on the descriptions of software projects • SIOC: Semantically-Interlinked Online Communities • vCard in RDF • … • One should never forget: ontologies/vocabularies must be shared and reused!
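  Reuse simply means mixing those namespaces in your own data; a minimal sketch with a hypothetical resource and person:
    @prefix dc:   <http://purl.org/dc/elements/1.1/> .
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    @prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
    <http://example.org/report42> dc:title "Trip report" ;
        dc:creator <http://example.org/people/jane> .          # made-up resource and person URIs
    <http://example.org/people/jane> rdf:type foaf:Person ;
        foaf:name "Jane Doe" ;
        foaf:mbox <mailto:jane@example.org> .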
  • 190. 190 Some books • G. Antoniou and F. van Harmelen: A Semantic Web Primer, 2nd edition in 2008 • D. Allemang and J. Hendler: Semantic Web for the Working Ontologist, 2008 • Jeffrey Pollock: Semantic Web for Dummies, 2009 • … See the separate Wiki page collecting book references: http://esw.w3.org/topic/SwBooks
  • 191. 191 Further information • Planet RDF aggregates a number of SW blogs: • http://planetrdf.com/ • Semantic Web Interest Group • a forum for developers, with an archived (and public) mailing list and a constant IRC presence on freenode.net (#swig) • anybody can sign up on the list: • http://www.w3.org/2001/sw/interest/
  • 192. 192 Lots of Tools (not an exhaustive list!) • Categories: • Triple Stores • Inference engines • Converters • Search engines • Middleware • CMS • Semantic Web browsers • Development environments • Semantic Wikis • … • Some names: • Jena, AllegroGraph, Mulgara, Sesame, flickurl, … • TopBraid Suite, Virtuoso environment, Falcon, Drupal 7, Redland, Pellet, … • Disco, Oracle 11g, RacerPro, IODT, Ontobroker, OWLIM, Tallis Platform, … • RDF Gateway, RDFLib, Open Anzo, DartGrid, Zitgist, Ontotext, Protégé, … • Thetus publisher, SemanticWorks, SWI-Prolog, RDFStore… • …
  • 193. 193 Conclusions • The Semantic Web is about creating a Web of Data • There is a great and very active user and developer community, with new applications • witness the size and diversity of this event
  • 194. 194 Thank you for your attention! These slides are also available on the Web: http://www.w3.org/2009/Talks/0615-SanJose-tutorial-IH/

Editor's Notes

  1. This is just a generic slide set. Should be adapted, reviewed, possibly with slides removed, for a specific event. Rule of thumb: on the average, a slide is a minute…
  2. I know this comment will get things thrown at me, but no one cares about a map. It is the information that a map conveys that people want, not the map itself.
  3. Various types of databases are accessed, with an RDF transformation of (part of) the data done on the fly. Some of the data may be simple tables; some are the result of continuous background processing analysing the literature (not directly related to the Semantic Web per se). The integration of the data is done on the RDF level, and is viewed via an off-the-shelf (though experimental) faceted browser (Exhibit). Ie, the Semantic Web portion is very simple but allows for a very quick integration of the data on the screen.
  4. They use internal ontologies/vocabularies to describe the knowledge areas, and a combination of the RDF data and that ontology to search through the (integrated) databases for a specific knowledge expertise. The dump is from a faceted browser developed by the company to view result data.
  5. The important point here is that (1) the data becomes available to the world via a unified format (ie, RDF), regardless of how it is stored internally, and (2) the various datasets are interlinked together, ie, they are not independent islands. Dbpedia is probably the most important ‘hub’ in the project.
  6. The right-hand portion of the wiki page is what is referred to as the ‘infobox’. The left side is the dbpedia content as made available.
  7. This is the important point: this is why this is Linking Open Data. What the community does (and this is a community project) is to make agreements on how the different datasets can be linked, and then the linkage is done automatically by the ‘bridges’ that map the public datasets into RDF. Eg, geonames is a public dataset by geonames.org and dbpedia is, well... wikipedia:-)
  8. The various databases around the country are handled by independent bodies. A visual query generator creates a SPARQL query; this is then decomposed to access the individual databases, transformed on the fly into SQL queries, and the results are returned in RDF and recombined for the answer. The system is used in the Chinese Academy of Sciences’ research institute on traditional Chinese medicine. The university is also working on a nation-wide ontology on traditional Chinese medicine that can be combined with the search to improve it. Still in development. That ontology might be bound to western medical ontologies, too (eventually).
  9. Pubmed.org is the ‘Google’ of the medical profession. The results of a search are re-ranked, a better interface is provided, and related terms are also shown, based on public medical ontologies. The left-hand side refers to the Gene Ontology and MeSH ontologies (red highlight on the screen), and the user is able to follow up related terms, too. Produced by a German company (Transinsight).
  10. Pubmed.org is the ‘Google’ of the medical profession. The results of a search are re-ranked, a better interface is provided, and related terms are also shown, based on public medical ontologies. The left-hand side refers to the Gene Ontology and MeSH ontologies (red highlight on the screen), and the user is able to follow up related terms, too. Produced by a German company (Transinsight).
  11. The important point is that the (subsequent) search is not simply done on a (key)word level, like for a traditional search engine, but through the tree of all related terms, where those relations are determined via internal and public ontologies and vocabularies. The screen snapshot illustrates the user interface of the Target Assessment Tool within Lilly Science Grid. The panels to the left of the screen snapshot show that it is possible to directly search for a term or to navigate the ontology to identify a term of interest. The panel to the right of the screen snapshot demonstrates a graph view of the data as it relates to the search term.
  12. The system has been developed and tested for Statoil, which is the largest oil company on the Norwegian Continental Shelf, and covers experiences with over 2,500 drilling operations since the early 90s. The objective of the reuse improvements is to discover relevant experiences that could affect current or planned drilling operations. The shared domain ontology is used for semantic annotation, and for retrieval of information. It is developed collaboratively by the discipline advisors, and covers operations, equipment, events and failure states in drilling operations. It also includes relations between these concepts, for example, to indicate that a particular event may result in a failure state.