Ontology-mediated query answering with data-tractable description logics - INRIA-CEDAR
Recent years have seen an increasing interest in ontology-mediated query answering, in which the semantic knowledge provided by an ontology is exploited when querying data. Adding an ontology has several advantages (e.g. simplifying query formulation, integrating data from different sources, providing more complete answers to queries), but it also makes the query answering task more difficult. In this tutorial, we will give a brief introduction to ontology-mediated query answering using description logic (DL) ontologies. Our focus will be on DLs for which query answering scales polynomially in the size of the data, as these are best suited for applications requiring large amounts of data. We will describe the challenges that arise when evaluating different natural types of queries in the presence of such ontologies, and we will present algorithmic solutions based upon two key concepts, namely, query rewriting and saturation. The lecture will conclude with an overview of recent results and active areas of ongoing research.
The document discusses the RDF data model. The key points are:
1. RDF represents data as a graph of triples consisting of a subject, predicate, and object. Triples can be combined to form an RDF graph.
2. The RDF data model has three types of nodes - URIs to identify resources, blank nodes to represent anonymous resources, and literals for values like text strings.
3. RDF graphs can be merged to integrate data from multiple sources in an automatic way due to RDF's compositional nature.
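The compositional merge described in point 3 can be illustrated with a minimal sketch in plain Python, using tuples as stand-ins for RDF triples (the URIs below are invented for the example):

```python
# Toy illustration of RDF triples: each triple is (subject, predicate, object).
graph_a = {
    ("http://ex.org/alice", "http://xmlns.com/foaf/0.1/knows", "http://ex.org/bob"),
    ("http://ex.org/alice", "http://xmlns.com/foaf/0.1/name", '"Alice"'),
}
graph_b = {
    ("http://ex.org/bob", "http://xmlns.com/foaf/0.1/name", '"Bob"'),
    ("http://ex.org/alice", "http://xmlns.com/foaf/0.1/knows", "http://ex.org/bob"),
}

# Because an RDF graph is just a set of triples, merging two sources is set union;
# the triple asserted by both graphs is deduplicated automatically.
merged = graph_a | graph_b
print(len(merged))  # 3
```

This is exactly why RDF integration is "automatic": no schema alignment is needed just to combine the graphs, only shared URIs.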
Semantic Web, Web of Data, Web 3.0, Linked Data... Some points of reference for... - Antidot
Slides from the presentation given at the Co-lab Semantique organized by the Scribo consortium. The aim was to present the stakes of the Semantic Web in 45-60 minutes.
FIWARE Wednesday Webinars - Introduction to NGSI-LD - FIWARE
Introduction to NGSI-LD Webinar - 27th May 2020
Corresponding webinar recording: https://youtu.be/rZ13IyLpAtA
A data-model driven and linked data first introduction for developers to NGSI-LD and JSON-LD.
Chapter: Core
Difficulty: 3
Audience: Any Technical
Presenter: Jason Fox (Senior Technical Evangelist, FIWARE Foundation)
NGSI-LD provides a more complex data model than NGSIv2 by introducing properties, relationships, and additional metadata. It evolves NGSIv2 to support linked data by making payloads valid JSON-LD. This allows for a navigable knowledge graph compared to the simpler NGSIv2 model. The document discusses the differences between the two models and provides examples of creating and reading entity data in each.
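As a rough sketch of the structural difference (the entity and attribute names below are invented for illustration), the two payload styles can be compared side by side as Python dicts:

```python
# NGSIv2-style entity: attributes carry a type and a value directly.
entity_v2 = {
    "id": "urn:ngsi-ld:Building:store001",  # example id, invented
    "type": "Building",
    "temperature": {"type": "Number", "value": 21.5},
}

# NGSI-LD entity: attributes are typed as Property or Relationship, and an
# @context maps short names to URIs, which makes the payload valid JSON-LD.
entity_ld = {
    "id": "urn:ngsi-ld:Building:store001",
    "type": "Building",
    "temperature": {"type": "Property", "value": 21.5},
    "owner": {"type": "Relationship", "object": "urn:ngsi-ld:Person:bob"},
    "@context": "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld",
}

# Relationships are what make the data navigable as a knowledge graph:
links = [k for k, v in entity_ld.items()
         if isinstance(v, dict) and v.get("type") == "Relationship"]
print(links)  # ['owner']
```

Following each Relationship's `object` URN from entity to entity is what turns a flat context store into the navigable graph the abstract mentions.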
A brief presentation outlining the basics of Elasticsearch for beginners. It can be used to deliver a seminar on Elasticsearch (I used it myself); the presenter is advised to experiment with Elasticsearch beforehand.
This document provides an overview of big data and Hadoop. It introduces big data concepts and architectures, describes the Hadoop ecosystem including its core components of HDFS and MapReduce. It also provides an example of how MapReduce works for a word count problem, splitting the documents, mapping to count word frequencies, and reducing to sum the counts. The document aims to give the reader an understanding of big data and how Hadoop is used for distributed storage and processing of large datasets.
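The word-count example mentioned above can be sketched end to end in a few lines of Python, with each MapReduce phase labeled (a toy single-process illustration, not a distributed implementation):

```python
from collections import defaultdict
from itertools import chain

# Split phase: the input is divided into chunks (here, one "document" per string).
documents = ["big data and hadoop", "hadoop stores big data"]

# Map phase: each chunk is turned into (word, 1) pairs.
def map_phase(doc):
    return [(word, 1) for word in doc.split()]

mapped = list(chain.from_iterable(map_phase(d) for d in documents))

# Shuffle phase: group pairs by key (the framework does this between map and reduce).
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce phase: sum the counts for each word.
counts = {word: sum(vals) for word, vals in groups.items()}
print(counts["big"], counts["hadoop"])  # 2 2
```

In real Hadoop, the map and reduce functions run on different nodes against HDFS blocks; the logic per phase is the same.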
FIWARE Training: Introduction to Smart Data Models - FIWARE
The document introduces the Smart Data Models program which provides standardized data models for various domains. It explains that the program aims to enable agile standardization through contributions from the community. It outlines the governance structure and current status of the program, including the available domains, data models, contributors and tools. Participants are then guided through an exercise to turn a data source into a Smart Data Model by generating a JSON schema, example payload and submitting it as a pull request to the incubated repository on GitHub.
This training camp teaches you how FIWARE technologies and iSHARE, brought together under the umbrella of the i4Trust initiative, can be combined to provide the means for creation of data spaces in which multiple organizations can exchange digital twin data in a trusted and efficient manner, collaborating in the development of innovative services based on data sharing and creating value out of the data they share. SMEs and Digital Innovation Hubs (DIHs) will be equipped with the necessary know-how to use the i4Trust framework for creating data spaces!
Creating a Context-Aware solution, Complex Event Processing with FIWARE Perseo - Fernando Lopez Aguilar
Introduction to Complex Event Processing (CEP). How FIWARE deals with CEP through FIWARE Perseo. How to connect FIWARE Perseo with the FIWARE Orion Context Broker. How to define an event with the Event Processing Language (EPL), and the predefined actions available in FIWARE Perseo.
The data lake has become extremely popular, but there is still confusion on how it should be used. In this presentation I will cover common big data architectures that use the data lake, the characteristics and benefits of a data lake, and how it works in conjunction with a relational data warehouse. Then I’ll go into details on using Azure Data Lake Storage Gen2 as your data lake, and various typical use cases of the data lake. As a bonus I’ll talk about how to organize a data lake and discuss the various products that can be used in a modern data warehouse.
Presentation by Lorenzo Mangani of QXIP at the October 26 SF Bay Area ClickHouse meetup
https://www.meetup.com/San-Francisco-Bay-Area-ClickHouse-Meetup
https://qxip.net/
An online training course run by the FIWARE Foundation in conjunction with the i4Trust project. The core part of this virtual training camp (21-24 June 2021) covered all the necessary skills to develop smart solutions powered by FIWARE. It introduces the basis of Digital Twin programming using linked data concepts - JSON-LD and NGSI-LD and combines these with common smart data models for the sharing and augmentation of context data.
In addition, it covers the supplementary FIWARE technologies used to implement the common functions typically required when architecting a complete smart solution: Identity and Access Management (IAM) functions to secure access to digital twin data, functions enabling the interface with IoT and third-party systems, and the connection with different tools for processing and monitoring current and historical big data.
This 12-hour online training course can be used to obtain a good understanding of FIWARE and NGSI Interfaces and form the basis of studying for the FIWARE expert certification.
Extending this core part, the virtual training camp adds introductory and deep-dive sessions on how FIWARE and iSHARE technologies, brought together under the umbrella of the i4Trust initiative, can be combined to provide the means for the creation of data spaces in which multiple organizations can exchange digital twin data in a trusted and efficient manner, collaborating in the creation of innovative services based on data sharing. In addition, SMEs and Digital Innovation Hubs (DIHs) that go through this complete training and are located in countries eligible under Horizon 2020 will be equipped with the necessary know-how to apply to the recently launched i4Trust Open Call.
A look inside pandas design and development - Wes McKinney
This document summarizes Wes McKinney's presentation on pandas, an open source data analysis library for Python. McKinney is the lead developer of pandas and discusses its design, development, and performance advantages over other Python data analysis tools. He highlights key pandas features like the DataFrame for tabular data, fast data manipulation capabilities, and its use in financial applications. McKinney also discusses his development process, tools like IPython and Cython, and optimization techniques like profiling and algorithm exploration to ensure pandas' speed and reliability.
This is the presentation I gave at the Hadoop User Group Ireland meetup in Dublin. It covers the main ideas of MPP, Hadoop, and distributed systems in general, as well as how to choose the best option for you.
The document discusses the Semantic Web and Resource Description Framework (RDF). It defines the Semantic Web as making web data machine-understandable by describing web resources with metadata. RDF uses triples to describe resources, properties, and relationships. RDF data can be visualized as a graph and serialized in formats like RDF/XML. RDF Schema (RDFS) provides a basic vocabulary for defining classes, properties, and hierarchies to enable reasoning about RDF data.
This training module introduces Resource Description Framework (RDF) for describing data, including representing data as triples, graphs and syntax; it also introduces the SPARQL query language for querying and manipulating RDF data, covering SELECT, CONSTRUCT, DESCRIBE, and ASK query types and the structure of SPARQL queries. The module provides learning objectives and an overview of the content which includes an introduction to RDF and SPARQL with examples and pointers to further resources.
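The core of a SPARQL SELECT, matching a triple pattern against an RDF graph and returning variable bindings, can be sketched in a few lines of Python (the data and variable names are invented for the example; this ignores joins, FILTERs, and the other query forms):

```python
# A tiny triple store: (subject, predicate, object) tuples with invented URIs.
triples = [
    ("ex:alice", "ex:knows", "ex:bob"),
    ("ex:bob", "ex:knows", "ex:carol"),
    ("ex:alice", "ex:age", "42"),
]

def select(pattern, data):
    """Match one triple pattern; terms starting with '?' are variables.
    Returns a list of variable bindings, like a SPARQL SELECT result set."""
    results = []
    for triple in data:
        binding = {}
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value      # bind the variable to this term
            elif term != value:
                break                      # constant mismatch: triple rejected
        else:
            results.append(binding)
    return results

# Analogous to: SELECT ?who ?whom WHERE { ?who ex:knows ?whom }
print(select(("?who", "ex:knows", "?whom"), triples))
```

A real engine additionally joins the bindings of multiple patterns, which is where query optimization comes in.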
FIWARE Wednesday Webinars - How to Design DataModels - FIWARE
How to Design DataModels - 8th May 2019
Corresponding webinar recording: https://youtu.be/T_1DpKf6C_c
Understanding and applying Standard Data Models.
Chapter: Core
Difficulty: 3
Audience: Technical Domain Specific
Presenter: José Manuel Cantera (Senior Standardization Expert, FIWARE Foundation)
This document introduces linked data and discusses how publishing data as linked RDF triples on the web allows for a global linked database. It explains that linked data uses HTTP URIs to identify things and links data from different sources to be queried using SPARQL. Publishing linked data provides benefits like being able to integrate and discover related data on the web. Tools are available to convert existing data or publish new data as linked open data.
Understanding RDF: the Resource Description Framework in Context (1999) - Dan Brickley
Dan Brickley, 3rd European Commission Metadata Workshop, Luxembourg, April 12th 1999
Understanding RDF: the Resource Description Framework in Context
http://ilrt.org/discovery/2001/01/understanding-rdf/
Although RDF is a cornerstone of the semantic web and knowledge graphs, it has not been embraced by everyday programmers and software architects who need to safely create and access well-structured data. There is a lack of the common tools and methodologies that are available in more conventional settings to improve data quality by defining schemas that can later be validated. Two technologies have recently been proposed for RDF validation: Shape Expressions (ShEx) and Shapes Constraint Language (SHACL). In the talk, we will review the history and motivation of both technologies. We will also enumerate some challenges and future work with regard to RDF validation.
This document provides an overview of advanced operations in NGSI-LD (Next Generation Service Interfaces - Linked Data), including:
- Specific headers used in NGSI-LD requests
- Supported content types and best practices for JSON-LD payloads
- Examples of temporal queries, geoqueries, and language maps
- Details on pagination, time limiting queries, and supported response formats
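For instance, the temporal queries and geoqueries listed above ultimately boil down to URL query parameters on a context broker's entities endpoint. A standard-library-only sketch (the broker URL is a placeholder, and the parameter names follow the NGSI-LD specification as I understand it):

```python
from urllib.parse import urlencode

BROKER = "http://localhost:1026/ngsi-ld/v1/entities"  # placeholder broker URL

# Temporal query: entities observed between two instants.
temporal = urlencode({
    "type": "AirQualityObserved",
    "timerel": "between",
    "timeAt": "2020-05-01T00:00:00Z",
    "endTimeAt": "2020-05-27T00:00:00Z",
})

# Geoquery: entities within 2000 m of a point (GeoJSON [lon, lat] order).
geo = urlencode({
    "type": "Building",
    "georel": "near;maxDistance==2000",
    "geometry": "Point",
    "coordinates": "[13.40,52.52]",
})

print(f"{BROKER}?{temporal}")
print(f"{BROKER}?{geo}")
```

Note that full historical queries go through the broker's temporal API rather than the plain entities endpoint; the parameter shape is the point being illustrated here.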
In Data Engineer's Lunch #54, we will discuss the data build tool, a tool for managing data transformations with config files rather than code. We will be connecting it to Apache Spark and using it to perform transformations.
Accompanying YouTube: https://youtu.be/dwZlYG6RCSY
Sign Up For Our Newsletter: http://eepurl.com/grdMkn
Join Data Engineer’s Lunch Weekly at 12 PM EST Every Monday:
https://www.meetup.com/Data-Wranglers-DC/events/
Cassandra.Link:
https://cassandra.link/
Follow Us and Reach Us At:
Anant:
https://www.anant.us/
Awesome Cassandra:
https://github.com/Anant/awesome-cassandra
Email:
solutions@anant.us
LinkedIn:
https://www.linkedin.com/company/anant/
Twitter:
https://twitter.com/anantcorp
Eventbrite:
https://www.eventbrite.com/o/anant-1072927283
Facebook:
https://www.facebook.com/AnantCorp/
Join The Anant Team:
https://www.careers.anant.us
The talk covers how Elasticsearch, Lucene and, to some extent, search engines in general actually work under the hood. We'll start at the "bottom" (or close enough!) of the many abstraction levels, and gradually move upwards towards the user-visible layers, studying the various internal data structures and behaviors as we ascend. Elasticsearch provides APIs that are very easy to use, and it will get you started and take you far without much effort. However, to get the most out of it, it helps to have some knowledge about the underlying algorithms and data structures. This understanding enables you to make full use of its substantial set of features so that you can improve your users' search experiences, while at the same time keeping your systems performant, reliable, and updated in (near) real time.
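The central data structure behind Lucene and Elasticsearch, the inverted index, is simple to sketch: it maps each term to the set of documents containing it (the toy documents below are invented for the example):

```python
from collections import defaultdict

# Toy documents standing in for an Elasticsearch index.
docs = {
    1: "the quick brown fox",
    2: "the lazy dog",
    3: "quick dogs and lazy foxes",
}

# Build the inverted index: term -> set of doc ids (the "postings").
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(*terms):
    """AND query: intersect the postings of all terms."""
    postings = [index.get(t, set()) for t in terms]
    return sorted(set.intersection(*postings)) if postings else []

print(search("quick"))        # [1, 3]
print(search("the", "lazy"))  # [2]
```

Real engines add analysis (stemming, so "dogs" matches "dog"), compressed postings lists, and relevance scoring on top of this same core idea.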
This document provides an overview of Apache NiFi and dataflow. It begins with an introduction to the challenges of moving data effectively within and between systems. It then discusses Apache NiFi's key features for addressing these challenges, including guaranteed delivery, data buffering, prioritized queuing, and data provenance. The document outlines NiFi's architecture and components like repositories and extension points. It also previews a live demo and invites attendees to further discuss Apache NiFi at a Birds of a Feather session.
Chap I: Course on Modelling & Simulation of Processes - Mohammed TAMALI
This course is the fruit of lived experience.
The method judged valid at the time the system was captured is strongly called into question when qualifying the foundations of the manoeuvres and work undertaken.
Operations research (OR) offers a set of scientific methods for solving optimization problems arising in real-world organizations: logistics, timetabling, flow management, transport...
This course provides decision-support methods that take varied constraints into account (legal, technical, budgetary, etc.).
It can be followed up with more advanced notions encountered in industry, with a programme linked to specific, specialized problems.
The philosophical foundations are discussed in this part to open the way towards etymological projections.
See this presentation on video at http://www.youtube.com/watch?v=mLQT_i-Lgsk
Isabelle Chrisment (Inria) presents "The PLATON initiative (PLATeforme d'Observation de l'interNet)" at the Afnic Scientific Council Day 2013 (JCSA2013), 9 July 2013, on the premises of Télécom ParisTech.
Chap III: Course on Modelling & Simulation of Systems - Mohammed TAMALI
A judgement is never easy to reveal, for lack of knowledge of the causes underlying the subject being judged.
Our universe is certainly based on a set of constants: the UNIVERSAL constants.
Equilibrium is one of them; it is well known that without this characteristic we could use neither EQUALITIES nor COMPARISONS.
Every observer may claim to know this notion well enough.
A final-year dissertation is the document that attests, now and in the future, that the student has successfully carried out an introductory study in scientific research at university.
Readers of this document will discover that the candidate followed a study plan drawn up under the direction of a supervisor qualified to supervise such work.
The reputation of both parties, supervisor and student(s), is strongly influenced by the quality with which this document is written.
In short, a dissertation addresses a PROBLEM that is well defined, deterministic, observable, quantifiable and, among other things, feasible. These requirements fall on the supervision side, while the final quality depends on the student's work.
Digital technologies for pastoralism: co-designing an information system to develop knowledge and reference data in the service of livestock farmers
Chap XI: Simulation Tools for Operating Procedures (Design of Experiments) - Mohammed TAMALI
The methodologies humans use when attempting to understand the physical phenomena around us give a general sense of the complexity of the very systems we manipulate and take as subjects of our studies.
The level of complexity of systems is high, to the point where every attempt to launch an experimental procedure forces us to account for errors, and even more so for side effects. According to performance-evaluation theory, the requirement to 'understand' the system can only be met if:
- We have so much information that the recommendations of later studies will be satisfied,
- We have references against which to compare,
- We have a history that can be retraced,
- There is a possibility of experimentation.
The first three cases are self-sufficient; when they hold, they clarify the picture. The fourth criterion requires that the experiment actually be carried out so that all questions relating to a given problem can be elucidated. The domain of definition of the model governing the studied system is more or less deep depending on whether its variables move continuously or discretely in position/time space.
These variables are the system's factors and can evolve according to changing modalities.
Intelligent Wireless Sensor Network Simulation: Flood Use Case - catherine roussey
1. The document presents a new formalization of context for adaptive context-aware systems using a flood use case.
2. Key entities in the context model are observable entities, which are directly observed by sensors, and entities of interest, whose characterization is inferred from observable entities.
3. In a simulation of the flood use case, an adaptive context-aware system that reasons about the states of precipitation, watercourse, and outlet observable entities and infers the state of the flood entity of interest more efficiently manages context than a classic system, reducing the number of transmitted data packets.
Irstea Use Case: Integration of Crop Observations using Semantic Web Technolo...catherine roussey
Présentation of AgroTechnopole where Irstea develops a use case of data integration of Crop observation. Participation Panel Session on "Semantics to enable sharing and interoperability of data in agriculture.
What do we need?" 10th International Conference on Metadata and Semantics Research 22-25 November 2016, Göttingen, Germany MTSR 2016
PhD subject of Jie Sun. Simulation tool based on JADE , jess rule engine and ontology. The goal is to prove that a sensor that can adapt its behaviour based on observed phenomenon state will libve longer
Weather Station Data Publication at Irstea: an implementation Report. catherine roussey
This document discusses Irstea's publication of weather station data as linked open data using semantic web standards. It provides an overview of open data and linked open data principles. It then describes Irstea's weather station in Montoldre, France, the sensors that collect data, and the observations made. It details how the data was modeled using the Semantic Sensor Network (SSN) ontology and other related ontologies. Finally, it discusses converting the data from CSV files to RDF and making it available via a SPARQL endpoint.
Présentation faite lors d'une réunion du projet animitex à Montpellier en aôut 2014. Cette présentation brosse un apercu des standards du web sémantique disponible sur le web de données. Puis nous introduisons brièvement les travaux de Fabien Amarger sur la transformation de SKOS en ontologie.
Présentation faite lors d'une réunion du projet animitex à montpellier en aôut 2014. Cette présentation introduit certains formats du web sémantique en particulier ceux accessible sur le web de données . Ensuite les travaux de Fabien Amarger sur la transformation de SKOS en ontologies OWL sont survollés.
Présentation du projet de l'irstea sur l'annotation des bulletins d'alerte ag...catherine roussey
annotation des Bulletins de Santé du Végétal en utilisant les technologies web sémantique. Objectif final développer le web de données agricol en proposant des ontologies dédiées et des méthodes d'enrichissement et de mises à jour propres à ce domaine
Semantic Sensor Network Ontology: Description et usagecatherine roussey
cours à l'école d'Été Web Intelligence 2013 « Le Web des objets » 3 septembre 2013, Saint-Germain-Au-Mont-d'Or, Franc. 67 slides.
ce cours en plus de décrire l'ontology ssn présente certains usages.
Presentation faite pour la formation enitab a partir d'un chapitre d'ouvrage ROUSSEY, C., FRANÇOIS PINET, KANG, M.A., CORCHO, O. - 2009. How ontologies are used for software interoperability. Chapter to appear in: Use of Ontologies to Support Information Interoperability, Springer, 52 pages disponible ici http://www.towntology.net/towntologyreferences.php
Le Comptoir OCTO - Qu’apporte l’analyse de cycle de vie lors d’un audit d’éco...OCTO Technology
Par Nicolas Bordier (Consultant numérique responsable @OCTO Technology) et Alaric Rougnon-Glasson (Sustainable Tech Consultant @OCTO Technology)
Sur un exemple très concret d’audit d’éco-conception de l’outil de bilan carbone C’Bilan développé par ICDC (Caisse des dépôts et consignations) nous allons expliquer en quoi l’ACV (analyse de cycle de vie) a été déterminante pour identifier les pistes d’actions pour réduire jusqu'à 82% de l’empreinte environnementale du service.
Vidéo Youtube : https://www.youtube.com/watch?v=7R8oL2P_DkU
Modeling spatiality in sensor ontologies
1. Modeling spatiality in sensor ontologies: an agricultural use case
Catherine ROUSSEY
Thanks to Maria POVEDA-VILLALON (PhD) and Quang-Duy NGUYEN (PhD candidate)
EXCES Workshop at SAGEO, Clermont-Ferrand, 13 November 2019
2. Outline
• Agricultural information systems
• Context-aware systems
• Context
• Ontologies
• The role of ontologies in context-aware systems
• Reminder: definition of an ontology
• Sensor ontologies
• Semantic Sensor Network ontology (SSN)
• Smart Appliances REFerence ontology (SAREF)
• SSN/SAREF comparison
• Spatiality
• SSN: the Montoldre weather station
• SAREF, S4ENVI, S4AGRI
• SSN: observation of an agricultural parcel
• Conclusion
3. Agricultural information systems
Farmers' needs
Their decisions depend on observations of natural phenomena: soil, rainfall, plant development, etc.
Agricultural system
• Components:
• Wireless sensor network (WSN)
• Decision support system (DSS)
• Automated equipment: actuator network
• Objectives:
• Automate actions according to sensor measurements
• Precision agriculture: take the best decision, at the right time, in the right place, with the right settings.
4. Context-aware systems
A context-aware system is a system that uses context to provide relevant information and services to the user. (Abowd et al., 1999)
5. Context
"Any information that can be used to characterise the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between the user and the application." (Abowd et al., 1999)
"A set of entities characterised by their states, plus all the information needed to derive the state changes of these entities." (Sun et al., 2016)
State: "a qualitative datum that evolves over time and summarises a set of information" (Bendadouche et al., 2012)
Types of context:
• Low-level context: context containing quantitative data.
• High-level context: context enriched with the qualitative data required by the application.
Types of entity:
• Observed entity: an entity whose properties are observed by sensors.
• Entity of interest: an entity required by the application, whose properties are derived from the properties of one or more other entities.
7. Ontologies
An ontology is "a formal, explicit specification of a shared conceptualisation" (Studer et al., 1998).
In the Semantic Web world, an ontology is the set of concepts and relations used to describe a domain of interest. The words ontology and vocabulary are used interchangeably; ontology tends to be used when the vocabulary of concepts and relations is fairly complex and may contain, for example, constraints such as necessary and/or sufficient membership conditions. (W3C)
An ontology is used to (W3C):
• Normalise the terms of the domain: associate each with an identifier (URI), a label, and a meaning
• Type the elements of this vocabulary to define a documented, reusable schema: classes, properties, etc.
• Support the integration of multi-source data
• Organise the knowledge of a domain: publication of resources and their descriptive metadata on the web (Linked Data)
• Produce inferences
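As an illustration of such a membership constraint (not taken from the slides), a minimal OWL ontology in Turtle can normalise a couple of domain terms and state a necessary-and-sufficient condition; every name under the ex: prefix is hypothetical:

```turtle
@prefix ex:   <http://example.org/agri#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:Sensor a owl:Class ;
    rdfs:label "Sensor"@en .

ex:observes a owl:ObjectProperty ;
    rdfs:label "observes"@en .

# Necessary and sufficient condition: a RainSensor is exactly
# a Sensor that observes some precipitation property.
ex:RainSensor a owl:Class ;
    owl:equivalentClass [
        a owl:Class ;
        owl:intersectionOf (
            ex:Sensor
            [ a owl:Restriction ;
              owl:onProperty ex:observes ;
              owl:someValuesFrom ex:PrecipitationProperty ]
        )
    ] .
```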
8. Sensor ontologies
Sensor ontologies: SSN, SAREF, CESN, CSIRO, Sensei, O&M, OOSTethys, MMI, SWAMO, SEEK, SDO, SeReS O&M, OntoSensor, etc. (Bendadouche et al., 2012)
SSN (Semantic Sensor Network)
• Latest version of SSN, also known as SOSA/SSN
• Standard developed by the World Wide Web Consortium (W3C) and the OGC
• (sosa) http://www.w3.org/ns/sosa (ssn) http://www.w3.org/ns/ssn
SAREF (Smart Appliances REFerence)
• Standard developed by the European Telecommunications Standards Institute (ETSI)
• (SAREF) https://www.w3id.org/saref
• SAREF4ENVI: SAREF for the environment domain
• SAREF4AGRI: extension to SAREF for the smart agriculture and food chain domain
9. An overview of SSN and SAREF (Poveda et al., 2018)
SSN
• SSN is an ontology for describing sensors, actuators, observations, actuations, the procedures involved, the observed phenomenon, etc.
• Sensor, Observation, Sample, and Actuator (SOSA) is the central building block of SSN.
SAREF
• SAREF is a model for describing connected appliances from any domain.
• SAREF integrates modules from the "OneM2M" ontology.
• Extensions: SAREF4ENER, SAREF4ENVI, SAREF4BLDG, SAREF4AGRI, etc.
10. An overview of SSN and SAREF
SSN describes a measurement situation: who, what, when, how.
SAREF describes connected appliances from any domain.
Both are core ontologies to which other ontologies connect in order to define the data schema of the target application.
These two ontologies apply to a wide range of domains: agriculture, health, home automation.
11. Spatiality
Spatiality comes into play in several descriptive elements of sensors/actuators:
• What is the location of the sensor/actuator?
• Which geometry is associated with the location of the sensor?
• What is the location of the observed phenomenon?
• Which geometry is associated with the phenomenon?
12. Semantic Sensor Network (SSN): Sensor
SOSA Observation Class: Act of carrying out an (Observation) Procedure to estimate or calculate a value of a property of a FeatureOfInterest. Links to a Sensor to describe what made the Observation and how; links to an ObservableProperty to describe what the result is an estimate of, and to a FeatureOfInterest to detail what that property was associated with.
Example: The activity of estimating the intensity of an Earthquake using the Mercalli intensity scale is an Observation, as is measuring the moment magnitude, i.e., the energy released by said earthquake.
SOSA Sensor Class: Device, agent (including humans), or software (simulation) involved in, or implementing, a Procedure. Sensors respond to a Stimulus, e.g., a change in the environment, or Input data composed from the Results of prior Observations, and generate a Result. Sensors can be hosted by Platforms.
Example: Accelerometers, gyroscopes, barometers, magnetometers, and so forth are Sensors that are typically mounted on a modern smart phone (which acts as Platform). Other examples of Sensors include the human eyes.
13. SSN: Property and Feature Of Interest classes
SSN Property Class: A quality of an entity. An aspect of an entity that is intrinsic to and cannot exist without the entity.
SOSA Observable Property Class: An observable quality (property, characteristic) of a FeatureOfInterest.
Example: The height of a tree, the depth of a water body, or the temperature of a surface are examples of observable properties, while the value of a classic car is not (directly) observable but asserted.
SOSA Feature Of Interest Class: The thing whose property is being estimated or calculated in the course of an Observation to arrive at a Result, or whose property is being manipulated by an Actuator, or which is being sampled or transformed in an act of Sampling.
Example: When measuring the height of a tree, the height is the observed ObservableProperty, 20m may be the Result of the Observation, and the tree is the FeatureOfInterest. A window is a FeatureOfInterest for an automatic window control Actuator.
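The tree-height example above can be sketched as SOSA/SSN triples; the ex: identifiers are hypothetical and the result is given with sosa:hasSimpleResult for brevity:

```turtle
@prefix sosa: <http://www.w3.org/ns/sosa/> .
@prefix ssn:  <http://www.w3.org/ns/ssn/> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://example.org/forest#> .

ex:tree42 a sosa:FeatureOfInterest ;
    ssn:hasProperty ex:tree42Height .

ex:tree42Height a sosa:ObservableProperty .

ex:obs1 a sosa:Observation ;
    sosa:madeBySensor ex:laserRangefinder01 ;
    sosa:observedProperty ex:tree42Height ;
    sosa:hasFeatureOfInterest ex:tree42 ;
    sosa:hasSimpleResult "20"^^xsd:integer .   # height in metres
```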
14. SSN: Sample class
SOSA Sample Class: Feature which is intended to be representative of a FeatureOfInterest on which Observations may be made.
Comment: Samples are typically subsets or extracts from the feature of interest of an observation. They are used in situations where observations cannot be made directly on the ultimate feature of interest, either because the entire feature cannot be observed, or because it is more convenient to use a proxy. Samples are thus artifacts of an observational strategy, and usually have no significant function outside of their role in the observation process. The characteristics of the samples themselves are generally of little interest, except to the manager of a sampling campaign, or sample curator.
A Sample is intended to sample some FeatureOfInterest, so there is an expectation of at least one isSampleOf property. However, in some cases the identity, and even the exact type, of the sampled feature may not be known when observations are made using the sampling features.
Physical samples are sometimes known as 'specimens'.
15. SSN: weather station
Description of a weather station in Montoldre (Roussey et al., 2019). Reuses the GeoSPARQL vocabulary defined by the OGC.
[Figure: RDF graph in which atpw:platform/VP2lesPalaquins01 (a sosa:Platform and geosp:Feature) is geosp:sfWithin irstea:organization/irsteaCentreMontoldre, itself geosp:sfWithin irstea:commune/montoldre, and geosp:hasGeometry atpw:geometry/point_VP2lesPalaquins01 (a geosp:Geometry) whose geosp:hasWKT value is POINT(3.434657 46.339351)^^geosp:wktLiteral.]
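A possible Turtle serialisation of the graph drawn on this slide. The atpw: and irstea: IRIs are written out in full under a hypothetical example.org base, since their namespaces are not given; geosp: is assumed to denote the OGC GeoSPARQL namespace, and geosp:hasWKT is kept as labelled on the slide (GeoSPARQL itself names this property geo:asWKT). The nesting of the two geosp:sfWithin arrows is my reading of the figure:

```turtle
@prefix sosa:  <http://www.w3.org/ns/sosa/> .
@prefix geosp: <http://www.opengis.net/ont/geosparql#> .

<http://example.org/atpw/platform/VP2lesPalaquins01>
    a sosa:Platform , geosp:Feature ;
    geosp:sfWithin <http://example.org/irstea/organization/irsteaCentreMontoldre> ;
    geosp:hasGeometry <http://example.org/atpw/geometry/point_VP2lesPalaquins01> .

<http://example.org/irstea/organization/irsteaCentreMontoldre>
    geosp:sfWithin <http://example.org/irstea/commune/montoldre> .

<http://example.org/atpw/geometry/point_VP2lesPalaquins01>
    a geosp:Geometry ;
    # Decimal commas on the slide replaced by dots for valid WKT.
    geosp:hasWKT "POINT(3.434657 46.339351)"^^geosp:wktLiteral .
```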
16. SSN: barometer sensor
Description of the barometer of the Montoldre weather station (Roussey et al., 2019).
Description of the observed phenomenon (ultimate feature?).
Assumption: the location of the platform is identical to the location of the phenomenon.
[Figure: RDF graph in which atpw:platform/VP2lesPalaquins01 (sosa:Platform) sosa:hosts atpw:sensor/VP2lesPalaquins01_barometer01 (sosa:Sensor), which sosa:observes atpw:observableProperty/air_pressure (sosa:ObservableProperty); atpw:featureOfInterest/air (sosa:FeatureOfInterest) ssn:hasProperty that air-pressure property.]
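A Turtle sketch of this slide's graph, again writing the atpw: names as full IRIs under a hypothetical example.org base:

```turtle
@prefix sosa: <http://www.w3.org/ns/sosa/> .
@prefix ssn:  <http://www.w3.org/ns/ssn/> .

<http://example.org/atpw/platform/VP2lesPalaquins01> a sosa:Platform ;
    sosa:hosts <http://example.org/atpw/sensor/VP2lesPalaquins01_barometer01> .

<http://example.org/atpw/sensor/VP2lesPalaquins01_barometer01> a sosa:Sensor ;
    sosa:observes <http://example.org/atpw/observableProperty/air_pressure> .

<http://example.org/atpw/observableProperty/air_pressure>
    a sosa:ObservableProperty .

# The feature of interest (the air) carries the observed property.
<http://example.org/atpw/featureOfInterest/air> a sosa:FeatureOfInterest ;
    ssn:hasProperty <http://example.org/atpw/observableProperty/air_pressure> .
```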
18. SAREF: S4ENVI TESS photometer sensor
Location of the sensor (ETSI, 2017).
WGS84 is a small RDF vocabulary for describing the coordinates of a point: altitude, longitude, latitude. It is a de facto standard, and is sometimes prefixed as geo.
[Figure: RDF graph in which ex:TESS005-UCM (s4envi:TESS) has wgs84:location ex:LocationTESS005-UCM (wgs84:Point) with wgs84:latitude 40.451 and wgs84:longitude -3.7261.]
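A Turtle sketch of the slide's graph. The s4envi: namespace IRI is an assumption, and the property names wgs84:latitude/wgs84:longitude follow the slide (the W3C Basic Geo vocabulary itself names them geo:lat and geo:long):

```turtle
@prefix wgs84:  <http://www.w3.org/2003/01/geo/wgs84_pos#> .
@prefix s4envi: <https://saref.etsi.org/saref4envi/> .
@prefix ex:     <http://example.org/tess#> .

ex:TESS005-UCM a s4envi:TESS ;
    wgs84:location ex:LocationTESS005-UCM .

ex:LocationTESS005-UCM a wgs84:Point ;
    wgs84:latitude "40.451" ;
    wgs84:longitude "-3.7261" .
```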
19. SAREF: S4ENVI TESS photometer sensor
Description of the observed phenomenon: its location is absent (ETSI, 2017).
Assumption: the location of the sensor is identical to the location of the phenomenon.
[Figure: RDF graph in which ex:TESS005-UCM (s4envi:TESS) saref:measuresProperty s4envi:LightMagnitude (a s4envi:LightProperty) and saref:makesMeasurement ex:Measurement2016-10-05T08:15:40TESS005-UCM (saref:Measurement), which saref:relatesToProperty s4envi:LightMagnitude, saref:hasValue 0.8 (xsd:float), saref:isMeasuredIn ex:mgPerArcsec2 (saref:UnitOfMeasure), and saref:hasTimeStamp 2016-10-05T08:15:40 (xsd:dateTime).]
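A hedged Turtle reconstruction of this slide; the saref: and s4envi: namespace IRIs are assumptions, the property spellings follow the slide, and the typing of s4envi:LightMagnitude as a s4envi:LightProperty is my reading of the figure:

```turtle
@prefix saref:  <https://saref.etsi.org/core/> .
@prefix s4envi: <https://saref.etsi.org/saref4envi/> .
@prefix xsd:    <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:     <http://example.org/tess#> .

ex:TESS005-UCM a s4envi:TESS ;
    saref:measuresProperty s4envi:LightMagnitude ;
    saref:makesMeasurement ex:Measurement2016-10-05T08:15:40TESS005-UCM .

s4envi:LightMagnitude a s4envi:LightProperty .

ex:Measurement2016-10-05T08:15:40TESS005-UCM a saref:Measurement ;
    saref:relatesToProperty s4envi:LightMagnitude ;
    saref:hasValue "0.8"^^xsd:float ;
    saref:isMeasuredIn ex:mgPerArcsec2 ;   # magnitudes per arcsec^2
    saref:hasTimeStamp "2016-10-05T08:15:40"^^xsd:dateTime .

ex:mgPerArcsec2 a saref:UnitOfMeasure .
```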
20. SAREF: S4AGRI parcel
Description of a parcel in SAREF4AGRI (ETSI, 2019). Reuses the GeoSPARQL vocabulary, which contains many spatial relations.
[Figure: RDF graph in which ex:ArvalisLand07 (s4agri:Parcel, a subclass of geosp:Feature, itself a geosp:SpatialObject) geosp:hasGeometry a geosp:Geometry, relates to other features through geosp:sfContains and geosp:sfWithin, and has wgs84:location ex:ArvalisLand07Center (wgs84:Point).]
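A hedged Turtle reconstruction of this slide; the s4agri: namespace IRI and the geometry node ex:ArvalisLand07Geometry are assumptions introduced to make the graph concrete:

```turtle
@prefix s4agri: <https://saref.etsi.org/saref4agri/> .
@prefix geosp:  <http://www.opengis.net/ont/geosparql#> .
@prefix wgs84:  <http://www.w3.org/2003/01/geo/wgs84_pos#> .
@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:     <http://example.org/arvalis#> .

# Schema level: a parcel is a GeoSPARQL feature.
s4agri:Parcel rdfs:subClassOf geosp:Feature .
geosp:Feature rdfs:subClassOf geosp:SpatialObject .

# Instance level: the parcel, its geometry, and its centre point.
# geosp:sfContains / geosp:sfWithin can relate it to other features.
ex:ArvalisLand07 a s4agri:Parcel ;
    geosp:hasGeometry ex:ArvalisLand07Geometry ;
    wgs84:location ex:ArvalisLand07Center .

ex:ArvalisLand07Geometry a geosp:Geometry .
ex:ArvalisLand07Center a wgs84:Point .
```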
21. SAREF: S4AGRI Irrinov measurement station
Location of the sensor (ETSI, 2019). The new version of SAREF integrates elements of SSN.
What is the link between the platform and the parcel?
[Figure: RDF graph in which ex:ArvalisDeployment20162017Land07 (s4agri:Deployment) has ssn:deployedSystem ex:ArvalisIrrinovStation01 (ssn:System) and ssn:deploymentOnPlatform ex:PlatformArvalisLand07 (sosa:Platform), which sosa:hosts the station; the station ssn:hasSubSystem ex:ArvalisIrrinovStation01SoilSensor02 (s4agri:SoilTensiometer).]
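A Turtle sketch of the deployment graph on this slide; the s4agri: namespace IRI is an assumption, and the property names follow the slide's labels (in the W3C SSN recommendation the corresponding property is ssn:deployedOnPlatform):

```turtle
@prefix sosa:   <http://www.w3.org/ns/sosa/> .
@prefix ssn:    <http://www.w3.org/ns/ssn/> .
@prefix s4agri: <https://saref.etsi.org/saref4agri/> .
@prefix ex:     <http://example.org/arvalis#> .

ex:ArvalisDeployment20162017Land07 a s4agri:Deployment ;
    ssn:deployedSystem ex:ArvalisIrrinovStation01 ;
    ssn:deploymentOnPlatform ex:PlatformArvalisLand07 .

ex:PlatformArvalisLand07 a sosa:Platform ;
    sosa:hosts ex:ArvalisIrrinovStation01 .

ex:ArvalisIrrinovStation01 a ssn:System ;
    ssn:hasSubSystem ex:ArvalisIrrinovStation01SoilSensor02 .

ex:ArvalisIrrinovStation01SoilSensor02 a s4agri:SoilTensiometer .
```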
22. SAREF: S4AGRI Irrinov measurement station
Location of the observed phenomenon (ETSI, 2019).
Assumption: the location of the sensor is identical to the location of the phenomenon...
Problem: a depth is needed! The point identifying the geometry of the parcel is not the location point of the sensor.
[Figure: RDF graph in which ex:ArvalisIrrinovStation01SoilSensor02 (s4agri:SoilTensiometer) saref:measuresProperty s4agri:SoilMoisture (saref:Property) and saref:makesMeasurement ex:ArvalisIrrinovStation01SoilSensor02ObservationAtPT24H2016-06-14T000000_0200 (saref:Measurement), which saref:relatesToProperty s4agri:SoilMoisture.]
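A Turtle sketch of this slide's graph, with the same assumed saref: and s4agri: namespace IRIs as above; note that nothing in the graph carries the sensor's depth, which is exactly the problem the slide raises:

```turtle
@prefix saref:  <https://saref.etsi.org/core/> .
@prefix s4agri: <https://saref.etsi.org/saref4agri/> .
@prefix ex:     <http://example.org/arvalis#> .

ex:ArvalisIrrinovStation01SoilSensor02 a s4agri:SoilTensiometer ;
    saref:measuresProperty s4agri:SoilMoisture ;
    saref:makesMeasurement
        ex:ArvalisIrrinovStation01SoilSensor02ObservationAtPT24H2016-06-14T000000_0200 .

s4agri:SoilMoisture a saref:Property .

ex:ArvalisIrrinovStation01SoilSensor02ObservationAtPT24H2016-06-14T000000_0200
    a saref:Measurement ;
    saref:relatesToProperty s4agri:SoilMoisture .
```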
25. Conclusion
In SSN, three elements can be geographic objects:
• Platform: the object on which the sensors are installed
• FeatureOfInterest: the observed phenomenon
• Sample: the observed sample
• These ontologies have not yet been used enough to propose a consistent modelling of spatiality across sensor location and phenomenon location.
• They need to be put into use to define good practices for modelling spatiality:
• An agricultural weather station stands 2 m above the ground.
• An Irrinov station has 3 tensiometers at 30 cm depth and 3 more at 60 cm depth.
• How should spatial aggregations be modelled?
• A change of observed phenomenon (plane/volume), or of the property of the observed phenomenon (average)
26. References
• Abowd, G. D., Dey, A. K., Brown, P. J., Davies, N., Smith, M., & Steggles, P. (1999). Towards a better understanding of context and context-awareness. In H. W. Gellersen (Ed.), Handheld and Ubiquitous Computing, Proceedings (Vol. 1707, pp. 304–307). Berlin: Springer-Verlag.
• Bendadouche, R., Roussey, C., De Sousa, G., Chanet, J.-P., & Hou, K. M. (2012). État de l'art sur les ontologies de capteurs pour une intégration intelligente des données. INFORSID 2012, 89–104.
• ETSI (2017). ETSI TR 103 411 v1.1.1, Technical report, February 2017. http://www.etsi.org/standard-search
• ETSI (2019). ETSI TS 103 410-6 v1.1.1, Technical specification, SmartM2M; Extension to SAREF; Part 6: Smart Agriculture and Food Chain Domain, May 2019. http://www.etsi.org/standard-search
• Poveda-Villalon, M., Nguyen, Q.-D., Roussey, C., Chanet, J.-P., & De Vaulx, C. (2018). Ontological requirement specification for smart irrigation systems: a SOSA/SSN and SAREF comparison. In Proceedings of the 9th International Semantic Sensor Networks Workshop (SSN 2018), Monterey, USA, October 9, 2018. http://ceur-ws.org/Vol-2213/paper1.pdf
• Roussey, C., Bernard, S., André, G., & Boffety, D. (2019). Weather Data Publication on the LOD using SOSA/SSN Ontology. Semantic Web Journal. http://www.semantic-web-journal.net/content/weather-data-publication-lod-using-sosassn-ontology-0
• Semantic Sensor Network Ontology: W3C Recommendation, 19 October 2017 (link errors corrected 8 December 2017). https://www.w3.org/TR/2017/REC-vocab-ssn-20171019/
• Sun, J., De Sousa, G., Roussey, C., Chanet, J.-P., Pinet, F., & Hou, K. M. (2016). A new formalisation for wireless sensor network adaptive context-aware system: Application to an environmental use case. In Tenth International Conference on Sensor Technologies and Applications (SENSORCOMM 2016), pp. 49–55.
• Studer, R., Benjamins, V. R., & Fensel, D. (1998). Knowledge engineering: principles and methods. Data & Knowledge Engineering, 25(1–2), 161–197.
• W3C. https://www.w3.org/standards/semanticweb/ontology