Training offers

Web of data and knowledge graphs: an introduction to kickstart your project

Training goals

The Web of data covers several distinct issues: content structuring, metadata, Linked Open Data, open data reuse, graph databases and knowledge graphs, etc. This training provides a complete overview of these issues, the main standards, the essential data repositories, and the types of tools that can be used in a project built on these technologies. It also gives you the keys to tackling a Web of data project properly.

At the end of this training course, given by a specialist with 20 years' experience in the field, you'll have the answers to the following questions:

  • What is the Web of Data? What issues does it address?
  • How do you encode information in RDF triples? How do you produce RDF files?
  • What is schema.org? What is semantic SEO?
  • What is an ontology? How can you reuse existing ontologies? How can you specify your application profile using SHACL?
  • What are thesauri and controlled vocabularies? How do you structure a vocabulary in SKOS?
  • How do you write SPARQL queries?
  • What is Wikidata? How do you query Wikidata in SPARQL? How do you contribute to Wikidata?
  • How do you deploy your own knowledge graph in a triplestore?
  • What are the different ways of publishing structured data on the Web?


Duration

3 days. This course can be shortened to 2 days.

Who is this training for?

At the crossroads of Web, data and documentation issues, this training course is aimed at information science specialists (documentalists, librarians, information-monitoring professionals) as well as technical profiles (developers) and business profiles (project managers, data architects). Depending on the audience, the emphasis is placed on the documentary side or the technical side.

This training is a gateway to the other more specific training courses available in the catalog.


Prerequisites

Knowledge of HTML and XML is essential. Modeling skills (UML or similar), knowledge of databases (SQL), and an understanding of how the Internet (clients, servers, HTTP) and search engines work are also expected.


Introduction to the Web of Data

  • The challenges of knowledge graph projects
  • Basics of data structuring
  • URIs
  • Linked Open Data, schema.org, Wikidata
  • Application examples


Schema.org: the data structuring model for search engines

  • Analyzing an entity in the schema.org model
  • The difference between semantics and syntax

RDF: the data structuring model

  • RDF model - encoding data in triples
  • Exercise in encoding a statement using FOAF
  • The different RDF syntaxes. Decoding Turtle syntax
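As a taste of the encoding exercise, a single statement ("Alice knows Bob") might look like this in Turtle, using the FOAF vocabulary (the names and URIs are illustrative):

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/> .

ex:alice a foaf:Person ;
    foaf:name  "Alice" ;
    foaf:knows ex:bob .

ex:bob a foaf:Person ;
    foaf:name "Bob" .
```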

SKOS thesauri

  • Thesauri and their relevance to the Web of Data
  • Structuring a thesaurus in SKOS
  • Example of thesauri published in SKOS on the Web of Data
  • SKOS tools

SPARQL on Wikidata

  • SPARQL query syntax and operators
  • Exercises in writing queries on Wikidata and/or client data
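A typical first exercise is a short query against the public Wikidata endpoint, for example listing a few items that are instances of "house cat" (wd:Q146), with their English labels:

```sparql
# Run at https://query.wikidata.org/ (wd:, wdt:, wikibase: and bd: are predefined there)
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146 .            # P31 = "instance of", Q146 = "house cat"
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
LIMIT 10
```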

OWL ontologies

  • Introduction to ontologies. Differences between ontologies and thesauri
  • Documentary conceptual models: CIDOC-CRM, FRBR/LRM, Records In Contexts (RIC-O)
  • Operators for building an ontology in RDFS and OWL
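To give an idea of the operators covered, here is a minimal ontology fragment sketched in RDFS/OWL (the classes, properties and URIs are illustrative):

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix ex:   <http://example.org/onto/> .

ex:Painting a owl:Class ;
    rdfs:subClassOf ex:HeritageObject .

ex:createdBy a owl:ObjectProperty ;
    rdfs:domain ex:HeritageObject ;
    rdfs:range  ex:Person ;
    owl:inverseOf ex:creatorOf .
```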

Practical application

  • Populating a knowledge graph from Excel files
  • Installing and manipulating a triplestore
  • Loading and manipulating data

Adapt this training

This training course, given by a specialist with 20 years' experience, can be run on your premises and tailored to your needs: the format can be shortened (2 days instead of 3), the content adjusted to the trainees, or your own data used as course material. Please contact us.

Web of data technical training: develop your knowledge-graph-based system

Training goals

The aim of this technical training course is to give participants the right methods and tools to implement a knowledge graph project. It is aimed at developers and data scientists.

This course will enable participants to answer the following questions:

  • How can you take advantage of advanced SPARQL queries (federated queries, data transformation queries, etc.)?
  • How can you manipulate RDF data in scripts and programs (Java), using RDF4J or Jena, to perform data transformations?
  • How can you feed a knowledge graph from relational data, XML, CSV, JSON, etc.?
  • How can you expose RDF data on the Web?
  • What are the main ontology models to be aware of when reusing, processing or publishing data?


Duration

1.5 days. This course can be adjusted to 1 or 2 days.

Who is this training for?

This advanced Web of data training course is aimed at developers, data scientists, project managers or consultants who wish to perfect their Web of data skills, or who are in the implementation phase of a project using Web of data technologies.


Prerequisites

  • Knowledge of XML and JSON
  • Ability to write SQL queries is a plus
  • Notions of RDF (a reminder will be given at the beginning of the course)


RDF structure reminder

  • RDF data model reminder
  • Turtle syntax reminder

RDF and advanced SPARQL

  • SPARQL Update: update operations on RDF data
  • Advanced SPARQL: taking advantage of named graphs
  • SPARQL non-standard operations: full-text search and spatial search
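As an example of the named-graph techniques covered here, a query can restrict its pattern to the triples stored in one specific graph (the graph URI below is illustrative):

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# Only match triples stored in one named graph of the triplestore
SELECT ?s ?label WHERE {
  GRAPH <http://example.org/graphs/catalog> {
    ?s rdfs:label ?label .
  }
}
```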

JSON-LD: encoding RDF triples in JSON

  • JSON basics
  • The notion of JSON-LD context
  • Creating RDF-compatible JSON files with a JSON-LD context
  • JSON-LD Framing: exporting JSON files from a JSON-LD specification
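A minimal example of how a JSON-LD context turns ordinary-looking JSON into RDF; the vocabulary is schema.org, and the data itself is illustrative:

```json
{
  "@context": "https://schema.org/",
  "@id": "http://example.org/book/1",
  "@type": "Book",
  "name": "An Introduction to the Web of Data",
  "author": { "@id": "http://example.org/person/alice" }
}
```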

Converting native data to RDF

  • Convert XML to RDF/XML using an XSLT stylesheet
  • Using native-data-to-RDF mapping tools:
    • xls2rdf
    • SPARQLAnything
    • GraphDB's OntoRefine
  • R2RML and Direct Mapping: converting a relational database to RDF with OnTop

SPARQL and SPARQL update

  • SPARQL operators (depending on trainees' level of familiarity)
  • GeoSPARQL for querying geographic data
  • Using SPARQL to update data
  • Using SPARQL to export data

The GraphDB Triplestore

  • Installing and configuring GraphDB
  • Using the administration interface
  • Loading, exploring and visualizing data
  • Data maintenance strategy in a triplestore: named graphs

RDF APIs in Java and Python

  • RDF4J Java API: read/write RDF, execute SPARQL queries
  • The rdflib API in Python
  • Apache Jena command-line tools: SPARQL, inference and processing without writing code

Adapt this course

By its very nature, this Web of Data training course for developers calls for adaptation to your project and data. If your use case requires particular tools (inference engines, RDF stores, semantic ETLs) or particular data models, the basic training content will be adapted to take them into account. Please contact us.

SPARQL training: query knowledge graphs and the Web of data

Training goals

A training course entirely dedicated to the SPARQL language, for teams wishing to perfect their knowledge of the language and get the most out of it. Exercises are typically carried out directly on the participants' own RDF data, to match expectations as closely as possible. Wikidata and DBpedia are also used.


Duration

1.5 days. This training can be run over 1 or 2 days.

Who is this training for?

  • Developers who need to write SPARQL queries against semantic data.
  • Documentalists familiar with "data" issues.
  • Anyone wishing to learn how to query Wikidata.


Prerequisites

  • Notions of data structuring in RDF (a reminder is given at the start of the course)
  • Previous experience in writing SQL queries is a plus


RDF structure reminder

  • RDF data model
  • Turtle syntax reminder

My first SPARQL query

  • Basic structure of a SPARQL query
  • Writing your first query on DBpedia
  • Selection using Basic Graph Patterns

Understanding SPARQL operators

  • FILTER and OPTIONAL operators
  • Filter functions: STR, REGEX, STRSTARTS, etc.
  • Assignment mechanism
  • Aggregation mechanism (COUNT and GROUP BY)
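A small query illustrating these operators together (the FOAF vocabulary is real; the URI filter is illustrative):

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# Count the acquaintances of each person, keeping people with none (OPTIONAL)
SELECT ?person (COUNT(?friend) AS ?nbFriends) WHERE {
  ?person a foaf:Person .
  OPTIONAL { ?person foaf:knows ?friend . }
  FILTER(STRSTARTS(STR(?person), "http://example.org/"))
}
GROUP BY ?person
```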

Wikidata SPARQL query tutorial

  • Using SPARQL operators to query Wikidata
  • Specific data structuring in Wikidata
  • Specific display of SPARQL search results in Wikidata
  • Integration and retrieval of Wikidata data in SPARQL

Advanced SPARQL operators

  • Using property paths
  • Negation searches
  • Federated queries (SERVICE)
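For instance, a property path can follow skos:broader transitively to climb a thesaurus hierarchy in a single pattern (the concept URI is illustrative):

```sparql
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

# All ancestors of a concept, following skos:broader one or more times
SELECT ?ancestor WHERE {
  <http://example.org/concept/1234> skos:broader+ ?ancestor .
}
```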

SPARQL for semantic data maintenance

  • Modification operations (INSERT, DELETE)
  • CONSTRUCT queries for data transformation
  • Possibly, depending on the triplestore of choice, non-standard full-text or spatial search operators (GeoSPARQL)
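A sketch of the kind of in-place maintenance covered here: a single SPARQL Update request that renames a property across the store (the choice of properties is illustrative):

```sparql
PREFIX dct:  <http://purl.org/dc/terms/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# Replace every rdfs:label triple with an equivalent dct:title triple
DELETE { ?s rdfs:label ?label . }
INSERT { ?s dct:title  ?label . }
WHERE  { ?s rdfs:label ?label . }
```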

SPARQL tools

  • SPARQLAnything: transform CSV, JSON and XML data into RDF using SPARQL
  • Sparnatural: visual SPARQL query builder

Adapt this training

Contact us to adapt this course to your project. In particular, this course can be adapted to your project's existing data.

SKOS training: thesauri, controlled vocabularies and alignments

Training goals

A training course to become familiar with how thesauri, taxonomies, controlled vocabularies and authority lists are structured for the Web of Data. Trainees will be able to maintain and publish these repositories in SKOS.


Duration

1.5 days. This training can be run over 1 or 2 days.

Who is this training for?

  • Information professionals responsible for maintaining or creating controlled vocabularies.
  • Data scientists needing to structure data involving controlled vocabularies.


Prerequisites

  • Notions of data structuring in RDF (a reminder is given at the start of the course)
  • Knowledge of what a controlled vocabulary is and what it is used for


RDF structure reminder

  • Web of data issues, URIs
  • RDF data model
  • Reminder of Turtle syntax

The SKOS data model

  • Structuring SKOS concepts
  • Labels, notes, codes, semantic relations and alignments
  • SKOS-XL for label description
  • Persistent URIs: ARK, DOI
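A concept with its labels, a note, a code, a hierarchical relation and an alignment might look like this in SKOS (all URIs, including the alignment target, are illustrative):

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/thesaurus/> .

ex:renewableEnergy a skos:Concept ;
    skos:prefLabel "renewable energy"@en, "énergie renouvelable"@fr ;
    skos:altLabel  "green energy"@en ;
    skos:notation  "RE-01" ;
    skos:scopeNote "Energy from sources that replenish naturally."@en ;
    skos:broader   ex:energy ;
    skos:exactMatch <http://example.org/other-thesaurus/concept/42> .
```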

Controlled vocabulary management issues

  • Descriptive metadata for controlled vocabularies
  • Versioning and deprecation of controlled vocabularies
  • Web publishing

Examples of thesauri published on the Web of Data

  • UNESCO Thesaurus
  • Eurovoc
  • INRAE and INIST vocabularies (Loterre)

Tools for editing, aligning and publishing SKOS vocabularies

  • xls2rdf: maintain controlled vocabularies in Excel tables. Examples of Excel table formats for producing SKOS or other RDF structures
  • SKOS Play: visualize and control SKOS vocabularies
  • VocBench: advanced management tool for controlled vocabularies and ontologies
  • Skosmos and Showvoc: publication of controlled vocabularies

Alignment of controlled vocabularies

  • OnaGUI: semi-automatic vocabulary alignment tool
  • Semi-automatic controlled vocabulary alignment exercise

Adapt this training

Contact us to adapt this training to your project. It can be shortened to a one-day format and tailored to the repositories already used in your information system.

How to adapt training courses?

Would you like to develop an adapted training course for your teams? The process is as follows:

During a telephone meeting, we define your needs in terms of:

  • content
  • duration
  • number of people trained

I draw up a detailed training plan, outlining the course content hour by hour. At the same time, we check that all trainees have mastered the necessary prerequisites by means of an MCQ questionnaire. If necessary, we adjust the training plan together; once finalized, it is integrated into the training program.

SHACL training: specify the structure of your knowledge graph

Training goals

Specifying the structure of a knowledge graph is at the heart of data-centric projects. This specification, written in SHACL, makes it possible to control data integrity, document the model, generate interfaces, combine ontologies and describe business rules.

The objectives of this training course are to:

  • Position the use of SHACL in relation to OWL, in a knowledge graph architecture
  • Understand the basic elements of the SHACL vocabulary
  • Learn how to implement SHACL tools:
    • Editing SHACL rules in Excel spreadsheets
    • Data validation with SHACL
    • Generating documentation and diagrams from SHACL
    • Setting up the Sparnatural visual query tool from SHACL
    • Automatic generation of SHACL rules from RDF data analysis
  • Learn about SHACL rule types: structural rules vs. business rules
  • Know how to encode business rules in SHACL


Duration

1 day. This course can be extended to 1.5 days.

Who is this training for?

  • Data-oriented documentalists who need to document the structure of a knowledge graph
  • Engineers needing to implement a semantic data processing chain (migration, management, publication of RDF data)
  • Data scientists needing to check the integrity of knowledge graphs.


Trainees attending this course should:

  • Be familiar with the basic structure of RDF graphs (a reminder will be given at the start of the course)
  • Be familiar with RDF Turtle syntax
  • Have notions of OWL (know what a class and a property are)
  • Have notions of SPARQL querying


Review of RDF structure

  • RDF data model: triples, URIs, literals, blank nodes
  • Reminder of Turtle syntax
  • Positioning SHACL in relation to OWL

The SHACL model

  • SHACL specifications at the heart of a knowledge graph system
  • Notion of shapes, with their targets
  • Notion of constraints
  • Structure of a validation report
  • Closed / open shapes
  • Theoretical SHACL exercise
  • Application of SHACL at several workflow levels:
    • Shapes for data conversion validation
    • Application profile shapes
    • Dataset description shapes
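A minimal node shape of the kind manipulated in the exercises, sketched in Turtle (the target class and property are illustrative):

```turtle
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/shapes/> .

# Every ex:Person must have exactly one string-valued ex:name
ex:PersonShape a sh:NodeShape ;
    sh:targetClass ex:Person ;
    sh:property [
        sh:path     ex:name ;
        sh:datatype xsd:string ;
        sh:minCount 1 ;
        sh:maxCount 1
    ] .
```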

SHACL editing and data validation

  • Tool handling: SHACL Excel input table
    • Converter principles
    • Entering simple shapes on a client model
  • Tool handling: command-line SHACL validator for generating validation reports
  • Tool handling: SHACL support in GraphDB
  • Examples of European Parliament specifications documented in SHACL

Advanced SHACL

  • SHACL-AF: SPARQL-based targets
  • Writing business rules based on SPARQL queries
  • Extending SHACL with new constraints
  • DASH vocabulary extensions

Adapt this training

Contact us to adapt this course to your project. In particular, this course can be adapted to your project's existing data.

OWL training: ontologies and conceptual models for knowledge graphs

Training goals

Ontologies are an important part of the Web of Data ecosystem, and a prerequisite for achieving good semantic interoperability of data ("what is understood from the data is what has been published"). This course aims to:

  • provide an overview of the most common ontologies used on the Web of Data
  • position and understand the main conceptual models for describing heritage data and other sectors
  • understand the RDFS and OWL operators available to describe a domain of knowledge
  • create your own OWL ontology in Protégé


Duration

1.5 days. This training can be run over 1 or 2 days.

Who is this training for?

  • Data-oriented documentalists who need to specify a knowledge domain
  • Engineers and developers needing to implement a semantic data processing chain involving automatic reasoning
  • Data scientists needing to migrate data into a knowledge graph


Trainees attending this course should:

  • Be familiar with the basic structure of RDF graphs (a reminder will be given at the start of the course)
  • Be comfortable using IT tools (text editors, query editors, database management tools, etc.).


RDF structure reminder

  • RDF data model
  • Reminder of Turtle syntax

Introduction to ontologies

  • What is an ontology?
  • What's the difference between an OWL ontology, a SKOS controlled vocabulary and a SHACL specification?
  • High-level ontologies versus domain ontologies: principles of ontology extension
  • "Lightweight" ontologies versus "rigid conceptual models": approaches to ontology formalization

Ontology operators

  • Basic RDFS operators: subClassOf, subPropertyOf, domain, range
  • OWL operators: inverse and transitive properties
  • OWL restrictions: cardinalities, domain, range restrictions
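An OWL cardinality restriction of the kind covered here, sketched in Turtle (class and property URIs are illustrative):

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://example.org/onto/> .

# Every Book has at least one author
ex:Book rdfs:subClassOf [
    a owl:Restriction ;
    owl:onProperty ex:author ;
    owl:minCardinality "1"^^xsd:nonNegativeInteger
] .
```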

Ontologies for the Web of Data

  • Dublin Core: generic documentary metadata
  • FOAF: description of people
  • SKOS: controlled vocabularies
  • ORG: description of organizations
  • schema.org: structured data for search engines
  • PROV: description of data provenance and history
  • DCAT: description of datasets

Protégé: an OWL ontology editor

  • Exercise using Protégé to understand OWL operators and edit an ontology
  • Using a reasoner
  • Example ontology using automatic classification: drug interactions
  • Implementing automatic reasoning in a triplestore: manipulating GraphDB

Conceptual models for heritage data

  • History of conceptual models in libraries, museums and archives
  • FRBR / LRM: structuring bibliographic records
  • CIDOC-CRM: description of heritage objects
  • Records In Contexts: structuring archival records
  • Using or extending these conceptual models in your project

Disseminate your ontology on the Web

  • Describing your ontology with the right metadata
  • Best practices for publishing ontologies
  • Tools for documenting and automatically publishing ontologies

Adapt this training

Contact us to adapt this course to your project. In particular, this course can be adapted to your project's existing data.