Home

<p align="center"> <img src="https://github.com/oeg-upm/lubm4obda/blob/main/logo.png" height="280" alt="lubm4obda"> </p>

The LUBM4OBDA Benchmark is an extension of the popular LUBM Benchmark for evaluating Ontology-Based Data Access (OBDA) engines over relational databases. In addition, LUBM4OBDA benchmarks meta knowledge (also called reification or statement-level metadata). The main characteristics of LUBM4OBDA are:

Citing LUBM4OBDA: if you use the benchmark, please cite the JWE paper:

@article{arenas2024lubm4obda,
  title     = {{LUBM4OBDA: Benchmarking OBDA Systems with Inference and Meta Knowledge}},
  author    = {Arenas-Guerrero, Julián and Pérez, María S. and Corcho, Oscar},
  journal   = {Journal of Web Engineering},
  publisher = {River Publishers},
  issn      = {1544-5976},
  year      = {2024},
  volume    = {22},
  number    = {8},
  pages     = {1163–1186},
  doi       = {10.13052/jwe1540-9589.2284}
}

Data

There are two options to obtain the SQL data dumps:

Mappings

The mappings directory of this GitHub repository contains all the R2RML and RML documents. The following mappings are provided:
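These mappings can be processed by any R2RML/RML-compliant engine. As a non-normative illustration (materialization rather than the virtual OBDA setup the benchmark targets), the sketch below generates the RDF graph with the morph-kgc Python library; the mapping file name and database connection string are placeholders that must be adapted to your setup.

```python
import morph_kgc

# Placeholder configuration: adjust the mapping path and the database URL
# to your local deployment before running.
config = """
[DataSource1]
mappings: mappings/lubm4obda.r2rml.ttl
db_url: postgresql+psycopg2://user:password@localhost:5432/lubm4obda
"""

# Returns an rdflib Graph with the triples generated from the relational data.
g = morph_kgc.materialize(config)
g.serialize(destination="lubm4obda.nt", format="nt")
```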

Ontology

The Univ-Bench ontology is available in the ontology directory of this GitHub repository.
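Answering the LUBM queries completely requires reasoning with the Univ-Bench ontology (e.g., class and property hierarchies). A minimal sketch, assuming a materialized N-Triples file and using the owlrl Python library to compute an OWL RL closure; the file names are placeholders, and the entailment regime supported by a given OBDA engine may differ:

```python
from rdflib import Graph
from owlrl import DeductiveClosure, OWLRL_Semantics

g = Graph()
g.parse("ontology/univ-bench.owl")    # Univ-Bench ontology (placeholder file name)
g.parse("lubm4obda.nt", format="nt")  # materialized instance data (placeholder)

# Expand the graph with OWL RL entailments so hierarchy-based queries
# return complete answers.
DeductiveClosure(OWLRL_Semantics).expand(g)
```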

Queries

The queries are available in the queries directory of this GitHub repository. Keep in mind that the original mappings should be used for queries 1-14. There are three versions of queries 15-18, one for each meta knowledge approach (standard reification, singleton property, or RDF-star), and each approach has its corresponding mappings.
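As a quick way to try the queries outside an OBDA engine, the sketch below runs one of them with rdflib over a materialized graph (note that the RDF-star versions of queries 15-18 need an engine with SPARQL-star support); the graph and query file names are placeholders:

```python
from rdflib import Graph

g = Graph()
g.parse("lubm4obda.nt", format="nt")  # materialized LUBM4OBDA data (placeholder)

# Placeholder path: any of the benchmark query files in the queries directory.
with open("queries/q1.rq") as f:
    query = f.read()

for row in g.query(query):
    print(row)
```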

CSV & Apache Parquet

It is also possible to run the benchmark with CSV and Apache Parquet files. The resources for these data sources are available on Zenodo and are described in an ESWC paper.
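As a quick sanity check of the tabular resources, the sketch below queries a Parquet file directly with DuckDB; the file name is a placeholder for one of the files distributed on Zenodo:

```python
import duckdb

# Placeholder file name: replace with one of the Parquet (or CSV) files
# downloaded from Zenodo.
duckdb.sql("SELECT COUNT(*) AS n FROM 'Student.parquet'").show()
```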