LargeRDFBench

LargeRDFBench is a comprehensive benchmark suite encompassing real data and real queries (i.e., typical requests) of varying complexities, for testing and analyzing both the efficiency and effectiveness of federated query processing over multiple SPARQL endpoints. LargeRDFBench was published in the Journal of Web Semantics. The PDF is available from here. The extension of LargeRDFBench is available from here.

Citation

Saleem, Muhammad, Ali Hasnain, and Axel-Cyrille Ngonga Ngomo. "LargeRDFBench: A billion triples benchmark for SPARQL endpoint federation." Journal of Web Semantics 48 (2018): 85-125.

Benchmark Datasets Statistics

In the following we provide information about the datasets used in LargeRDFBench, along with download links for both the data dumps and the Virtuoso 7.10 SPARQL endpoints.

| Dataset | #Triples | #Distinct Subjects | #Distinct Predicates | #Distinct Objects | #Classes | #Links | Structuredness |
|---|---|---|---|---|---|---|---|
| LinkedTCGA-M | 415,030,327 | 83,006,609 | 6 | 166,106,744 | 1 | - | 1 |
| LinkedTCGA-E | 344,576,146 | 57,429,904 | 7 | 84,403,422 | 1 | - | 1 |
| LinkedTCGA-A | 35,329,868 | 5,782,962 | 383 | 8,329,393 | 23 | 251.3k | 0.98 |
| ChEBI | 4,772,706 | 50,477 | 28 | 772,138 | 1 | - | 0.340 |
| DBPedia-Subset | 42,849,609 | 9,495,865 | 1,063 | 13,620,028 | 248 | 65.8k | 0.196 |
| DrugBank | 517,023 | 19,693 | 119 | 276,142 | 8 | 10.8k | 0.726 |
| Geo Names | 107,950,085 | 7,479,714 | 26 | 35,799,392 | 1 | 118k | 0.518 |
| Jamendo | 1,049,647 | 335,925 | 26 | 440,686 | 11 | 1.7k | 0.961 |
| KEGG | 1,090,830 | 34,260 | 21 | 939,258 | 4 | 1.3k | 0.919 |
| Linked MDB | 6,147,996 | 694,400 | 222 | 2,052,959 | 53 | 63.1k | 0.729 |
| New York Times | 335,198 | 21,666 | 36 | 191,538 | 2 | 31.7k | 0.731 |
| Semantic Web Dog Food | 103,595 | 11,974 | 118 | 37,547 | 103 | 2.3k | 0.426 |
| Affymetrix | 44,207,146 | 1,421,763 | 105 | 13,240,270 | 3 | 246.3k | 0.506 |
| Total | 1,003,960,176 | 165,785,212 | 2,160 | 326,209,517 | 459 | 792.3k | Avg. 0.65 |

Duan et al., in "Apples and Oranges: A Comparison of RDF Benchmarks and Real RDF Datasets", introduced the notion of structuredness or coherence, which indicates whether the instances in a dataset have only a few or all of the attributes of their types set. They show that artificial datasets are typically highly structured, while "real" datasets are less structured. <font color="red"> The complete details along with type coverages can be found here. The LargeRDFBench Java utility to calculate dataset structuredness can be found here, along with usage examples. </font>
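As a rough illustration of this measure, the sketch below computes coherence over an in-memory list of triples, following Duan et al.'s formulation (per-type attribute coverage, weighted by type size). The class name, triple representation, and method are illustrative choices of ours, not the benchmark's actual utility; use the linked utility for the real datasets.

```java
import java.util.*;

public class Structuredness {
    static final String RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type";

    /** Coherence per Duan et al.: for each type T, coverage CV(T) =
     *  (sum over p in P(T) of OC(p, I(T))) / (|P(T)| * |I(T)|),
     *  weighted by (|P(T)| + |I(T)|) and normalized over all types. */
    public static double coherence(List<String[]> triples) {
        Map<String, Set<String>> instancesOfType = new HashMap<>(); // T -> I(T)
        Map<String, Set<String>> predsOfSubject = new HashMap<>();  // s -> predicates of s
        for (String[] t : triples) {
            if (RDF_TYPE.equals(t[1])) {
                instancesOfType.computeIfAbsent(t[2], k -> new HashSet<>()).add(t[0]);
            } else {
                predsOfSubject.computeIfAbsent(t[0], k -> new HashSet<>()).add(t[1]);
            }
        }
        double totalWeight = 0, weightedSum = 0;
        for (Set<String> insts : instancesOfType.values()) {
            Set<String> typePreds = new HashSet<>();            // P(T)
            Map<String, Integer> occurrences = new HashMap<>(); // OC(p, I(T))
            for (String inst : insts) {
                for (String p : predsOfSubject.getOrDefault(inst, Collections.emptySet())) {
                    typePreds.add(p);
                    occurrences.merge(p, 1, Integer::sum);
                }
            }
            if (typePreds.isEmpty()) continue;
            long sumOcc = 0;
            for (int c : occurrences.values()) sumOcc += c;
            double coverage = (double) sumOcc / ((long) typePreds.size() * insts.size());
            double weight = typePreds.size() + insts.size();
            totalWeight += weight;
            weightedSum += weight * coverage;
        }
        return totalWeight == 0 ? 0 : weightedSum / totalWeight;
    }
}
```

For example, a type with two instances, where one instance sets both of the type's two attributes and the other sets only one, has coverage (2 + 1) / (2 * 2) = 0.75.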

Datasets Availability

All datasets and their corresponding Virtuoso SPARQL endpoints can be downloaded from the links given below. For SPARQL endpoint federation systems, we strongly recommend downloading the endpoints directly, as some of the data dumps are quite large and require a lot of upload time. You can start a SPARQL endpoint from bin/start.bat (on Windows) or bin/start_virtuoso.sh (on Linux). Please note that LinkedTCGA-M (Methylation), LinkedTCGA-E (Exon), LinkedTCGA-A (all others), and DBPedia-Subset are subsets of the live SPARQL endpoints. Further, the TCGA live SPARQL endpoints are not aligned with Affymetrix, DrugBank, and DBpedia.

| Dataset | Data-dump | Windows Endpoint | Linux Endpoint | Local Endpoint URL | Live Endpoint URL |
|---|---|---|---|---|---|
| LinkedTCGA-M | Download | Download | Download | your.system.ip.address:8887/sparql | - |
| LinkedTCGA-E | Download | Download | Download | your.system.ip.address:8888/sparql | - |
| LinkedTCGA-A | Download | Download | Download | your.system.ip.address:8889/sparql | - |
| ChEBI | Download | Download | Download | your.system.ip.address:8890/sparql | - |
| DBPedia-Subset | Download | Download | Download | your.system.ip.address:8891/sparql | http://dbpedia.org/sparql |
| DrugBank | Download | Download | Download | your.system.ip.address:8892/sparql | http://wifo5-04.informatik.uni-mannheim.de/drugbank/sparql |
| Geo Names | Download | Download | Download | your.system.ip.address:8893/sparql | http://factforge.net/sparql |
| Jamendo | Download | Download | Download | your.system.ip.address:8894/sparql | http://dbtune.org/jamendo/sparql/ |
| KEGG | Download | Download | Download | your.system.ip.address:8895/sparql | http://cu.kegg.bio2rdf.org/sparql |
| Linked MDB | Download | Download | Download | your.system.ip.address:8896/sparql | http://www.linkedmdb.org/sparql |
| New York Times | Download | Download | Download | your.system.ip.address:8897/sparql | - |
| Semantic Web Dog Food | Download | Download | Download | your.system.ip.address:8898/sparql | http://data.semanticweb.org/sparql |
| Affymetrix | Download | Download | Download | your.system.ip.address:8899/sparql | http://cu.affymetrix.bio2rdf.org/sparql |

Datasets Connectivity

Benchmark Queries

LargeRDFBench comprises a total of 40 queries (in both SPARQL 1.0 and SPARQL 1.1 versions) for SPARQL endpoint federation approaches. The 40 queries are divided into four types: 14 simple queries (S1-S14, from FedBench), 10 complex queries (C1-C10), 8 large data queries (L1-L8), and 8 complex + high data sources queries (CH1-CH8). Details of these queries are given in the table below. All queries can be downloaded from (SPARQL 1.0, SPARQL 1.1). The complete query results can be downloaded from here.

<div style="text-align:center"><img src ="https://sites.google.com/site/saleemsweb/swsa-award/stats.png" /></div> The highlighted complex + high data sources queries (CH1-CH8) are included in the extension of LargeRDFBench.

Further advanced query features can be found here and are discussed in the LargeRDFBench paper. The mean triple pattern selectivities, along with complete details, for all LargeRDFBench queries can be found here. The LargeRDFBench Java utility to calculate all these query features can be found here, along with usage examples.

Usage Information

In the following we explain how to set up the LargeRDFBench evaluation framework and measure the performance of a federation engine.

SPARQL Endpoints Setup

Running SPARQL Queries

Provide the list of SPARQL endpoint URLs and a LargeRDFBench query as input to the underlying federation engine, and calculate the LargeRDFBench metrics (explained next). The query evaluation start-up files for the selected systems (which you can check out from https://github.com/saleem-muhammad/LargeRDFBench) are given below.

----------FedX-original-----------------------

Package: org.aksw.simba.start

File: QueryEvaluation.java

----------FedX-HiBISCuS-----------------------

Package: org.aksw.simba.fedsum.startup

File: QueryEvaluation.java

----------SPLENDID-original-----------------------

Package: de.uni_koblenz.west.evaluation

File: QueryProcessingEval.java

----------SPLENDID-HiBISCuS-----------------------

Package: de.uni_koblenz.west.evaluation

File: QueryProcessingEval.java

----------ANAPSID-----------------------

Follow the instructions given at https://github.com/anapsid/anapsid to configure the system and then use anapsid/ivan-scripts/runQuery.sh to run a query.
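Conceptually, each of these start-up files follows the same pattern: take the endpoint URLs and a benchmark query as input, run the query, and record the metrics. A minimal Java sketch of that pattern is shown below; `FederationEngine` and its `evaluate` method are a hypothetical stand-in interface for the engine-specific classes listed above, not part of any of those systems.

```java
import java.util.List;
import java.util.Set;

public class QueryRunner {
    /** Hypothetical minimal interface a federation engine exposes; the actual
     *  entry points are the QueryEvaluation/QueryProcessingEval classes above. */
    interface FederationEngine {
        Set<String> evaluate(List<String> endpointUrls, String sparqlQuery);
    }

    /** Runs one benchmark query and reports its runtime in milliseconds. */
    public static long timeQuery(FederationEngine engine, List<String> endpoints, String query) {
        long start = System.nanoTime();
        Set<String> results = engine.evaluate(endpoints, query);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("#results = " + results.size() + ", runtime = " + elapsedMs + " ms");
        return elapsedMs;
    }
}
```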

Running SPARQL 1.1 Queries

Both ANAPSID and FedX provide support for SPARQL 1.1 queries; the procedure for running SPARQL 1.1 queries on these two systems remains the same. You can also run the SPARQL 1.1 queries of LargeRDFBench directly from a SPARQL endpoint's online interface (see the local endpoint URLs in the second table above).

While running SPARQL 1.1 federation queries in the online interface of a Virtuoso SPARQL endpoint, you may encounter the following error:

<font color="red">Virtuoso 42000 Error SQ200: Must have select privileges on view DB.DBA.SPARQL_SINV_2 </font>

You can solve this problem by opening the Virtuoso Conductor at http://your.system.ip.address:portno/conductor/isql.vspx (e.g., http://localhost:8888/conductor/isql.vspx). Use "dba" as both the user ID and the password. Once logged in, execute the following two commands:

<font color="red">grant select on "DB.DBA.SPARQL_SINV_2" to "SPARQL";

grant execute on "DB.DBA.SPARQL_SINV_IMP" to "SPARQL"; </font>

You should then be able to run all of the benchmark SPARQL 1.1 queries through the Virtuoso online query interface. Please do not set a default named graph in the interface; otherwise, you may get no results.

How to calculate LargeRDFBench metrics?

LargeRDFBench makes use of seven main metrics -- #ASK requests, #TP sources (sources selected per triple pattern), source selection time, query runtime, result completeness, result correctness, and number of endpoint requests (see the paper for details). The first four can be computed directly in the source code of the underlying federation engine (check out the selected systems to see how we calculated them). For result completeness and correctness, we provide a Java tool which computes the precision, recall, and F1-score of the results retrieved by the federation engine for a given benchmark query. For the number of endpoint requests, we used Virtuoso SPARQL endpoints with HTTP request logging enabled: every endpoint request is written to the query log files, and a simple Java program reads each log file line by line and reports the total number of lines (across the log files of all 13 endpoints) as the total number of endpoint requests.
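The two measurement steps just described can be sketched as follows; the class and method names are illustrative choices of ours, not those of the actual tool, and we assume one log line per endpoint request, as in the setup above.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Stream;

public class BenchmarkMetrics {

    /** Total endpoint requests: one log line per request, summed over all log files. */
    public static long countEndpointRequests(List<Path> logFiles) throws IOException {
        long total = 0;
        for (Path log : logFiles) {
            try (Stream<String> lines = Files.lines(log)) {
                total += lines.count();
            }
        }
        return total;
    }

    /** Precision, recall, and F1-score of the retrieved results
     *  against the reference (complete) results of a benchmark query. */
    public static double[] precisionRecallF1(Set<String> retrieved, Set<String> expected) {
        Set<String> correct = new HashSet<>(retrieved);
        correct.retainAll(expected); // results that are both retrieved and expected
        double precision = retrieved.isEmpty() ? 0 : (double) correct.size() / retrieved.size();
        double recall = expected.isEmpty() ? 0 : (double) correct.size() / expected.size();
        double f1 = precision + recall == 0 ? 0 : 2 * precision * recall / (precision + recall);
        return new double[] { precision, recall, f1 };
    }
}
```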

Evaluation Results and Runtime Errors

We compared five state-of-the-art SPARQL endpoint federation systems -- FedX, SPLENDID, ANAPSID, FedX+HiBISCuS, and SPLENDID+HiBISCuS -- using LargeRDFBench. Our complete evaluation results can be downloaded from here, and the runtime errors thrown by the federation systems can be downloaded from here.

SPARQL Endpoints Specifications

The following are the specifications of the machines used in the evaluation to host the SPARQL endpoints.

Benchmark Contributors

We are especially thankful to Helena Deus (Foundation Medicine, Cambridge, MA, USA) and Shanmukha Sampath (Democritus University of Thrace, Alexandroupoli, Greece) for providing real use case large data queries and for useful discussions regarding the selection of large datasets. We are also thankful to Jonas S. Almeida (University of Alabama at Birmingham), Bade Iriaboho (University of Alabama at Birmingham), Sarven Capadisli, Maulik Kamdar (Stanford University), and Aftab Iqbal (INSIGHT @ NUI Galway) for their contributions. Finally, we are very thankful to Andreas Schwarte (fluid Operations, Germany), Maria-Esther Vidal (Universidad Simón Bolívar), Olaf Görlitz (University of Koblenz, Germany), Olaf Hartig (HPI, Germany), and Gabriela Montoya (Nantes Métropole) for all their email conversations, feedback, and explanations.