Materials for our CIKM'22 paper
Demo
Try our context-sensitive autocompletion on a variety of knowledge graphs
Evaluation web application
Click through our evaluation and explore our results in full detail
Code on GitHub
Note: all the improvements to QLever described in the paper have been merged into QLever's master branch over the past months. You can use the master branch to build an index for the complete Wikidata and run the evaluation script to obtain results similar to those in the paper. To reproduce the exact results from the paper, see the next section.
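For orientation, a minimal sketch of that workflow follows. The image tag, file names, ports, and command-line flags are illustrative assumptions, not our exact setup; consult the QLever README for the precise invocation of your version.

    # ASSUMPTION: image tag, paths, and flags below are placeholders.
    git clone https://github.com/ad-freiburg/qlever && cd qlever
    docker build -t qlever .
    # Build the index from a Wikidata dump (several hours; the complete
    # Wikidata index needs roughly 3 TB of SSD storage).
    docker run -it --rm -v "$(pwd)/index:/index" qlever \
        IndexBuilderMain -f /index/wikidata.ttl -i /index/wikidata
    # Start the SPARQL server, then point the evaluation script at it.
    docker run -d -p 7001:7001 -v "$(pwd)/index:/index" qlever \
        ServerMain -i /index/wikidata -p 7001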
Exact reproducibility materials
The following sections contain the exact version of QLever and the evaluation script that were used for the evaluation in the paper. Note that the binary index files for QLever are compatible only with this version and not with the current GitHub master, although both versions yield similar performance.
Evaluation script, queries, AC query templates, and result files
Extended version of QLever
We provide a pre-compiled Docker image of the exact version of QLever that was used for the evaluation in the paper. This version is compatible with the binary index files and with the evaluation script linked above.
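Assuming the image is shipped as a tar archive (the archive name, image tag, and index path below are placeholders), it can be loaded and started with standard Docker commands:

    # ASSUMPTION: archive name, image tag, and index path are placeholders.
    docker load -i qlever-cikm22.tar
    # Mount the directory containing the binary index files and start the server.
    docker run -d -p 7001:7001 -v "$(pwd)/index:/index" qlever-cikm22 \
        ServerMain -i /index/wikidata -p 7001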
Running QLever, Virtuoso and Blazegraph
Machine Requirements
We ran our experiments on an AMD Ryzen 7 3700X CPU (8 cores + SMT) with 128 GB of DDR4 RAM and 4 TB of SSD storage (NVMe, RAID 0). To roughly reproduce our results, you need a similar machine. In particular, you need at least 128 GB of RAM and 3 TB of SSD storage (needed by QLever's Wikidata index). If you only want to run the evaluations on the two smaller datasets (Freebase and Fbeasy), 2 TB of SSD storage suffice. Running only the Fbeasy evaluation should also work on a machine with 500 GB of SSD storage and 64 GB of RAM. Your machine needs to run Linux, and Docker must be installed. (Everything runs inside Docker, so the exact Linux distribution and version should not matter much; we used Ubuntu 18.04.)
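Before starting, you can verify these requirements with standard Linux tools (a quick sketch; the mount point is a placeholder for wherever your SSD storage is mounted):

    free -h                        # total memory should be at least 128 GB
    df -h /mnt/ssd                 # at least 3 TB free for the full Wikidata index
    docker --version               # Docker must be installed
    docker run --rm hello-world    # and able to run containers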
Instructions for Running the Evaluation
Index Files