To run the mixed autocompletion mode on Blazegraph or Virtuoso, you also need a running QLever instance with the same knowledge base. This instance should preferably run on a different machine because of RAM constraints, and the port of the QLever instance must be accessible via the network.
First, download and extract the code:

wget http://vldb2021-1807.hopto.org/qlever-evaluation-VLDB-submission.tar.gz
tar xvzf qlever-evaluation-VLDB-submission.tar.gz
cd qlever-evaluation-VLDB-submission/evaluation-autocompletion
Next, edit the Dockerfile using a text editor of your choice. Enter the address and port of your running query engine in lines 33-35 (you only need to correctly specify the engines that are actually used). If you have chosen the ports recommended in the tutorials for running the engines, these ports should already match.
It is important that you use the following formats:

http://<machine-name>:<port>/ (for QLever)
http://<machine-name>:<port>/sparql (for Virtuoso)
http://<machine-name>:<port>/bigdata/sparql (for Blazegraph)
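The three formats above can be captured in a small helper for double-checking the value you paste into the Dockerfile. This is a hypothetical convenience function of our own (endpoint_url is not part of the evaluation code):

```shell
# Hypothetical helper (not part of the evaluation code): build the
# endpoint URL for a given engine, following the formats above.
endpoint_url() {
  local engine=$1 host=$2 port=$3
  case "$engine" in
    qlever)     echo "http://${host}:${port}/" ;;
    virtuoso)   echo "http://${host}:${port}/sparql" ;;
    blazegraph) echo "http://${host}:${port}/bigdata/sparql" ;;
    *)          echo "unknown engine: $engine" >&2; return 1 ;;
  esac
}

endpoint_url blazegraph 172.17.0.1 9999
# prints http://172.17.0.1:9999/bigdata/sparql
```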
Note that you cannot use loopback addresses (127.0.0.1 or localhost) with the Docker setup. If you run the engine and the evaluation on the same machine, you can, for example, use the virtual network bridge provided by Docker (IP address 172.17.0.1 by default).
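Because loopback addresses fail inside the container, it can help to sanity-check the configured endpoint before building the image. A minimal sketch (the is_loopback function is our own illustration, not part of the package):

```shell
# Illustrative check (not part of the package): flag endpoint URLs that
# use a loopback address, which is unreachable from inside the container.
is_loopback() {
  case "$1" in
    *://127.*|*://localhost*) echo yes ;;
    *)                        echo no ;;
  esac
}

is_loopback "http://localhost:9999/bigdata/sparql"   # prints yes
is_loopback "http://172.17.0.1:9999/bigdata/sparql"  # prints no
```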
You can then run your evaluation. Specify the values for KB and BACKEND (i.e. the engine) in the second line accordingly. The results are written to a subfolder results/0000-00-00.test; change the OUTPUT_DIR variable in the first line as desired.
export OUTPUT_DIR=0000-00-00.test && docker build -f Dockerfile -t qlever-evaluation .
export KB=fbeasy; export BACKEND=blazegraph; export MODE=sensitive
docker run -it --rm -e KB=$KB -e MODE=$MODE -e NUM_QUERIES=0 -e BACKEND_TYPE=$BACKEND -v $(pwd)/results/$OUTPUT_DIR:/output --name qlever-evaluation qlever-evaluation
The evaluation should now run for some time, and its output is shown.
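If you run several single experiments, the long docker run line can be wrapped in a small shell function. This is only a sketch of our own (run_eval is not provided by the package); it prints the command it would execute, so you can inspect it before removing the echo and running it for real:

```shell
# Hypothetical wrapper (not part of the package): compose the docker run
# command for one experiment and print it for inspection.
run_eval() {
  local kb=$1 backend=$2 mode=$3
  echo docker run -it --rm \
    -e KB="$kb" -e MODE="$mode" -e NUM_QUERIES=0 -e BACKEND_TYPE="$backend" \
    -v "$(pwd)/results/$OUTPUT_DIR:/output" \
    --name qlever-evaluation qlever-evaluation
}

export OUTPUT_DIR=0000-00-00.test
run_eval fbeasy blazegraph sensitive
```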
If you want to run a mixed evaluation, specify MODE=mixed. Note again that in this case you ALWAYS need a running QLever instance.
If you want to evaluate all modes on QLever (agnostic, unranked, sensitive, mixed) you can use the following loop (again specify the KB variable accordingly):
export OUTPUT_DIR=0000-00-00.test; export KB=fbeasy && docker build -f Dockerfile -t qlever-evaluation .
for MODE in unranked agnostic mixed sensitive; do
  docker run -it --rm -e KB=$KB -e MODE=$MODE -e NUM_QUERIES=0 -v $(pwd)/results/$OUTPUT_DIR:/output --name qlever-evaluation qlever-evaluation
done
You can reuse the same OUTPUT_DIR, as the different experiments will be named using the knowledge base, the engine, and the mode.
The results can be inspected via a webapp; you need python 3 installed to use it. Inside the evaluation-autocompletion directory, run

python3 -m http.server 9876

(replace 9876 by a free port on your machine). Open a web browser and navigate to http://localhost:9876/www to access the webapp. You should be able to see your previous runs.
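If port 9876 is already taken, you can let the operating system pick a free one. The one-liner below is our own sketch, using the python 3 installation that the webapp already requires:

```shell
# Ask the OS for a currently free TCP port (our own helper, not part of
# the package), then use it in place of 9876.
PORT=$(python3 -c 'import socket; s = socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()')
echo "serving on port $PORT"
# then run: python3 -m http.server $PORT
```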