1. Quick start

In this section, you'll find step-by-step instructions for setting up and running a small benchmark. If you're looking to expand FedShop by adding more engines or incorporating your own use case, we recommend checking out the Quick tutorial section for further guidance.

If you would like to reproduce the results obtained in FedShop, you can refer to this section.

1.1 Set up FedShop

For easy and efficient deployment of FedShop, we offer a containerized version of the platform, available on Docker Hub.

  • Run a container from the image:

```bash
# Run the FedShop container. The results will be in `/tmp/experiments` on the host.
docker run --detach --privileged --network host --name fedshop \
    --volume /tmp/experiments:/FedShop/experiments \
    minhhoangdang/fedshop:amd64

# If you have an Apple chip, use `minhhoangdang/fedshop:arm64`.
# If you are behind a proxy, add "--env NO_PROXY=auth,localhost,127.0.0.1,192.168.0.1/24".
```

  • Open an interactive shell in the container:

```bash
docker exec -it fedshop /bin/bash
```

  • IMPORTANT: Update the FedShop repository within the container:

```bash
git fetch && git reset --hard origin/main --recurse-submodules
```
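
Before moving on, two optional sanity checks may help; these are standard Docker and Git commands, not FedShop-specific tooling:

```bash
# On the host: confirm the fedshop container is running
docker ps --filter name=fedshop

# Inside the container: show the commit now checked out after the reset
git log -1 --oneline
```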

1.2 Launch a miniature version of FedShop

For this tutorial, we provide a scaled-down version of FedShop in `config_small.yaml` (a quick way to peek at its settings is shown after the list below). This setup creates:

  • 2 configurations:
    • 20 endpoints (batch0): 10 vendors + 10 rating-sites
    • 40 endpoints (batch1): 20 vendors + 20 rating-sites
  • 24 queries (2 random instantiations of each of the 12 query templates)
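
To see where these numbers come from, you can skim the configuration file inside the container. The `grep` pattern below is only a guess at likely key names; open `experiments/bsbm/config_small.yaml` itself for the authoritative values:

```bash
# Inside the container: look for the scaling-related settings
# (the key names in the pattern are assumptions, not a documented schema)
grep -inE 'vendor|ratingsite|batch|instance' experiments/bsbm/config_small.yaml
```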

The benchmark execution comprises three simple steps:

  1. Generate data:

```bash
python rsfb/benchmark.py generate data experiments/bsbm/config_small.yaml
# See your datasets (on the host):
# ls /tmp/experiments/bsbm/model/dataset/*.nq
```

  2. Generate queries:

```bash
python rsfb/benchmark.py generate queries experiments/bsbm/config_small.yaml
# See your queries (on the host):
# ls /tmp/experiments/bsbm/benchmark/generation/q*/instance_*/injected.sparql
```
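
After both generation steps, two quick checks can confirm everything is in place; the paths are the ones from the listings above, run on the host:

```bash
# Peek at the first quads of the generated datasets
head -n 5 /tmp/experiments/bsbm/model/dataset/*.nq | head -n 20

# Expect 24 injected queries (12 templates x 2 instantiations)
ls /tmp/experiments/bsbm/benchmark/generation/q*/instance_*/injected.sparql | wc -l
```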

More options for calling the FedShop generator: `python rsfb/benchmark.py generate --help`

  3. Evaluate FedShop on some engines (FedX and our Reference Source Assignment (RSA)):

```bash
# If you are behind a proxy, make sure that you copy `~/.m2/settings.xml` to the container
docker cp ~/.m2/settings.xml fedshop:/root/.m2/

python rsfb/benchmark.py evaluate experiments/bsbm/config_small.yaml
# Results are in /tmp/experiments/bsbm/benchmark/evaluation
```
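
Once evaluation finishes, listing the evaluation directory shows one folder per engine that was run; this is plain shell inspection on the host:

```bash
# Each evaluated engine gets its own subdirectory
ls /tmp/experiments/bsbm/benchmark/evaluation/
```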

More options for calling the FedShop runner: `python rsfb/benchmark.py evaluate --help`

To see the runtime metrics for the FedX engine, across all instantiated queries, all configurations, and all attempts:

```bash
# Outside the docker container
tail -n +1 /tmp/experiments/bsbm/benchmark/evaluation/fedx/q*/instance_*/batch_*/attempt_*/stats.csv

# Inside the docker container
tail -n +1 experiments/bsbm/benchmark/evaluation/fedx/q*/instance_*/batch_*/attempt_*/stats.csv
```
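
If you prefer a single file for analysis, the per-attempt CSVs can be merged with a generic `awk` one-liner that keeps only the first header row; the output path `/tmp/all_stats.csv` is just an example name:

```bash
# Merge all stats.csv files, skipping the header of every file but the first
awk 'FNR==1 && NR!=1 {next} {print}' \
    /tmp/experiments/bsbm/benchmark/evaluation/fedx/q*/instance_*/batch_*/attempt_*/stats.csv \
    > /tmp/all_stats.csv
```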

1.3 Interactive Visualization of Miniature FedShop Results with Jupyter Notebook

Before launching our Jupyter Notebook, run the following command to create a zip archive of the benchmark results; the notebook needs this archive to plot the results.

```bash
zip -r eval-model.zip /tmp/experiments/bsbm/benchmark/
```
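
You can check that the archive was built correctly before opening the notebook; `unzip -l` lists the contents without extracting anything:

```bash
# List the first entries of the archive to confirm it is populated
unzip -l eval-model.zip | head -n 10
```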